I will expand the answer provided by @DavidKetcheson. First the equations are rewritten as a hyperbolic system of first-order conservation laws: $$q_t + \nabla \cdot F(q) = 0$$ or $$q_t + A q_x + B q_y + C q_z = 0$$ Where $q$ is a state vector formed with the components of the stress tensor $(\sigma_{11}, \sigma_{22}, \sigma_{33}, \sigma_{12}, \sigma_{23}, \sigma_{13})$ and components of the velocity vector $(u, v, w)$. $$q = \begin{pmatrix}\sigma_{11}\\ \sigma_{22}\\ \sigma_{33}\\ \sigma_{12}\\ \sigma_{23}\\ \sigma_{13}\\ u\\ v\\ w \end{pmatrix} \enspace ,$$ $$A = \begin{pmatrix}0 & 0 & 0 & 0 & 0 & 0 & {c}_{11} & {c}_{16} & {c}_{15}\\ 0 & 0 & 0 & 0 & 0 & 0 & {c}_{12} & {c}_{26} & {c}_{25}\\ 0 & 0 & 0 & 0 & 0 & 0 & {c}_{13} & {c}_{36} & {c}_{35}\\ 0 & 0 & 0 & 0 & 0 & 0 & {c}_{14} & {c}_{46} & {c}_{45}\\ 0 & 0 & 0 & 0 & 0 & 0 & {c}_{15} & {c}_{56} & {c}_{55}\\ 0 & 0 & 0 & 0 & 0 & 0 & {c}_{16} & {c}_{66} & {c}_{56}\\ \frac{1}{\rho} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \frac{1}{\rho} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & \frac{1}{\rho} & 0 & 0 & 0\end{pmatrix} \enspace ,$$ $$B = \begin{pmatrix}0 & 0 & 0 & 0 & 0 & 0 & {c}_{16} & {c}_{12} & {c}_{14}\\0 & 0 & 0 & 0 & 0 & 0 & {c}_{26} & {c}_{22} & {c}_{24}\\0 & 0 & 0 & 0 & 0 & 0 & {c}_{36} & {c}_{23} & {c}_{34}\\0 & 0 & 0 & 0 & 0 & 0 & {c}_{46} & {c}_{24} & {c}_{44}\\0 & 0 & 0 & 0 & 0 & 0 & {c}_{56} & {c}_{25} & {c}_{45}\\0 & 0 & 0 & 0 & 0 & 0 & {c}_{66} & {c}_{26} & {c}_{46}\\0 & 0 & 0 & \frac{1}{\rho} & 0 & 0 & 0 & 0 & 0\\0 & \frac{1}{\rho} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & \frac{1}{\rho} & 0 & 0 & 0 & 0\end{pmatrix} \enspace ,$$ $$C = \begin{pmatrix}0 & 0 & 0 & 0 & 0 & 0 & {c}_{15} & {c}_{14} & {c}_{13}\\0 & 0 & 0 & 0 & 0 & 0 & {c}_{25} & {c}_{24} & {c}_{23}\\0 & 0 & 0 & 0 & 0 & 0 & {c}_{35} & {c}_{34} & {c}_{33}\\0 & 0 & 0 & 0 & 0 & 0 & {c}_{45} & {c}_{44} & {c}_{34}\\0 & 0 & 0 & 0 & 0 & 0 & {c}_{55} & {c}_{45} & {c}_{35}\\0 & 0 & 0 & 0 & 0 & 0 & {c}_{56} & {c}_{46} & {c}_{36}\\0 & 0 & 0 & 0 & 
0 & \frac{1}{\rho} & 0 & 0 & 0\\0 & 0 & 0 & 0 & \frac{1}{\rho} & 0 & 0 & 0 & 0\\0 & 0 & \frac{1}{\rho} & 0 & 0 & 0 & 0 & 0 & 0\end{pmatrix} \enspace .$$ In order to compute the speeds of the problem (as described above) we need to form the matrix $\hat{A}(n_1, n_2, n_3) = n_1 A + n_2 B + n_3 C$, where $\mathbf{n}=(n_1, n_2, n_3)$ is a unit vector that determines the direction of propagation. To find the CFL condition we need to solve $$\max_{(\theta, \phi)} \max_i \gamma_i(\theta, \phi) $$ where $(\theta, \phi)$ are spherical angles and $\gamma_i$ are the eigenvalues of the matrix $\hat{A}(\theta, \phi)$. Based on this, and on the answer provided by @DavidKetcheson, it is simpler to compute the eigenvalues of the Christoffel equation and solve the optimization problem $$\max_{(\theta, \phi)} \max_i \lambda_i(\theta, \phi) $$ with $\lambda_i$ the eigenvalues of the Christoffel equation. The corresponding speeds are then $c_i = \sqrt{\lambda_i/\rho}$.
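As a concrete illustration, the optimization over propagation directions can simply be brute-forced numerically. The sketch below is my own, not from any particular library: it assumes an isotropic stiffness matrix in standard Voigt notation (4 = 23, 5 = 13, 6 = 12, which differs from the component ordering used above) with hypothetical values $\lambda = 2$, $\mu = 1$, $\rho = 1$, builds the Christoffel matrix for a direction $\mathbf{n}$, and scans spherical angles for the largest wave speed:

```python
import numpy as np

# Hypothetical isotropic stiffness (Lame parameters lambda=2, mu=1), rho=1.
lam, mu, rho = 2.0, 1.0, 1.0
C = np.array([
    [lam + 2*mu, lam,        lam,        0,  0,  0],
    [lam,        lam + 2*mu, lam,        0,  0,  0],
    [lam,        lam,        lam + 2*mu, 0,  0,  0],
    [0,          0,          0,          mu, 0,  0],
    [0,          0,          0,          0,  mu, 0],
    [0,          0,          0,          0,  0,  mu]])

def christoffel(n, C):
    """Christoffel matrix Gamma = N^T C N for direction n, standard Voigt order."""
    n1, n2, n3 = n
    N = np.array([[n1, 0,  0],
                  [0,  n2, 0],
                  [0,  0,  n3],
                  [0,  n3, n2],
                  [n3, 0,  n1],
                  [n2, n1, 0]])
    return N.T @ C @ N

def max_speed(C, rho, ntheta=60, nphi=120):
    """Brute-force scan of spherical angles for the largest wave speed."""
    cmax = 0.0
    for theta in np.linspace(0.0, np.pi, ntheta):
        for phi in np.linspace(0.0, 2*np.pi, nphi):
            n = np.array([np.sin(theta)*np.cos(phi),
                          np.sin(theta)*np.sin(phi),
                          np.cos(theta)])
            lam_max = np.linalg.eigvalsh(christoffel(n, C)).max()
            cmax = max(cmax, np.sqrt(lam_max/rho))
    return cmax
```

For this isotropic example the fastest (longitudinal) speed is $\sqrt{(\lambda+2\mu)/\rho} = 2$ in every direction; for a genuinely anisotropic medium the angular scan (or a proper optimizer) is what delivers the CFL-limiting speed.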
Sources for the path integral You can read any standard source, so long as you supplement it with the text below. Here are a few good ones: Feynman and Hibbs; Kleinert (although this is a bit long-winded); an appendix to Polchinski's string theory Vol. I; Yourgrau and Mandelstam. There are major flaws in other presentations; these are good ones. I explain the major omission below. Completing standard presentations In order for the discussion of the path integral to be complete, one must explain how non-commutativity arises. This is not trivial, because the integration variables in the path integral for bosonic fields or particle paths are ordinary real-valued variables, and these quantities cannot be non-commutative themselves. Non-commutative quantities The resolution of this non-paradox is that the path integral integrand represents matrix elements of operators, and the integral itself reproduces the matrix multiplication. So it is only when you integrate over all values at intermediate times that you get a noncommutative, order-dependent answer. Importantly, when noncommuting operators appear in the action or in insertions, the order of these operators depends on exactly how you discretize them--- whether you write the derivative parts as forward differences, backward differences, or centered differences. These ambiguities are extremely important, and they are discussed only in a handful of places (Negele/Orland, Yourgrau/Mandelstam, Feynman/Hibbs, Polchinski, Wikipedia) and hardly anywhere else. I will give the classic example of this, which is enough to resolve the general case, assuming you are familiar with simple path integrals like the free particle. Consider the free particle Euclidean action $$ S= -\int {1\over 2} \dot{x}^2 $$ and consider the evaluation of the noncommuting product $x\dot{x}$.
This can be discretized as $$ x(t) {x(t+\epsilon) - x(t)\over \epsilon} $$ or as $$ x(t+\epsilon) {x(t+\epsilon) - x(t)\over \epsilon}$$ The first represents $p(t)x(t)$ in this operator order, the second represents $x(t)p(t)$ in the other operator order, since the operator order is the time order. The difference of the second minus the first is $$ {(x(t+\epsilon) - x(t))^2\over \epsilon} $$ which, for the fluctuating random-walk paths of the path integral, has a fluctuating limit which averages to 1 over any finite-length interval as $\epsilon$ goes to zero. This is the Euclidean canonical commutation relation: the difference between the two operator orders gives 1. For Brownian motion, this relation is called "Ito's lemma": it is not $dX$ but the square of $dX$ that is proportional to $dt$. While $dX$ fluctuates over positive and negative values with no correlation and with a magnitude at any time of approximately $\sqrt{dt}$, $dX^2$ fluctuates over positive values only, with an average size of $dt$ and no correlations. This means that the typical Brownian path is continuous but not differentiable (to prove continuity requires knowing that large $dX$ fluctuations are exponentially suppressed--- continuity fails for Levy flights, although $dX$ does scale to 0 with $dt$). Although discretization defines the order, not all properties of the discretization matter--- only which way the time derivative goes. You can understand the dependence intuitively as follows: the value of the future position of a random walk is (ever so slightly) correlated with the current (infinite) instantaneous velocity, because if the instantaneous velocity is up, the future value is going to be bigger; if down, smaller. Because the velocity is infinite, however, this teensy correlation between the future value and the current velocity gives a finite correlator which turns out to be constant in the continuum limit.
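The $dX^2 \sim dt$ scaling is easy to see numerically. The following self-contained sketch (my own illustration, not from any of the cited texts) draws Brownian increments over a unit interval and checks that the quadratic variation converges to the interval length even though the increments themselves fluctuate in sign:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dt = T / n
# Brownian increments over [0, T]: mean 0, standard deviation sqrt(dt).
dX = rng.normal(0.0, np.sqrt(dt), size=n)
# Quadratic variation sum(dX^2): the discrete form of "dX^2 = dt" (Ito).
# It converges to T as dt -> 0, while each dX is only ~ sqrt(dt) in size.
qv = np.sum(dX**2)
```

Running this gives `qv` within a fraction of a percent of 1; refining `n` tightens the agreement, which is exactly the statement that the sum of $dX^2$ over a finite interval has a deterministic limit.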
Unlike the future value, the past value is completely uncorrelated with the current (forward) velocity, if you generate the random walk in the natural way, going forward in time step by step, by a Markov chain. The time order of the operators is equal to their operator order in the path integral, from the way you slice the time to make the path integral. Forward differences are derivatives displaced infinitesimally toward the future; past differences are displaced slightly toward the past. This is important in the Lagrangian, when the Lagrangian involves non-commuting quantities. For example, consider a particle in a magnetic field (in the correct Euclidean continuation): $$ S = - \int {1\over 2} \dot{x}^2 + i e A(x) \cdot \dot{x} $$ The vector potential is a function of $x$, and it does not commute with the velocity $\dot{x}$. For this reason, Feynman and Hibbs and Negele and Orland carefully discretize this as $$ S = - \int {1\over 2} \dot{x}^2 + i e A(x) \cdot \dot{x}_c $$ where the subscript $c$ indicates an infinitesimal centered difference (the average of the forward and backward difference). In this case, the two orders differ by the commutator $[A,p]$, which is $\nabla\cdot A$, so that there is an order difference outside of certain gauges. The correct order is given by requiring gauge invariance, so that adding a gradient $\nabla \alpha$ to $A$ does nothing but a local phase rotation by $\alpha(x)$: $$ ie \int \nabla\alpha \cdot \dot{x}_c = ie \int {d\over dt} \alpha(x(t))$$ where the centered difference is picked out because only the centered difference obeys the chain rule. That this is true is familiar from the Heisenberg equation of motion: $$ {d\over dt} F(x) = i[H,F] = {i\over 2} [p^2,F] = {i\over 2}(p[p,F] + [p,F]p) = {1\over 2}\dot{x} F'(x) + {1\over2} F'(x) \dot{x}$$ where the derivative is a sum of both orders. This holds for quadratic Hamiltonians, the ones for which the path integral is most straightforward. The centered difference is the sum of both orders.
The fact that the chain rule only works for the centered difference means that people who do not understand the ordering ambiguities 100% (almost everybody) have a center fetishism, which leads them to use centered differences all the time. The centered difference is not appropriate for certain things, like the Dirac equation discretization, where it leads to "Fermion doubling". The "Wilson Fermions" are a modification of the discretized Dirac action which basically amounts to saying "Don't use centered derivatives, dummy!" Anyway, the order is important. Any presentation of the path integral which gives the Lagrangian for a particle in a magnetic field without specifying whether the time derivative is a forward difference or a past difference is no good. That's most discussions. A good formalism for path integrals thinks of things on a fine lattice and takes the limit of small lattice spacing at the end. Feynman always secretly thought this way (and often not at all secretly, as in the case above of a particle in a magnetic field), as does everyone else who works with this stuff comfortably. Mathematicians don't like to think this way, because they don't like the idea that the continuum still has new surprises in the limit. The other thing that is hardly ever explained properly (except by Negele/Orland, David John Candlin's original Nuovo Cimento article of 1956, and Berezin) is the Fermionic field path integral. This is a separate discussion; the main point here is to understand sums over Fermionic coherent states.
I need to know the relation between the EXPTIME and NP-hard complexity classes. There are NP-hard problems that are not in EXPTIME and vice versa. This is to be expected, as NP-hard is defined by a lower bound and EXPTIME mainly by an upper bound. NP is contained in EXPTIME, however, and NP-complete is of course contained in NP. The two classes are incomparable: neither is a subset of the other. There are problems in EXPTIME that are not NP-hard. The languages $\emptyset$ and $\Sigma^*$ are both in EXPTIME but are definitely not NP-hard, since no other language can be many-one reduced to either of them. If we assume that P$\,\neq\,$NP, then we get plenty more problems (all of P) that are in EXPTIME but not NP-hard. There are NP-hard problems that are not in EXPTIME. For example, consider the class 2EXPTIME$\,=\bigcup_{c\geq 0}\mathrm{TIME}\left[2^{2^{n^c}}\right]$. Because NP$\,\subset\,$2EXPTIME, any 2EXPTIME-complete problem is NP-hard. However, by the time hierarchy theorem, we know that EXPTIME$\,\neq\,$2EXPTIME, which means that no problem in EXPTIME is 2EXPTIME-complete. (In fact, for a more extreme example, the halting problem is NP-hard and that's definitely not in EXPTIME!) NP is a subset of EXPTIME, but we have no idea whether that containment is strict. Answered by: @jmite
I'm practising solving some limits and, currently, I'm trying to solve $\lim\limits_{x\to\infty}{\left({{(x!)^2}\over{(2x)!}}\right)}$. What I have done: I have attempted to simplify the fraction until I've reached an easier one to solve, however, I'm currently stuck at the following: $$ \lim_{x→\infty}{\left({{(x!)^2}\over{(2x)!}}\right)}= \lim_{x→\infty}{\left({{(\prod_{i=1}^{x}i)^2}\over{\prod_{i=1}^{2x}i}}\right)}= \lim_{x→\infty}{\left({ { {\prod_{i=1}^{x}i}\cdot{\prod_{i=1}^{x}i} }\over{ { {\prod_{i=1}^{x}}i}\cdot{\prod_{i=x+1}^{2x}i} } }\right)}= \lim_{x→\infty}{\left({ {\prod_{i=1}^{x}i}\over{ {\prod_{i=x+1}^{2x}i}} }\right)}. $$ Instinctively, I can see that the limit is equal to $0$, since the numerator is always less than the denominator, thus approaching infinity slower as $x→\infty$. Question: How can I continue solving the above limit w/o resorting to instinct to determine it equals $0$ ? If the above solution can't go any further, is there a better way to approach this problem?
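For what it's worth, one way to make that instinct rigorous is a direct squeeze bound on the last expression, using the fact that $\frac{i}{x+i} \le \frac{1}{2}$ whenever $1 \le i \le x$ (since $i \le x \iff 2i \le x + i$):

```latex
\frac{\prod_{i=1}^{x} i}{\prod_{i=x+1}^{2x} i}
  = \prod_{i=1}^{x} \frac{i}{x+i}
  \le \left(\frac{1}{2}\right)^{x}
  \longrightarrow 0 \quad (x \to \infty).
```

Since the quantity is positive, the squeeze theorem then gives the limit $0$; this is offered only as a sketch of one standard route.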
In classical field theory, the on-shell dynamical field variables $\bar{q}$ give a minimum value of the action: $$A=\int dt ~L(\bar{q}(t),\dot{\bar{q}}(t)).$$ In this case, the action is actually a real number, so it makes good sense for it to have some extremal value. What is the meaning of "extremal" in canonical Quantum Field Theory where the action, $$A=\int d^{4}x ~\mathcal{L}(\bar{\phi}(x),\partial_{\mu}{\bar{\phi}}(x))$$ is instead, an operator?
Circular droplet in equilibrium

This is the classical “spurious” or “parasitic currents” test case discussed in Popinet, 2009. We use the Navier–Stokes solver with VOF interface tracking and surface tension. The interface is represented by the volume fraction field c.

scalar c[], * interfaces = {c};

The diameter of the droplet is 0.8. The density is constant (equal to unity by default), and the viscosity is defined through the Laplace number $La = \sigma\rho D/\mu^2$ with $\sigma$ set to one. The simulation time is set to the characteristic viscous damping timescale.

#define DIAMETER 0.8
#define MU sqrt(DIAMETER/LAPLACE)
#define TMAX (sq(DIAMETER)/MU)

We will vary the number of levels of refinement (to study the convergence), the Laplace number and DC, a convergence parameter which measures the variation in volume fraction between successive timesteps (to evaluate whether we are close to a steady solution).

int LEVEL;
double LAPLACE;
double DC = 0.;
FILE * fp = NULL;

int main() {

We neglect the advection terms and vary the Laplace number, for a constant resolution of 5 levels.

  TOLERANCE = 1e-6;
  stokes = true;
  c.sigma = 1;
  LEVEL = 5;
  N = 1 << LEVEL;
  for (LAPLACE = 120; LAPLACE <= 12000; LAPLACE *= 10)
    run();

We now fix the Laplace number and look for stationary solutions (i.e. the volume fraction field is converged to within 1e-10) for varying spatial resolutions.

  LAPLACE = 12000;
  DC = 1e-10;
  for (LEVEL = 3; LEVEL <= 7; LEVEL++)
    if (LEVEL != 5) {
      N = 1 << LEVEL;
      run();
    }
}

We allocate a field to store the previous volume fraction field (to check for stationary solutions). We set the constant viscosity field…

  const face vector muc[] = {MU,MU};
  mu = muc;

… open a new file to store the evolution of the amplitude of spurious currents for the various LAPLACE, LEVEL combinations… … and initialise the shape of the interface and the initial volume fraction field. At every timestep, we check whether the volume fraction field has converged.
  double dc = change (c, cn);
  if (i > 1 && dc < DC)
    return 1; /* stop */

And we output the evolution of the maximum velocity. At the end of the simulation, we compute the equivalent radius of the droplet.

  double vol = statsf(c).sum;
  double radius = sqrt(4.*vol/pi);

We recompute the reference solution.

  scalar cref[];
  fraction (cref, sq(DIAMETER/2) - sq(x) - sq(y));

And compute the maximum error on the curvature ekmax, the norm of the velocity un and the shape error ec. We output these on standard error (i.e. the log file). We use an adaptive mesh with a constant (maximum) resolution along the interface.

#if TREE
event adapt (i <= 10; i++) {
  adapt_wavelet ({c}, (double[]){0}, maxlevel = LEVEL, minlevel = 0);
}
#endif

Results

The maximum velocity converges toward machine zero for a wide range of Laplace numbers on a timescale comparable to the viscous dissipation timescale, as expected.

set xlabel 't{/Symbol m}/D^2'
set ylabel 'U(D/{/Symbol s})^{1/2}'
set logscale y
plot 'La-120-5' w l t "La=120", 'La-1200-5' w l t "La=1200", \
     'La-12000-5' w l t "La=12000"

The equilibrium shape and curvature converge toward the exact shape and curvature at close to second-order rate.

set xlabel 'D'
set ylabel 'Shape error'
set logscale x
set xtics 2
set pointsize 1
plot [5:120] '< sort -n -k1,2 log' u (0.8*2**$1):5 w lp t "RMS", \
     '< sort -n -k1,2 log' u (0.8*2**$1):6 w lp t "Max", \
     0.2/(x*x) t "Second order"

set ylabel 'Relative curvature error'
plot [5:120] '< sort -n -k1,2 log' u (0.8*2**$1):($7/2.5) w lp t "Max", \
     0.6/(x*x) t "Second order"
I have been trying to debug this error for the last few days and wondered if anybody has advice on how to proceed. I am solving the Poisson equation for a step charge distribution (a common problem in electrostatics/semiconductor physics) on a non-uniform finite volume mesh, where the unknowns are defined on cell centres and the fluxes on the cell faces, $$ 0 = (\phi_x)_x + \rho(x) $$ The charge profile (the source term) is given by $$ \rho(x)= \begin{cases} -1,& \text{if } -1 \leq x \leq 0\\ 1,& \text{if } 0 \leq x \leq 1\\ 0, & \text{otherwise} \end{cases} $$ the boundary conditions are $$ \phi(x_L)=0 \\ \frac{\partial\phi}{\partial x}\bigg|_{x_R}=0 $$ and the domain is $[-10,10]$. I am using code developed to solve the advection-diffusion-reaction equation, which I have written myself (see my notes here, http://danieljfarrell.github.io/FVM). The advection-diffusion-reaction equation is a more general case of the Poisson equation. Indeed, the Poisson equation can be recovered by setting the advection velocity to zero and removing the transient term. The code has been tested against a number of situations for uniform, nonuniform and random grids and always produces reasonable solutions (http://danieljfarrell.github.io/FVM/examples.html) for the advection-diffusion-reaction equation. To show where the code breaks down I have made the following example. I set up a uniform mesh of 20 cells and then make it nonuniform by removing a single cell. In the left figure I have removed cell $\Omega_8$ and in the right $\Omega_9$ has been removed. The 9th cell covers the region where the source term (i.e. the charge) changes sign. The bug appears when the grid is nonuniform in a region where the reaction term changes sign, as you can see below. Any ideas what could possibly be causing this issue? Let me know if more information regarding the discretisation would be helpful (I didn't want to pack too much detail into this question).
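For comparison, here is a minimal, self-contained sketch (my own illustration, not the FVM library linked above) of a cell-centred finite-volume discretisation of $0 = (\phi_x)_x + \rho(x)$ on an arbitrary face-defined mesh, with the same step charge and boundary conditions:

```python
import numpy as np

def solve_poisson(faces):
    """Cell-centred FV solve of 0 = (phi_x)_x + rho(x) on [faces[0], faces[-1]],
    with phi = 0 at the left boundary and zero gradient at the right."""
    centres = 0.5 * (faces[:-1] + faces[1:])
    h = np.diff(faces)                        # cell widths (may be nonuniform)
    n = len(centres)
    # Step charge: -1 on [-1, 0], +1 on [0, 1], 0 elsewhere.
    rho = np.where(np.abs(centres) > 1, 0.0,
                   np.where(centres < 0, -1.0, 1.0))
    A = np.zeros((n, n))
    b = -rho * h                              # cell balance: F_R - F_L = -rho h
    for i in range(n):
        if i > 0:                             # flux through left face
            d = centres[i] - centres[i-1]
            A[i, i] -= 1.0/d
            A[i, i-1] += 1.0/d
        else:                                 # Dirichlet phi(x_L) = 0, half cell
            d = centres[0] - faces[0]
            A[0, 0] -= 1.0/d
        if i < n - 1:                         # flux through right face
            d = centres[i+1] - centres[i]
            A[i, i] -= 1.0/d
            A[i, i+1] += 1.0/d
        # right boundary: zero-gradient flux contributes nothing
    return centres, np.linalg.solve(A, b)
```

For this $\rho$ and these boundary conditions the exact potential is flat and equal to 0 on $[-10,-1]$ and to 1 on $[1,10]$; comparing such a reference against runs with one cell removed (as in the figures) is one way to localise where a nonuniform-mesh flux approximation loses consistency near the sign change.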
The probability density function, cumulative distribution function and survival function of the Weibull distribution are given respectively by \begin{equation} f(t;\alpha, \beta)= \dfrac{\beta}{\alpha^{\beta}} t^{\beta - 1} \exp\left[- \left(\dfrac{t}{\alpha} \right)^{\beta} \right] \end{equation} \begin{equation} F(t;\alpha, \beta)= 1 - \exp\left[- \left(\dfrac{t}{\alpha} \right)^{\beta} \right] \end{equation} \begin{equation} S(t;\alpha, \beta)= \exp\left[- \left(\dfrac{t}{\alpha} \right)^{\beta} \right] \end{equation} where $\alpha$ is a scale parameter and $\beta$ is a shape parameter. Here, we are interested in a reparameterization of the Weibull distribution in terms of a scale parameter $\sigma=\dfrac{1}{\beta}$ and a location parameter $\mu = \log (\alpha)$. Substituting these into the equations above, we get $f(t)$, $F(t)$ and $S(t)$ as \begin{equation} f(t;\sigma, \mu)= \dfrac{1}{\sigma \left[\exp(\mu)\right]^{\frac{1}{\sigma}}} t^{\frac{1}{\sigma} - 1} \exp\left[- \left(\dfrac{t}{\exp(\mu)} \right)^{\frac{1}{\sigma}} \right] \end{equation} \begin{equation} F(t;\sigma, \mu)= 1 - \exp\left[ - \left(\dfrac{t}{\exp(\mu)} \right)^{\frac{1}{\sigma}} \right] \end{equation} \begin{equation} S(t;\sigma, \mu)= \exp\left[- \left(\dfrac{t}{\exp(\mu)} \right)^{\frac{1}{\sigma}} \right] \end{equation} Now, we are trying to derive the variance-covariance matrix of these parameters, i.e. $\sigma$ and $\mu$, when we have right-censored data. The definition and construction of the likelihood function for right-censored data are given below: Let $f(t)$ and $S(t)$ denote the pdf and survivor functions, each of which is a function of a parameter vector $\theta$ and individual covariate information $x_i$ ($i= 1, \ldots, n$). Assume independent observations. Let $T_i$ denote the lifetime for the $i^{th}$ subject and let $C_i$ denote the time from the date of entry to the end of the study data. Thus we observe $t_i = \min\{T_i, C_i\}$.
Finally, let $Y_i$ be an indicator variable, where 1 indicates the observation is right censored, i.e., $T_i > C_i$, and 0 indicates the observation is not censored. Then \begin{equation} L(\theta) = \prod_{i=1}^n f(t_i|\theta)^{1-Y_i} S(t_i|\theta)^{Y_i}. \end{equation} Each observation contributes one factor in the likelihood, either the value of the density or of the survivor probability. Taking the natural logarithm gives the log-likelihood function \begin{equation} \begin{split} \log L(\theta) &= \sum_{i=1}^n (1-Y_i) \log f(t_i|\theta) + \sum_{i=1}^n Y_i \log S(t_i|\theta)\\ &=\sum_{i=1}^n (1-Y_i) \left\{-\log (\sigma) - \frac{\mu}{\sigma} + \frac{\log(t_i)}{\sigma} - \log(t_i) - \left[\dfrac{t_i}{\exp(\mu)}\right]^{\frac{1}{\sigma}} \right\} - \sum_{i=1}^n Y_i \left[\dfrac{t_i}{\exp(\mu)}\right]^{\frac{1}{\sigma}} \end{split} \end{equation} From then on, I know what I need to do. I need to take the first derivatives with respect to $\sigma$ and $\mu$ and find the score vector. Then I have to take second derivatives and get the Hessian matrix, so I can easily get the observed Fisher information matrix. But I cannot do it because it is too complicated. Can someone help me with it?
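Rather than grinding through the algebra by hand, the score and Hessian can be generated symbolically. A sketch with sympy, using the per-observation log-likelihood derived above (the variable names are mine, not from any particular package):

```python
import sympy as sp

# One observation's contribution to log L in the (mu, sigma) parameterization;
# y = 1 marks a right-censored observation (contributes log S), y = 0 an event.
t, mu, sigma = sp.symbols('t mu sigma', positive=True)
y = sp.Symbol('y')
z = (t / sp.exp(mu)) ** (1 / sigma)           # [t / exp(mu)]^(1/sigma)
loglik = (1 - y) * (-sp.log(sigma) - mu/sigma + sp.log(t)/sigma
                    - sp.log(t) - z) - y * z

params = (mu, sigma)
score = [sp.simplify(sp.diff(loglik, p)) for p in params]        # score vector
hessian = [[sp.simplify(sp.diff(loglik, a, b)) for b in params]  # second
           for a in params]                                      # derivatives
```

Summing these expressions over observations (with $t \to t_i$, $y \to Y_i$) and negating the Hessian gives the observed Fisher information matrix; inverting it at the maximum-likelihood estimates gives the asymptotic variance-covariance matrix of $(\hat\mu, \hat\sigma)$. `sympy.lambdify` can turn the entries into numerical functions for real data.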
Asian Journal of Mathematics Asian J. Math. Volume 15, Number 4 (2011), 611-630. A New Pinching Theorem for Closed Hypersurfaces with Constant Mean Curvature in $S^{n+1}$ Abstract We investigate the generalized Chern conjecture, and prove that if $M$ is a closed hypersurface in $S^{n+1}$ with constant scalar curvature and constant mean curvature, then there exists an explicit positive constant $C(n)$ depending only on $n$ such that if $|H| < C(n)$ and $S > \beta (n,H)$, then $S > \beta (n,H) + \frac{3n}{7}$, where $\beta(n,H) = n + \frac{n^3 H^2}{2(n-1)} + \frac{n(n-2)}{2(n-1)} \sqrt{n^2 H^4 + 4(n - 1)H^2}$. Article information Source Asian J. Math., Volume 15, Number 4 (2011), 611-630. Dates First available in Project Euclid: 12 March 2012 Permanent link to this document https://projecteuclid.org/euclid.ajm/1331583350 Mathematical Reviews number (MathSciNet) MR2853651 Zentralblatt MATH identifier 1243.53104 Citation Xu, Hong-Wei; Tian, Ling. A New Pinching Theorem for Closed Hypersurfaces with Constant Mean Curvature in $S^{n+1}$. Asian J. Math. 15 (2011), no. 4, 611--630. https://projecteuclid.org/euclid.ajm/1331583350
Consider Poisson’s equation $$- \Delta u = f{\rm\qquad{ in }}\;\Omega $$ with the following mixed boundary conditions $$u = g{\rm\qquad{ on }}\;\Gamma \subset \partial \Omega $$ $$\frac{{\partial u}}{{\partial n}} = h{\qquad\rm{ on }}\;\partial \Omega \backslash \Gamma$$ where $\frac{{\partial u}}{{\partial n}}$ denotes the derivative of $u$ in the direction normal to the boundary $\partial \Omega$. When $g=0$ and $h=0$, the problem has homogeneous boundary conditions. I already know the Aubin-Nitsche trick to get error estimates for $u - {u_h}$ in the $L^2$ norm with homogeneous boundary conditions. I learned the duality argument from Brenner and Scott's book The Mathematical Theory of Finite Element Methods. But when the boundary conditions are inhomogeneous, it is difficult for me to get error estimates for $u - {u_h}$ in the $L^2$ norm using the duality argument. Although the book says 'inhomogeneous boundary conditions are easily treated' and the proof is analogous, I encountered a problem arising from the Neumann boundary condition. When the boundary condition is homogeneous, I can get the dual problem $$a\left( {v,w} \right) = \left( {u - {u_h},v} \right)\qquad\forall v \in H_0^1\left( \Omega \right)$$ and the elliptic regularity estimate $${\left| w \right|_{{H^2}}} \le C{\left\| {u - {u_h}} \right\|_{{L^2}}}$$ which is crucial to the proof. In the case of inhomogeneous boundary conditions, however, I get the dual problem $$a\left( {v,w} \right) = \left( {r - {r_h},v} \right) + {\left( {h,v} \right)_{\partial \Omega \backslash \Gamma }}\qquad\forall v \in H_0^1\left( \Omega \right)$$ and the elliptic regularity estimate $${\left| w \right|_{{H^2}}} \le C\left( {{{\left\| {r - {r_h}} \right\|}_{{L^2}}} + {{\left\| h \right\|}_{{H^{\frac{1}{2}}}}}} \right)$$ where $r = u - \tilde u$, $\tilde u \in {H^1}\left( \Omega\right)$ and ${\left.{\tilde u} \right|_\Gamma } = g$.
In the following steps of the proof, I try to estimate ${\left( {h,v} \right)_{\partial \Omega \backslash \Gamma }}$ and ${{{\left\| h \right\|}_{{H^{\frac{1}{2}}}}}}$, but I didn't get the result. Questions: Is my idea about the proof for the inhomogeneous boundary conditions, such as the dual problem and the elliptic regularity estimate which I get, correct? If my idea about the proof is correct, how can I continue to prove the error estimates for $u - {u_h}$ in the $L^2$ norm (my idea is to estimate ${\left( {h,v} \right)_{\partial \Omega \backslash \Gamma }}$ and ${{{\left\| h \right\|}_{{H^{\frac{1}{2}}}}}}$)? If my idea about the proof is wrong, how does one obtain the error estimates for $u - {u_h}$ in the $L^2$ norm in the case of inhomogeneous boundary conditions? Please tell me the proof or the main idea of the proof.
I'm doing a wind tunnel experiment and I'm trying to plot the pressure coefficient distribution for the upper and the lower surfaces of an airfoil based on experimental data. The airfoil chord is 150 mm. I have the pressure static ports located at:

Upper static port #: 1    3    5     7     9     11    13    15     17     19
X-coord [mm]:        0.76 3.81 11.43 19.05 38.00 62.00 80.77 101.35 121.92 137.16
Lower static port #: 2    4    6     8     10    12    14    16     18     20
X-coord [mm]:        1.52 7.62 15.24 22.86 41.15 59.44 77.73 96.02  114.30 129.54

And the pressure readings for each static port are the following:

Static port #:  1      2      3      4      5      6      7      8     9     10     11     12    13     14     15    16    17    18    19     20
Pressure [kPa]: -0.034 -0.113 -0.087 -0.172 -0.136 -0.168 -0.148 -0.17 -0.15 -0.156 -0.133 -0.15 -0.124 -0.139 -0.12 -0.13 -0.13 -0.13 -0.117 -0.12

I also have a Pitot velocity reading: $U_\infty=12.26$ m/s, density: $\rho=1.225$ kg/$m^3$ and $p_\infty=101.3$ kPa. I know that to compute the pressure coefficient for each static port, I have to use: $$c_p(x,z(x)) = \dfrac{p(x,z(x))-p_\infty}{1/2\cdot \rho_\infty \cdot U_\infty^2 }$$ I coded a MATLAB script that allows me to compute this equation for the upper part of the airfoil using the odd static port readings, and for the lower part of the airfoil using the even static port readings. But I'm having two problems that are a bit strange. Pressure coefficient must be dimensionless. I have the pressure values in kPa, and the problem is that if I convert them into Pa by multiplying by a factor of 1000, the pressure coefficient values will not be around 1, which is what I'm used to seeing in Cp-chord graphs. How can I fix this? I'm not sure it's correct to divide by a factor of 1000 to compensate for this effect. When I plot the result I obtain the following, which is not very similar to the typical Cp-chord plots I have seen in experiments. Am I doing something wrong? Or is it a problem of the experiment in which I obtained bad data?
Could you help me solve my doubts by giving me a little help based on your experience? I'm not asking to get the problem solved, I just want to know how to do it properly.
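Regarding the units question, here is a quick sanity check of the numbers (a sketch in Python rather than MATLAB; it treats the kPa readings as gauge pressures $p - p_\infty$, which is an assumption about the transducer):

```python
import numpy as np

rho, U_inf = 1.225, 12.26
q = 0.5 * rho * U_inf**2          # dynamic pressure in Pa (about 92 Pa)
p_kpa = np.array([-0.034, -0.113, -0.087, -0.172, -0.136, -0.168, -0.148,
                  -0.170, -0.150, -0.156, -0.133, -0.150, -0.124, -0.139,
                  -0.120, -0.130, -0.130, -0.130, -0.117, -0.120])
cp = (p_kpa * 1e3) / q            # convert kPa -> Pa, assume gauge readings
cp_upper = cp[0::2]               # ports 1, 3, ..., 19 (upper surface)
cp_lower = cp[1::2]               # ports 2, 4, ..., 20 (lower surface)
```

With these numbers the dynamic pressure is about 92 Pa, so the converted coefficients land between roughly -1.9 and -0.4, i.e. of order one. If the readings were instead absolute pressures, $p_\infty$ would have to be subtracted before dividing by the dynamic pressure.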
First of all, $\min (|E|) = 0$, since the graph can be disconnected; $\min(|E|) = |V| - 1$ is true only for connected graphs. Whether a vertex needs to be connected to the rest of the graph depends on the problem being considered; in general, a vertex can be isolated. So, in general, $0 \leq |E| \leq {{|V|}\choose{2}}$ if the graph is not a multi-graph. If the graph is a multi-graph then there is no upper limit on $|E|$. Secondly, if we are comparing $O(\log(|V|^2|E|))$ against something like $O(|V|^2+|E|)$, the former will always be better than the latter, since $\log(|V|^2|E|) = \log|E|+2\log|V|$. So the question is whether to analyze algorithms in terms of $|E|$ (i.e. $|E|$ and $|V|$) or only in terms of $|V|$. Of course what you say about needing to relate $|V|$ and $|E|$ is correct. However, stating a complexity in terms of $|E|$ is better than, say, substituting $|V|^2$ for $|E|$. Note that $|E|$ is always $O(|V|^2)$, even in the cases when it is smaller, for example for trees; $O(\cdot)$ is an upper bound. If the analysis of an algorithm is in terms of $|E|$, then we get a complexity bound for all cases, whether the graph is dense or sparse. In the cases where the graph is a tree, or a graph with max degree $d=O(1)$, $|E|$ is only $O(|V|)$. So, as an example, an $O(|E||V|)$-time algorithm will be considered better than an $O(|V|^3)$-time algorithm. Thus, if we are able to do a tighter analysis, we are sure that the algorithm will fare better on non-worst-case input. As pointed out by Raphael, $\Theta(|E|)$ is not the same as $O(|V|^2)$. A $\Theta$ analysis is better wherever applicable. But usually we don't give a $\Theta$ analysis, because if we say some algorithm is $\Theta(f(n))$, and for some easy input the algorithm runs in $o(f(n))$ time, our statement will be wrong.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
What are the advantages and disadvantages of having questions from this site in the SE network hot questions list? It is clear that there are some advantages for the OP: The question gets more visibility through visits both from other sites and from this site; this might add more votes and more bumps, which again increases the exposure of the particular question. What are the advantages for the Math.SE community as a whole? I can see some positive aspects: Maybe some of the users who visit this site through that list can be made aware of this site. Some of them might visit the site again. Simply put, more people will know about the existence of this site. I guess there are also some disadvantages. For example, questions which appear on this list and also answers to them might receive many upvotes (some kind of Matthew effect). It seems a bit unfair that one question is much more upvoted than other posts of comparable or even better quality. Is it better not to remove questions from the hot questions list (for example, by adding MathJax to the title)? Or is it a good thing to do, so that other questions will have a chance to get there and we will have more diversity in this list? I will try to explain what motivated me to ask this question, although some of the issues in the following paragraphs might deserve a separate discussion (or discussions) on meta. I was thinking about this mainly in the context of editing the titles. The titles should be informative and describe the question as well as possible. But when I want to improve the title I face this dilemma, especially if the question is already in the hot questions list: Should I edit it? If I include MathJax, it will not have a chance to become a network hot question - or if it already is one, it will be removed from the list. Sometimes it is perfectly possible to write a descriptive title which does not use MathJax at all. And if the math is not complicated, it is possible to use Unicode instead of MathJax.
But my experience is that using Unicode instead of MathJax tends to lead to a much worse list of related questions and also makes searching difficult. For example, have a look at these two questions (I have added Wayback Machine links - just in case the titles get edited): If $(B \cap C) \subset A$, then $(C\setminus A) \cap (B\setminus A) = \emptyset$ (Internet Archive) Is this a correct proof: B \ (A \ C) ⊆ (B \ C) ∪ A ⇔ (B ∩ C) ⊆ A (Internet Archive) In one case the title uses MathJax; in the other case, the title uses Unicode. Both titles are specific enough and perfectly readable. But if you look at the list of related questions in the sidebar (which is generated by the SE software), only in the case where MathJax is used do the questions shown there at least contain the correct symbol, and there is a chance that if a duplicate is on the site, it will be shown there. For the Unicode-titled question, the questions shown in the sidebar seem not to be closely related. You can find a few more examples here. I listed there some pairs of questions with MathJax and Unicode titles, and you can compare the results. My experience is that a MathJax-based title tends to produce a better list of related questions. MathJax also seems to be better for searching. If I google for "A∪C" site:math.stackexchange.com I get completely different results than from "A\cup C" site:math.stackexchange.com. The latter is much better. (As a side note, both MathJax and Unicode are definitely better if they appear in some list of questions - be it from search or related questions or elsewhere - than titles like "question from elementary set theory".)
General case. In relativistic thermodynamics, inverse temperature $\beta^\mu$ is a vector field, namely the multipliers of the 4-momentum density in the exponent of the density operator specifying the system in terms of statistical mechanics, using the maximum entropy method, where $\beta^\mu p_\mu$ (in units where $c=1$) replaces the term $\beta H$ of the nonrelativistic canonical ensemble. This is done for classical statistical mechanics in C.G. van Weert, Maximum entropy principle and relativistic hydrodynamics, Annals of Physics 140 (1982), 133-162, and for quantum statistical mechanics in T. Hayata et al., Relativistic hydrodynamics from quantum field theory on the basis of the generalized Gibbs ensemble method, Phys. Rev. D 92 (2015), 065008. https://arxiv.org/abs/1503.04535 For an extension to general relativity with spin see also F. Becattini, Covariant statistical mechanics and the stress-energy tensor, Phys. Rev. Lett. 108 (2012), 244502. https://arxiv.org/abs/1511.05439 Conservative case. One can define a scalar temperature $T:=1/(k_B\sqrt{\beta^\mu\beta_\mu})$ and a velocity field $u^\mu:=k_BT\beta^\mu$ for the fluid; then $\beta^\mu=u^\mu/(k_BT)$, and the distribution function for an ideal fluid takes the form of a Jüttner distribution $e^{-u\cdot p/k_BT}$. For an ideal fluid (i.e., assuming no dissipation, so that all conservation laws hold exactly), one obtains the formulation commonly used in relativistic hydrodynamics (see Chapter 22 in the book Misner, Thorne, Wheeler, Gravitation). It amounts to treating the thermodynamics nonrelativistically in the rest frame of the fluid. Note that the definition of temperature consistent with the canonical ensemble needs a distribution of the form $e^{-\beta H - \text{terms linear in } p}$, conforming with the identification of the noncovariant $\beta^0$ as the inverse canonical temperature. Essentially, this is due to the frame dependence of the volume that enters the thermodynamics.
This is in agreement with the noncovariant definition of temperature used by Planck and Einstein and was the generally agreed upon convention until at least 1968; cf. the discussion in R. Balescu, Relativistic statistical thermodynamics, Physica 40 (1968), 309-338. In contrast, the covariant Jüttner distribution has the form $e^{-u_0 H/k_BT - \text{terms linear in } p}$. Therefore the covariant scalar temperature differs from the canonical one by a velocity-dependent factor $u_0$. This explains the different transformation law. The covariant scalar temperature is simply the canonical temperature in the rest frame, turned covariant by redefinition. Quantum general relativity. In quantum general relativity, accelerated observers interpret temperature differently. This is demonstrated for the vacuum state in Minkowski space by the Unruh effect, which is part of the thermodynamics of black holes. This seems inconsistent with the assumption of a covariant temperature. Dissipative case. The situation is more complicated in the more realistic dissipative case. Once one allows for dissipation, amounting to going from Euler to Navier-Stokes in the nonrelativistic case, trying to generalize this simple formulation runs into problems; thus it cannot be completely correct. In a gradient expansion at low order, the velocity field defined above from $\beta^\mu$ can be identified in the Landau-Lifshitz frame with the velocity field proportional to the energy current; see (86) in Hayata et al. However, in general, this identification involves an approximation, as there is no reason for these velocity fields to be exactly parallel; see, e.g., P. Van and T.S. Biró, First order and stable relativistic dissipative hydrodynamics, Physics Letters B 709 (2012), 106-110.
https://arxiv.org/abs/1109.0985 There are various ways to patch the situation, starting from a kinetic description (valid for dilute gases only): The first reasonable formulation, by Israel and Stewart, based on a first order gradient expansion, turned out to exhibit acausal behavior and not to be thermodynamically consistent. Extensions to second order (by Romatschke, e.g., https://arxiv.org/abs/0902.3663) or third order (by El et al., https://arxiv.org/abs/0907.4500) remedy the problems at low density, but shift the difficulties only to higher order terms (see Section 3.2 of Kovtun, https://arxiv.org/abs/1205.5040). A causal and thermodynamically consistent formulation involving additional fields was given by Müller and Ruggeri in their book Extended Thermodynamics (1993) and its 2nd edition, Rational Extended Thermodynamics (1998). Paradoxes. Concerning the paradoxes mentioned in the original post: Note that the formula $\langle E\rangle = \frac32 k_B T$ is valid only under very special circumstances (nonrelativistic ideal monatomic gas in its rest frame) and does not generalize. In general there is no simple relationship between temperature and velocity. One can say that your paradox arises because the three scenarios use three different concepts of temperature. What temperature is and how it transforms is a matter of convention, and the dominant convention changed some time after 1968, after Balescu's paper mentioned above, which shows that until 1963 it was universally defined as being frame-dependent. Today both conventions are alive, the frame-independent one being dominant. This post imported from StackExchange Physics at 2016-06-24 15:03 (UTC), posted by SE-user Arnold Neumaier
[Figure: Electric field lines emanating from a point positive electric charge suspended over an infinite sheet of conducting material.] An electric field is generated by electric charge and time-varying magnetic fields. At each point in space, the electric field describes the electric force that would be experienced by a motionless test particle of unit positive charge. The concept of an electric field was introduced by Michael Faraday.

Contents
1 Qualitative description
2 Definition
2.1 Classical electrodynamics
3 Superposition
3.1 Array of discrete point charges
3.2 Continuum of charges
4 Electrostatic fields
4.1 Uniform fields
4.2 Parallels between electrostatic and gravitational fields
5 Electrodynamic fields
6 Energy in the electric field
7 Further extensions
7.1 Definitive equation of vector fields
7.2 Constitutive relation
8 See also
9 References
10 External links

Qualitative description The electric field is a vector field. The field vector at a given point is defined as the force vector per unit charge that would be exerted on a stationary test charge at that point. An electric field is generated by electric charge (also called source charge), as well as by a time-varying magnetic field. The electric charge can be a single charge, a group of discrete charges, or any continuous distribution of charge. Electric fields contain electrical energy with energy density proportional to the square of the field magnitude. The electric field is to charge as gravitational acceleration is to mass. The SI units of the field are newtons per coulomb (N⋅C⁻¹) or, equivalently, volts per metre (V⋅m⁻¹), which in terms of SI base units are kg⋅m⋅s⁻³⋅A⁻¹. An electric field that changes with time, such as due to the motion of charged particles producing the field, influences the local magnetic field.
That is: the electric and magnetic fields are not separate phenomena; what one observer perceives as an electric field, another observer in a different frame of reference perceives as a mixture of electric and magnetic fields. For this reason, one speaks of "electromagnetism" or "electromagnetic fields". In quantum electrodynamics, disturbances in the electromagnetic fields are called photons. Definition Classical electrodynamics The electric field is defined in classical electrodynamics as follows (it is not used in quantum electrodynamics, in which case electric potentials are more fundamental). Consider a point charge $q$ with position $(x, y, z)$. Now suppose the charge is subject to a force $\mathbf{F}_{\text{on }q}$ due to a field generated by other charges. Since this force varies with the position of the charge, and by Coulomb's law is defined at all points in space, $\mathbf{F}_{\text{on }q}$ is a continuous function of the charge's position. This suggests that there is some property of the space that causes the force exerted on the charge $q$. This property is called the electric field and it is defined by $$\mathbf{E}(x,y,z)\equiv \frac{\mathbf{F}_{\text{on }q}(x,y,z)}{q}$$ Notice that the magnitude of the electric field has dimensions of force/charge. Mathematically, the E field can be thought of as a function that associates a vector with every point in space. Each such vector's magnitude is proportional to how much force a charge at that point would "feel" if it were present, and this force would have the same direction as the electric field vector at that point. The electric field defined above is caused by a configuration of other electric charges. This means that the charge $q$ in the equation above is not the charge that is generating the electric field, but rather the one being acted upon by it. The definition is part of the Lorentz force law, which provides a general definition of the classical electric and magnetic fields as well as the equation of motion for the charge $q$.
This definition does not give a means of computing the electric field caused by a group of charges - one has to solve Maxwell's equations. Superposition Array of discrete point charges Electric fields satisfy the superposition principle. If more than one charge is present, the total electric field at any point is equal to the vector sum of the separate electric fields that each point charge would create in the absence of the others. That is, $$\mathbf{E} = \sum_i \mathbf{E}_i = \mathbf{E}_1 + \mathbf{E}_2 + \mathbf{E}_3 + \cdots$$ where $\mathbf{E}_i$ is the electric field created by the $i$-th point charge. At any point of interest, the total E-field due to $N$ point charges is simply the superposition of the E-fields due to each point charge, given by $$\mathbf{E} = \sum_{i=1}^N \mathbf{E}_i = \frac{1}{4\pi\varepsilon_0} \sum_{i=1}^N \frac{Q_i}{r_i^2} \mathbf{\hat{r}}_i,$$ where $Q_i$ is the electric charge of the $i$-th point charge and $\mathbf{\hat{r}}_i$ is the unit vector of $\mathbf{r}_i$, the position of the point of interest with respect to charge $Q_i$. Continuum of charges The same holds for an infinite number of infinitesimally small elements of charge - i.e. a continuous distribution of charge. By taking the limit as $N$ approaches infinity in the previous equation, the electric field for a continuum of charges is given by the integral $$\mathbf{E} = \int_V d\mathbf{E} = \frac{1}{4\pi\varepsilon_0} \int_V\frac{\rho}{r^2} \mathbf{\hat{r}}\,\mathrm{d}V = \frac{1}{4\pi\varepsilon_0} \int_V\frac{\rho}{r^3} \mathbf{r}\,\mathrm{d}V$$ where $\rho$ is the charge density (the amount of charge per unit volume), $\varepsilon_0$ the permittivity of free space, and $\mathrm{d}V$ the differential volume element. This integral is a volume integral over the region of the charge distribution. The equations above express the electric field of point charges as derived from Coulomb's law, which is a special case of Gauss's law.
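As a concrete numerical illustration of the superposition sum above, here is a minimal sketch (my own example, not part of the original article; the charge values and positions are arbitrary assumptions):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity in F/m

def e_field(charges, point):
    """Vector sum of Coulomb fields: E = (1/4*pi*eps0) * sum_i Q_i r_hat_i / r_i^2."""
    E = [0.0, 0.0, 0.0]
    for Q, pos in charges:
        r = [p - s for p, s in zip(point, pos)]      # vector from charge to point
        r_mag = math.sqrt(sum(c * c for c in r))
        coef = Q / (4 * math.pi * EPS0 * r_mag**3)   # Q r_hat / r^2 = Q r / r^3
        E = [e + coef * c for e, c in zip(E, r)]
    return tuple(E)

# A dipole: +1 nC and -1 nC, 2 cm apart. At the midpoint both fields point
# from the positive toward the negative charge, so the contributions add.
dipole = [(1e-9, (-0.01, 0.0, 0.0)), (-1e-9, (0.01, 0.0, 0.0))]
print(e_field(dipole, (0.0, 0.0, 0.0)))
```

By symmetry the y and z components vanish at the midpoint, and the x component is twice the single-charge value $Q/(4\pi\varepsilon_0 r^2)$.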
While Coulomb's law is only true for stationary point charges, Gauss's law is true for all charges either in static form or in motion. Gauss's law establishes a more fundamental relationship between the distribution of electric charge in space and the resulting electric field. It is one of Maxwell's equations governing electromagnetism. Gauss's law allows the E-field to be calculated in terms of a continuous distribution of charge density. In differential form, it can be stated as $$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}$$ where $\nabla\cdot$ is the divergence operator and $\rho$ is the total charge density, including free and bound charge - in other words, all the charge present in the system. Electrostatic fields Electrostatic fields are E-fields which do not change with time, which happens when the charges are stationary. [Figure: Illustration of the electric field surrounding a positive (red) and a negative (blue) charge.] [Figure: Electric field between two conductors.] The electric field $\mathbf{E}$ at a point $\mathbf{r}$, that is, $\mathbf{E}(\mathbf{r})$, is equal to the negative gradient of the electric potential $\Phi(\mathbf{r})$, a scalar field at the same point: $$\mathbf{E} = -\nabla \Phi$$ where $\nabla$ is the gradient operator. This is equivalent to the force definition above, since the electric potential $\Phi$ is defined by the electric potential energy $U$ per unit (test) positive charge, $$\Phi = \frac{U}{q}$$ and force is the negative of the potential energy gradient: $$\mathbf{F} = - \nabla U$$ If several spatially distributed charges generate such an electric potential, e.g. in a solid, an electric field gradient may also be defined. Uniform fields A uniform field is one in which the electric field is constant at every point. It can be approximated by placing two conducting plates parallel to each other and maintaining a voltage (potential difference) between them; it is only an approximation because of edge effects.
Ignoring such effects, the equation for the magnitude of the electric field $E$ is $$E = - \frac{\Delta\phi}{d}$$ where $\Delta\phi$ is the potential difference between the plates and $d$ is the distance separating the plates. The negative sign arises because positive charges repel, so a positive charge will experience a force away from the positively charged plate, in the opposite direction to that in which the voltage increases. In micro- and nanoapplications, for instance in relation to semiconductors, a typical magnitude of an electric field is on the order of 1 volt/µm, achieved by applying a voltage of the order of 1 volt between conductors spaced 1 µm apart. Parallels between electrostatic and gravitational fields Coulomb's law, which describes the interaction of electric charges, $$\mathbf{F}=q\left(\frac{Q}{4\pi\varepsilon_0}\frac{\mathbf{\hat{r}}}{|\mathbf{r}|^2}\right)=q\mathbf{E},$$ is similar to Newton's law of universal gravitation, $$\mathbf{F}=m\left(-GM\frac{\mathbf{\hat{r}}}{|\mathbf{r}|^2}\right)=m\mathbf{g}.$$ This suggests similarities between the electric field $\mathbf{E}$ and the gravitational field $\mathbf{g}$, so sometimes mass is called "gravitational charge". Similarities between electrostatic and gravitational forces: Both act in a vacuum. Both are central and conservative. Both obey an inverse-square law (both are inversely proportional to the square of $r$). Differences between electrostatic and gravitational forces: Electrostatic forces are much greater than gravitational forces for natural values of charge and mass. For instance, the ratio of the electrostatic force to the gravitational force between two electrons is about $10^{42}$. Gravitational forces are attractive for like charges, whereas electrostatic forces are repulsive for like charges. There are no negative gravitational charges (no negative mass) while there are both positive and negative electric charges.
This difference, combined with the previous two, implies that gravitational forces are always attractive, while electrostatic forces may be either attractive or repulsive. Electrodynamic fields Electrodynamic fields are E-fields which do change with time, when charges are in motion. An electric field can be produced not only by a static charge, but also by a changing magnetic field (in which case it is a non-conservative field). Let $\mathbf{B}$ denote the magnetic flux density and let $\mathbf{A}$ denote the magnetic vector potential; $\varphi$ denotes the electric potential. Then the electric field $\mathbf{E}$ is given by $$\mathbf{E} = - \nabla \varphi - \frac{\partial \mathbf{A}}{\partial t}$$ in which $\nabla\varphi$ denotes the gradient of $\varphi$, and $\mathbf{B}$ is given by $$\mathbf{B} = \nabla \times \mathbf{A}$$ in which $\nabla\times$ denotes the curl. By taking the curl of the electric field, we obtain $$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$$ which is Faraday's law of induction, another one of Maxwell's equations. [1] Energy in the electric field The electrostatic field stores energy. The energy density $u$ (energy per unit volume) is given by [2] $$u = \frac{1}{2} \varepsilon |\mathbf{E}|^2,$$ where $\varepsilon$ is the permittivity of the medium in which the field exists and $\mathbf{E}$ is the electric field vector (in newtons per coulomb). The total energy $U$ stored in the electric field in a given volume $V$ is therefore $$U = \frac{1}{2} \varepsilon \int_{V} |\mathbf{E}|^2 \, \mathrm{d}V.$$ Further extensions Definitive equation of vector fields In the presence of matter, it is helpful in electromagnetism to extend the notion of the electric field into three vector fields, rather than just one: [3] $$\mathbf{D}=\varepsilon_0\mathbf{E}+\mathbf{P}$$ where $\mathbf{P}$ is the electric polarization - the volume density of electric dipole moments - and $\mathbf{D}$ is the electric displacement field. Since $\mathbf{E}$ and $\mathbf{P}$ are defined separately, this equation can be used to define $\mathbf{D}$.
The physical interpretation of $\mathbf{D}$ is not as clear as that of $\mathbf{E}$ (effectively the field applied to the material) or $\mathbf{P}$ (the induced field due to the dipoles in the material), but it still serves as a convenient mathematical simplification, since Maxwell's equations can be simplified in terms of free charges and currents. Constitutive relation The $\mathbf{E}$ and $\mathbf{D}$ fields are related by the permittivity of the material, $\varepsilon$. [4] [5] For linear, homogeneous, isotropic materials, $\mathbf{E}$ and $\mathbf{D}$ are proportional and the permittivity is constant throughout the region - there is no position dependence: $$\mathbf{D}(\mathbf{r})=\varepsilon\mathbf{E}(\mathbf{r})$$ For inhomogeneous materials, the permittivity depends on position within the material: $$\mathbf{D}(\mathbf{r})=\varepsilon(\mathbf{r})\mathbf{E}(\mathbf{r})$$ For anisotropic materials the $\mathbf{E}$ and $\mathbf{D}$ fields are not parallel, and so they are related by the permittivity tensor (a 2nd-order tensor field), in component form $$D_i=\varepsilon_{ij}E_j$$ For non-linear media, $\mathbf{E}$ and $\mathbf{D}$ are not proportional. Materials can have varying extents of linearity, homogeneity and isotropy. See also References ^ Huray, Paul G. (2009), Maxwell's Equations, Wiley-IEEE, Chapter 7, p. 205 ^ Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007, ISBN 81-7758-293-3 ^ Electromagnetism (2nd Edition), I.S. Grant, W.R. Phillips, Manchester Physics, John Wiley & Sons, 2008, ISBN 978-0-471-92712-9 ^ Electricity and Modern Physics (2nd Edition), G.A.G. Bennet, Edward Arnold (UK), 1974, ISBN 0-7131-2459-8 ^ Electromagnetism (2nd Edition), I.S. Grant, W.R. Phillips, Manchester Physics, John Wiley & Sons, 2008, ISBN 978-0-471-92712-9 External links Electric field in "Electricity and Magnetism", R Nave - Georgia State University 'Gauss's Law' - Chapter 24 of Frank Wolfs's lectures at University of Rochester 'The Electric Field' - Chapter 23 of Frank Wolfs's lectures at University of Rochester [1] - An applet that shows the electric field of a moving point charge.
Fields - a chapter from an online textbook. Learning by Simulations - interactive simulation of an electric field of up to four point charges. Java simulations of electrostatics in 2-D and 3-D. Interactive Flash simulation picturing the electric field of user-defined or preselected sets of point charges by field vectors, field lines, or equipotential lines. Author: David Chappell
I am by no means an expert on LLL, but I have worked with it before. Please correct me if this answer is in some way incorrect. Let $\beta = \{v_1,v_2,\ldots,v_n\}$ be a basis for $\mathbb{R}^n$. Then the lattice $L$ generated by $\beta$ is the set of integer linear combinations of $\beta$: $$L = \{ m_1v_1 + \cdots + m_nv_n : m_i \in \mathbb{Z} \} $$ This means the $\beta$-coordinate representations of vectors in $L$ consist entirely of integers. A basis $B$ for the lattice is a set of vectors in $L$ that spans $L$ by integer linear combinations of the vectors in $B$. Since each of the vectors in $B$ is in $L$, it must have integer coordinates with respect to $\beta$, but it need not have integer entries as a vector in $\mathbb{R}^n$. To make this concrete, consider the lattice $L$ spanned by $\beta = \{ (\sqrt{2},0), (0,\sqrt{3}) \}$. Then $B = \{(\sqrt{2},0),(\sqrt{2},\sqrt{3}) \}$ is a basis for $L$. Note that the vectors in $B$ have irrational entries. The coordinates of $(\sqrt{2},0)$ in the basis $\beta$ are $(1,0)$, and the coordinates of $(\sqrt{2},\sqrt{3})$ in $\beta$ are $(1,1)$. So while the vectors in $B$ do not have integer entries, they do have integer coordinates with respect to the basis $\beta$.
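A quick numerical check of this concrete example (my own sketch; `coords_in_basis` is a hypothetical helper that solves the 2×2 change-of-basis system by Cramer's rule):

```python
import math

# The lattice basis from the example above: beta = {(sqrt(2), 0), (0, sqrt(3))}.
b1 = (math.sqrt(2), 0.0)
b2 = (0.0, math.sqrt(3))

def coords_in_basis(v, b1, b2):
    """Solve v = m*b1 + n*b2 for (m, n) by Cramer's rule (2x2 case)."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    m = (v[0] * b2[1] - v[1] * b2[0]) / det
    n = (b1[0] * v[1] - b1[1] * v[0]) / det
    return m, n

# The alternative basis B = {(sqrt(2), 0), (sqrt(2), sqrt(3))} has irrational
# entries, but integer coordinates with respect to beta:
for v in [(math.sqrt(2), 0.0), (math.sqrt(2), math.sqrt(3))]:
    m, n = coords_in_basis(v, b1, b2)
    print(round(m), round(n))
```

Up to floating-point rounding, the coordinates come out as $(1,0)$ and $(1,1)$, matching the claim above.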
Cell stimulation with optically manipulated microsources Entry by Angelo Mao, AP 225, Fall 2010 Title: Cell stimulation with optically manipulated microsources Authors: Holger Kress, Jin-Gyu Park, Cecile O Mejean, Jason D Forster, Jason Park, Spencer S Walse, Yong Zhang, Dianqing Wu, Orion D Weiner, Tarek M Fahmy & Eric R Dufresne Journal: Nature Methods Volume: 6(12) Pages: 905-909 Summary The researchers developed an apparatus for creating gradients of small molecules using optically manipulated microsources. Briefly, the researchers loaded micrometer-scale polylactic-co-glycolic acid (PLGA) particles with a chemical called formyl-methionine-leucine-phenylalanine (fMLP). The fMLP steadily diffused out of the PLGA beads, and the cells used in this study, neutrophils, would migrate up the concentration gradient of fMLP. The researchers demonstrated that it was possible to move microsources in real time and to capture the movement of neutrophils. Soft Matter keywords: in vitro, diffusion, gradient Overview The researchers formulated a steady-state equation for the concentration gradient around microsources: <math>c(\rho, z) = c_{b} + c_{0}\times a(\frac{1}{(\rho^{2}+(z+h)^{2})^{1/2}} + \frac{1}{(\rho^{2}+(z-h)^{2})^{1/2}})</math> This equation is not very important, but there are several noteworthy aspects. First is the willingness of the researchers to assume steady state, even though the microsources should eventually run out of the chemoattractant agent. Second is the fact that it is strictly geometric around a point source, so the gradient is quite simple. The researchers tested their apparatus in various ways, including by having a cell be drawn to a single microsource (figure 1), by having a cell be simultaneously attracted by two microsources (figure 2), by loading microsources with a cell motility inhibitor called cytochalasin D (figure 3), and by moving microsources with optical tweezers and affecting cell response (figure 4). Figure 1.
Neutrophil migration in response to a microsource loaded with fMLP. In all figures, the time progression is by rows from left to right. The time scale is on the order of ~3 minutes. Figure 2. Neutrophil migration in response to two microsources loaded with fMLP. Figure 3. Neutrophil migration in response to a microsource loaded with cytochalasin D. Figure 4. Neutrophil migration in response to a migrating microsource. Discussion The researchers indeed demonstrate an apparatus that is capable of creating chemical gradients that can be manipulated using optical tweezers. However, the researchers do not specify how their optical tweezers technique works (presumably this was covered in an earlier paper) or, more importantly, in what situations it can be used. The cells seem to be moving on top of a 2D glass surface. The problem with this surface is that it is not a "soft matter" and is not physiologically relevant. Materials in the body are often soft, three-dimensional, and best mimicked with polymer-based hydrogels. Another limitation of this apparatus may be, as mentioned earlier, the fact that steady state is assumed for the microsources. Since the chemoattractant needs to be loaded into the microsources, the steady state can only apply for a finite time, before the gradient deteriorates. This means that it is difficult to maintain the gradient for long-term studies (although moving microsources around is a potential solution).
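The steady-state expression quoted in the overview is easy to evaluate directly. The following is an illustrative sketch with assumed parameter values (the paper's actual constants are not reproduced here); the two terms are the source and its mirror image in the reflecting glass surface:

```python
import math

def concentration(rho, z, h, a, c0, cb=0.0):
    """Steady-state concentration around a spherical microsource of radius a
    held at height h above a reflecting surface. rho, z are cylindrical
    coordinates; c0 is the source strength; cb the background level."""
    direct = 1.0 / math.sqrt(rho**2 + (z + h)**2)
    image = 1.0 / math.sqrt(rho**2 + (z - h)**2)
    return cb + c0 * a * (direct + image)

# The gradient is "strictly geometric": the field decays like 1/r away from
# the source. Sample at increasing lateral distance from a source held
# 5 um above the surface (all values assumed for illustration).
for rho in (5e-6, 10e-6, 20e-6):
    print(rho, concentration(rho, z=5e-6, h=5e-6, a=1e-6, c0=1.0))
```

The printed values fall off monotonically with distance, which is the 1/r decay the entry refers to.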
In this paper: https://arxiv.org/pdf/1805.08898.pdf I can understand why C4 means $P_E > P^*$: multiplying both sides by $1-\rho_k$ turns C4 into $(1-\rho_k)[\sum\limits_{k=1}^{K}\mathbf h_k^H \mathbf F_j \mathbf h_k+\sigma^2_{\alpha_k}]\ge P$, which is the same as $P_E > P^*$. But why does C5 also mean $\sum\limits_{k=1}^{K}\operatorname{tr}(\mathbf F_{K4E})>P_T$? (Here you can read $\mathbf F_{K4E}$ as $\mathbf F_K$.) $P_T$ is the power budget - it means I have only $P_T$ of power available to transmit the message - and $\sum\limits_{k=1}^{K}\operatorname{tr}(\mathbf F_{K4E})$ is the total transmit power I need.
I just wanted to give a more concrete idea of how we know these equations even though we have trouble proving analytical theorems about them. Stuff moving in space Consider any stuff (as in, any conserved quantity) distributed over space. We know that we can describe this with a time-dependent density field $\rho(x,y,z,t)$ such that any little volume $dV$ has some amount of stuff $\rho~dV$ at that point. We also know that this stuff might be flowing around over time and we formally treat this by saying that we want to know the flow through a little flat surface of area $dA,$ which is oriented in the $\hat n$ direction: that is, the surface is normal to $\hat n$ and "positive" flow will be in the $+\hat n$ direction. Combined together this is a vector $d\mathbf A = \hat n~dA$ and there is some vector field $\mathbf J(x,y,z,t)$ such that the amount of stuff which flows through this area over a time $\delta t$ is $\delta t~d\mathbf A\cdot\mathbf J(x,y,z,t).$ With $\rho$ and $\mathbf J$ we know almost everything. Since the stuff is conserved, we can say that in this box of volume $dV,$ if the amount of stuff in the box changes, it is either because there was a net flow into or out of the sides of the box, so we are doing some $\iint d\mathbf A\cdot \mathbf J$ which turns out by Gauss's theorem to be just $dV~\nabla\cdot\mathbf J,$ or else it came from outside the system we're studying, so there is some term $dV~\Phi$. 
Equating that to the change in the box $dV~(\partial\rho/\partial t)$ gives the simple starting equation $${\partial \rho\over\partial t} = -\nabla\cdot \mathbf J + \Phi.$$Now when we've got a flow field $\mathbf v(x,y,z,t)$ dictating how a fluid flows, the most dominant transport term is that the box flows downstream, $\mathbf J = \rho~\mathbf v + \mathbf j$ for some deviation $\mathbf j.$ Usually the principal deviation then comes from Fick's law, that there is a flow proportional to the difference in density between adjacent points, $\mathbf j = -D~\nabla \rho,$ but there may be more complex terms there; in particular we shall see pressure here. Conservation of momentum The key point here is that $p_x$, the momentum in the $x$-direction, is a stuff. It is a known conserved quantity. It is conserved as a direct result of Newton's third law which turns out, under Emmy Noether's celebrated theorem, to be the same as the statement that the laws of physics are the same at position $x$ as they are at position $x+\delta x$, for a suitable definition of "laws of physics." We are pretty sure about this, and we are pretty sure that the momentum of the fluid itself in the $x$-direction must therefore also be conserved, and this is $\rho~v_x$ where I am shifting definitions a bit on you: $\rho$ now refers to the mass density field and $v_x$ still refers to the fluid velocity in the $x$-direction. Now a flow of momentum per unit time, which we said is what $\mathbf J\cdot d\mathbf A$ is, is a force. Therefore $\mathbf J$ naturally takes the form of a force per unit area in this context. 
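The bookkeeping above - a density, a flux with a downstream term plus Fick's-law diffusion, and exact conservation - can be sketched in a few lines of code. This is my illustrative 1-D toy with assumed parameter values, not anything from the original derivation; the point is that because the update only moves stuff between adjacent cells, the total amount of stuff is conserved to rounding error:

```python
# 1-D conserved "stuff" on a periodic domain: J = rho*v - D*drho/dx
# (advection plus Fick's law), no source term Phi. All values illustrative.
N, L = 200, 1.0
dx = L / N
v, D = 0.5, 1e-3
dt = 0.2 * min(dx / v, dx * dx / (2 * D))   # stable explicit time step

# A lump of extra stuff on a uniform background.
rho = [1.0 + (0.5 if N // 4 <= i < N // 2 else 0.0) for i in range(N)]
total0 = sum(rho) * dx

for _ in range(500):
    # Flux through the face between cell i-1 and cell i
    # (upwind advection, central diffusion); rho[-1] wraps periodically.
    J = [v * rho[i - 1] - D * (rho[i] - rho[i - 1]) / dx for i in range(N)]
    # d(rho)/dt = -(J_out - J_in)/dx for each cell.
    rho = [rho[i] - dt / dx * (J[(i + 1) % N] - J[i]) for i in range(N)]

print(abs(sum(rho) * dx - total0))  # conserved up to floating-point rounding
```

The flux differences telescope when summed over the ring, so conservation here is exact by construction, which is precisely the content of $\partial\rho/\partial t = -\nabla\cdot\mathbf J$ with $\Phi = 0$.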
Now we know that Newton's expression for viscous forces was in fact to write $F_x = \mu~A~v_x/y$ where I am moving a surface of a fluid at speed $v_x$ at a perpendicular distance $y$ from a place where it is being held still; it will not surprise you at all to see that this is very similar to Fick's law and can be written as just $\mathbf j_\text{viscosity} = -\mu~\nabla v_x.$ To that we also need to add the effects of pressure, as a lowering in pressure also drives a fluid motion; this is a little bit harder to reason out but it takes the form that we can imagine a constant flow in the $x$-direction of $p~\hat x$ and then deviations in this flow would produce the change in momentum per unit time $-\partial p/\partial x$ through this divergence term. (That's a little bit of a sloppy way to show that we are talking about a stress tensor and part of it is $p~\mathbf 1$, the identity matrix multiplied by the pressure.) Combining these two components of $\mathbf j$ we have $${\partial \over\partial t}(\rho~v_x) = -\nabla\cdot (\rho~v_x~\mathbf v - \mu \nabla (v_x)) - \frac{\partial p}{\partial x} + \Phi_x.$$The external contribution $\Phi$ comes from forces influencing the fluid from outside, like gravity. In the Navier-Stokes equations the Millennium Prize has restricted itself to a considerably simpler case where $\nabla\cdot\mathbf v = 0$ and $\rho$ and $\mu$ are constant, which we call "incompressible flow." This is generally a valid assumption when you're interacting with a fluid at speeds much lower than the speed of sound in that fluid; then the fluid would rather move away from you than be compressed into any one place. In this case we can commute $\rho$ out of all of the spatial derivatives and then divide by it, so that the only impact is to rewrite $\nu=\mu/\rho$ and $\lambda=p/\rho$ and $a_x=\Phi_x/\rho$, eliminating the unit of mass from the equation.
For $v_x$ we have specifically,$${\partial v_x\over\partial t} + \mathbf v\cdot\nabla v_x - \nu \nabla^2 v_x = - \frac{\partial \lambda}{\partial x} + a_x,$$ and then we can extend the above analysis to the directions $y,z$ too to find,$$\dot{\mathbf v} + (\mathbf v\cdot\nabla)\mathbf v - \nu \nabla^2 \mathbf v = - \nabla \lambda + \mathbf a.$$This is the version of the Navier-Stokes equations written down in the Millennium Prize; we have a very straightforward explanation of this as "The flow of momentum in a small box flowing downstream in an incompressible homogeneous Newtonian fluid is due entirely to Fick's-law diffusion of the momentum due to the viscosity of the fluid, plus a force due to pressure gradients inside the fluid, plus forces imposed by the external world." Why this equation? The understanding of the physics of how we got to this equation is not in question. What's at stake is the mathematics of this equation, in particular this $(\mathbf v \cdot \nabla) \mathbf v$ term which contains $\mathbf v$ twice and thereby makes it a nonlinear partial differential equation: given two flow fields $\mathbf v_{1,2}$ which are valid, in general $\alpha \mathbf v_1 + \beta \mathbf v_2$ will not solve this equation, removing our most powerful tool from our toolbox. Nonlinearity turns out to be unbelievably hard to solve in general, and essentially the Clay Mathematics Institute is giving the million-dollar prize for anyone who cracks nonlinear differential equation theory strongly enough that they can answer one of the more basic mathematical questions about these Navier-Stokes equations, as a "most basic example" for their new theoretical toolkit. The idea of the Clay prizes is that they are specific problems (which is important for awarding a prize for their solution!) but that they seem to require powerful new general ideas which would allow our mathematics to go into places where it has historically been unable to go.
You see this for example in $\text{P} = \text{NP}$: it's a very specific question, but to answer it we would seem to need a better handle on "here is a classification of the things which computers can do efficiently, and here are some things which a computer can't efficiently do" — something which nobody has yet been able to convincingly present. A new toolbox which could resolve this "stupid little" question would therefore profoundly improve our ability to work on a huge class of related problems in computation.
Entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. The formula for entropy is $$H(p_1, \ldots, p_k)=-\sum_{i=1}^{k} p_i\log_2(p_i)\ \text{[bit]}$$ So if I were to calculate the entropy of a coin toss, it would be $$H(\frac{1}{2}, \frac{1}{2})=-(\frac{1}{2}\log_2(\frac{1}{2})+\frac{1}{2}\log_2(\frac{1}{2}))=-(-\frac{1}{2}-\frac{1}{2})=1 \text{ bit}$$ But why is there a $\frac{1}{2}$ before each $\log$? Also, if I were doing an experiment where there are $3$ outcomes, each with probability $\frac{1}{3}$, would the entropy be $$H(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})=-(\frac{1}{3}\log_2(\frac{1}{3})+\frac{1}{3}\log_2(\frac{1}{3})+\frac{1}{3}\log_2(\frac{1}{3}))?$$
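The formula is easy to check numerically; here is a minimal sketch (the function name is mine) in which each term is weighted by its own probability $p_i$:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: H = -sum p_i * log2(p_i)."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))        # fair coin: 1.0 bit
print(entropy([1/3, 1/3, 1/3]))   # three equal outcomes: log2(3) ≈ 1.585 bits
```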
On page 5 of Landau & Lifshitz, Fluid Mechanics (2nd edition), the authors pose the following problem: Write down the equations for one-dimensional motion of an ideal fluid in terms of the variables $a$, $t$, where $a$ (called a Lagrangian variable) is the $x$ coordinate of a fluid particle at some instant $t=t_0$. The authors then go on to give their solutions and assumptions. Here are the important parts: The coordinate $x$ of a fluid particle at an instant $t$ is regarded as a function of $t$ and its coordinate $a$ at the initial instant: $x=x(a,t)$. From the condition of mass conservation the authors arrive at (where $\rho_0 = \rho_0(a)$ is the given initial density distribution): $$ \rho\,\mathrm{d}x = \rho_0\, \mathrm{d}a $$ or alternatively: $$ \rho\left(\frac{\partial x}{\partial a}\right)_t = \rho_0 $$ Now the authors go on to write out Euler's equation, where I start to miss something. With the velocity of the fluid particle $v=\left(\frac{\partial x}{\partial t}\right)_a$ and $\left(\frac{\partial v}{\partial t}\right)_a$ the rate of change of the velocity of the particle during its motion, they write: $$ \left(\frac{\partial v}{\partial t}\right)_a = -\frac{1}{\rho_0} \left(\frac{\partial p}{\partial a}\right)_t $$ How are the authors arriving at that equation? In particular, when looking at Euler's equation: $$ \frac{\partial\mathbf{v}}{\partial t} + \left( \mathbf{v} \cdot \textbf{grad} \right) \mathbf{v} = - \frac{1}{\rho} \textbf{grad}\, p $$ what happens with the second term on the LHS, $\left( \mathbf{v} \cdot \textbf{grad} \right) \mathbf{v}$? Why does it not appear in the authors' solution?
Please refer to the question here for additional details. Theorem: If $R$ and $S$ are rings and $\phi: R \to S$ is a ring homomorphism, then $R/\ker(\phi) \cong \operatorname{im}(\phi)$, where the isomorphism is given by $g(n+\ker(\phi)) = \phi(n)$. Consider the rings $\mathbb{Z}$, $\mathbb{Z}_{4} = \{\bar{0},\bar{1},\bar{2},\bar{3}\}$ and $\mathbb{Z}_{12} = \{[0],[1],[2],\ldots,[11]\}$. Define $\phi: \mathbb{Z} \to \mathbb{Z}_{12}$ by $\phi(x) = 9x$. By applying the theorem, show that there is a ring isomorphism $g: \mathbb{Z}_{4} \cong \operatorname{im}(\phi)$. My attempt: This is what I did: Since $\ker(\phi) = \{4k \mid k \in \mathbb{Z}\}$ and $\operatorname{im}(\phi) = \{[0],[3],[6],[9]\}$, $g(n+4\mathbb{Z}) = \phi(n) = 9n$. Thus, $\mathbb{Z}_{4} \cong \{[0],[3],[6],[9]\} \subseteq \mathbb{Z}_{12}$. Is this correct? Otherwise, any hints would be much appreciated.
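As a quick numeric sanity check of this map (the check itself is mine, not part of the question), one can verify exhaustively that $g(n) = 9n \bmod 12$ hits exactly $\{0,3,6,9\}$ and respects both ring operations:

```python
# Check that g(n + 4Z) = 9n mod 12 is a bijective ring homomorphism
# from Z_4 onto {0, 3, 6, 9} inside Z_12.

def g(n):
    return (9 * n) % 12

image = sorted({g(n) for n in range(4)})
print(image)  # [0, 3, 6, 9]

# Homomorphism properties, checked exhaustively over Z_4:
for a in range(4):
    for b in range(4):
        assert g((a + b) % 4) == (g(a) + g(b)) % 12   # additive
        assert g((a * b) % 4) == (g(a) * g(b)) % 12   # multiplicative
print("g is a ring homomorphism mapping Z_4 bijectively onto im(phi)")
```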
How many people do you need before the probability that two of them have the same birthday is 50%? The answer to this question is, amazingly, 23. This surprising result is known as the birthday paradox. The word paradox is used here in the sense that the result is intuitively unexpected, not in the usual sense of a logical contradiction. To determine a formula to compute the probability, and to see that 23 is indeed the correct answer, it is easier to compute the probability that there are no two people in the group that share a birthday. The desired result is then simply one minus that probability, because the two events are mutually exclusive and cover all possibilities. If there is only one person, then the probability that there is no other person with the same birthday is 1. Moreover, one day of the year is now “occupied” by that person. If a second person is added, his or her birthday has to be one of the other 364 days. Hence, the probability that the person has a different birthday is 364/365. Two days of the year are now “occupied”. I assume a year of 365 days, so ignoring leap years. I also assume that all dates are equally probable as a birthday. If a third person is added, then there are 363 days left if he or she has to have a different birthday from the first two. The total probability is then \[1\times\frac{364}{365}\times\frac{363}{365}.\] This can then be extended for \(n\) people, as \[\frac{365}{365}\times\cdots\times\frac{365-n+1}{365}=\prod_{i=0}^{n-1}\frac{365-i}{365}=\prod_{i=0}^{n-1}\left(1-\frac{i}{365}\right).\] Hence, for \(n\) people, the probability that two of them have the same birthday is \[1-\prod_{i=0}^{n-1}\left(1-\frac{i}{365}\right).\] The probability is 97% for 50 people, and crosses the 99% mark for 57 people. In a group of 70 people, the chance is already 99.9%. Hence, if you invite 70 people to your wedding, there’s only about a one in a thousand chance that there are no two people with the same birthday.
For a one in a million chance, invite 97 people… Python Code As usual, the Python implementation is short. The example code is for 23 people.

from __future__ import print_function
from __future__ import division

n = 23
p = 1
for i in range(n):
    p *= 1 - i / 365
print(1 - p)
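The thresholds quoted above can be recovered from the same computation; here is a small extension (function names are mine) that searches for the smallest group size crossing each probability:

```python
def p_shared(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p = 1.0
    for i in range(n):
        p *= 1 - i / days
    return 1 - p

def first_n_exceeding(threshold):
    """Smallest group size whose shared-birthday probability exceeds threshold."""
    n = 1
    while p_shared(n) < threshold:
        n += 1
    return n

print(first_n_exceeding(0.5))       # 23
print(first_n_exceeding(0.99))      # 57
print(first_n_exceeding(0.999))     # 70
print(first_n_exceeding(1 - 1e-6))  # 97
```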
Multiscale stochastic homogenization of monotone operators 1. Department of Computational Mathematics, Chalmers University, SE-412 96 Göteborg, Sweden $$\frac{\partial u^\omega_\varepsilon}{\partial t}- \operatorname{div}\left(a\left(T_1\left(\frac{x}{\varepsilon_1}\right)\omega_1, T_2\left(\frac{x}{\varepsilon_2}\right)\omega_2, t, D u^\omega_\varepsilon\right)\right)=f.$$ It is shown, under certain structure assumptions on the random map $a(\omega_1,\omega_2,t,\xi)$, that the sequence $\{u^\omega_\varepsilon\}$ of solutions converges weakly in $L^p(0,T;W^{1,p}_0(\Omega))$ to the solution $u$ of the homogenized problem $\frac{\partial u}{\partial t} - \operatorname{div}( b( t, D u )) = f$. Mathematics Subject Classification: 35B27, 35B4. Citation: Nils Svanstedt. Multiscale stochastic homogenization of monotone operators. Networks & Heterogeneous Media, 2007, 2 (1) : 181-192. doi: 10.3934/nhm.2007.2.181
Latest revision as of 00:35, 5 June 2009

A Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.

Basic Estimates

Trivially, we have [math]k_n\le k_{n+1}\le 3k_n[/math]. Since the Cartesian product of two Kakeya sets is another Kakeya set, the upper bound can be extended to [math]k_{n+m} \leq k_m k_n[/math]; this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.

Lower Bounds

To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence [math]k_n\ge 3^{(n+1)/2}.[/math] One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math].
The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math]. The better estimate [math]k_n\ge (9/5)^n[/math] is obtained in a paper of Dvir, Kopparty, Saraf, and Sudan. (In general, they show that a Kakeya set in the [math]n[/math]-dimensional vector space over the [math]q[/math]-element field has at least [math](q/(2-1/q))^n[/math] elements). A still better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus, [math]k_n \ge 3^{6(n-1)/11}.[/math] Upper Bounds We have [math]k_n\le 2^{n+1}-1[/math] since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. This estimate can be improved using an idea due to Ruzsa (seems to be unpublished). 
Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]). Putting all this together, we seem to have [math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math] or [math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math]
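The two numeric constants in the final bounds are easy to verify; a quick check (mine, not part of the wiki page):

```python
# Numeric values of the two constants in the bounds for k_n.
lower = 3 ** (6 / 11)        # from the slices argument
upper = (27 / 4) ** (1 / 3)  # from the Ruzsa-style construction

print(round(lower, 4), round(upper, 4))  # 1.8207 1.8899
```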
I am trying to attack TAOCP once again; given the sheer literal heaviness of the volumes, I have trouble committing to it seriously. In TAOCP Volume 1, page 8 ("Basic concepts"), Knuth writes: Let $A$ be a finite set of letters. Let $A^*$ be the set of all strings in $A$ (the set of all ordered sequences $x_1 x_2 \ldots x_n$ where $n \ge 0$ and $x_j$ is in $A$ for $1 \le j \le n$). The idea is to encode the states of the computation so that they are represented by strings of $A^*$. Now let $N$ be a non-negative integer and let $Q$ (the states) be the set of all $(\sigma, j)$, where $\sigma$ is in $A^*$ and $j$ is an integer $0 \le j \le N$; let $I$ (the input) be the subset of $Q$ with $j=0$ and let $\Omega$ (the output) be the subset with $j = N$. If $\theta$ and $\sigma$ are strings in $A^*$, we say that $\theta$ occurs in $\sigma$ if $\sigma$ has the form $\alpha \theta \omega$ for strings $\alpha$ and $\omega$. To complete our definition, let $f$ be a function of the following type, defined by the strings $\theta_j$, $\phi_j$ and the integers $a_j$, $b_j$ for $0 \le j < N$: $f((\sigma, j)) = (\sigma, a_j)$ if $\theta_j$ does not occur in $\sigma$; $f((\sigma, j)) = (\alpha \phi_j \omega, b_j)$ if $\alpha$ is the shortest possible string for which $\sigma = \alpha \theta_j \omega$; $f((\sigma,N)) = (\sigma, N)$. Not being a computer scientist, I have trouble grasping the whole passage. I kind of get the idea that is behind a system of opcodes, but I haven't progressed effectively in understanding. I think that the main problem is that I don't know how to read it effectively. Would it be possible to explain the passage above so that I can understand it, and give me a strategy to get into the logic of interpreting these statements?
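One way to internalize a definition like this is to run it. Below is a minimal Python sketch of the machine described in the passage (the example rule set is my own, not Knuth's): the state is a pair $(\sigma, j)$; at each step, if $\theta_j$ does not occur in $\sigma$ we just jump to state $a_j$; otherwise we replace the leftmost occurrence of $\theta_j$ by $\phi_j$ and jump to $b_j$; state $N$ halts.

```python
def run(sigma, rules, N, max_steps=10_000):
    """Run the string-rewriting machine from the passage.

    rules[j] = (theta_j, phi_j, a_j, b_j) for 0 <= j < N.
    The state is (sigma, j); j == N is the halting state.
    """
    j = 0
    for _ in range(max_steps):
        if j == N:
            return sigma
        theta, phi, a, b = rules[j]
        k = sigma.find(theta)           # leftmost occurrence of theta_j
        if k == -1:
            j = a                       # theta_j does not occur in sigma
        else:
            sigma = sigma[:k] + phi + sigma[k + len(theta):]
            j = b
    raise RuntimeError("no halt within max_steps")

# Example: a single rule that deletes the leftmost "ab" and loops back to
# itself; when no "ab" remains, it jumps to the halting state N = 1.
rules = {0: ("ab", "", 1, 0)}
print(run("aabb", rules, N=1))   # "" (aabb -> ab -> "")
print(run("abba", rules, N=1))   # "ba"
```

The point of the formalism is that "opcodes" are just indices $j$ into a fixed table of find-and-replace rules, and a computation is repeated application of $f$ until the halting index is reached.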
Instead of constructing a regular expression, it is better to construct a DFA, or rather, a DFA with some "missing transitions". The DFA has the following states: $q_0$: initial state. $q_A$: state after seeing one A. $q_B$: state after seeing one B. $q_{BB}$: state after seeing two Bs. The transition function is$$\begin{array}{c|cccc}& q_0 & q_A & q_B & q_{BB} \\\hlineq_0 & & A & B & \\q_A & & & B & \\q_B & & A & & B \\q_{BB} & & A & &\end{array}$$This is read as follows: when at row $q$, upon reading $\sigma$, move to the appropriate column, if any. If there is no appropriate column, we have reached the (invisible) sink state. Using this, it is not hard to see that the number of words of length $n$ in your language is$$\begin{pmatrix}1 & 0 & 0 & 0\end{pmatrix}\begin{pmatrix}0 & 1 & 1 & 0 \\0 & 0 & 1 & 0 \\0 & 1 & 0 & 1 \\0 & 1 & 0 & 0\end{pmatrix}^n\begin{pmatrix}1 \\ 1 \\ 1 \\ 1\end{pmatrix}.$$Calculation shows that for $n \geq 1$, the number of words of length $n$ in your language is$$\mathrm{round}(C \lambda^n), \text{where } C \approx 1.678735602594163 \text{ and } \lambda \approx 1.324717957244746.$$The eigenvalue $\lambda$ is also the unique real root of $x^3-x-1$. A glance at the OEIS reveals that your sequence is A164001, and has the recurrence$$a_n =\begin{cases}n+1 & n \leq 3, \\a_{n-2} + a_{n-3} & n \geq 4.\end{cases}$$(The recurrence $a_n = a_{n-2} + a_{n-3}$ is of course related to the polynomial $x^3-x-1$ above.)
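The matrix-power count is easy to check numerically; here is a small pure-Python sketch of the computation above (variable names are mine):

```python
# Transition matrix from the answer: rows/columns index q0, qA, qB, qBB,
# with the invisible sink state omitted (missing transitions contribute 0).
M = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 1, 0, 0]]

def count_words(n):
    """Number of words of length n: e_{q0}^T  M^n  (1,1,1,1)^T."""
    v = [1, 0, 0, 0]   # start in q0
    for _ in range(n):
        v = [sum(v[i] * M[i][j] for i in range(4)) for j in range(4)]
    return sum(v)

counts = [count_words(n) for n in range(10)]
print(counts)  # [1, 2, 3, 4, 5, 7, 9, 12, 16, 21]

# Matches the A164001 recurrence a_n = a_{n-2} + a_{n-3} for n >= 4:
for n in range(4, 10):
    assert counts[n] == counts[n - 2] + counts[n - 3]
```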
Look at this simplified internal layout (from its datasheet): You can see there's an opamp in there driving an NPN transistor, whose collector is considered the output (the anode is grounded). This means that, by taking a quantity from the output (cathode) and bringing it to the input, you create negative feedback (because the collector inverts the phase), so RF1 and CF1, together with RH1 and RL1, are there to provide a zero in the overall transfer function: \$f_Z=\frac{1}{2\pi C(R_{F1}+R_{H1}||R_{L1})}\$. That zero compensates for the loss of phase of the converter at around the switching frequency, ensuring it does not oscillate. Here's a quick proof: The zero frequency is at \$\frac{1}{2\pi\cdot 10^{-9}\,[10000 + (1000||1000)]}\approx 15.16\ \text{kHz}\$. Looking at the readings, you see the phase is 45° at around 13.2 kHz, which differs from the math because of the gain-bandwidth product of the opamp, plus the parasitic capacitances of the transistor itself (both included here purely for illustration, no other reason), which influence both the magnitude and the phase. In short, the TL431, together with the feedback network, acts like a frequency-compensated error amplifier.
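A quick sanity check of the zero-frequency arithmetic, using the component values from the example above:

```python
from math import pi

C = 1e-9            # CF1, farads
RF1 = 10_000        # ohms
RH1 = RL1 = 1_000   # ohms

r_par = RH1 * RL1 / (RH1 + RL1)         # RH1 || RL1 = 500 ohms
f_z = 1 / (2 * pi * C * (RF1 + r_par))  # zero frequency
print(f_z)  # ≈ 15158 Hz, i.e. about 15.16 kHz
```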
I have learnt that matrix mechanics came before Schroedinger's wave mechanics, however introductory quantum mechanics textbooks introduce you to wave mechanics first. The way in which the transition to matrix mechanics is made is by defining the matrix elements: $$ H_{mn} = \int _{-\infty}^{+\infty} \psi_m^* \hat{H} \psi_n ~\mathrm dV $$ but these elements are defined using a wavefunction. How did Heisenberg (and others too) come up with matrix mechanics and what was the motivation? I have seen the application of matrix mechanics to angular momentum but how would I apply it to a simple system like a particle trapped in an infinite potential well without starting from the wave mechanics point of view?
A sample of a random process is given as: $$ x(t) = A\cos(2\pi f_0t) + Bw(t) $$ where $w(t)$ is a white noise process with $0$ mean and a power spectral density of $\frac{N_0}{2}$, and $f_0$, $A$ and $B$ are constants. Find the autocorrelation function. Here's my attempt at a solution: Let $a = 2\pi f_0t$, and $b = 2\pi f_0(t+\tau)$ \begin{align} \text{Autocorrelation of } x(t) & = E\left\{x(t)x(t + \tau)\right\}\\ & = E\left\{\left(A\cos(a) + Bw(t)\right)\left(A\cos(b) + Bw(t+\tau)\right)\right\}\\ & = E\{A^2\cos(a)\cos(b) + AB\cos(a)w(t+\tau) + AB\cos(b)w(t)\\&\quad + B^2w(t)w(t+\tau)\}\\ & = E\left\{A^2\cos(a)\cos(b)\right\} + E\left\{AB\cos(a)w(t+\tau)\right\} + E\left\{AB\cos(b)w(t)\right\}\\&\quad + E\left\{B^2w(t)w(t+\tau)\right\}\\ & = E\left\{A^2\cos(a)\cos(b)\right\} + E\left\{B^2w(t)w(t+\tau)\right\}\\ & = E\left\{A^2\cos(a)\cos(b)\right\} + B^2R_w(\tau)\\ & = E\left\{A^2\cos(a)\cos(b)\right\} + B^2\left(\frac{N_0}{2}\right)\delta(\tau)\\ \end{align} The expectation terms with the noise in them all equal $0$ (the last one is just the autocorrelation of white noise), hence the simplification above. Using the trigonometric identity: $$ \cos(a)\cos(b) = \frac 12\left[\cos(a + b) + \cos(a - b)\right] $$ we have: \begin{align} \text{Autocorrelation of } x(t) & = E\left\{A^2\cos(a)\cos(b)\right\} + B^2\left(\frac{N_0}{2}\right)\delta(\tau)\\ & = E\left\{A^2\cdot\frac 12\left[\cos(a+b)+\cos(a-b)\right]\right\} + B^2\left(\frac{N_0}{2}\right)\delta(\tau)\\ & = \frac{A^2}{2}\left[E\{\cos(a+b)\} + E\{\cos(a-b)\}\right] + B^2\left(\frac{N_0}{2}\right)\delta(\tau)\\ \end{align} We're dealing with constant terms, so the expectation goes away, and substituting back our definitions of $a$ and $b$ we get: $$ \frac {A^2}2 \left[\cos(2\pi f_0(2t + \tau)) + \cos(2\pi f_0\tau)\right] + B^2\left(\frac{N_0}{2}\right)\delta(\tau) $$ For some reason I can't help but feel I did something incorrectly calculating that autocorrelation ...
it's supposed to be a function of $\tau$, but there's a $t$ in there ... I would very much appreciate it if someone could point me in the right direction, or explain what I messed up. I don't know whether it matters, but in this class we're dealing only with wide-sense stationary processes.
To start out, I think it is important to clarify one issue. If we think of many quantum messages over which the information is spread, we can ask two questions that both seem consistent with your phrasing: 1) If I measure each message as I receive it, what is the information gained from that message? Or 2) If I store the quantum messages, how much information can be extracted from the first $n-1$ of them, versus the first $n$ of them? These problems are not the same, and can have very different answers depending on the encoding. Indeed, quantum messages sent over two or more zero-capacity channels can contain non-zero information (see arXiv:0807.4935 for example). We've only known about weirdness like this for 3 years, so it's worth keeping in mind that older papers might invoke additivity conjectures which have recently been proven false. As regards the black hole information problem, it is (2) that seems the most relevant. Holevo information gives an obvious way to bound the information contained in the system, and to calculate an upper bound on the rate of information leakage. This can probably be improved in the black hole setting, since there is no way to introduce additional randomness (which would essentially mean creating information). The Holevo information is given by $\chi = S(\rho) - \sum_i p_i S(\rho_i)$, where $\rho = \sum_i p_i \rho_i$ is the density matrix for the system, $\rho_i$ is the density matrix used to encode a particular message which occurs with probability $p_i$, and $S(\rho) = -\mbox{Tr}(\rho \log_2 \rho)$ is the von Neumann entropy. The mutual information between the encoded information and the measurement outcomes is bounded from above by this quantity (see the link in Frédéric Grosshans' answer for a more detailed description of Holevo's theorem). Thus for a given encoding, you can calculate the Holevo information as a function of the number of messages received, which gives you the kind of thing you are looking for.
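As a concrete illustration (my own example ensemble, not from the post), the Holevo quantity for two equiprobable pure qubit states $|0\rangle$ and $|+\rangle$ can be computed directly; since pure states have $S(\rho_i)=0$, here $\chi = S(\rho)$:

```python
from math import log2, sqrt

def entropy2(rho):
    """Von Neumann entropy (base 2) of a real symmetric 2x2 density matrix."""
    a, b, d = rho[0][0], rho[0][1], rho[1][1]
    half_tr, det = (a + d) / 2, a * d - b * b
    gap = sqrt(max(half_tr ** 2 - det, 0.0))
    eigs = [half_tr + gap, half_tr - gap]       # eigenvalues of rho
    return -sum(l * log2(l) for l in eigs if l > 1e-12)

rho0 = [[1.0, 0.0], [0.0, 0.0]]   # |0><0|
rho1 = [[0.5, 0.5], [0.5, 0.5]]   # |+><+|
rho = [[(rho0[i][j] + rho1[i][j]) / 2 for j in range(2)] for i in range(2)]

chi = entropy2(rho) - 0.5 * entropy2(rho0) - 0.5 * entropy2(rho1)
print(chi)  # ≈ 0.6009 bits — strictly less than 1 bit per qubit
```

Because the two encoding states are non-orthogonal, the accessible information per message is bounded by about 0.6 bits rather than the full 1 bit a classical bit would carry.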
You also ask: "For a generic decoding, is it known how much time you need in order to gain access to a finite fraction of the information? Are there some universal results about the asymptotics of such a process (in the spirit of "critical exponents")?" The answer to this is that there is a trivial upper bound on the time of infinity, and a lower bound which comes from Holevo's bound: at most $n$ bits can be gained per $n$ qubits received. Given more information about the process, better bounds are of course probably possible. The reason for the infinite upper bound is as follows: If you encode quantum information with an error correction code which can detect errors on up to $d$ sites, it is necessarily the case that it is impossible to obtain the correct result for a measurement on the encoded information with probability better than guessing, if the measurement is restricted to fewer than $d$ sites. Now, it is easy to construct codes with $d$ arbitrarily large as long as the encoding can be made even larger, and hence we get the infinite upper bound. You can use the same trick to make pretty much any distribution you want, as long as it respects the lower bound given by Holevo information (which is tight for some encodings).
Energy model of single component phase diagram van der Waals, Nobel Prize in Physics, 1910. <math>\left( p+\frac{a}{V^{2}} \right)\left( V-b \right)=RT</math> The van der Waals equation shows the transition from an ideal gas at high temperature to a two-phase system at temperatures below the critical temperature. The van der Waals equation does not describe the liquid-solid transition, so it lacks a triple point. The van der Waals equation is usually plotted as pressure versus volume per mole, but for our purposes, a plot of pressure versus temperature will be more illuminating. The ideas are the same, except that the liquid/solid phase transition is included, with the resulting triple point. In fact the phase diagram can be plotted in three dimensions. It is thought that Maxwell constructed models such as these and sent them to Gibbs (without much approval on Gibbs' part; check out this story!). To extend the phase diagram to the condensed phases, liquids and solids, the phase diagram is often plotted as temperature versus density. We can compare the pressure-versus-temperature with the temperature-versus-density representation. The latter captures the features at higher density, that is, the liquid and solid phases. Geometry and entropic derivations are useful because many “particles” have quite short-range interactions and so behave as hard spheres. For hard spheres the free energy is simply <math>A=-TS</math>, because the interaction energy is always zero; there is no change in heat. The densest packing is face-centered cubic, a volume fraction of <math>\varphi =\frac{\pi }{\sqrt{18}}\simeq 0.74</math>. There is another limit, random close packing: <math>\varphi _{RCP}\simeq 0.63</math>. We can consider what the phase diagram might look like as the strength of interaction decreases. Phase diagrams for globular proteins Phase diagrams for globular proteins have been showing up in the literature en masse over the last few years, mostly due to the troubles of protein crystallization.
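The critical point of the van der Waals equation follows from setting the first and second volume derivatives of the pressure to zero, giving <math>V_c = 3b</math>, <math>T_c = 8a/(27Rb)</math>, <math>p_c = a/(27b^2)</math>. A quick numerical sketch (the CO2 constants below are commonly quoted textbook values, inserted here for illustration):

```python
# Van der Waals critical point from the standard formulas,
# using commonly quoted CO2 constants (illustrative inputs).
R = 0.08314   # L·bar/(mol·K)
a = 3.64      # L^2·bar/mol^2
b = 0.04267   # L/mol

Vc = 3 * b                  # critical molar volume, L/mol
Tc = 8 * a / (27 * R * b)   # critical temperature, K
pc = a / (27 * b ** 2)      # critical pressure, bar

print(Vc, Tc, pc)  # roughly 0.128 L/mol, 304 K, 74 bar
```

The computed ~304 K and ~74 bar are close to the measured CO2 critical point, which is the usual sanity check for these constants.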
MIT has been a big contributor of articles explaining modeling via hard spheres (above diagram) and the experimental creation of phase diagrams. In a review by Neer Asherie (he worked with George Benedek and Aleksey Lomakin, whom I would consider the giants of protein phase diagrams), the general process and analysis of protein-specific phase diagrams is shown. Note that this was accepted in 2004, which I believe really shows the youth of the field. "The problems associated with producing protein crystals have stimulated fundamental research on protein crystallization. An important tool in this work is the phase diagram. A complete phase diagram shows the state of a material as a function of all of the relevant variables of the system. For a protein solution, these variables are the concentration of the protein, the temperature and the characteristics of the solvent (e.g., pH, ionic strength and the concentration and identity of the buffer and any additives). The most common form of the phase diagram for proteins is two-dimensional and usually displays the concentration of protein as a function of one parameter, with all other parameters held constant. Three-dimensional diagrams (two dependent parameters) have also been reported and a few more complex ones have been determined as well." Asherie, Neer. Protein crystallization and phase diagrams. Methods 34 (2004) 266–272.[1] One of the interesting things about liquid-liquid phase separations in protein solutions is that they mimic, to an extent, the phase separation of a gas into a liquid (like cloud formation), except for one key difference: the phase-separated states are always metastable, and will not remain phase separated indefinitely. This is a good thing for crystallographers; the protein-rich state of the solution will be likely to form a distinct phase: crystals. Ising model of phase transitions One model of phase transitions is the Ising Model.
The Ising model consists of a lattice of spins that can be either up or down and that are coupled to each other through the coupling energy <math>J</math>. In addition, a magnetic field can be added to the system that couples to each spin individually. The Hamiltonian of the system is: <math>H= - \frac{1}{2} \sum_{\langle i,j\rangle} J_{ij} S_i S_j - \sum_i h_i S_i \,</math> In 2D the Ising model exhibits a phase transition from an unordered state to an ordered state as the temperature drops below <math>T_c</math>. <math>T_c</math> depends on the lattice configuration; for a square lattice <math>k_B T_c = \frac{2J}{\ln{(1+\sqrt{2})}}</math>. A lattice gas can be mapped onto the Ising model, with a spin up corresponding to an atom being present and a spin down corresponding to no atom. The magnetic field term includes the chemical potential. (Source: Pathria, Statistical Mechanics. http://en.wikipedia.org/wiki/Ising_model) Two States Can Be Stable at The Same Time If a system has a degree of freedom <math>x</math>, the stable state is identified as the value <math>x=x^*</math> at which the free energy is a minimum. Using the metaphor of a ball on a landscape (commonly used to describe minimum free energy), the ball rolls along <math>x</math> until it reaches the bottom of the valley, <math>x=x^*</math>. If the degree of freedom is the density <math>\rho</math> of water, then for water at <math>T=25^\circ\mathrm{C}</math>, <math>\rho =\rho^* =1\,\mathrm{g\,cm^{-3}}</math>. But at <math>T=150^\circ\mathrm{C}</math>, the free energy is a minimum where the density equals an appropriate value for steam, as shown in the figure above. At the boiling point, there are two equal minima in the free energy function. Both the liquid and the vapor states of water are stable at the same time. There is more than one valley on the energy landscape, and the balls roll with equal tendency into either of them. Changing the temperature changes the relative depths of the two minima.
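The square-lattice critical temperature quoted above can be evaluated directly; a small check (with <math>J = k_B = 1</math>):

```python
from math import log, sqrt, sinh

J = 1.0
Tc = 2 * J / log(1 + sqrt(2))   # Onsager's exact result, ~2.269 in units of J/k_B
print(Tc)

# At T = Tc the condition sinh(2J / k_B T) = 1 holds, which is how the
# critical point of the square-lattice model is usually characterized.
print(sinh(2 * J / Tc))  # ≈ 1
```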
I wanted to write a post that makes clear, in a simple and elegant way, the singular characteristic of Topic-Specific PageRank: it is composable with respect to the personalization vectors. Since I wanted to start from the beginning, I split the whole journey into two posts: this first one covers the general PageRank motivation and formulation, and the second will deal with the personalized variants. What is PageRank Suppose you want to rank web pages by assigning each a score that reflects its authority and popularity. The fundamental property of the web graph that you need to consider is links. Links are a measure of popularity: in theory, if you link to another web page you consider it authoritative (in this article I will link to Wikipedia, books, and citation resources). Moreover, it is rare for a spam page to link to a good page; in general, good pages link to good pages and spam pages link to other spam pages. That is the nature of links. The random walker But how can we use links to score pages? It does not suffice to count them, since we can easily create spam pages with lots of in- and out-links. The brilliant idea of PageRank is to consider a weighted sum of the in-links. Weighted by what? By the rank of the page the link is coming from. So, for a page \(j\) and every page \(i\), with out-degree \(d_i\), that points to \(j\), we can express the score of \(j\) as: $$r_j = \sum_{i\rightarrow j} \frac{r_i}{d_i}$$ The rank of the linking page \(i\) is thus divided equally among all its out-links. We can give this equation a nice interpretation that also lets us understand why it is not enough.
Suppose we have a random walker, an entity that just walks on our graph with a specific behaviour: regardless of the node it is on at a given time \(t\), it always follows a random link of that node, so the probability of choosing any given link is evenly distributed over all the links. Do you remember? The rank, too, was evenly distributed over all the out-links. We can model the random walker with a Markov chain, a statistical model composed of \(N\) states, where \(N\) is the number of nodes, since the walker can be at any of them at a given time \(t\). The key property of a Markov chain is that we can write down, in an elegant way, all the decisions the walker can make at any state as a column-stochastic transition-probability matrix \(M\). Trust me, it sounds harder than it actually is. In the matrix \(M\), every column represents a state the random walker can be in, in other words a node, and every row represents the next node the walker can move to: given a column \(j\) and a row \(i\), the cell \(M_{ij}\) is the probability that the next node will be \(i\) given that the walker is on node \(j\). As noted, this probability is evenly distributed over all the out-links of a node, so you will agree with me that every column sums to 1 (that is what column-stochastic means). An example will clarify any doubt.
In the graph in the figure we can write down the following transition-probability matrix: $$M = \begin{pmatrix} 0 & 1 & 0 \\ 0.5 & 0 & 1 \\ 0.5 & 0 & 0\end{pmatrix}$$ You should notice that the elements of the matrix are just the previous \(\frac{1}{d_j}\), for a column \(j\) and for any link to node \(i\): $$M = \begin{pmatrix} 0 & \frac{1}{d_2} & 0 \\ \frac{1}{d_1} & 0 & \frac{1}{d_3} \\ \frac{1}{d_1} & 0 & 0\end{pmatrix}$$ Now, given the column vector \(\mathbf{r}\) containing the ranks of all pages, you will allow me to write the following compact formulation of PageRank: $$\mathbf{r}^{(t+1)} = M \cdot \mathbf{r}^{(t)}$$ Notice that I also added the concept of time, since we update the rank vector \(\mathbf{r}\) at every step of the random walker. Did you see it? Take your time to check that this matrix formulation is the same as the one we started with. The Power Method How would the algorithm work? We call this way of computing PageRank the power method, since we repeatedly apply the same equation to the rank vector \(\mathbf{r}\). We initialize the rank vector as follows: $$\mathbf{r}^{(0)} = \left[ \frac{1}{N} \right]_N$$ That is, a vector of \(N\) elements in which every element is \(\frac{1}{N}\). Think of this vector as the probability of the random walker being at each node of the graph at a certain time \(t\). At the beginning, time 0, the walker can be at any node with evenly distributed probability (which is why we initialize it that way). After applying the same equation many times, the vector converges to a steady-state probability vector, also called the long-term visit rate: the probability that a node is visited in the long run, which is another interpretation of PageRank.
We have: Step 0: \(\mathbf{r}^{(1)} = M \cdot \mathbf{r}^{(0)}\) Step 1: \(\mathbf{r}^{(2)} = M \cdot \mathbf{r}^{(1)}\) Step 2: \(\mathbf{r}^{(3)} = M \cdot \mathbf{r}^{(2)}\) ...until convergence. Pathologies So far so good; we can also prove that the previous formulation converges (see the references at the end of this post if you want to know why). Everything works with graphs like the one in the example, but the graph of the web is not only far less simple, it also has two pathological issues: It has spider traps, i.e. groups of nodes that, once entered, the walker cannot escape, because all the pages of the group only link to each other. This is a problem because the trap will absorb the PageRank of all the pages; it can be even more pathological with cyclic structures, since they make our Markov chain periodic. It has dead ends: nodes with no out-links. This is a problem because the column for a dead-end state does not sum to 1, so the matrix is no longer column-stochastic. In general, if a graph has dead ends then its corresponding Markov chain is reducible, meaning that there does not exist a path from every node to every other one. Thus, some conditions must hold for convergence to a true PageRank \(\mathbf{r}\): we say the Markov chain has to be ergodic, namely irreducible and aperiodic. The cure The second brilliant idea, which fixes all the pathological issues of a web graph, was to modify the behaviour of the random walker so that at each node it has two choices: with probability \(\beta\), it follows a random out-link of the node it is on, as in the general formulation; with probability \(1-\beta\), it teleports to a node of the graph chosen uniformly at random (so every node is chosen with probability \(\frac{1}{N}\)).
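The iteration steps above can be sketched in plain Python: a minimal power iteration on the example three-node matrix, without the teleporting yet (the iteration count is an illustrative choice; the function name is my own):

```python
def power_iteration(M, n_iter=1000):
    """Repeatedly apply r <- M r, starting from the uniform vector."""
    N = len(M)
    r = [1.0 / N] * N
    for _ in range(n_iter):
        r = [sum(M[i][j] * r[j] for j in range(N)) for i in range(N)]
    return r

# Column-stochastic matrix of the three-node example graph
M = [[0, 1, 0],
     [0.5, 0, 1],
     [0.5, 0, 0]]
r = power_iteration(M)
# r converges to the steady-state vector (0.4, 0.4, 0.2)
```

Since this example chain is irreducible and aperiodic, the iteration converges; on real web graphs the pathologies discussed next are exactly what breaks this plain version.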
You will now realize that if the walker can jump to a random node, it easily escapes from both dead ends and spider traps, so teleporting makes our PageRank well defined and our iterative method converge. How is the equation modified? The equation for a single node \(j\) becomes: $$r_j = \beta \sum_{i\rightarrow j} \frac{r_i}{d_i} + (1-\beta) \frac{1}{N}$$ In vector notation we want to modify our matrix \(M\) to accommodate the random teleport, so we write a new matrix \(A\): $$A = \beta M + \left[\frac{(1-\beta)}{N}\right]_{N\times N}$$ and since \(\mathbf{r}^{(t+1)} = A\cdot\mathbf{r}^{(t)}\) we can derive the final formulation to be applied: $$\begin{split} \mathbf{r}^{(t+1)} & = \left(\beta M + \left[\frac{(1-\beta)}{N}\right]_{N\times N}\right) \cdot \mathbf{r}^{(t)} \\ & = \beta M \cdot \mathbf{r}^{(t)} + (1-\beta) \left[\frac{1}{N}\right]_N \end{split}$$ This is the final formulation of PageRank. Notice that in the first step we added an \(N\times N\) matrix \(\left[\frac{(1-\beta)}{N}\right]_{N\times N}\) to simulate the random teleporting, but when we multiply it by the rank vector \(\mathbf{r}\), since \(\mathbf{r}\) is a stochastic vector (its coordinates sum to 1), the product collapses to a vector of \(N\) elements, \(\left[\frac{(1-\beta)}{N}\right]_N\), and the rank vector disappears from the second term of the sum [1].
Practical example Let's see this in practice with SageMath code [2]:

# Number of the nodes
n_nodes = 3
# Transition probabilities matrix
M = matrix([[0, 1, 0], [0.5, 0, 1], [0.5, 0, 0]])
b = 0.9
# Vector of evenly distributed probabilities
p = vector([1/n_nodes for i in range(n_nodes)])
# Init the rank vector with evenly distributed probabilities
r = vector([1/n_nodes for i in range(n_nodes)])
for i in range(1000):
    r = (b*M)*r + (1-b)*p
print("r = " + str(r))

This prints: r = (0.391901663051338, 0.398409255242227, 0.209689081706435) The dead ends problem The equation we have just seen does not work if the graph has dead ends, because a dead end contributes nothing through the \(\beta\) term: it has no out-links! For this reason the probability leaked by the dead ends is not just \(1-\beta\) but larger. So we have two options: either add the vector \(\left[\frac{1}{N}\right]_N\) to the dead-end columns of the matrix, or split the iteration into two steps: Step 1: compute the standard PageRank $$\mathbf{r}^{(t+1)} = \beta M \cdot \mathbf{r}^{(t)}$$ Step 2: redistribute the leaked probability evenly, by summing all the coordinates of the rank vector and dividing the remaining probability among all the nodes of the graph: $$\begin{split} \mathbf{r}^{(t+1)} & = \mathbf{r}^{(t+1)} + \left[\frac{\left(1 - \sum_i r^{(t+1)}_i\right)}{N}\right]_N \\ & = \mathbf{r}^{(t+1)} + \left(1 - \sum_i r^{(t+1)}_i\right) \left[\frac{1}{N}\right]_N \end{split}$$ I prefer to factor out \(\left[\frac{1}{N}\right]_N\) because, as we will see in the next post, it can be personalized as we want, giving a generalization of PageRank.
Practical example Again we see an application with SageMath in the following code:

# Number of the nodes
n_nodes = 3
# Transition probabilities matrix
M = matrix([[0, 1, 0], [0.5, 0, 1], [0.5, 0, 0]])
b = 0.9
# Vector of evenly distributed probabilities
p = vector([1/n_nodes for i in range(n_nodes)])
# Init the rank vector with evenly distributed probabilities
r = vector([1/n_nodes for i in range(n_nodes)])
for i in range(1000):
    # Step 1
    r = (b*M)*r
    # Step 2 - redistribute the remaining probability
    r = r + (1 - sum(r)) * p
print("r = " + str(r))

This prints: r = (0.391901663051338, 0.398409255242227, 0.209689081706435) The same result as the previous algorithm. References Mining of Massive Datasets. Jure Leskovec, Anand Rajaraman, Jeff Ullman. IIR: Introduction to Information Retrieval. Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze. Cambridge University Press, 2008.
Note: There was an error in the reverse formula (and maybe also with some of the values given). Frank pointed this out (see comments section below). In analytical chemistry, linear regression (a linear function) is a common, maybe the most common, tool to describe the relationship between a measured signal and the concentration of an analyte. Even if the relationship is much more complex, one usually works only in small ranges where the assumption of linearity is convenient. However, there are analytical problems which cannot be solved with this simple approach. In this short article I want to introduce and present another useful function for data evaluation, on the basis of a real example. The following plot shows the response (fluorescence emission at a certain wavelength) of a pH sensor foil to different pH values. The foil consists of a pH-sensitive dye in a hydrogel matrix. As can be seen from the figure above, the response is not linear except in a small area around pH 6. On the contrary, the curve approaches asymptotic values for higher and lower pH values. This effect is easily explained. Since the receptor is a pH-sensitive dye, its fluorescence depends on the H+ concentration: the fewer protons are bound to the receptor, the higher the fluorescence intensity. Obviously, there is a limit in both directions. For very high proton concentrations (i.e. low pH values!), all available binding sites on the dye are blocked, so lowering the pH further won't have any effect. On the other hand, for low proton concentrations (i.e. high pH values!), all possibly bound protons are already removed from the dye and no further protons can be removed, so the fluorescence intensity remains constant. Now, how can we fit this sensor output? We could extract just the linear area and fit it with a linear equation.
However, we don’t know which points actually belong to the linear area, so we might choose too many or too few points. Also, we don’t want to exclude the two bending areas, since, even if the sensitivity there is lower than in the linear region, they still carry analytical information. So, we need a function which fits this S-shaped curve. Such a function is called a ‘sigmoid’ function; the name is derived from ‘sigma’ (the letter S) and ‘-oid’ (-like or -shaped). There are many forms of sigmoid functions [1]. A common example is the logistic function [2]: \(f(x) = \frac{1}{1 + e^{-x}}\) In our case x is the pH value and f(x) is the fluorescence intensity F as a function of pH. Depicted, the function above will look similar to the one in Figure 2. This form of the function returns values between 0 and 1 for a given x. Hence, it is not suitable for our needs and we have to modify it a little bit. First, let us introduce some parameters: \(f(x) = \frac{a}{b + c \cdot e^{-d \cdot (x - f)}} + t\) With these parameters we can control the position, form, shape, and range of the function. One can play around with the parameters in an online function plotter [3] to get a feeling for the behaviour of the function and to visualize the individual effect of each parameter. All we need to do now is find the appropriate parameter values for our problem from above. Of course, this is a bit more complicated than fitting a straight line; usually one uses software tools, such as Origin [4], for this purpose. The resulting function with all parameters has the following form: Note: I replaced the parameters with values, which I know work. However, keep in mind there is always more than one good solution. Always double check your results! \(F(pH) = \frac{1.31}{0.21 + 2.96 \cdot e^{-1.69 \cdot (pH - 4.23)}} + 0.04\) Now, we know all parameters (i.e.
we calibrated the sensor foil) and can use the sensor foil to measure some pH. However, since we actually measure the fluorescence intensity, we need a method to calculate the pH from this intensity. Therefore, we solve the equation for pH to get the reverse function: Note: I replaced this formula now with one which is correct (see comments of Frank below). Originally, I used Matlab because I wanted to make sure this is correct. Maybe I copied it wrong or mixed things up. Mmh. Should have just trusted my good old brain a little more! \(\begin{aligned} pH(F) &= -\ln\left( \frac{a - b \cdot (F - t)}{ c \cdot (F - t)} \right) \cdot \frac{1}{d} + f = \\ &= -0.59 \cdot \ln\left( \frac{1.31 - 0.21 \cdot (F - 0.04)}{ 2.96 \cdot (F - 0.04)} \right) + 4.23 = \\ &= -0.59 \cdot F' + 4.23 \end{aligned} \) Although this function looks complicated, it allows us to calculate the pH value from the respective fluorescence intensity. It should be noted that the reverse function is, of course, limited in range, which is obvious if plotted. Since the ln term in the above formula consists only of constants and F as the variable, it can be substituted with a new variable F', which makes the formula look like an ordinary linear function. Indeed, the relationship between F' and pH is linear (but the relationship between F and pH is not!). Substitution is not only good for presenting formulas but also for entering them in calculation programs such as Excel or OpenOffice Calc. I have seen many people have problems entering the whole thing at once, and debugging and finding typos is horrible! The following figure shows both the fitted function and the reverse function. Finally, we can use the sensor foil to measure the pH of unknown solutions by dropping the solution onto the foil (or putting the foil into the solution), reading out the fluorescence intensity, and calculating the corresponding pH value. That's it! References "Sigmoid function - Wikipedia", Wikipedia, 2018.
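As a sanity check of the algebra, the fitted function and its reverse can be implemented and round-tripped, using the parameter values given above (the function names are my own, not from the article):

```python
import math

# Fitted parameters from the article
a, b, c, d, f, t = 1.31, 0.21, 2.96, 1.69, 4.23, 0.04

def intensity(pH):
    """Fitted sigmoid: fluorescence intensity F as a function of pH."""
    return a / (b + c * math.exp(-d * (pH - f))) + t

def ph_from_intensity(F):
    """Reverse function, solved for pH (valid only on the sensor's range)."""
    return -math.log((a - b * (F - t)) / (c * (F - t))) / d + f

# Round trip: the reverse function recovers the original pH value.
print(ph_from_intensity(intensity(6.0)))  # ~6.0
```

Note that `ph_from_intensity` is only defined where the argument of the logarithm is positive, which is exactly the limited range of the reverse function mentioned above.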
http://en.wikipedia.org/wiki/Sigmoid_function "Logistic function - Wikipedia", Wikipedia, 2018. http://en.wikipedia.org/wiki/Logistic_function "Function Grapher Online", 2011. http://www.walterzorn.de/en/grapher/grapher_e.htm "OriginLab - Origin and OriginPro - Data Analysis and Graphing Software". http://www.originlab.com/
DFAs can be stored in a regular way: We assume $\#\notin \Sigma$ and define $$L = \{\#\#e\mid e \in \{0,1\}^*\}^*\cdot\{\#s\#b\mid b \in \{0,1\}^*,s\in\Sigma\}^*\quad ,$$ which is clearly regular. Then for $w\in L$ such that $w = \#\#e_1\dots \#\#e_o\# s_1\#b_1\#\dots\#s_n\#b_n$ we define $$p_0 = 1, \qquad p_i = \min\{r_{i-1},n\}$$ where $$r_i = \min\{j>p_i\mid \exists k, p_i \leq k < j: \ s_j = s_k \}$$ is the index of the first symbol repetition after $p_i$. Let $\{p_1,\dots,p_k\}$ be the set definable in this way. Now we construct a DFA: The set of states will be $Q=\{1,\dots,m\}$, where $m=\max(\{k\}\cup\{\mathrm{bin}(b_i)\mid 1\leq i \leq n\})$, and for the sake of simplicity we choose $1$ as the starting state. The set of accepting states shall be $E=\{\mathrm{bin}(e_i)\mid 1\leq i \leq o\}$. By our interpretation of the string $w$, each part $\#s_{p_i}\#b_{p_i}\dots\#s_{p_{i+1}-1}\#b_{p_{i+1}-1}$ contains each $s\in\Sigma$ at most once, and for each such $s$ a binary string. We interpret this string as the target of our transition function $\delta: Q\times \Sigma \to Q$: $$\delta(i,s)=\begin{cases}\mathrm{bin}(b_j) & \exists j: p_i\leq j < \min\{p_{i+1},n\},\ s_j=s\\ 1 & \text{else}\end{cases}$$ Now $(\Sigma,Q,\delta,1,E)$ is a DFA. Conversely, it is obvious that any DFA can be stored this way (after renaming the states).
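As an illustration, here is a hedged Python sketch (my own helper, not part of the answer) that recovers the accepting states and the transition table from such a string, assuming every $e_i$ and $b_i$ is a nonempty binary string:

```python
def parse_dfa(w):
    """Parse '##e1##e2...#s1#b1#s2#b2...' into (accepting_states, delta).

    A repeated symbol starts the transition block of the next state,
    mirroring the p_i construction above.  Transitions missing from the
    table default to state 1, as in the definition of delta.
    """
    accepting = set()
    i = 0
    while w.startswith("##", i):
        j = w.find("#", i + 2)
        if j == -1:
            j = len(w)
        accepting.add(int(w[i + 2:j], 2))  # bin(e_i)
        i = j
    parts = w[i:].split("#")[1:]  # alternating s, b tokens
    delta, state, seen = {}, 1, set()
    for k in range(0, len(parts) - 1, 2):
        s, target = parts[k], int(parts[k + 1], 2)
        if s in seen:  # first symbol repetition: next state's block begins
            state += 1
            seen = set()
        seen.add(s)
        delta[(state, s)] = target
    return accepting, delta
```

For example, `parse_dfa("##1##10#a#10#b#11#a#1")` yields accepting states `{1, 2}` and the transitions 1 →a 2, 1 →b 3, 2 →a 1.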
The answer is no. The key point is that the Coulomb force (which hasn't been spelled out in the question), the main force binding the atoms together in the lattice of a solid or, more generally, condensed matter such as a long pole, needs a retarded time to transmit the force/information between the atoms. The Coulomb force is part of the electromagnetic (E&M) force, as others have already emphasized. All E&M forces become consistent between different reference frames only if we take into account the constant speed of light and the retardation of the E&M potential/force. You can easily read off the retarded electromagnetic potentials $(\varphi,\mathbf A)$: $$\varphi (\mathbf r , t) = \frac{1}{4\pi\epsilon_0}\int \frac{\rho (\mathbf r' , t_r)}{|\mathbf r - \mathbf r'|}\, \mathrm{d}^3\mathbf r'$$ $$\mathbf A (\mathbf r , t) = \frac{\mu_0}{4\pi}\int \frac{\mathbf J (\mathbf r' , t_r)}{|\mathbf r - \mathbf r'|}\, \mathrm{d}^3\mathbf r'\,.$$ where $\mathbf r$ is a position vector in space and $t$ is time. The retarded time is: $$t_r = t-\frac{|\mathbf r - \mathbf r'|}{c}$$ There are also gravity and the strong force binding the nucleus, but they are at a much weaker energy scale compared to E&M as far as binding atoms in the lattice is concerned. Moreover, all forces and all massless particles (photons/gluons/gravitons) have the same speed of propagation, the speed of light.
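Numerically the retardation is tiny at everyday scales, but it is nonzero, which is exactly why a rigid pole cannot transmit a push instantaneously. A minimal sketch of the retarded-time formula (variable names are my own):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def retarded_time(t, field_point, source_point):
    """t_r = t - |r - r'| / c: the time at which the source must be
    evaluated so its influence arrives at the field point at time t."""
    return t - math.dist(field_point, source_point) / C

# A point 0.3 m along the pole responds to what the other end
# did about a nanosecond earlier, not to what it does "now".
delay = 0.0 - retarded_time(0.0, (0.0, 0.0, 0.0), (0.3, 0.0, 0.0))
print(delay)  # ~1e-9 s
```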
Associative, idempotent, symmetric, and order-preserving operations on chains. Devillet, Jimmy; Teheux, Bruno. In Order: A Journal on the Theory of Ordered Sets and its Applications (in press). We characterize the associative, idempotent, symmetric, and order-preserving operations on (finite) chains in terms of properties of (the Hasse diagram of) their associated semilattice order. In particular, we prove that the number of associative, idempotent, symmetric, and order-preserving operations on an n-element chain is the nth Catalan number.
Characterizations and classifications of quasitrivial semigroups. Devillet, Jimmy; Marichal, Jean-Luc; Teheux, Bruno. Scientific Conference (2019, March 03).
Classifications of quasitrivial semigroups. Devillet, Jimmy; Marichal, Jean-Luc; Teheux, Bruno. E-print/Working paper (2018). We investigate classifications of quasitrivial semigroups defined by certain equivalence relations. The subclass of quasitrivial semigroups that preserve a given total ordering is also investigated. In the special case of finite semigroups, we address and solve several related enumeration problems.
Categories of coalgebras for modal extensions of Łukasiewicz logic. Teheux, Bruno ; . Scientific Conference (2018, August 27). The category of complete and completely distributive Boolean algebras with complete operators is dual to the category of frames. We lift this duality to the category of complete and completely
distributive MV-algebras with complete operators.
An n-ary generalization of the concept of distance. ; Marichal, Jean-Luc; Teheux, Bruno. Scientific Conference (2018, July 03).
On associative, idempotent, symmetric, and nondecreasing operations. Devillet, Jimmy; Teheux, Bruno. Scientific Conference (2018, July 02).
Characterizations of nondecreasing semilattice operations on chains. Devillet, Jimmy; Teheux, Bruno. Scientific Conference (2018, June 01).
A generalization of the concept of distance based on the simplex inequality. Kiss, Gergely; Marichal, Jean-Luc; Teheux, Bruno. In Beitraege zur Algebra und Geometrie = Contributions to Algebra and Geometry (2018), 59(2), 247–266. We introduce and discuss the concept of \emph{$n$-distance}, a generalization to $n$ elements of the classical notion of distance obtained by replacing the triangle inequality with the so-called simplex inequality \[ d(x_1, \ldots, x_n)~\leq~K\, \sum_{i=1}^n d(x_1, \ldots, x_n)_i^z{\,}, \qquad x_1, \ldots, x_n, z \in X, \] where $K=1$. Here $d(x_1,\ldots,x_n)_i^z$ is obtained from the function $d(x_1,\ldots,x_n)$ by setting its $i$th variable to $z$. We provide several examples of $n$-distances, and for each of them we investigate the infimum of the set of real numbers $K\in\left]0,1\right]$ for which the inequality above holds.
We also introduce a generalization of the concept of $n$-distance obtained by replacing in the simplex inequality the sum function with an arbitrary symmetric function.
Pivotal decomposition schemes inducing clones of operations. ; Teheux, Bruno. In Beitraege zur Algebra und Geometrie = Contributions to Algebra and Geometry (2018), 59(1), 25-40. We study pivotal decomposition schemes and investigate classes of pivotally decomposable operations. We provide sufficient conditions on pivotal operations that guarantee that the corresponding classes of pivotally decomposable operations are clones, and show that under certain assumptions these conditions are also necessary. In the latter case, the pivotal operation together with the constant operations generate the corresponding clone.
On the generalized associativity equation. Marichal, Jean-Luc; Teheux, Bruno. In Aequationes Mathematicae (2017), 91(2), 265-277. The so-called generalized associativity functional equation G(J(x,y),z) = H(x,K(y,z)) has been investigated under various assumptions, for instance when the unknown functions G, H, J, and K are real, continuous, and strictly monotonic in each variable. In this note we investigate the following related problem: given the functions J and K, find every function F that can be written in the form F(x,y,z) = G(J(x,y),z) = H(x,K(y,z)) for some functions G and H.
We show how this problem can be solved when any of the inner functions J and K has the same range as one of its sections.
Modal extensions of Ł_n-valued logics, coalgebraically. ; Teheux, Bruno ; . Scientific Conference (2017).
Generalized qualitative Sugeno integrals. ; ; et al. In Information Sciences (2017), 415-416. Sugeno integrals are aggregation operations involving a criterion weighting scheme based on the use of set functions called capacities or fuzzy measures. In this paper, we define generalized versions of Sugeno integrals on totally ordered bounded chains, by extending the operation that combines the value of the capacity on each subset of criteria and the value of the utility function over elements of the subset. We show that the generalized concept of Sugeno integral splits into two functionals, one based on a general multiple-valued conjunction (we call integral) and one based on a general multiple-valued implication (we call cointegral). These fuzzy conjunction and implication connectives are related via a so-called semiduality property, involving an involutive negation. Sugeno integrals correspond to the case when the fuzzy conjunction is the minimum and the fuzzy implication is Kleene-Dienes implication, in which case integrals and cointegrals coincide. In this paper, we consider a very general class of fuzzy conjunction operations on a finite setting, that reduce to Boolean conjunctions on extreme values of the bounded chain, and are non-decreasing in each place, and the corresponding general class of implications (their semiduals).
The merit of these new aggregation operators is to go beyond pure lattice polynomials, thus enhancing the expressive power of qualitative aggregation functions, especially as to the way an importance weight can affect a local rating of an object to be chosen.
Modal Extensions of Łukasiewicz Logic for Modeling Coalitional Power. Teheux, Bruno ; . In Journal of Logic & Computation (2017), 27(1), 129-154. Modal logics for reasoning about the power of coalitions capture the notion of effectivity functions associated with game forms. The main goal of coalition logics is to provide formal tools for modeling the dynamics of a game frame whose states may correspond to different game forms. The two classes of effectivity functions studied are the families of playable and truly playable effectivity functions, respectively. In this paper we generalize the concept of effectivity function beyond the yes/no truth scale. This enables us to describe the situations in which the coalitions assess their effectivity in degrees, based on functions over the outcomes taking values in a finite Łukasiewicz chain. Then we introduce two modal extensions of Łukasiewicz finite-valued logic together with many-valued neighborhood semantics in order to encode the properties of many-valued effectivity functions associated with game forms. As our main results we prove completeness theorems for the two newly introduced modal logics.
Strongly barycentrically associative and preassociative functions. Teheux, Bruno; Marichal, Jean-Luc. Scientific Conference (2016, November 08).
Relaxations of associativity and preassociativity for variadic functions. ; Marichal, Jean-Luc; Teheux, Bruno. In Fuzzy Sets & Systems (2016), 299. In this paper we consider two properties of variadic functions, namely associativity and preassociativity, that are pertaining to several data and language processing tasks. We propose parameterized relaxations of these properties and provide their descriptions in terms of factorization results. We also give an example where these parameterized notions give rise to natural hierarchies of functions and indicate their potential use in measuring the degrees of associativeness and preassociativeness. We illustrate these results by several examples and constructions and discuss some open problems that lead to further directions of research.
International Symposium on Aggregation and Structures (ISAS 2016) - Book of abstracts. Kiss, Gergely; Marichal, Jean-Luc; Teheux, Bruno. Book published by NA (2016).
A characterisation of associative idempotent nondecreasing functions with neutral elements. Kiss, Gergely ; ; Marichal, Jean-Luc et al. Scientific Conference (2016, June).
Strongly barycentrically associative and preassociative functions. Marichal, Jean-Luc; Teheux, Bruno. In Journal of Mathematical Analysis and Applications (2016), 437(1), 181-193. We study the property of strong barycentric associativity, a stronger version of barycentric associativity for functions with indefinite arities. We introduce and discuss the more general property of strong barycentric preassociativity, a generalization of strong barycentric associativity which does not involve any composition of functions. We also provide a generalization of Kolmogoroff-Nagumo's characterization of the quasi-arithmetic mean functions to strongly barycentrically preassociative functions.
Can a Turing machine $M_A$ determine if the Turing machine $M_B$ accepts the set $W_k$? I am curious about the answer, as I am thinking about using its truth value in a recursive-enumerability proof. Is the language of all machines that accept $W$ recognizable? $$L_W = \{\langle M\rangle : L(M) = W\}$$ The answer is unfortunately no, except in trivial cases. If $W$ is unrecognizable, then $L_W$ is empty (no machine recognizes $W$) and hence decidable (the machine that rejects all inputs decides $L_W$). In all other cases, $L_W$ is unrecognizable. We can show this using reductions from the unrecognizable rejection problem, the problem of determining whether a machine rejects a word. If $W\neq \Sigma^*$ is decidable, then $L_W$ is unrecognizable. Consider a program that takes a machine and word $\langle N, x\rangle$ and builds a new machine $N_x(w)$. The new machine decides whether $w\in W$ and accepts if so. Otherwise, it simulates $N$ on $x$ and does what it does. The machine $N_x$ therefore accepts $W$ (if $N$ rejects $x$) or all strings (if $N$ accepts $x$). If we could recognize whether $N_x$ accepts $W$, we could recognize whether $N$ rejects $x$, which is impossible. If $W$ is recognizable but undecidable, then $L_W$ is unrecognizable: Consider a program that takes a machine and word $\langle N, x\rangle$ and builds a new machine $N_x(w)$. The new machine simulates $N$ on $x$ for $|w|$ steps. If the simulation accepts in that time, the machine accepts. Otherwise, the machine tests whether $w\in W$ and does what it does. Hence $N_x$ accepts the union of $W$ and the set of all strings longer than the number of steps it takes for $N$ to accept $x$. If $N$ rejects $x$, $N_x$ recognizes $W$.
If $N$ accepts $x$, $N_x$ recognizes additional words. (Note that because $W$ is undecidable, there must be arbitrarily long strings not in $W$. Otherwise, a DFA could decide $W$ by memorizing the short strings and using a length requirement for the long ones. Hence $N_x$ recognizes a strict superset of $W$ if $N$ accepts $x$.) If we could recognize whether $N_x$ accepts $W$ or not, we could recognize whether $N$ rejects $x$ or not, which is impossible.

While user326210's answer analyzes the language $\{\langle M\rangle\mid L(M)=W\}$, my answer analyzes the language $L_W=\{\langle M\rangle\mid L(M)\supseteq W\}$.

If $W=\emptyset$: $L_W$ is the set of encodings of all TMs, thus decidable.

If $W$ is finite and nonempty: $L_W$ is undecidable but recognizable. Suppose $L_W$ is decidable by $D_{L_W}$; we can build a decider $D_H$ using $D_{L_W}$ to solve the halting problem. The decider $D_H$ works as follows. On input $\langle \langle M\rangle, w\rangle$: Construct a TM $M'$ working on input $w'$ as follows: Run $M$ on $w$. If $w'\in W$ (recall that $W$ is finite), accept. Otherwise reject. Run $D_{L_W}$ on $\langle M'\rangle$. If $D_{L_W}$ accepts, accept. Otherwise reject. We can see that if $M$ halts on $w$, then $M'$ accepts $w'$ if and only if $w'\in W$, which means $L(M')=W$. Otherwise, $M'$ accepts nothing. Therefore $D_{L_W}$ accepts $\langle M'\rangle$ if and only if $M$ halts on $w$, so $D_H$ is indeed a decider for the halting problem. Hence $L_W$ is undecidable by contradiction. To recognize $L_W$, a TM can run $M$ on all strings in $W$ (recall again that $W$ is finite) and accept if $M$ accepts all of them. This TM recognizes $L_W$.

If $W$ is infinite: $L_W$ is unrecognizable. Suppose $L_W$ is recognizable by $M_{L_W}$; we can build a recognizer $M_R$ using $M_{L_W}$ to recognize $\overline{A_{\mathrm{TM}}}=\{\langle\langle M\rangle,w\rangle\mid M\text{ does not accept }w\}$, which is unrecognizable. $M_R$ works as follows. On input $\langle \langle M\rangle, w\rangle$: Construct a TM $M'$ working on input $w'$ as follows: Run $M$ on $w$ for at most $|w'|$ steps. If $M$ accepts, reject. Otherwise accept. Run $M_{L_W}$ on $\langle M'\rangle$. If $M_{L_W}$ accepts, accept. If $M_{L_W}$ rejects, reject. We can see that if $M$ accepts $w$, say in $n$ steps, then $M'$ accepts $w'$ if and only if $|w'|<n$, which means $L(M')$ cannot be a superset of $W$ (recall that $W$ is infinite, so it contains arbitrarily long strings). Otherwise $M'$ accepts every string, so $L(M')$ is certainly a superset of $W$. Therefore $M_{L_W}$ accepts $\langle M'\rangle$ if and only if $M$ does not accept $w$, so $M_R$ is indeed a recognizer of $\overline{A_{\mathrm{TM}}}$. Hence $L_W$ is unrecognizable by contradiction.
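The machine-building constructions above can be mimicked in ordinary code by treating step-bounded simulation as a callable. A toy Python sketch of the second construction from the first answer (all names here are hypothetical; real Turing-machine simulation is elided by modeling a machine as a function reporting the step at which it accepts, or None if it never accepts):

```python
# Toy model: a "machine" is a function word -> accepting step (or None).
def accepts_within(machine, word, steps):
    """Step-bounded simulation: does `machine` accept `word` within `steps` steps?"""
    t = machine(word)
    return t is not None and t <= steps

def make_N_x(N, x, in_W):
    """Build N_x from <N, x>: accept w if N accepts x within |w| steps,
    otherwise defer to membership of w in W."""
    def N_x(w):
        if accepts_within(N, x, len(w)):
            return True
        return in_W(w)
    return N_x

# Hypothetical example: W = strings of even length; N accepts any input at step 3.
in_W = lambda w: len(w) % 2 == 0
N = lambda word: 3
N_x = make_N_x(N, "x", in_W)
# Since N accepts x, N_x accepts all of W plus every string of length >= 3,
# i.e. a strict superset of W.
```

Here `N_x("aa")` and `N_x("aaa")` are both accepted while `N_x("a")` is not, matching the case analysis in the answer.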
Let $G$ be an infinite group, $F$ a finite subset of $G$ and $A=G\setminus F$. Is it true that $A^{-1}A=AA^{-1}=G$ (and what about $AA=G$)? (Here $A^{-1}=\{ a^{-1}:a\in A\}$.) We have $A^{-1}A=AA^{-1}=AA=G$. We show just $A^{-1}A=G$; the others are similar. Suppose there is a $g\in G$ with $g\notin A^{-1}A$. Then for every $a\in A$ we have $g\notin a^{-1}A$ and so $ag\notin A$. Hence $Ag\cap A=\emptyset$ and so $Ag\subseteq G\setminus A$. This contradicts $|G\setminus A|<\infty$, since $A\to Ag,\ a\mapsto ag$ is a bijection and so $|Ag|=|A|=\infty$.
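The proof idea can be poked at concretely in the additive group $\mathbb{Z}$, where $A^{-1}A$ means $\{-a+b : a,b\in A\}$. A small Python check over a finite window, illustrative only since no finite computation proves the statement:

```python
# Window check in the additive group Z: with F = {0, 1} and A = Z \ F,
# every g in a test range can be written as (-a) + (a + g) with both
# a and a + g in A. (Illustration only; the actual proof is above.)
F = {0, 1}

def in_A(n):
    return n not in F

def witness(g):
    """Some a with a in A and a + g in A, so g = (-a) + (a + g) lies in A^{-1}A."""
    return next(a for a in range(2, 1000) if in_A(a) and in_A(a + g))

for g in range(-50, 51):
    a = witness(g)
    assert in_A(a) and in_A(a + g)
```

Picking `a` large enough that neither `a` nor `a + g` lands in the finite set `F` is exactly the freedom the proof exploits.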
You have two classes of points. Instead of managing them in two sets, one assigns each point in the first class the value $-1$ and each point in the second class the value $+1$. So in fact you have point-value pairs $(x_i,y_i)$. To classify future points in a consistent way you now want to construct a function $f(x)$ that satisfies not exactly $f(x_i)=y_i$ as in interpolation, but the still sufficient condition $f(x_i)\le -1$ for points in the first class and $f(x_i)\ge +1$ for points in the second class. These two kinds of inequality can be compressed into a single class of inequalities by multiplying with the sign $y_i$: $$y_if(x_i)\ge 1$$ for all $i=1,...,N$, where $N$ is the number of training points. ("For all" has the symbol $\forall$, an inverted letter "A"; the inverted letter "E", $\exists$, is the symbol for "there exists".) Now to find such a function, you select a parametrized class of functions $f(w,x)+b$ with parameter vector $(w,b)$ and strive for a compromise between a simple form of $f$ and satisfying the constraints with equality, $y_i(f(w,x_i)+b)=1$ (which defines the support vectors), on as many points as possible. Simplicity includes that the parameters in $w$ are small numbers. So we come to the linear SVM where $f(w,x)=w^Tx$ and minimal parameters means minimizing $\|w\|_2^2=w^Tw$. In optimization, this task is encoded via a Lagrange function $$L(w,b,α)=\tfrac12\|w\|_2^2-\sum_{i=1}^Nα_i(y_i(w^Tx_i+b)-1)$$ with the restriction $α_i\ge 0$. Standard optimization techniques solve this problem via its KKT system.\begin{align}0=\frac{\partial L}{\partial w}&=w-\sum_{i=1}^Nα_iy_ix_i\\0=\frac{\partial L}{\partial b}&=-\sum_{i=1}^Nα_i y_i\\α_i&\ge 0\\y_i(w^Tx_i+b)-1&\ge 0\\α_i\,(y_i(w^Tx_i+b)-1)&=0\end{align} The last three conditions hold for all $i$.
They can be combined using NCP functions like $$N(u,v)=2uv-(u+v)_-^2$$ with $(u+v)_-=\min(0,u+v)$ to one condition per $i$ $$N(α_i,\, y_i(w^Tx_i+b)-1)=0.$$ This now is smooth enough so that Newton's method or quasi-Newton methods may be applied.
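The defining property of such an NCP function, namely that $N(u,v)=0$ exactly when $u\ge 0$, $v\ge 0$ and $uv=0$, is easy to sanity-check numerically. A small sketch (Python rather than the answer's notation):

```python
def ncp(u, v):
    """N(u, v) = 2uv - (min(0, u+v))**2; zero iff u >= 0, v >= 0 and u*v == 0."""
    return 2*u*v - min(0.0, u + v)**2

# complementarity holds -> N = 0
assert ncp(0.0, 3.0) == 0.0 and ncp(2.0, 0.0) == 0.0 and ncp(0.0, 0.0) == 0.0
# any violation (both positive, or a negative component) -> N != 0
for u, v in [(1.0, 1.0), (-1.0, 0.0), (0.0, -2.0), (-1.0, -1.0), (1.0, -1.0)]:
    assert ncp(u, v) != 0.0
```

Replacing the three complementarity conditions per $i$ by one equation $N(\alpha_i, y_i(w^Tx_i+b)-1)=0$ is what makes a (semismooth) Newton iteration applicable.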
This surprisingly beguiling puzzle may also be solved with a surprisingly unsophisticated approach. Symmetry, by itself, predicts the average length of evens-only sequences ending with 6 to be... Start with T many random throws: 2153664315121226553111444142566363625461525 . . 3644464461 Sift them into 4 groups that, due ... The biggest problem with the prisoner's proof is that his model is incomplete. One piece of that is that it merges two key variables: days on which the execution can happen, and days on which the prisoner believes it possible to happen. It's easy to see why these two were merged: because of the requirement of surprise, they are strongly correlated. However,... This is indeed possible if we define information content as the least number of bits necessary to describe the data. Consider the following example (in human-readable language): Book 1: Jim knows all animals whose name starts with a letter between A and L. Book 2: Jim knows all animals whose name starts with a letter between M and Z. Then the information ... I was at a dinner party the other night and I spoke to Mr Smith. He mentioned having two kids, both with unisex names: Sam and Alex. I remember that he told a story about "my son", but I don't recall which child he was talking about, and I don't know if he has a daughter or not. What are the odds that he has two sons? This avoids selection entirely, and ... The books in the library contain all binary strings of length 1 million, one per book, sorted in lexicographic order. An individual book takes a million bits to specify, but this description of the whole library is much shorter. The classic answer is "The surgeon is his mother". Other possible answers: the boy has two fathers (same-sex couples exist), or it is a step-father. No one would care to choose precise words in such a situation. There is no paradox.
The teacher will get paid, one way or another. The key to understanding the situation is realizing there are multiple slightly different scenarios that are all being described as identical: The student is obligated to pay the teacher. The student will be obligated to pay the teacher. The conditions of the contract have been met.... Let $X_n$ be the event that the dice takes $n$ rolls to get the first 6, given all the rolls are even. Let $A_n$ be the event that it takes $n$ rolls to get the first 6, and let $B$ be the event that all rolls up to the first 6 are even. $P(X_n)=P(A_n|B)=\dfrac{P(B|A_n)P(A_n)}{P(B)}$ (using Bayes' theorem). Now: $P(A_n)=\dfrac{1}{6}\cdot\left(\dfrac{5}{6}\... It seems rather obvious to me. Am I missing something? Edit to add this: I see now where there could be an incorrect way to reason about it that some people (like the cashier maybe) might use. It would have been nice if that were more clearly stated in the question. The cashier may have thought as follows: Putting together a batch of 4 at \$0.30 and 6 at \$0.... Original solution by YowE3K (who later turned it into this community wiki). This isn't an answer to the exact question, but the following link is to an image that I thought was worth looking at anyway. And @justhalf found another image which looks even more like the one in the question, except rotated 90 degrees. I'm thinking that the actual ... The negation of "All x are y" is "There is at least one x which is not y". So, Epimenides is a liar. Therefore his statement "All Cretans are liars" is false. This means that not all Cretans are liars. This means that at least one Cretan tells the truth. He can still be a liar, there just has to be at least one Cretan who's not a liar. Now, if Epimenides ... Basically the mirror line is a physical equivalent of an ideal wall, which can react to any force you apply to it with the same, but opposite, force. A physical object cannot cross a wall, especially an ideal wall. So my answer is no.
The twins were travelling eastwards across the International Date Line during their birth. The elder twin brother was born first, on March 1st. After they crossed the International Date Line, the younger brother was born, on February 28th. When they celebrate their birthdays on a leap year, they're 2 days apart! I believe the answer is: this is a computational calculation, so it is not a statistical answer. It is for the people who try to find it probabilistically. Here is the probabilistic solution: first of all, we know that the probability of getting 6 on the first roll is $\frac{1}{6}$, then getting 6 after an even roll is $\frac{2}{6}\cdot\frac{1}{6}$, and so on as ... There's room even for one more day of difference; that is, the elder brother can celebrate his birthday 3 days after the younger brother does. leoll2 says in his answer that they travel across the International Date Line to jump back to the previous day in the calendar. You don't have to travel to that part of the globe; you can do this at any time zone ... Create a strong magnetic field at the mirror surface. North is up, south is down. Fire an electron beam at the mirror through the field. Electrons from either side will be deflected anticlockwise (as seen from above) and will miss each other, passing through the mirror. The simple (and I believe the only) answer to both your questions is "This is a paradox, so logic does not have predictive power here". Martin Gardner described this paradox in detail. Unfortunately, I can't find the English version online, but if you know Russian, you can find it here (the English Google translation is here). The English version should ...
(1) No statement $n$ with $2\le n\le N$ in the second part can be true, as it would imply its own falseness. Every statement in the second part of the list is false. (2) As statement $n$ with $2\le n\le N$ in the second part is false, we conclude that not all statements with a number divisible by $n$ are false. Equivalently, there exists some statement ... It is rational for both parties to press the button, causing them to win $500$ on average. Here is what is wrong with the reasoning in the Paradox section. Once Alice is called in, she reasons that the strategy will lose them money. That is ok, since when Alice is not called in, she reasons that the strategy will win them money (when she is not called in, ... You should be deducting the Bellhop's £20 from the £270, not adding it. Edited with more details: The question is deliberately misleading you into thinking there's a paradox, and some money has gone missing. The total amount paid initially was £300, £100 each. The total amount paid after the partial refund was £270, £90 each. The sly bellhop kept the ... I can't see images now, but from the laws of optics and geometry it follows that the center point of the rainbow's arc is opposite to the light source with respect to your eye. In other words, if the rainbow is directly in front of you, the Sun is directly behind you. If you see two rainbows intersecting, you must have two suns behind you, relatively close to ...
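The evens-only dice question that several of the answers above discuss can also be settled by simulation. A hedged Monte Carlo sketch: rejection-sample sequences in which every roll before the first 6 was even, and average their lengths; the conditional mean comes out near 1.5 rolls, not the "intuitive" 3.

```python
import random

# Roll a fair die until the first 6; keep only the runs in which every roll
# before the 6 was even (2 or 4). Average length includes the final 6.
random.seed(1)

lengths = []
for _ in range(200_000):
    rolls = []
    while True:
        r = random.randint(1, 6)
        rolls.append(r)
        if r == 6:
            break
    if all(x in (2, 4) for x in rolls[:-1]):
        lengths.append(len(rolls))

estimate = sum(lengths) / len(lengths)
```

Analytically, $P(N=n \mid B) = \tfrac{2}{3}\left(\tfrac{1}{3}\right)^{n-1}$, a geometric distribution with mean $3/2$, which the estimate matches.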
The residue at infinity is given by: $$\underset{z_0=\infty}{\operatorname{Res}}f(z)=\frac{1}{2\pi i}\int_{C_0} f(z)\,dz$$ where $f$ is analytic except at a finite number of singular points and $C_0$ is a closed contour, traversed clockwise, enclosing all the singular points. It can be proven that the residue at infinity can be computed as a residue at zero: $$\underset{z_0=\infty}{\operatorname{Res}}f(z)=\underset{z_0=0}{\operatorname{Res}}\frac{-1}{z^2}f\left(\frac{1}{z}\right)$$ The proof is just to expand $-\frac{1}{z^2}f\left(\frac{1}{z}\right)$ as a Laurent series and to see that its $1/z$ coefficient is the integral mentioned. I can see that we change $f(z)$ to $f(1/z)$ so that the variable tends to infinity. But is there any intuitive reason why we introduce the $-1/z^2$ factor?
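One intuitive reason for the factor: under the substitution $w = 1/z$ the differential transforms as $dz = -\,dw/w^2$, so the integral defining the residue at infinity literally becomes an integral around $0$ of $-f(1/w)/w^2$. The formula can also be checked symbolically; a sketch with a made-up example function (sympy, hypothetical $f$):

```python
import sympy as sp

# Check Res_{z=oo} f = Res_{z=0} [ -f(1/z)/z^2 ] for the example
# f(z) = (2z + 1)/(z(z + 1)), and that the finite residues plus the
# residue at infinity sum to zero.
z = sp.symbols('z')
f = (2*z + 1) / (z * (z + 1))

res_inf = sp.residue(-f.subs(z, 1/z) / z**2, z, 0)      # residue at infinity
res_finite = sp.residue(f, z, 0) + sp.residue(f, z, -1)  # residues at the poles
```

For this $f$ the finite residues are $1$ and $1$, and the residue at infinity is $-2$, consistent with the sum-of-all-residues theorem.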
It seems the following works. We shall use this question by Amathstudent and its answer by Brian M. Scott. We prove that each weak Hausdorff compactly generated $T_1$ space is $KC$. Since there is a weak Hausdorff compact $T_1$ space which is not $KC$ (see the space $\Bbb Q^*\times\Bbb Q^*$ in the answer by Brian M. Scott), that space is not compactly generated. So, let $X$ be a weak Hausdorff compactly generated $T_1$ space and $Y$ be a compact subset of $X$. We claim that $Y$ is a $k$-closed subset of $X$. Indeed, let $C$ be a compact Hausdorff space and $u: C\to X$ be a continuous map. Since the space $X$ is weak Hausdorff, the set $u(C)$ is closed in $X$. The set $u(C)\cap Y$ is compact, being a closed subset of the compact space $Y$. By Lemma 1, the space $u(C)$ is Hausdorff. So the set $u(C)\cap Y$ is closed in the space $u(C)$. Since the set $u(C)$ is closed in $X$, the set $u(C)\cap Y$ is closed in the space $X$ too. Since the map $u$ is continuous, the set $u^{-1}(Y)= u^{-1}(Y\cap u(C))$ is closed in $C$. Since the space $X$ is compactly generated, the set $Y$ is closed in $X$. Hence $X$ is a $KC$-space.
I am trying to simulate a thermal version of the 1D $(x, t)$ sine-Gordon field model. I am interested in finding a thermal static solution that minimizes the energy functional $E$: $$E = \int dx \left( \frac{1}{2} \phi' ^2 + 1 - \cos \phi \right) \ ,$$ where $\phi' = \partial_x \phi$. What is very confusing is that the acceptance ratio of the Metropolis algorithm is too high, more than $0.95$, so almost every newly proposed configuration is accepted. On each Metropolis step I change the field value at one spatial point and calculate the difference in energy. To propose new configurations, uniform sampling is used with step parameter $\delta = 0.5$, i.e. $$\phi_{new} = \phi_{old} + r \ ,$$ where $r$ is a random number drawn uniformly from $-\delta$ to $\delta$. It seems to me that if the acceptance ratio is too high then the algorithm does not work correctly. However, the same algorithm works perfectly for the ground state of the harmonic oscillator (via path integral Monte Carlo). Here is my code:

double E_part(double dx, double phi, double phi_plus, double phi_minus) {
    // local energy of the two half-cells adjacent to the site phi
    return dx * ( 0.5 * pow( 0.5 * ( phi - phi_minus ), 2.0 )
                + 0.5 * pow( 0.5 * ( phi_plus - phi ), 2.0 )
                + 1 - cos( 0.5 * ( phi_minus + phi ) )
                - cos( 0.5 * ( phi + phi_plus ) ) );
}

double Metropolis(int N, int Steps, double dx, double Temperature,
                  double* dphi, double* phi, double delta, double* EnergyArray) {
    double Beta = 1.0 / Temperature;
    double Epart, Enewpart, phinew;
    double phi_plus, phi_minus;
    int AcceptanceCounter = 0;
    double r; // random number from 0 to 1
    for (int k = 0; k < Steps; k++) {
        for (int j = 0; j < N; j++) {
            int pos = RandomUniformInt(1, N - 2); // random position to change
            phinew = phi[pos] - delta + RandomUniform() * 2 * delta;
            phi_plus  = phi[pos + 1];
            phi_minus = phi[pos - 1];
            Epart    = E_part(dx, phi[pos], phi_plus, phi_minus);
            Enewpart = E_part(dx, phinew,  phi_plus, phi_minus);
            r = RandomUniform();
            if ( Enewpart * Beta - Epart * Beta < 0.0
                 || exp( Epart * Beta - Enewpart * Beta ) >= r ) {
                phi[pos] = phinew;
                dphi[pos - 1] = 0.5 * ( phi[pos] - phi[pos - 1] );
                dphi[pos]     = 0.5 * ( phi[pos + 1] - phi[pos] );
                AcceptanceCounter++;
            }
        }
        EnergyArray[k] = Energy(N, dx, dphi, phi);
    }
    return ( AcceptanceCounter / static_cast<double>(Steps * N) ); // acceptance ratio
}

(RandomUniform, RandomUniformInt and Energy are helper functions defined elsewhere.) I don't know what might be wrong.
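For what it's worth, a high acceptance ratio is not by itself a symptom of a broken sampler: when the proposal width $\delta$ is small compared with the thermal fluctuation scale, almost every move costs much less than $k_BT$ and is accepted, and the chain is correct but slowly mixing. A toy 1-D sketch (Python, with a hypothetical energy $E(\phi)=\phi^2/2$, not the sine-Gordon model) showing how the acceptance ratio falls as $\delta$ grows:

```python
import math, random

def acceptance_ratio(delta, beta=1.0, steps=20000, seed=0):
    """Metropolis sampling of the toy energy E(phi) = phi^2/2; returns the
    fraction of accepted moves for a uniform proposal of half-width delta."""
    rng = random.Random(seed)
    phi, accepted = 0.0, 0
    for _ in range(steps):
        prop = phi + rng.uniform(-delta, delta)
        dE = 0.5 * prop**2 - 0.5 * phi**2
        if dE < 0.0 or math.exp(-beta * dE) >= rng.random():
            phi = prop
            accepted += 1
    return accepted / steps
```

With `delta = 0.1` the acceptance ratio sits close to 1, while with `delta = 5.0` it drops far lower; both chains sample the same distribution.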
Quite a lot of insight can be gained from experience; I was just wondering if anybody has seen something similar to this before. The plot shows the initial condition (green) for the advection-diffusion equation, then the solution at iteration 200 (blue) and then again at iteration 400 (red). The solution of the advection-diffusion equation blows up after a few iterations. The Péclet number is $\mu\approx0.07$, and the CFL condition is satisfied, $C\approx 0.0015$, so the scheme should be stable. I suspect I have a bug in the numerical code. Background. The discretisation is central difference for both the advection and diffusion terms (second-order accurate in space). I have implemented this using a finite-volume approach (for the first time) in which the coefficient (velocity and diffusion coefficient) values at the cell faces are found by linear interpolation from the cell averages. I apply Robin boundary conditions on the left and right surfaces and set the flux at the boundaries to zero. How do you debug your numerical code? Has anybody seen something like this before, and where would be a good place to start looking? Update. Here are my personal "lab book" style notes on implementing a finite volume method for the advection-diffusion equation, http://danieljfarrell.github.io/FVM/ The Python source code is available here, http://github.com/danieljfarrell/FVM.git Update. The solution couldn't be more simple! I just made a sign error on the diffusion term. It's strange; I'm sure if I had not posted this I would not have found the error! If someone wants to share tips on how they debug their numerical code I am still interested. I don't have a method; it's a bit hit and miss. I keep trying stuff to get clues, but this process can take weeks (sometimes). Proof it works (N.B. with the finite-volume method all you need to do to calculate the area is a summation of width $\times$ height for all the cells; if you use an integration method such as numpy.trapz your results include the numerical error of the trapezium method). What is happening here? There are constant velocity and diffusion coefficients but closed boundary conditions. Therefore at the boundary we see the equilibrium between the velocity field pushing to the right and diffusion pushing to the left.
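For readers hitting the same wall: the sign convention is easy to check in a stripped-down version of the scheme. A minimal Python sketch (simplified relative to the post's setup: periodic rather than zero-flux Robin boundaries, constant coefficients, explicit time stepping) of the central-difference advection-diffusion update. The diffusion term must enter with a plus sign; flipping it to minus turns diffusion into anti-diffusion, which is unconditionally unstable and reproduces exactly this kind of blow-up.

```python
import numpy as np

# Central-difference advection-diffusion on a periodic grid, constant
# velocity v and diffusivity D, forward-Euler in time.
N, L = 100, 1.0
dx = L / N
v, D = 1.0, 0.1
dt = 0.2 * dx**2 / D          # safely inside the diffusive stability limit

x = np.linspace(0.0, L, N, endpoint=False)
u = np.exp(-((x - 0.5) / 0.05)**2)       # Gaussian initial condition
mass0 = u.sum() * dx

for _ in range(500):
    up, um = np.roll(u, -1), np.roll(u, 1)
    adv = -v * (up - um) / (2 * dx)
    dif = D * (up - 2 * u + um) / dx**2   # a sign error here (-D) causes blow-up
    u = u + dt * (adv + dif)
```

On a periodic grid both central-difference terms telescope, so total mass is conserved to rounding; that is a cheap invariant to assert while debugging this kind of solver.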
Let $T$ be a self-adjoint operator with bounded spectrum $\sigma(T)$. Does this imply that $T$ is bounded? I would say: yes! My attempt: $$\|T\| = \sup_{\|x\|=1} |\langle Tx,x \rangle| = \sup_{\|x\|=1} \left|\int_{\sigma(T)} \lambda \,d\mu_{x,x}(\lambda)\right| \le \sup_{t \in \sigma(T)}|t| \cdot \sup_{\|x\|=1} \|\mu_{x,x}\| = \sup_{t \in \sigma(T)}|t| < \infty,$$ using that $\|\mu_{x,x}\| = \|x\|^2 = 1$ and that $\sigma(T)$ is bounded. I am new to the spectral theorem, so I don't know if my proof is correct.
Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a function. A level set is a set of points: $$L(c) = \{x \in \mathbb{R}^n \mid f(x) = c\}$$ Two vectors $a, b \in \mathbb{R}^n$ are perpendicular when their dot product is 0: $$a \perp b :\Leftrightarrow a \cdot b = \sum_{i=1}^n a_i b_i = 0$$ The gradient of $f$ is $$\nabla f = \begin{pmatrix}\frac{\partial f}{\partial x_1}\\ \frac{\partial f}{\partial x_2}\\ \vdots\\ \frac{\partial f}{\partial x_n}\end{pmatrix}$$ Question: Why is $\nabla f(p)$ at any given point $p \in \mathbb{R}^n$ perpendicular to the level set $L(f(p))$? What does it mean anyway to be perpendicular to the level set? Does it mean the tangent of the level set at this point is perpendicular to the gradient at this point? How do I get the tangent? Are there any important implications of this? Context: I found the question "why is the level curve perpendicular to the gradient" in an exam protocol for probabilistic planning.
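The chain rule gives the short answer: if $\gamma(t)$ is any curve lying inside the level set, then $f(\gamma(t)) = c$ is constant, so $\frac{d}{dt} f(\gamma(t)) = \nabla f(\gamma(t)) \cdot \gamma'(t) = 0$, i.e. the gradient is perpendicular to every tangent vector of the level set. A numeric check for the hypothetical example $f(x,y)=x^2+2y^2$, whose level set $f=c$ is an ellipse:

```python
import math

# f(x, y) = x^2 + 2 y^2; the level set f = c is the ellipse
# g(t) = (sqrt(c) cos t, sqrt(c/2) sin t), with tangent g'(t).
def grad_f(x, y):
    return (2 * x, 4 * y)

c = 3.0
a, b = math.sqrt(c), math.sqrt(c / 2)

for k in range(8):
    t = k * math.pi / 4
    x, y = a * math.cos(t), b * math.sin(t)          # point on the level set
    tx, ty = -a * math.sin(t), b * math.cos(t)       # tangent vector g'(t)
    gx, gy = grad_f(x, y)
    assert abs(gx * tx + gy * ty) < 1e-9             # gradient ⊥ tangent
```

So "perpendicular to the level set" means exactly "perpendicular to every tangent vector of the level set at that point".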
Let's just do an example. Let's find the continued fraction for $\def\sf{\sqrt 5}\sf$. $\sf\approx 2.23$ or something, and $a_0$ is the integer part of this, which is 2. Then we subtract $a_0$ from $\sf$ and take the reciprocal. That is, we calculate ${1\over \sf-2}$. If you're using a calculator, this comes out to 4.23 or so. Then $a_1$ is the integer part of this, which is 4. So: $$\sf=2+\cfrac{1}{4+\cfrac1{\vdots}}$$ where we haven't figured out the $\vdots$ part yet. To get that, we take our $4.23$, subtract $a_1$, and take the reciprocal; that is, we calculate ${1\over 4.23 - 4} \approx 4.23$. This is just the same as we had before, so $a_2$ is 4 again, and continuing in the same way, $a_3 = a_4 = \ldots = 4$: $$\sf=2+\cfrac{1}{4+\cfrac1{4+\cfrac1{4+\cfrac1\vdots}}}$$ This procedure will work for any number whatever, but for $\sf$ we can use a little algebraic cleverness to see that the fours really do repeat. When we get to the ${1\over \sf-2}$ stage, we apply algebra to convert this to ${1\over \sf-2}\cdot{\sf+2\over\sf+2} = \sf+2$. So we could say that: $$\begin{align}\sf & = 2 + \cfrac 1{2+\sf}\\2 + \sf & = 4 + \cfrac 1{2+\sf}.\end{align}$$ If we substitute the right-hand side of the last equation into itself in place of $2+\sf$, we get: $$ \begin{align}2+ \sf & = 4 + \cfrac 1{4 + \cfrac 1{2+\sf}} \\ & = 4 + \cfrac 1{4 + \cfrac 1{4 + \cfrac 1{2+\sf}}} \\& = 4 + \cfrac 1{4 + \cfrac 1{4 + \cfrac 1{4 + \cfrac 1{2+\sf}}}} \\& = \cdots\end{align}$$ and it's evident that the fours will repeat forever.
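The subtract-and-reciprocate procedure described above is a few lines of code. A sketch in Python; floating point is fine for a handful of terms, though exact arithmetic would be needed for long expansions:

```python
import math

def cf_terms(x, n):
    """First n continued-fraction terms of x: repeatedly take the integer
    part, subtract it, and take the reciprocal of the remainder."""
    terms = []
    for _ in range(n):
        a = math.floor(x)
        terms.append(a)
        x = 1.0 / (x - a)
    return terms
```

Running `cf_terms(math.sqrt(5), 6)` reproduces the expansion $[2; 4, 4, 4, \ldots]$ derived algebraically above.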
I like this question very much. But I think the best approach is via a plethora of examples meant to demonstrate the variety of uses of equivalence classes. I doubt there is a singular example that can open every student's mind to the concept of equivalence classes. That said, here are some examples I have used effectively: 1) I have taught a "transition to proofs" course for a few years, and have included the following sequence of exercises. During the first few weeks or so, after working with sets and their notation, I assign this exercise: What's $\mathbb{Z}$ Point Of This Problem?: In this problem, we are going to ''prove'' the existence of the negative integers! I say ''prove'' because we won't really understand what we've done until later but, trust me, it's what we're doing. Because of this goal, you cannot assume any integers strictly less than 0 exist, so your algebraic steps, especially in part (d), should not involve any terms that might be negative. That is, if you consider an equation like $x+y=x+z$, we can deduce that $y=z$, by subtracting $x$ from both sides, since $x-x=0$. However, if we consider an equation like $x+y=z+w$, we cannot deduce that $x-z=w-y$. Perhaps $y>w$, so $w-y$ does not exist in our context... On to the problem! Let $P=\mathbb{N}\times\mathbb{N}$. Define the set $R$ by $$ R = \{((a,b),(c,d))\in P\times P\mid a+d=b+c\} $$ (a) Find three different pairs $(c,d)$ such that $((1,4),(c,d))\in R$. (b) Let $(a,b)\in P$. Prove that $((a,b),(a,b))\in R$. (c) Let $((a,b),(c,d))\in R$. Prove that $((c,d),(a,b))\in R$, as well. (d) Assume $((a,b),(c,d))\in R$ and $((c,d),(e,f))\in R$. Prove that $((a,b),(e,f))\in R$, as well. I pose this mainly as a "can you understand new notation and write a proof about it" problem, and say as much to the students. But a few weeks later on, when we're talking about equivalence relations, I bring up this exercise again.
I even write a passage in our book about this: Remember that crazy exercise from Chapter 3 that had you prove something about a set of pairs of pairs of natural numbers, and we claimed that was proving something about the existence of the integers? What was that all about? Look back at the exercise now, Exercise [reference]. You'll see that the last three parts of the problem have you prove that the set $R$ we defined is an equivalence relation on the set $P=\mathbb{N}\times\mathbb{N}$. Look at that! You proved $R$ is reflexive, symmetric, and transitive. What that exercise showed is that (essentially, we are glossing over some details here) any negative integer is represented as the equivalence class of pairs of integers whose difference is that negative integer. That is, $$ -1\;\; \text{''}=\text{''}\;\; [(1,2)]_R = \{(1,2),(2,3),(3,4),\dots\} $$ and, for another example, $$ -3\;\;\text{''}=\text{''}\;\; [(1,4)]_R=\{(1,4),(2,5),(3,6),\dots\}$$ This is only an intuitive explanation and not rigorous, mathematically speaking, but that's the idea! For the students who might already be inclined to think abstractly and want to pursue higher math, this is a great teaser, and has led to many discussions in office hours about set theory, logic, and so on. For other students, it's a reminder that exercises from the past weren't done in a vacuum; they have a meaning, and can teach us new things. And for every student, it's at least a reminder that math is interconnected in ways we might not expect, a priori. 2) An in-class example I like to discuss involves comparing an equivalence relation to a similar (non-equivalence) relation that is meant to "encode the same information". I include the following example in the text, and follow up on it with an in-class discussion, which also serves to point out the distinction between "a relation from $A$ to $B$" and a "relation on $A$": Let $S$ be the set of students in our class. 
Define a relation $R_1$ between $S$ and $\mathbb{N}$ by saying $(s,n)\in R_1$ if person $s\in S$ is $n$ years old. Now, define a relation $R_2$ on $S$ itself by saying $(s,t)\in R_2$ if persons $s$ and $t$ are the same age (in years). How do the relations $R_1$ and $R_2$ compare? Do they somehow ''encode'' the same information about the elements of the set $S$? Why or why not? 3) You mention $\mathbb{Z}/n\mathbb{Z}$, naturally. I think it behooves us to show why this is useful, not just that it's an equivalence relation. In the past, I have demonstrated exemplary uses via the Chinese Remainder Theorem, Fermat's Little Theorem, etc. But I've found that the most striking (and convincing) uses are ''smaller'', computationally, and serve to show how previously tedious arguments can be cleaned up. For instance, a standard induction problem asks a student to show $\forall n\in\mathbb{N}$ that $6\mid n^3+5n$. A standard induction argument requires some algebraic manoeuvring, and ends up being entirely unenlightening for the beginning student (for whom this is meant to be practice relating to the inductive nature of such relationships). Instead, I go back and use mod 6 and say, ''There are 6 cases. Either $n$ is congruent to 0, 1, 2, 3, 4, or 5 modulo 6. In each case, we see...'' And there they have it. Likewise, consider proving that any perfect square is either a multiple of 4 or one more than a multiple of 4. This can be good practice working with formal definitions (''multiple of 4 means there exists $k\in\mathbb{Z}$ such that...'') but it's far more ''fun'' for them to just see that $0^2\equiv 2^2\equiv 0$ and $1^2\equiv 3^2\equiv 1$. Finally, divisibility tricks are fun, too. I find that college students are well aware of the ''casting out 3s/9s'' trick, but are wholly unaware of how it works. Setting up the congruence $10\equiv 1\pmod 9$ to prove it really shows some "aha"s and smiles.
In summary, I don't think it's necessary to show a typical undergraduate the sophisticated concept of ''modding out'' a set by an equivalence relation. In a classroom setting, it usually suffices to whet their appetite by showing students the utility of equivalence relations in various settings. (In the examples above, this means (i) formally defining $\mathbb{Z}$ from $\mathbb{N}$, (ii) comparing an equivalence relation and a ''regular'' relation, and (iii) using equivalence classes to clean up a formerly ''messy'' argument.) If particular students are intrigued by this, then you might foster further discussions, either in class or in office hours (depending on the popularity/prevalence).
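The construction in example (1), an integer as the equivalence class of pairs $(a,b)$ of naturals with $(a,b)\sim(c,d)$ iff $a+d=b+c$, can be sketched in a few lines of Python (illustrative names, not part of the original exercise):

```python
# "a - b = c - d" without ever subtracting into the negatives:
def related(p, q):
    (a, b), (c, d) = p, q
    return a + d == b + c

def canon(p):
    """Canonical representative of a class: subtract min(a, b) from both
    components, so every class has a unique representative (k, 0) or (0, k)."""
    a, b = p
    m = min(a, b)
    return (a - m, b - m)

# (1, 4) and (2, 5) both represent "-3"
assert related((1, 4), (2, 5))
assert canon((1, 4)) == canon((2, 5)) == (0, 3)
```

Comparing canonical representatives is one concrete way to make "equal as equivalence classes" computable.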
Linear Regression and ANOVA shaken and stirred (Part 1). Mon, Mar 20, 2017. Updated 2018-03-27. Motivation. Linear Regression and ANOVA are understood as separate concepts most of the time. The truth is that they are closely related, ANOVA being a particular case of Linear Regression. Even worse, it's quite common that students memorize equations and tests instead of trying to understand the Linear Algebra and Statistics concepts that can keep you away from misleading results, but that is material for another entry. Most textbooks present econometric concepts and algebraic steps but seldom emphasise the relationship between Ordinary Least Squares, Maximum Likelihood and other methods used to obtain estimates in Linear Regression. Here I present a combination of a little algebra and R commands to try to clarify some concepts. Linear Regression. Let \(\renewcommand{\vec}[1]{\boldsymbol{#1}} \newcommand{\R}{\mathbb{R}} \vec{y} \in \R^n\) be the outcome and \(X \in \R^{n\times (p+1)}\) be the design matrix in the context of a general model with intercept: \[\vec{y} = X\vec{\beta} + \vec{e}\] Being: \[ \underset{n\times 1}{\vec{y}} = \begin{pmatrix}y_1 \cr y_2 \cr \vdots \cr y_n\end{pmatrix} \text{ and } \underset{n\times (p+1)}{X} = \begin{pmatrix}1 & x_{11} & \cdots & x_{1p} \cr 1 & x_{21} & \cdots & x_{2p} \cr \vdots & \vdots & \ddots & \vdots \cr 1 & x_{n1} & \cdots & x_{np}\end{pmatrix} = (\vec{1} \: \vec{x}_1 \: \ldots \: \vec{x}_p) \] In linear models the aim is to minimize the error term by choosing \(\hat{\vec{\beta}}\).
One possibility is to minimize the squared error by solving this optimization problem: \[ \begin{equation} \label{min} \displaystyle \min_{\vec{\beta}} S = \|\vec{y} - X\vec{\beta}\|^2 \end{equation} \] Books such as Baltagi discuss how to solve \(\eqref{min}\) and other equivalent approaches that result in this optimal estimator: \[ \begin{equation} \label{beta} \hat{\vec{\beta}} = (X^tX)^{-1} X^t\vec{y} \end{equation} \] With one independent variable and intercept, this is \(y_i = \beta_0 + \beta_1 x_{i1} + e_i\), and equation \(\eqref{beta}\) reduces to: \[ \begin{equation} \label{beta2} \hat{\beta}_1 = \operatorname{cor}(\vec{y},\vec{x}) \cdot \frac{\operatorname{sd}(\vec{y})}{\operatorname{sd}(\vec{x})} \text{ and } \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x} \end{equation} \] Coding example with the mtcars dataset. Consider the model: \[mpg_i = \beta_1 wt_i + \beta_2 cyl_i + e_i\] This is how to write that model in R notation:

lm(mpg ~ wt + cyl, data = mtcars)

Call:
lm(formula = mpg ~ wt + cyl, data = mtcars)

Coefficients:
(Intercept)           wt          cyl
     39.686       -3.191       -1.508

Or written in matrix form:

y <- mtcars$mpg
x0 <- rep(1, length(y))
x1 <- mtcars$wt
x2 <- mtcars$cyl
X <- cbind(x0, x1, x2)

It’s the same to use lm or to perform a matrix multiplication because of equation \(\eqref{beta}\):

fit <- lm(y ~ x1 + x2)
coefficients(fit)

(Intercept)          x1          x2
  39.686261   -3.190972   -1.507795

beta <- solve(t(X)%*%X) %*% (t(X)%*%y)
beta

         [,1]
x0  39.686261
x1  -3.190972
x2  -1.507795

Coding example with the Galton dataset. Equation \(\eqref{beta2}\) can be verified with R commands:

if (!require(pacman)) install.packages("pacman")
p_load(HistData)
# read the documentation
# ??Galton
y <- Galton$child
x <- Galton$parent
beta1 <- cor(y, x) * sd(y) / sd(x)
beta0 <- mean(y) - beta1 * mean(x)
c(beta0, beta1)

[1] 23.9415302  0.6462906

# comparing with lm results
lm(y ~ x)

Call:
lm(formula = y ~ x)

Coefficients:
(Intercept)            x
    23.9415       0.6463

Coding example with the mtcars dataset and mean centered regression. Another possibility in linear models is to rewrite the observations in the
outcome and the design matrix with respect to the mean of each variable. That will only alter the intercept but not the slope coefficients. So, for a model like \(y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + e_i\) I can write the equivalent model: \[y_i - \bar{y} = \beta_0 + \beta_1 (x_{i1} - \bar{x}_{1}) + \beta_2 (x_{i2} - \bar{x}_{2}) + e_i\] Another possibility is to consider that \(\bar{y} = \beta_0 + \beta_1 \bar{x}_{1} + \beta_2 \bar{x}_{2} + 0\) under the classical assumption \(\bar{e} = 0\) and, subtracting, I obtain: \[y_i - \bar{y} = \beta_1 (x_{i1} - \bar{x}_{1}) + \beta_2 (x_{i2} - \bar{x}_{2}) + e_i\] I’ll analyze the first case, without dropping \(\beta_0\) unless there’s statistical evidence to show it’s not significant. In R notation the model \(y_i - \bar{y} = \beta_0 + \beta_1 (x_{i1} - \bar{x}_{1}) + \beta_2 (x_{i2} - \bar{x}_{2}) + e_i\) can be fitted in this way:

# read the documentation
# ??mtcars
new_y <- mtcars$mpg - mean(mtcars$mpg)
new_x1 <- mtcars$wt - mean(mtcars$wt)
new_x2 <- mtcars$cyl - mean(mtcars$cyl)
fit2 <- lm(new_y ~ new_x1 + new_x2)
coefficients(fit2)

 (Intercept)       new_x1       new_x2
5.996835e-16 -3.190972e+00 -1.507795e+00

new_X <- cbind(x0, new_x1, new_x2)
new_beta <- solve(t(new_X) %*% new_X) %*% (t(new_X) %*% new_y)
new_beta

                [,1]
x0      5.769401e-16
new_x1 -3.190972e+00
new_x2 -1.507795e+00

Here the intercept is close to zero, so I can obtain more information to check significance:

summary(fit2)

Call:
lm(formula = new_y ~ new_x1 + new_x2)

Residuals:
    Min      1Q  Median      3Q     Max
-4.2893 -1.5512 -0.4684  1.5743  6.1004

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  5.997e-16  4.539e-01   0.000 1.000000
new_x1      -3.191e+00  7.569e-01  -4.216 0.000222 ***
new_x2      -1.508e+00  4.147e-01  -3.636 0.001064 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.'
0.1 ' ' 1

Residual standard error: 2.568 on 29 degrees of freedom
Multiple R-squared: 0.8302, Adjusted R-squared: 0.8185
F-statistic: 70.91 on 2 and 29 DF, p-value: 6.809e-12

In this particular case I should drop the intercept because it’s not significant, so I write:

fit3 <- lm(new_y ~ new_x1 + new_x2 - 1)
coefficients(fit3)

   new_x1    new_x2
-3.190972 -1.507795

new_X <- cbind(new_x1, new_x2)
new_beta <- solve(t(new_X) %*% new_X) %*% (t(new_X) %*% new_y)
new_beta

            [,1]
new_x1 -3.190972
new_x2 -1.507795

Residuals

The total sum of squares is defined as the sum of explained and residual (or unexplained) sum of squares or, in other words, the sum of explained and unexplained variance in the model: \[TSS = ESS + RSS = \sum_i (\hat{y}_i - \bar{y})^2 + \sum_i (y_i - \hat{y}_i)^2 = \sum_i (y_i - \bar{y})^2 \] being \(\hat{\vec{y}} = X\hat{\vec{\beta}}\). Here \(ESS\) has \(p\) degrees of freedom and \(RSS\) has \(n-p-1\) degrees of freedom, so the F-statistic \[ F = \frac{ESS/p}{RSS/(n-p-1)} \] follows an \(F(p, n-p-1)\) distribution. This statistic tests the null hypothesis \(\vec{\beta} = \vec{0}\). That is, the F-statistic provides information about the joint effect of all the variables in the model together, and therefore p-values are required to determine single coefficients’ significance.

ANOVA

The term analysis of variance refers to categorical predictors, so ANOVA is a particular case of the linear model that works around the statistical test just described and the difference in group means. ANOVA is a particular case of the linear model where predictors (or independent variables) are dummy variables that reflect if an observation belongs to a certain group. An example of this would be \(x_{i1} = 1\) if observation \(i\) belongs to a group of interest (e.g. the interviewed person is in the group of people who have a Twitter account) and \(x_{i1} = 0\) otherwise.
The null hypothesis in ANOVA is “group means are all equal”, as I’ll explain with examples. This comes from the fact that regression coefficients in ANOVA measure the effect of belonging to a group, and, as explained about the F-test, you can examine the p-value associated with a regression coefficient to check if the group effect is statistically different from zero (e.g. if you have a group of people who use social networks and a subgroup of people who use Twitter, then if the dummy variable that expresses Twitter use has a non-significant regression coefficient, you have no evidence to state that the group means are different).

An example with mtcars dataset

In the mtcars dataset, am can be useful to explain ANOVA as its observations are defined as: \[am_i = \begin{cases}1 &\text{ if car } i \text{ is manual} \cr 0 &\text{ if car } i \text{ is automatic}\end{cases}\]

Case 1

Consider a model where the outcome is mpg and the design matrix is \(X = (\vec{x}_1 \: \vec{x}_2)\) so that the terms are defined in this way:

y <- mtcars$mpg
x1 <- mtcars$am
x2 <- ifelse(x1 == 1, 0, 1)

This is: \[ x_1 = \begin{cases}1 &\text{ if car } i \text{ is manual} \cr 0 &\text{ if car } i \text{ is automatic}\end{cases} \quad \quad x_2 = \begin{cases}1 &\text{ if car } i \text{ is automatic} \cr 0 &\text{ if car } i \text{ is manual}\end{cases} \]

The estimates without intercept would be:

fit <- lm(y ~ x1 + x2 - 1)
fit$coefficients

      x1       x2
24.39231 17.14737

Taking \(\eqref{beta}\) and replacing in this particular case would result in this estimate: \[ \hat{\vec{\beta}} = \begin{bmatrix}\bar{y}_1 \cr \bar{y}_2 \end{bmatrix} \] being \(\bar{y}_1\) and \(\bar{y}_2\) the group means. This can be verified with R commands:

y1 <- y*x1; y1 <- ifelse(y1 == 0, NA, y1)
y2 <- y*x2; y2 <- ifelse(y2 == 0, NA, y2)
mean(y1, na.rm = TRUE)

[1] 24.39231

mean(y2, na.rm = TRUE)

[1] 17.14737

If you are not convinced of this result you can write down the algebra or use R commands.
I’ll do the last with the notation \(U = (X^tX)^{-1}\) and \(V = X^t\vec{y}\):

X <- cbind(x1, x2)
U <- solve(t(X) %*% X)
V <- t(X) %*% y
U; V; U %*% V

           x1         x2
x1 0.07692308 0.00000000
x2 0.00000000 0.05263158

     [,1]
x1  317.1
x2  325.8

       [,1]
x1 24.39231
x2 17.14737

The entries of \(U\) are just one over the number of observations in each group, and the entries of \(V\) are the sums of the mpg observations in each group, so the entries of \(UV\) are the means of each group:

u11 <- 1/sum(x1)
u22 <- 1/sum(x2)
v11 <- sum(y1, na.rm = TRUE)
v21 <- sum(y2, na.rm = TRUE)
u11; u22

[1] 0.07692308
[1] 0.05263158

v11; v21

[1] 317.1
[1] 325.8

u11*v11; u22*v21

[1] 24.39231
[1] 17.14737

Aside from algebra, now I’ll show the equivalency between lm and aov, the command used to perform an analysis of variance:

y <- mtcars$mpg
x1 <- mtcars$am
x2 <- ifelse(x1 == 1, 0, 1)
fit2 <- aov(y ~ x1 + x2 - 1)
fit2$coefficients

      x1       x2
24.39231 17.14737

Case 2

Changing the design matrix to \(X = (\vec{1} \: \vec{x}_1)\) will lead to the estimate: \[ \hat{\vec{\beta}} = \begin{bmatrix}\bar{y}_2 \cr \bar{y}_1 - \bar{y}_2 \end{bmatrix} \] Fitting the model results in:

y <- mtcars$mpg
x1 <- mtcars$am
fit <- lm(y ~ x1)
fit$coefficients

(Intercept)          x1
  17.147368    7.244939

So to see the relationship between the estimates and the group means I need additional steps:

x0 <- rep(1, length(y))
X <- cbind(x0, x1)
beta <- solve(t(X) %*% X) %*% (t(X) %*% y)
beta

        [,1]
x0 17.147368
x1  7.244939

I did obtain the same estimates as with the lm command, so now I calculate the group means:

x2 <- ifelse(x1 == 1, 0, 1)
x1 <- ifelse(x1 == 0, NA, x1)
x2 <- ifelse(x2 == 0, NA, x2)
m1 <- mean(y*x1, na.rm = TRUE)
m2 <- mean(y*x2, na.rm = TRUE)
beta0 <- m2
beta1 <- m1 - m2
beta0; beta1

[1] 17.14737
[1] 7.244939

In this case this means that the slope for the two groups is the same but the intercept is different, and therefore there exists, in average terms, a positive effect of manual transmission on miles per gallon.
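The Case 2 identity, \(\hat{\beta}_0 = \bar{y}_2\) and \(\hat{\beta}_1 = \bar{y}_1 - \bar{y}_2\), holds for any data with a single 0/1 dummy regressor, not just mtcars. A minimal pure-Python sketch with made-up numbers (illustrative only):

```python
# Made-up outcomes and a 0/1 dummy; check that the intercept equals the
# mean of group 0 and the slope equals the difference of group means,
# using the usual simple-regression closed forms.
y = [24.0, 26.0, 22.0, 16.0, 18.0]
x = [1, 1, 1, 0, 0]

n = len(y)
mx, my = sum(x) / n, sum(y) / n
b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
     / sum((xi - mx) ** 2 for xi in x)
b0 = my - b1 * mx

m1 = sum(yi for yi, d in zip(y, x) if d == 1) / sum(x)        # group 1 mean
m2 = sum(yi for yi, d in zip(y, x) if d == 0) / (n - sum(x))  # group 0 mean
```

With these numbers, b0 matches m2 and b1 matches m1 - m2, exactly as in the mtcars fit above.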
Again I’ll verify the equivalency between lm and aov in this particular case:

y <- mtcars$mpg
x1 <- mtcars$am
x2 <- ifelse(x1 == 1, 0, 1)
fit2 <- aov(y ~ x1)
fit2$coefficients

(Intercept)          x1
  17.147368    7.244939

A simpler way to write the model is:

fit3 <- lm(mpg ~ am, data = mtcars)
summary(fit3)

Call:
lm(formula = mpg ~ am, data = mtcars)

Residuals:
    Min      1Q  Median      3Q     Max
-9.3923 -3.0923 -0.2974  3.2439  9.5077

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   17.147      1.125  15.247 1.13e-15 ***
am             7.245      1.764   4.106 0.000285 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 4.902 on 30 degrees of freedom
Multiple R-squared: 0.3598, Adjusted R-squared: 0.3385
F-statistic: 16.86 on 1 and 30 DF, p-value: 0.000285

I can calculate the residuals by hand:

mean_mpg <- mean(mtcars$mpg)
fitted_mpg <- fit3$coefficients[1] + fit3$coefficients[2]*mtcars$am
observed_mpg <- mtcars$mpg
TSS <- sum((observed_mpg - mean_mpg)^2)
ESS <- sum((fitted_mpg - mean_mpg)^2)
RSS <- sum((observed_mpg - fitted_mpg)^2)
TSS; ESS; RSS

[1] 1126.047
[1] 405.1506
[1] 720.8966

Here it’s verified that \(TSS = ESS + RSS\), but aside from that I can extract information from aov:

summary(fit2)

            Df Sum Sq Mean Sq F value   Pr(>F)
x1           1  405.2   405.2   16.86 0.000285 ***
Residuals   30  720.9    24.0
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

And check that, as expected, \(ESS\) is the variance explained by x1. I can also run ANOVA over lm with:

anova(fit3)

Analysis of Variance Table

Response: mpg
          Df Sum Sq Mean Sq F value   Pr(>F)
am         1 405.15  405.15   16.86 0.000285 ***
Residuals 30 720.90   24.03
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The table provides information on the effect of am over mpg. In this case the null hypothesis is rejected because of the large F-value and the associated p-value.
Considering a 0.05 significance threshold I can say, at the 95% confidence level, that the regression slope is statistically different from zero, or that there is a difference in group means between automatic and manual transmission.
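The closed-form slope and intercept in \(\eqref{beta2}\) are not tied to R. A minimal pure-Python sketch with made-up data (illustrative numbers, not from mtcars or Galton) that also cross-checks the slope against the normal-equation form:

```python
import math

# Verify slope = cor(y, x) * sd(y) / sd(x) and intercept = mean(y) - slope*mean(x)
# on a small made-up data set, using only the standard library.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / (n - 1))
sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / (n - 1))
r = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / ((n - 1) * sx * sy)

beta1 = r * sy / sx            # slope from (beta2)
beta0 = my - beta1 * mx        # intercept from (beta2)

# Cross-check: the normal-equation slope sum((x-mx)(y-my)) / sum((x-mx)^2)
beta1_ne = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
           / sum((xi - mx) ** 2 for xi in x)
```

Both formulas give the same slope because \(r \cdot s_y / s_x = \mathrm{cov}(x,y)/s_x^2\).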
Background: I understand that inter-molecular van der Waals' forces are responsible for maintaining water in the liquid phase. Now, if we suppose that the net van der Waals' force on a given H2O molecule suspended in liquid phase H2O is due to its interaction with a very large number of neighbouring H2O molecules, I wonder whether we can estimate the average van der Waals' force on this molecule given that the boiling point of water is known to be 373 K. Method: Here's what I have tried (although my approach may be incorrect as this is entirely based on independent study): The relation between average speed, molar mass and absolute temperature is given by the following equation: $$\begin{equation} v_{rms} = \sqrt{\frac{3RT}{M_{H_2O}}} \tag{1} \end{equation}$$ where $M_{H_2O} = 18.02 \space g \cdot mol^{-1}$, $T = 373 \space K$, and $R = 8.314 \space J \cdot mol^{-1} \cdot K^{-1}$. From $(1)$ we may determine that for a particular molecule at boiling point the expected final kinetic energy, $\mathbb{E}[E_f]$, is given by: $$\begin{equation} \mathbb{E}[E_f] = \frac{1}{2} \cdot m_{H_2O} \cdot v_{rms}^2 \tag{2} \end{equation}$$ where $m_{H_2O} \approx 3 \cdot 10^{-26} \space kg$ is the mass of a single water molecule and $v_{rms} \approx 718 \space m \cdot s^{-1}$. To obtain the average initial kinetic energy $\mathbb{E}[E_i]$ of the average water molecule in liquid phase at boiling point we may assume that this particle would cover the same distance in the same time interval if not for the significantly greater density of liquids: Let's suppose that the density of water in liquid phase at boiling point is approximately given by $\rho_l = 10^3 \space kg \cdot m^{-3}$. To determine the density in the gas phase at atmospheric pressure (the pressure at the normal boiling point) we have: $$\begin{equation} \rho_{g} = \frac{m}{V} = M_{H_2O} \cdot \frac{P}{RT} = 18.02 \space g \cdot mol^{-1} \cdot \frac{1 \space atm}{0.0821 \frac{atm \cdot L}{mol \cdot K} \cdot 373 \space K} \approx 0.59 \space kg \cdot m^{-3} \tag{3} \end{equation}$$ From $\rho_g$ and $\rho_l$ we can infer that the ratio of average inter-molecular distances is approximately: $$\begin{equation} \frac{D_l}{D_g} \approx \big(\frac{\rho_g}{\rho_l}\big)^{\frac{1}{3}} \approx 8.4 \% \tag{4} \end{equation}$$ where I suspect that if the molecules may be modelled as hard spheres there's probably some kind of hexagonal lattice structure. Using the previous calculations we may estimate the average work done to overcome the van der Waals' forces to change from the liquid to the gas phase as follows: $$\begin{equation} \mathbb{E}[W] = \mathbb{E}[E_f] - \mathbb{E}[E_i] = \frac{1}{2} m_{H_2O} v_{rms}^2 \left(1 - 0.084^2\right) \approx 7.7 \cdot 10^{-21} \space J \tag{5} \end{equation}$$ Now, if we assume that on average the particles are approximately uniformly distributed in space in the liquid phase then the average inter-molecular distance in the liquid phase may be determined by calculating the number of particles per metre: $$\begin{equation} N^{\frac{1}{3}} = \big(\frac{\rho_l}{M_{H_2O}} \cdot 6 \cdot 10^{23}\big)^{\frac{1}{3}} \approx 3.2 \cdot 10^{9} \space m^{-1} \tag{6} \end{equation}$$ and from this we deduce that the inter-molecular distance is approximately $0.31 \space nm$. It follows that the vdW force is on the order of: $$\begin{equation} \mathbb{E}[F_{vdw}] = \frac{\mathbb{E}[W]}{\mathbb{E}[D]} = \frac{7.7 \cdot 10^{-21} \space J}{0.31 \cdot 10^{-9} \space m} \approx 2.5 \cdot 10^{-11} \space N \tag{7} \end{equation}$$ Any constructive thoughts and comments are welcome. Note: All of these calculations appear to make sense to me but it's not yet clear how I can be certain that they aren't completely off. I mean this calculation isn't coupled with a testable hypothesis, although I think I can figure out whether this estimate is reasonable by using this estimate of vdW force magnitudes to predict how far a metal ball with mass $M$ would travel in a container filled with water after a time interval $T$ when released from a given height.
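The unit bookkeeping above is easy to slip on (grams vs kilograms, atm vs Pa), so here is a sketch redoing the whole chain of estimates in strict SI units; the numerical values are only as good as the assumptions in equations (1)–(7):

```python
import math

# Redo the estimate in SI units throughout.
R = 8.314          # J / (mol K)
T = 373.0          # K
M = 0.01802        # kg / mol, molar mass of water
NA = 6.022e23      # 1 / mol, Avogadro's number
P = 101325.0       # Pa (1 atm, pressure at the normal boiling point)
rho_l = 1000.0     # kg / m^3, liquid water

v_rms = math.sqrt(3 * R * T / M)            # eq. (1), ~718 m/s
m = M / NA                                  # mass of one molecule, ~3e-26 kg
rho_g = M * P / (R * T)                     # eq. (3), ~0.59 kg/m^3
ratio = (rho_g / rho_l) ** (1 / 3)          # eq. (4), ~0.084
W = 0.5 * m * v_rms**2 * (1 - ratio**2)     # eq. (5), ~7.7e-21 J
d = (M / (rho_l * NA)) ** (1 / 3)           # eq. (6), ~3.1e-10 m
F = W / d                                   # eq. (7), force scale
```

With consistent units the force comes out on the order of a few times 1e-11 N, which is in the range usually quoted for single hydrogen bonds probed by AFM.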
I have been pondering about this issue for some time... Say, I want to minimize a cost functional $$ \tilde J(u) = J(v(u),u) = \frac 12 \int_0^T (v-v_0)^2 + \alpha u^2 \, dt $$ subject to $$ \dot v = v^2 + u, \quad v(0)=0. $$ Then, from the first order necessary optimality conditions it follows that at an optimal point $(v^*,u^*)$, there is a $\lambda$ that solves $$ -\dot \lambda = 2v^*\lambda - (v^*-v_0), \quad \lambda(T)=0, \tag{1} $$ such that the gradient of $\tilde J$ is given as $$ D_u\tilde J(u^*) = \lambda + \alpha u^*=0. \tag{2} $$ From $(1)$ and $(2)$, I infer that $$u^*(T) = 0.$$ My Question: Is this right? Or is there something wrong in the argumentation. Two more remarks: I don't think that the answer lies in the right functional analytic formulation. (I have had a very close look at this). If one uses gradient based methods, then $(2)$ implies that the update is zero at $t=T$. Which means that the terminal value of the converged control will equal the terminal value of the initial guess. (I have seen this in a master thesis, where a fluid/structure interaction was successfully controlled by the adjoint based approach; see the screenshot) EDIT: The conclusion that $u^*(T) = 0$ (as well as the claim about the update) is not true. (Thanks to L.P. for pointing this out.) However, in practice, in a gradient descent method, one updates an initial guess $u_0$ via $u_1 = u_0 - s D_u\tilde J(u_0)$, so that $\lambda(T) = 0$ implies that $$ u_1(T) = (1-s\alpha)u_0(T). $$ Note that the step size is $s=\mathcal O(1)$ whereas $\alpha$ can be as small as $10^{-5}$, so that a gradient iteration, in fact, hardly affects the endpoint of the control.
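To make the endpoint observation concrete, a tiny numeric sketch (the values of $\alpha$, $s$ and $u_0(T)$ are made up): since $\lambda(T)=0$, the gradient at $t=T$ reduces to $\alpha u_0(T)$, so one descent step only rescales the endpoint by $(1-s\alpha)$:

```python
# One gradient-descent step at the terminal time t = T.
alpha, s = 1e-5, 1.0   # regularization weight and step size (made up)
u0_T = 3.7             # arbitrary endpoint value of the initial guess
lam_T = 0.0            # terminal condition of the adjoint equation (1)

grad_T = lam_T + alpha * u0_T      # D_u J at t = T, from (2)
u1_T = u0_T - s * grad_T           # equals (1 - s*alpha) * u0_T
```

With $s\alpha = 10^{-5}$ the endpoint moves by only 0.001% per iteration, which is why the converged control's terminal value stays visually glued to the initial guess.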
B BHUSHAN Articles written in Pramana – Journal of Physics Volume 87 Issue 4 October 2016 Article ID 0056 Regular We have synthesized, characterized and studied the third-order nonlinear optical properties of two different nanostructures of polydiacetylene (PDA), PDA nanocrystals and PDA nanovesicles, along with silver nanoparticles-decorated PDA nanovesicles. The second molecular hyperpolarizability $\gamma (−\omega; \omega,−\omega,\omega$) of the samples has been investigated by antiresonant ring interferometric nonlinear spectroscopic (ARINS) technique using femtosecond mode-locked Ti:sapphire laser in the spectral range of 720–820 nm. The observed spectral dispersion of $\gamma$ has been explained in the framework of three-essential states model and a correlation between the electronic structure and optical nonlinearity of the samples has been established. The energy of two-photon state, transition dipole moments and linewidth of the transitions have been estimated. We have observed that the nonlinear optical properties of PDA nanocrystals and nanovesicles are different because of the influence of chain coupling effects facilitated by the chain packing geometry of the monomers. On the other hand, our investigation reveals that the spectral dispersion characteristic of $\gamma$ for silver nanoparticles-coated PDA nanovesicles is qualitatively similar to that observed for the uncoated PDA nanovesicles but bears no resemblance to that observed in silver nanoparticles. The presence of silver nanoparticles increases the $\gamma$ values of the coated nanovesicles slightly as compared to that of the uncoated nanovesicles, suggesting a definite but weak coupling between the free electrons of the metal nanoparticles and $\pi$ electrons of the polymer in the composite system. Our comparative studies show that the arrangement of polymer chains in polydiacetylene nanocrystals is more favourable for higher nonlinearity. 
Waecmaths questions

Question 1: Correct 0.04945 to two significant figures
Question 2: Simplify $\frac{5}{\sqrt{3}}-\frac{3}{\sqrt{2}}$
Question 3: Evaluate $7\tfrac{1}{2}-(2\tfrac{1}{2}+3)\div \tfrac{33}{2}$, correct to the nearest whole number
Question 4: Simplify the expression ${{\log }_{10}}18-{{\log }_{10}}2.88+{{\log }_{10}}16$
Question 5: Find the equation whose roots are 2 and $-3\tfrac{1}{2}$
Question 6: A man bought 220 mangoes at N5
Question 7: From a point
Question 8: A fair die is tossed once, what is the probability of obtaining neither 5 nor 2
Question 9: In the diagram $KL\parallel MN,\angle NMP={{30}^{\circ }}$ and $\angle NMP={{45}^{\circ }}$ find the size of the reflex $\angle KPM$
Question 10: In the diagram,
Question 11: Convert 101101
Question 12: Solve the equation ${{2}^{7}}={{8}^{5-x}}$
Question 13: Expand $(2x-3y)(x-5y)$
Question 14: Make
Question 15: The probability that John and James passes an examination are $\tfrac{3}{4}$ and $\tfrac{3}{5}$ respectively. Find the probability of both boys failing the examination.
Question 16: In the diagram,
Question 17: In the diagram, PQRS is a parallelogram and $\angle QRT={{30}^{\circ }}$. Find
Question 18: From a point
Question 19: The lengths of the adjacent sides of a right-angled triangle are
Question 20: What is the diameter of a circle of area 77cm
Question 21: Solve for $2x-3y=22$, $3x+2y=7$
Question 22: Solve the equation $\frac{2y-1}{3}-\frac{3y-1}{4}=1$
Question 23: In the diagram O is the centre of the circle, $\angle MON={{80}^{\circ }}$, $\angle LMO={{10}^{\circ }}$ and $\angle LNO={{15}^{{}^\circ }}$. Calculate the value of
Question 24: If ${{\log }_{9}}x=1.5$ find
Question 25: A sequence is given 2½, 5, 7½, … if the
Question 26: Given that one of the roots of the equation $2{{x}^{2}}+(k+2)x+k=0$ is 2.
Find the value of
Question 27: The figure shows a quadrilateral
Question 28: Find the mean of the numbers 1, 3, 4, 8, 8, 4 and 7
Question 29: What is the total surface area of a closed cylinder of height 10cm and diameter 7cm (Take $\pi =\tfrac{22}{7}$)
Question 30: An arc of a circle of radius 14cm subtends angle 300
Question 31
Question 32
Question 33: Which of the following statement describes the locus of a point
Question 34
Question 35: A train moving at uniform speed, covers 36km in 21 minutes. How long does it take to cover 60km
Question 36: The salary of a man was increased in the ratio 40:47. Calculate the percentage increase in the salary
Question 37: A regular polygon has 9 sides. What is the size of one of its exterior angles?
Question 38: Simplify $\frac{2}{3xy}-\frac{3}{4yz}$
Question 39: The total surface area of the walls of a room, 7m long, 5m wide and
Question 40
Bayesian Parameter Estimation Algorithm Context: Example(s): MCMC. Counter-Example(s): See: Probability Distribution Parameter Estimation Algorithm. References 2014 http://en.wikipedia.org/wiki/Bayesian_inference#Estimates_of_parameters_and_predictions It is often desired to use a posterior distribution to estimate a parameter or variable. Several methods of Bayesian estimation select measurements of central tendency from the posterior distribution. For one-dimensional problems, a unique median exists for practical continuous problems. The posterior median is attractive as a robust estimator. [1] If there exists a finite mean for the posterior distribution, then the posterior mean is a method of estimation: [math]\tilde \theta = \operatorname{E}[\theta] = \int_\theta \theta \, p(\theta \mid \mathbf{X},\alpha) \, d\theta[/math] Taking a value with the greatest probability defines maximum a posteriori (MAP) estimates: [math]\{ \theta_{\text{MAP}}\} \subset \arg \max_\theta p(\theta \mid \mathbf{X},\alpha) .[/math] There are examples where no maximum is attained, in which case the set of MAP estimates is empty. There are other methods of estimation that minimize the posterior risk (expected-posterior loss) with respect to a loss function, and these are of interest to statistical decision theory using the sampling distribution ("frequentist statistics"). The posterior predictive distribution of a new observation [math]\tilde{x}[/math] (that is independent of previous observations) is determined by [math]p(\tilde{x}|\mathbf{X},\alpha) = \int_\theta p(\tilde{x},\theta \mid \mathbf{X},\alpha) \, d\theta = \int_\theta p(\tilde{x} \mid \theta) p(\theta \mid \mathbf{X},\alpha) \, d\theta .[/math]
Sen, Pranab K.; Keating, J. P.; Mason, R. L. (1993). Pitman's measure of closeness: A comparison of statistical estimators. Philadelphia: SIAM. 2012 (Levy, 2012) ⇒ Roger Levy. (2012). “Probabilistic Models in the Study of Language - Chapter 4: Parameter Estimation." QUOTE: … In this chapter we delve more deeply into the theory of probability density estimation, focusing on inference within parametric families of probability distributions (see discussion in Section 2.11.2). We start with some important properties of estimators, then turn to basic frequentist parameter estimation (maximum-likelihood estimation and corrections for bias), and finally basic Bayesian parameter estimation.
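A minimal sketch of the three point estimates (posterior mean, median, MAP) on a discrete grid posterior, assuming a Bernoulli likelihood with a uniform prior and made-up counts; with a uniform prior the MAP coincides with the maximum-likelihood estimate:

```python
# Grid approximation of the posterior for a coin with 7 heads, 3 tails,
# uniform prior on theta; then the three standard point estimates.
n_heads, n_tails = 7, 3
grid = [i / 200 for i in range(1, 200)]                 # theta in (0, 1)
unnorm = [t**n_heads * (1 - t)**n_tails for t in grid]  # likelihood * prior
z = sum(unnorm)
post = [p / z for p in unnorm]                          # normalized posterior

post_mean = sum(t * p for t, p in zip(grid, post))      # posterior mean
theta_map = grid[max(range(len(grid)), key=lambda i: post[i])]  # MAP

# Posterior median: smallest grid point where the CDF reaches 1/2.
acc, post_median = 0.0, None
for t, p in zip(grid, post):
    acc += p
    if acc >= 0.5:
        post_median = t
        break
```

Here the posterior is (up to the grid) a Beta(8, 4), so the mean is about 8/12 ≈ 0.667 and the MAP is 0.7.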
In the comments of Yuval Filmus' answer, the OP suggested that while, for any two CNF formulas $f$ and $\tilde{f}$, ($f \text{ falsifiable} \iff \tilde{f} \text{ satisfiable}) \iff (f \text{ satisfiable} \iff \tilde{f} \text{ falsifiable}$) wasn't usually true, it was implied in that specific case by the parsimonious reduction of the CNFFAL instance $f$ to the CNFSAT instance $\tilde{f}$, thus proving that $f \text{ satisfiable} \iff \Phi_{NTM_{CNFFAL},f} \text{ falsifiable}$. This answer is supposed to show evidence that it isn't the case (assuming it isn't clear to the reader), as a complement of the previously mentioned answer (per request of the OP in the comments). I'll exhibit an example that makes this clear. Now, building the corresponding $\Phi_{NTM_{CNFFAL},f}$ of some $f$ would be a bit tedious, so I'll build a parsimonious reduction from CNFFAL to SAT instead (contrast with the parsimonious reduction from CNFFAL to CNFSAT used in the question), but such that the falsifiability of the SAT instance doesn't imply the satisfiability of the CNFFAL instance. I claim that, for any CNF formula $f$, any assignment $(a_1, \dots, a_n)$ falsifying $f(a_1, \dots, a_n)$ allows me to build a unique satisfying assignment of $\tilde{f}(a_1, \dots, a_n, c) = \lnot f(a_1, \dots, a_n) \land c$ (and vice-versa). This (admittedly dumb) reduction from CNFFAL to SAT preserves the number of solutions so it's parsimonious. Now let's assume $f(a, b) = (a \lor b) \land (\lnot a \lor b) \land (a \lor \lnot b) \land (\lnot a \lor \lnot b)$. There indeed are as many ways to falsify $f$ as to satisfy $\tilde{f}$ (4, per the previous parsimonious reduction). Therefore: $f \text{ falsifiable} \iff \tilde{f} \text{ satisfiable}$ However there also are 4 ways to falsify $\tilde{f}$ (just set $c$ to 0) whereas $f$ is clearly unsatisfiable.
Therefore: $\lnot(f \text{ satisfiable} \iff \tilde{f} \text{ falsifiable})$ Contrast this with that excerpt of the reasoning in the question: Therefore $\Phi_{NTM_{CNFFAL},f}\equiv\lnot f$ since Cook and Levin reduction is parsimonious and $\Phi_{NTM_{CNFFAL},f}$ is also in conjunctive normal form. Therefore f is falsifiable if and only if $\Phi_{NTM_{CNFFAL},f}$ is satisfiable. Therefore f is satisfiable if and only if $\Phi_{NTM_{CNFFAL},f}$ is falsifiable. The previous statement stems from the fact that $\tilde{f}$ uses more variables than $f$, hence the number of possible assignments of $\tilde{f}$ is greater than the number of possible assignments of $f$. Indeed, if those two numbers of possibilities were equal, the reduction being parsimonious would indeed lead to the equivalence of $f$ being satisfiable and $\tilde{f}$ being falsifiable (by a simple counting argument, as the set of falsifying assignments is the complement of the set of satisfying assignments). In the question, $\Phi_{NTM_{CNFFAL},f}$ has been obtained thanks to Cook-Levin's reduction from an arbitrary NP problem to CNFSAT. That formula doesn't necessarily require the same number of variables as $f$ (and is actually very likely to require a far greater number), so the equivalence doesn't necessarily hold.
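The counting claims for the concrete $f$ and $\tilde f$ above are small enough to check by brute force; a quick sketch:

```python
from itertools import product

# f(a, b) = (a|b) & (~a|b) & (a|~b) & (~a|~b)  -- unsatisfiable
def f(a, b):
    return (a or b) and ((not a) or b) and (a or (not b)) and ((not a) or (not b))

# f~(a, b, c) = ~f(a, b) & c  -- the "dumb" parsimonious reduction
def f_tilde(a, b, c):
    return (not f(a, b)) and c

falsify_f  = sum(1 for a, b in product([False, True], repeat=2) if not f(a, b))
satisfy_f  = sum(1 for a, b in product([False, True], repeat=2) if f(a, b))
satisfy_ft = sum(1 for a, b, c in product([False, True], repeat=3) if f_tilde(a, b, c))
falsify_ft = sum(1 for a, b, c in product([False, True], repeat=3) if not f_tilde(a, b, c))
```

The counts confirm the argument: $f$ has 4 falsifying and 0 satisfying assignments, while $\tilde f$ has 4 satisfying and 4 falsifying ones, so "$f$ falsifiable iff $\tilde f$ satisfiable" holds but "$f$ satisfiable iff $\tilde f$ falsifiable" fails.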
Most of the students I help have a pretty good grasp of the three straightforward power laws: $(x^a)^b = x^{ab}$ $x^a \times x^b = x^{a+b}$ $x^a \div x^b = x^{a-b}$ So far, so dandy - and usually good enough if you're hoping for a B at GCSE. The trouble comes when they start throwing strange things in: what's $3^{-2}$? Or $81^{\frac14}$? Or, for the love of all that's holy, $16^{-\frac32}$? How on earth do you multiply something by itself negative two times? Or a quarter of a time? Non-positive powers are probably the easier of the two to get to grips with, and I have two ways to explain them. The first involves making a list: $10^3 = 1,000$ $10^2 = 100$ $10^1 = 10$ ... you see how it's dividing by 10 each time? That pattern continues: $10^0 = 1$ $10^{-1} = \frac{1}{10}$ $10^{-2} = \frac{1}{100}$ ... and so on. In general, $x^{-k} = \frac{1}{x^k}$ - the negative power just 'flips' whatever you're working with and turns it into a fraction. That means $3^{-2} = \frac{1}{3^2} = \frac19$; similarly, $2^{-6} = \frac{1}{2^6} = \frac{1}{64}$. The second argument is that $3^{-2}$ must be the same as $3^{0-2} = 3^0 \div 3^2 = \frac{1}{9}$. Easy! Fractional powers are a bit harder to get your head around, but they do make sense - fractions, remember, are really division sums. Division sums are the opposite of multiplications. Remember that $x^{ab} = (x^{a})^b$? Well, it stands to reason - since roots are the opposites of powers - that $x^{\frac ab}$ is the same as $\sqrt[b]{x^a}$. So, to work out $81^\frac14$, you need to work out the fourth root of 81. 81 is $9^2$, or $3^4$, so $81^\frac14 = 3$. In the same vein, $8^\frac23 = \sqrt[3]{8}^2 = 2^2 = 4$. And how about when they're combined? Well, you break it down into small steps. If you've got $16^{-\frac32}$, you deal with the ugliest thing first: the bottom of the fraction. That means 'square root', so you're left with $4^{-3}$. Already looking better!
$4^3 = 64$, so you've got $64^{-1}$; the power of negative one is just the reciprocal - so your answer is $\frac{1}{64}$.
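All of these rules are easy to spot-check numerically; a small sketch (floating-point comparisons are hedged with a tolerance, since fractional powers go through the float `**` operator):

```python
import math

# Spot-check the worked examples from the post.
assert math.isclose(3 ** -2, 1 / 9)        # negative power 'flips' into a fraction
assert math.isclose(2 ** -6, 1 / 64)
assert math.isclose(81 ** 0.25, 3)         # fourth root of 81
assert math.isclose(8 ** (2 / 3), 4)       # (cube root of 8) squared
assert math.isclose(16 ** -1.5, 1 / 64)    # square root, then cube, then flip
```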
You write So, I already know that $L = \{ww \mid w \in \{0,1\}^*\}$ is not context free. Since CFL are not closed under complement its complement $L'$ is a CFL Indeed $L'$ is context-free, but your argumentation is ...eeuh... nonsense. Sure, CFL are not closed under complement, meaning there exist context-free languages whose complement is not context-free. In general, however, the complement of a CFL can be either CFL or non-CFL. Even if the complement of every CFL were non-CFL [which is NOT the case] it would NOT follow that the complement of every non-CFL is CFL. The language $L'$ indeed consists of all strings of odd length (that is a regular language and can be checked without using the stack) plus the strings of even length for which both halves differ. For the latter part see here. Phrased more explicitly, let $K = \{ xy \mid |x|=|y|, x\neq y \}$ be the language from question 307 linked above. My claim is that $L' = K \cup \{ x \mid |x| \text{ is odd } \}$, as $L$ equals $\{ xy \mid |x|=|y|, x= y \}$. (And so this explains, as you say in your question, why odd and even length strings are treated separately.) To answer your question how to implement this using a PDA, see the problem I have linked and its answers. A little hint. To check a word is of the form $xy$ with $|x|=|y|$, but $x\neq y$, we must be able to write $x=x_1ax_2$ and $y=y_1by_2$ with $a\neq b$, and both $|x_1|=|y_1|$ and $|x_2|=|y_2|$. Unfortunately we cannot check both length requirements at the same time. We can however test whether $|x_1y_2|= |x_2y_1|$, and that is enough. Essentially we find the matching positions of $a$ and $b$ without explicitly finding the middle of the string. (This is not at all obvious, please make some drawings.)
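The hint can be sanity-checked by brute force over short binary strings; a small sketch of the mismatch-position characterization of $K$ and of the length identity $|x_1y_2| = |x_2y_1| = n-1$:

```python
from itertools import product

# For even |s| = 2n: s is in K = {xy : |x|=|y|, x != y} exactly when some
# position i < n has s[i] != s[n+i].
def in_K_by_mismatch(s):
    n = len(s) // 2
    return len(s) % 2 == 0 and any(s[i] != s[n + i] for i in range(n))

def in_K_by_definition(s):
    n = len(s) // 2
    return len(s) % 2 == 0 and s[:n] != s[n:]

ok = all(in_K_by_mismatch(s) == in_K_by_definition(s)
         for m in range(1, 9)
         for s in map("".join, product("01", repeat=m)))

# Splitting x = x1 a x2 and y = y1 b y2 at a mismatch position i gives
# |x1| = |y1| = i and the checkable identity |x1 y2| = |x2 y1| = n - 1.
def length_identity(s):
    n = len(s) // 2
    i = next(i for i in range(n) if s[i] != s[n + i])
    x1, x2 = s[:i], s[i + 1:n]
    y1, y2 = s[n:n + i], s[n + i + 1:]
    return len(x1 + y2) == len(x2 + y1) == n - 1
```

The PDA exploits exactly this: it guesses the mismatch positions and uses the stack only to compare $|x_1y_2|$ with $|x_2y_1|$.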
Several years ago, I became one of the developers of the open-source computer algebra system (CAS) called Maxima. Maxima is the open source descendant of the first ever computer algebra system, MACSYMA. Initially developed at MIT with US government funding, it was later commercialized. The company behind commercial MACSYMA has since disappeared, but in the late 1990s, the DOE agreed to permit the original version to be released under an open-source license. My interest in Maxima is due to my interest in general relativity. Perhaps more than most other areas of physics, general relativity relies heavily on computer algebra tools, due to the complexity involved with defining and analyzing metrics in curved spacetime. Maxima has two packages related to general relativity work. They both perform complicated tensor calculations. One package, itensor, is a general-purpose package that deals with indexed objects: specifically, objects with covariant, contravariant, and derivative indices. The package knows about contraction rules, the raising and the lowering of indices, ordinary and covariant differentiation, and Christoffel symbols. The other package, ctensor, is really a collection of subroutines that are designed to compute tensor components used mainly in general relativity, including the components of the Christoffel symbols, the Riemann tensor, the Ricci tensor, and the Weyl tensor. The two packages neatly complement each other: many problems can be solved by writing up and deriving indicial tensor equations using itensor, and then using a special function provided by the itensor package to convert the result into component form that can then be processed by ctensor. The main problem with the tensor packages was, simply put, that they were broken! This is where I come in: having been able to fix the core functionality in itensor, I decided to offer my services to the Maxima development team. I'm working on more than mere fixes, however.
I have also extended the functionality of the tensor packages. On the one hand, I improved the algebraic power of itensor, introducing a new notation that helps preserve index ordering in more complicated tensor equations. On the other hand, I added to both ctensor and itensor the capability to deal with not just the standard metric formalism, but also with rigid frames, torsion, and conformal nonmetricity. Time permitting, I'd also like to add more capabilities in the future, to make Maxima "competitive" with other well-known tensor packages, such as SHEEP, CLASSI, and grTensorII. Meanwhile, I added a third tensor package: atensor is a package that can deal with generalized (tensor) algebras, including Clifford, Grassmann, and Lie-algebras. I also fixed the cartan package, a package that deals with differential forms. Last but not least, I changed these four packages so that their naming conventions now conform to that of commercial MACSYMA. I have drafted a paper that summarizes the work I've done. For additional reference, here's a link to the tensor package manuals (snapshot from the current development version of Maxima) and some demos: Algebraic tensor manipulation ( atensor) Component tensor manipulation ( ctensor) Indicial tensor manipulation ( itensor) Tensor package demos (text output, ~370 kB; as of March, 2008) The following are two more complete examples that demonstrate some of the new capabilities that I added to these packages: The Kaluza-Klein metric In 1919, Theodor Kaluza proposed an extension to general relativity: using an appropriately constructed fifth dimension, he was able to incorporate electromagnetism into Einstein's theory of gravity. Recently, I endeavored to replicate the most basic of Kaluza's results: the equation of motion for a particle in empty five-dimensional space, as seen from a four-dimensional perspective.
Now that I am working with Maxima, the question arose: can the same result be reproduced using this computer algebra system? Surprisingly, the answer is yes. With only minor changes to the current Maxima code base, I was able to complete the derivation. The Petrov classification One of the common problems in general relativity is determining the equivalence of two metrics. Because the same manifold can be mapped using drastically different coordinate systems, it is usually not at all evident whether or not two metrics describe the same manifold. A set of routines that I am working on makes it possible to derive the Petrov class for a metric specified using an orthonormal tetrad base. Differentiation with respect to field variables The newest addition (March 2008) to Maxima is the ability to differentiate indexed expressions with respect to indexed variables, most notably the metric. This feature makes it possible to use Maxima to derive the Euler-Lagrange equations from a field Lagrangian; indeed, Maxima can now deduce the field equations of general relativity from the Einstein-Hilbert action, and can also deal with other theories based on a modified Lagrangian, such as Brans-Dicke theory. The power of the package can now be demonstrated by showing how Maxima, starting from the Lagrangian density of the gravitational field (the Einstein-Hilbert Lagrangian), can derive, and solve, the field equations in the spherically symmetric case, producing a plot of the celebrated Schwarzschild gravity well.
In other words, starting with \[{\cal L}=\frac{1}{16\pi G}(R+2\Lambda)\sqrt{-g},\] and running the following code:

    if get('ctensor,'version)=false then load(ctensor);
    if get('itensor,'version)=false then load(itensor);
    remsym(g,2,0);
    remsym(g,0,2);
    remsym(gg,2,0);
    remsym(gg,0,2);
    remcomps(gg);
    imetric(gg);
    icurvature([a,b,c],[e])*gg([d,e],[])$
    contract(rename(expand(%)))$
    %,ichr2$
    contract(rename(expand(%)))$
    canform(%)$
    contract(rename(expand(%)))$
    components(gg([a,b],[]),kdels([a,b],[u,v])*g([u,v],[])/2);
    components(gg([],[a,b]),kdels([u,v],[a,b])*g([],[u,v])/2);
    %th(4),gg$
    contract(rename(expand(%)))$
    contract(canform(%))$
    imetric(g);
    contract(rename(expand(%th(2))))$
    remcomps(R);
    components(R([a,b,c,d],[]),%th(2));
    g([],[a,b])*R([a,b,c,d])*g([],[c,d])$
    contract(rename(canform(%)))$
    contract(rename(canform(%)))$
    components(R([],[]),%);
    decsym(g,2,0,[sym(all)],[]);
    decsym(g,0,2,[],[sym(all)]);
    ishow(1/(16*%pi*G)*((2*L+'R([],[])))*sqrt(-determinant(g)))$
    L0:%,R$
    canform(contract(canform(rename(contract(expand(diff(L0,g([],[m,n]))-
      idiff(diff(L0,g([],[m,n],k)),k)+idiff(rename(idiff(contract(
      diff(L0,g([],[m,n],k,l))),k),1000),l)))))))$
    ishow(e([m,n],[])=canform(%*16*%pi/sqrt(-determinant(g))))$
    EQ:ic_convert(%)$
    ct_coords:[t,r,u,v];
    lg:ident(4);
    lg[2,2]:-a^2/(1-k*r^2);
    lg[3,3]:-a^2*r^2;
    lg[4,4]:-a^2*r^2*sin(u)^2;
    dependencies(a(t));
    cmetric();
    derivabbrev:true;
    christof(false);
    e:zeromatrix(4,4);
    ev(EQ);
    expand(radcan(ug.e));
    lg:ident(4);
    lg[1,1]:B;
    lg[2,2]:-A;
    lg[3,3]:-r^2;
    lg[4,4]:-r^2*sin(u)^2;
    kill(dependencies);
    dependencies(A(r),B(r));
    cmetric();
    christof(false);
    e:zeromatrix(4,4);
    ev(EQ);
    E:expand(radcan(ug.e));
    exp:findde(E,2);
    solve(ode2(exp[1],A,r),A);
    %,%c=-2*M;
    a:%[1],%c=-2*M;
    ode2(ev(exp[2],a),B,r);
    b:ev(%,%c=rhs(solve(rhs(%)*rhs(a)=1,%c)[1]));
    factor(ev(ev(exp[3],a,b),diff));
    lg:ev(lg,a,b),L=0$
    ug:invert(lg)$
    block([title: "Schwarzschild Potential for Mass M=2",M:2.],
      plot3d([r*cos(th),r*sin(th),1-ug[1,1]],[r,5.,50.],[th,-%pi,%pi],
      ['grid,20,30],['z,-2,0],[psfile],['legend,title]));

we end up with this plot: [Plot: "Schwarzschild Potential for Mass M=2"] I believe this capability is unique to the Maxima tensor package.
The foundation for my exposition comes from Mas-Colell's examples in Ch. 6. The maximization problem for your specific question can be generalized pretty easily. Consider the case with one risky asset and one riskless asset. Let $\beta$ be the wealth invested in the safe asset, normalized to 1 dollar per dollar invested. Let $\alpha$ be the wealth invested in the risky asset, which has some random payout $z$ such that: $$\int z \ \text{d}F(z) > 1$$ so that the mean return is greater than that of the riskless asset. We express our maximization problem as: $$\max \ \int u(\alpha z + \beta) \ \text{d}F(z) \\\text{s.t.} \quad\alpha + \beta = w$$ You can take advantage of the fact that $w - \alpha = \beta$ $\implies \alpha z + \beta = w + \alpha(z - 1)$ and find first-order conditions. If $u$ is concave (risk averse), the Kuhn-Tucker first-order conditions combine to give: $$\int u'(w + \alpha(z - 1))\cdot(z - 1) \ \text{d}F(z) = 0 \quad \text{iff} \quad \alpha \in (0, w) $$ For the generic case, you can do the same setup with $N$ risky assets and one riskless asset that is better than whatever other riskless assets are out there. Let's normalize it again to a payout of 1. The maximization is now: $$\max \int u(\alpha_1 z_1 + \cdots + \alpha_N z_N + \beta) \ \text{d}F(z_1, \cdots, z_N) \\\text{s.t.} \quad \alpha_1 + \cdots + \alpha_N + \beta = w$$ Notes: If you have already done the easy case, this general case should not be so bad, just some more work. For other readers, I'll state the definitions of constant absolute and relative risk aversion below, respectively. $$r_A(x) = -\frac{u''(x)}{u'(x)} = n \quad \forall x$$$$r_R(x) = -\frac{x \cdot u''(x)}{u'(x)} = n \quad \forall x$$
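To make the one-risky-asset first-order condition concrete, here is a small numerical sketch of my own (not from Mas-Colell): it assumes log utility and a two-point payout distribution, and solves the FOC by bisection. For this particular distribution the interior solution happens to be $\alpha^* = w/2$.

```python
# Sketch: solve the FOC  E[u'(w + a(z-1))(z-1)] = 0  for one risky asset.
# Assumed example: log utility u(x) = ln(x), so u'(x) = 1/x, and a payout
# z = 0.5 or z = 2 with probability 1/2 each (mean 1.25 > 1).

def foc(a, w):
    # E[u'(w + a(z-1)) * (z-1)]: each term is prob * (z-1) / (w + a*(z-1))
    return 0.5 * (-0.5) / (w - 0.5 * a) + 0.5 * 1.0 / (w + a)

def optimal_alpha(w, tol=1e-10):
    # bisection: for this distribution, foc is strictly decreasing on (0, w)
    lo, hi = 0.0, w
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if foc(mid, w) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(optimal_alpha(10.0))  # analytically a* = w/2 = 5.0 in this example
```

Solving the FOC analytically for this distribution, $-\tfrac{1}{2}(w+\alpha) + (w - \tfrac{1}{2}\alpha) = 0$, indeed gives $\alpha^* = w/2$, which the bisection confirms.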
My textbook gives the error term for the Composite Trapezoidal Rule as $-\frac{b-a}{12}h^2f''(\mu)$, where $\mu \in(a,b)$ and $f \in C^2 [a,b]$. I am using MATLAB to produce approximations with the Composite Trapezoidal Rule for $\int_0^{0.99} \frac{1}{\sqrt{1-x^2}}{\rm d}x$ with the step sizes $h = 0.01, 0.005, 0.0025, 0.00125, 0.000625$. Below is my table of the approximations produced by my code and the absolute error for each step size:

    h           S(h)              abs. err.
    0.010000    1.432052842622    0.002795989152
    0.005000    1.429980957924    0.000724104453
    0.002500    1.429439827337    0.000182973867
    0.001250    1.429302728001    0.000045874530
    0.000625    1.429268330467    0.000011476997

Evaluating the error with the error formula, however, gives me a very different number than what my code is spitting out. For example, evaluating the error term for $h = 0.01, a = 0, b = 0.99$, I end up with $0.437161725$. Should my approximation of the error be that far off? Am I not using the error term properly?
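For reference, the composite rule itself is straightforward to implement; here is a minimal sketch (in Python rather than MATLAB, with the integrand above) that reproduces the first row of the table, comparing against the exact value $\arcsin(0.99)$:

```python
import math

def composite_trapezoid(f, a, b, h):
    # n subintervals of width h; interior points weight 1, endpoints 1/2
    n = round((b - a) / h)
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

f = lambda x: 1.0 / math.sqrt(1.0 - x * x)
S = composite_trapezoid(f, 0.0, 0.99, 0.01)
exact = math.asin(0.99)  # antiderivative of 1/sqrt(1-x^2) is arcsin(x)
print(S, abs(S - exact))
```

The actual error (~0.0028) is much smaller than the worst-case bound 0.437 because $f''$ is only large near the right endpoint, while the bound plugs in a single $f''(\mu)$ over the whole interval.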
Search for light bosons in decays of the \(125\GeV\) Higgs boson in proton-proton collisions at \(\sqrt{s} = 8\TeV\) (by the CMS Collaboration's 2000+ co-authors). They only look at events in which the Higgs boson discovered in 2012 is produced – the number of collisions of this type (which was not known at all before late 2011) is so high that the experimenters may look at small special subsets and still say something interesting about these subsets. So they focus on events in which the \(125\GeV\) Higgs decays to four fermions, as if it were first decaying to two lighter bosons, \(h\to aa\). The final states they probe include "four taus", "two muons plus two taus", and "two muons and two bottom quarks". It's not quite clear to me why they omit the other combinations, e.g. "two taus and two bottom quarks" etc. (except that I know that "four muons" was focused on in a special paper), but there may be some mysterious explanation. They say that there's no statistical excess anywhere. But what this statement means should be interpreted a bit carefully because it potentially understates the deviations from the Standard Model they are seeing. By "no statistical excesses", they mean that there's no excess whose global significance, i.e. significance reduced by the look-elsewhere correction, exceeds 2 sigma. In other words, the statement "nothing can be seen here" is compatible with the existence of more than 2-sigma – and perhaps a bit higher – excesses if evaluated locally, i.e. without any look-elsewhere reduction of the confidence level. And yes, those are seen. This chart – Figure 6 on Page 18 (page 20 of 48 according to the PDF file) – shows the Brazil bands for the final state with \(\mu\mu\tau\tau\).
The tau leptons quickly decay and they split the final channels according to the decay products of these \(\tau\) as well – although, even in this case, it doesn't quite seem to me that they have listed all the options. ;-) You see that the black, observed curves are sometimes smooth, sometimes very wiggly. The wiggles are sometimes unusually periodic – like in the upper left channel. But the most remarkable excess is seen in the upper right channel in which the two \(\tau\) leptons decay to one electron and one muon, respectively (plus neutrinos – missing energy). You see that the distance from the Brazil band – for the mass \(m_a\) of the new light bosons depicted on the \(x\)-axis that is around \(20\GeV\) – is substantial. If a Brazilian soccer player deviated from the Brazilian land this severely, he would surely get drowned in the Atlantic Ocean. It looks like a "many sigma" deviation locally and I am a bit surprised that it doesn't make it to 2 sigma globally. Four other channels show nothing interesting around \(m_a\sim 20\GeV\) but the last one, the lower left channel with both \(\tau\) decaying hadronically – shows a small local (and in this case, much narrower – the energy is measured accurately because no energy is lost to ghostly neutrinos in hadronic decays) excess for \(m_a\sim 19\GeV\). When these two excesses (and the flat graphs from the other channels) are added, we see the combined graph in the lower right corner which shows something like a locally 3-sigma excess for \(m_a\sim 19\GeV\). It's almost certainly a fluctuation. If it weren't one, it should be interpreted as the "second Higgs boson" in a general 2HDM (two-Higgs-doublet model) which is ugly and unmotivated by itself. But such models may be typically represented as the Higgs part of the NMSSM (next-to-minimal supersymmetric standard model) which is very nice and explains the hierarchy problem more satisfactorily than MSSM. 
Even though it also has two Higgs doublets and therefore two CP-even neutral Higgs bosons in them, MSSM itself cannot reproduce these rather general 2HDM models. It would be of course exciting if the LHC could suddenly discover a new \(20\GeV\) Higgs-like boson and potentially open the gates to truly new physics like supersymmetry but like in so many cases, I would bet on "probably not" when it comes to this modest excess. Don't you find it a bit surprising that now, in early 2017, we are still getting preprints based on the evaluation of the 2012 LHC data? The year was called "now" some five years ago. Are they hiding something? And when they complete an analysis like that, why don't they directly publish the same analysis including all the 2015+2016 = 4031 data as well? Surely the analysis of the same channel applied to the newer, \(13\TeV\) data is basically the same work. Maybe they're trying to pretend that they're writing more papers, and therefore doing more work? I don't buy it and neither should the sponsors and others. Things that may be done efficiently should be done efficiently. If it leads to the people's having more time to enjoy their lives instead of writing very similar long papers that almost no one reads, they should have more time to enjoy their life – and to collect energy needed to make their work better and more happily. Another new CMS paper searching for SUSY with top tagging shows no excess, not even 2-sigma excess locally, but there's a nice more than 1-sigma repulsion from the point with a \(600\GeV\) top squark and a \(300\GeV\) neutralino or so.
I'm trying to reproduce Kaluza & Klein's result of obtaining the electromagnetic field by introducing a fifth dimension. The basic idea is that the extra components of the five-dimensional metric will materialize in four dimensions as components of the electromagnetic vector potential. For instance, by postulating the appropriate five-dimensional metric and writing up the equation of motion for a particle in empty space, we should be able to recover the four-dimensional equation of motion for a charged particle in an electromagnetic field. Dealing with a single particle, that's a rather special case. Texts on Kaluza-Klein theory usually focus instead on the relativistic action, which is applicable to all mechanical systems. My goal here, however, was simply to outline the approach and demonstrate through a simple case how it works, not to develop a comprehensive theory; that was done by Kaluza over 80 years ago. My first attempt was a naïve one: I thought I might be able to derive the desired result in flat space, without having to consider curvature with the associated computational complications. That is not so: as I soon discovered, curvature, in particular the Christoffel-symbols, plays an essential role in the theory, as it is due to the Christoffel-symbols that the electromagnetic field tensor will appear in the four-dimensional equation of motion. We start with empty 5-space. We use upper-case indices for 5-dimensional coordinates (0...4), while lower-case indices will be used in four dimensions (0...3). The electromagnetic field tensor, $F_{ab}$, is defined as $F_{ab}=\nabla_aA_b-\nabla_bA_a=\partial_aA_b-\partial_bA_a$, the contributions of the Christoffel-symbols canceling each other out due to their symmetry in the lower two indices.
The metric tensor of 5-space is assumed to take the following form (the reason for this peculiar choice will become evident later on): \[G_{AB}=\begin{pmatrix}g_{ab}+g_{44}A_aA_b&g_{44}A_a\\g_{44}A_b&g_{44}\end{pmatrix},\] where $A_a$ is an arbitrary 4-vector. Writing up the metric tensor in this form does not imply any loss of generality. The inverse of the metric tensor takes the following form: \[G^{AB}=\begin{pmatrix}g^{ab}&-A^a\\-A^b&g_{44}^{-1}+A^2\end{pmatrix},\] where $A^2=A_aA^a$. The result can be verified through direct calculation, i.e., by computing $G_{AB}G^{BC}$. What next? Why, computing the Christoffel-symbols of course: \[\Gamma_{AB}^C=G^{CD}\Gamma_{ABD}=\frac{1}{2}G^{CD}(\partial_AG_{BD}+\partial_BG_{AD}-\partial_DG_{AB}).\] Wherever the notation might appear ambiguous, I use an upper left index (4) or (5) to distinguish between the four-dimensional and the five-dimensional Christoffel-symbols. Now is the time to make some assumptions about the 5-dimensional metric. First, we assume that the component $g_{44}$ remains constant everywhere. Second, we postulate that the fifth direction forms a so-called Killing field, meaning that the metric will not change with respect to the fifth coordinate: $\partial_4G_{AB}=0$. This is Kaluza's celebrated "cylinder condition". These identities imply that $\Gamma_{a44}=\Gamma_{4b4}=\Gamma_{44c}=0$.
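The claimed block inverse can be spot-checked numerically. The following is my own sanity check in the toy case of a single spatial index, using arbitrary rational sample values $g_{00}=3$, $g_{44}=5$, $A_0=2$ (so $A^0=A_0/g_{00}$ and $A^2=A_0^2/g_{00}$); exact rational arithmetic avoids any floating-point doubt:

```python
from fractions import Fraction as F

g, k, A = F(3), F(5), F(2)   # sample values: g_00, g_44, A_0

# G_AB and the claimed inverse G^AB, specialized to one spatial index
G    = [[g + k * A * A, k * A],
        [k * A,         k    ]]
Ginv = [[1 / g,   -A / g           ],   # g^00 = 1/g_00, A^0 = A_0/g_00
        [-A / g,  1 / k + A * A / g]]   # g_44^{-1} + A_a A^a

def matmul(X, Y):
    # 2x2 matrix product, kept exact by Fraction arithmetic
    return [[sum(X[i][r] * Y[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

print(matmul(G, Ginv))  # exact 2x2 identity matrix
```

The product comes out as the exact identity, confirming the block-inverse formula in this special case.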
Now let's try some of the other Christoffel-symbols: \begin{align} {}^{(5)}\Gamma_{4b}^c&=G^{cD}\Gamma_{4bD}=G^{cd}\Gamma_{4bd}+G^{c4}\Gamma_{4b4}=\frac{1}{2}g^{cd}(\partial_4G_{bd}+\partial_bG_{4d}-\partial_dG_{4b})\\ &=\frac{1}{2}g^{cd}[\partial_b(g_{44}A_d)-\partial_d(g_{44}A_b)]=\frac{1}{2}g_{44}g^{cd}(\partial_bA_d-\partial_dA_b)=\frac{1}{2}g_{44}g^{cd}F_{bd}=\frac{1}{2}g_{44}F_b{}^c,\\ {}^{(5)}\Gamma_{a4}^c&=G^{cD}\Gamma_{a4D}=G^{cd}\Gamma_{a4d}+G^{c4}\Gamma_{a44}=\frac{1}{2}g^{cd}(\partial_aG_{4d}+\partial_4G_{ad}-\partial_dG_{a4})\\ &=\frac{1}{2}g^{cd}[\partial_a(g_{44}A_d)-\partial_d(g_{44}A_a)]=\frac{1}{2}g_{44}g^{cd}(\partial_aA_d-\partial_dA_a)=\frac{1}{2}g_{44}g^{cd}F_{ad}=\frac{1}{2}g_{44}F_a{}^c,\\ {}^{(5)}\Gamma_{44}^b&=G^{bD}\Gamma_{44D}=G^{bd}\Gamma_{44d}+G^{b4}\Gamma_{444}=0. \end{align} There are more, but these are all we're going to need. With the Christoffel-symbols at hand, we can begin to rewrite the five-dimensional equation of motion in the hope that we can extract something useful and interesting about motion in four dimensions.
In explicit notation, the equation of motion takes the following form (geodesic equation): \[\frac{d^2x^A}{d\tau^2}+\Gamma_{BC}^A\frac{dx^B}{d\tau}\frac{dx^C}{d\tau}=0.\] But since we are trying to recover the equation of motion in four dimensions, we can just ignore the $A=4$ case: \[\frac{d^2x^a}{d\tau^2}+\Gamma_{BC}^a\frac{dx^B}{d\tau}\frac{dx^C}{d\tau}=0.\] Rewriting this in terms of Christoffel-symbols that we can evaluate, and making some dummy index substitutions, we get: \begin{align}\frac{d^2x^a}{d\tau^2}+\Gamma_{BC}^a\frac{dx^B}{d\tau}\frac{dx^C}{d\tau}&=\frac{d^2x^a}{d\tau^2}+{}^{(5)}\Gamma_{bc}^a\frac{dx^b}{d\tau}\frac{dx^c}{d\tau}+\Gamma_{4c}^a\frac{dx^4}{d\tau}\frac{dx^c}{d\tau}+\Gamma_{b4}^a\frac{dx^b}{d\tau}\frac{dx^4}{d\tau}+\Gamma_{44}^a\frac{dx^4}{d\tau}\frac{dx^4}{d\tau}\\ &=\frac{d^2x^a}{d\tau^2}+{}^{(5)}\Gamma_{bc}^a\frac{dx^b}{d\tau}\frac{dx^c}{d\tau}+\frac{1}{2}g_{44}F_c{}^a\frac{dx^c}{d\tau}\frac{dx^4}{d\tau}+\frac{1}{2}g_{44}F_b{}^a\frac{dx^b}{d\tau}\frac{dx^4}{d\tau}\\ &=\frac{d^2x^a}{d\tau^2}+{}^{(5)}\Gamma_{bc}^a\frac{dx^b}{d\tau}\frac{dx^c}{d\tau}+g_{44}F_b{}^a\frac{dx^b}{d\tau}\frac{dx^4}{d\tau}=0, \end{align} i.e., \[\frac{d^2x^a}{d\tau^2}+{}^{(5)}\Gamma_{bc}^a\frac{dx^b}{d\tau}\frac{dx^c}{d\tau}=-g_{44}\frac{dx^4}{d\tau}F_b{}^a\frac{dx^b}{d\tau},\] which is formally identical to the equation of motion in 4D spacetime in an electromagnetic field characterized by $F_b{}^a$, for a particle with a charge-mass ratio of $-g_{44}dx^4/d\tau$ (in other words, the momentum in the fifth direction will be proportional to the charge.) 
There is, of course, some sleight of hand involved in what I have done, namely that what we see on the left is the five-dimensional Christoffel-symbol in what is supposed to be a 4-dimensional equation, consequently hiding a term of the form $g_{44}A_cF_b{}^a(dx^b/d\tau)(dx^c/d\tau)$, but this derivation nevertheless should suffice to demonstrate the basic idea: starting with empty 5-dimensional space, we can recover an equation of motion in four dimensions that contains the electromagnetic field tensor. In any case, I believe the sleight of hand is necessary, because the case of a "pure" electromagnetic field would be a nonphysical situation in general relativity: the electromagnetic field itself carries energy and will also influence the particle's motion gravitationally by introducing curvature, which is what I suspect is hidden behind the unwanted term that I eliminated by cheating. By the way, all this is, by and large, the Kaluza part of the theory. Klein's contribution concerned the compactification of the fifth dimension. No, not for aesthetic reasons, though a compactified dimension certainly helped explain why the fifth dimension couldn't be seen; no, the main reason was to account for the quantized electric charge. It was through compactification that Klein obtained a fifth dimension admitting only discrete solutions.
The alphabet for this language is $\Sigma=\{0,1\}$, and $x_i,y_i,z_i\in\Sigma$. Notice that$$\sum_{j=0}^{n-1}x_j2^j$$ is simply the numerical value of $x$ when its symbols are read as a binary number (with $x_0$ the least significant bit). The same can be said for $y$ and $z$. Using this, one can see that the language consists effectively of those strings such that, if you take every third symbol to form $x$, $y$, and $z$, the numerical values satisfy $x+y=z$. For this to be true, for every $x_i$, $y_i$, and carry from the past additions, the corresponding $z_i$ should be the correct symbol. If it were not so, then irrespective of the future symbols, we cannot have $x+y=z$, and the string can never be accepted. So, at each iteration, the DFA needs to remember the current values of $x_i,y_i$ and the current carry. If the corresponding $z_i$ is correct and the carry is $0$, it can return to the start state and continue, as this is not going to affect future considerations in any way. This is a final state, as at any point when the DFA is in this state, the current sets of $x_i$s, $y_i$s, and $z_i$s are such that the string would be accepted. If the value of $z_i$ is correct but the carry is not $0$, then the DFA goes to a different state corresponding to the carry being $1$. This is not a final state, as if the carry were $1$ at the end, it would imply that $x+y\ne z$, since $x,y,z$ must have the same number of symbols. If the value of $z_i$ is incorrect, then the symbol at that "bit" can never be corrected in the future, and the DFA can be sent to a sink state where the string is rejected irrespective of the remaining symbols. Using the above, the DFA I have drawn (using JFLAP) is as follows: The first column is when $x_i,y_i,z_i$ have been received. $q_0$ corresponds to a carry of $0$ and is the start and final state. $q_7$ corresponds to a carry of $1$. The second column is the set of states when $x_i$ has been received, and the third is when $y_i$ has been received.
Then, when $z_i$ is received, if it matches $z_i=x_i+y_i+\text{carry}$, the DFA goes to state $q_0$ or $q_7$ depending on the carry. Else, it goes to $q_{14}$, which is a sink.
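The carry-tracking logic described above can be checked against a direct simulation (a sketch of my own; strings interleave $x_i y_i z_i$ with $x_0$ the least significant bit, matching the sums $\sum_j x_j 2^j$):

```python
def accepts(s):
    # Mirrors the DFA: track the carry across triples; a wrong z_i sends us
    # to the sink (reject); accept iff we end in the carry-0 state q0.
    if len(s) % 3 != 0 or any(c not in "01" for c in s):
        return False
    carry = 0
    for i in range(0, len(s), 3):
        x, y, z = int(s[i]), int(s[i + 1]), int(s[i + 2])
        total = x + y + carry
        if total % 2 != z:
            return False      # sink state q14
        carry = total // 2    # state q0 if carry is 0, q7 if 1
    return carry == 0

# x = 1, y = 1, z = 2: LSB-first bits x = 10, y = 10, z = 01 -> "110001"
print(accepts("110001"), accepts("111"))  # prints: True False
```

The first string encodes $1+1=2$ and is accepted; the second encodes $1+1=1$ at the low bit and falls into the sink immediately.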
I'm currently trying to understand how the different incarnations of homology with local coefficients relate to one another. Let $X$ be a semi-locally simply connected space, and let $\pi_1 = \pi_1(X,x_0)$. Homology with local coefficients is usually built from one of the following three objects:

1. A $\mathbb{Z}\pi_1$-module.
2. A bundle of discrete abelian groups $p:E\to X$. In other words, these are fiber bundles $G\hookrightarrow E\to X$ with fibers discrete abelian groups isomorphic to $G$, and whose structure group is some subgroup of $\operatorname{Aut}(G)$, so that the local trivializations $\phi_U:p^{-1}(U)\to U\times G$ restrict to homomorphisms on the fibers.
3. A functor $\mathcal{L}:\Pi_1(X)\to\textbf{Ab}$, where $\Pi_1(X)$ is the fundamental groupoid, and $\mathcal{L}(x)$ is always discrete abelian.

Given a $\mathbb{Z}\pi_1$-module $M$, one can construct a bundle $\widetilde{X}\times_{\pi_1}M\to X$ of discrete abelian groups using the Borel construction. Conversely, given a bundle of discrete abelian groups $p:E\to X$, this is really a covering space, and so there is an action of $\pi_1$ on the fiber $G$, giving it the structure of a $\mathbb{Z}\pi_1$-module. This brings me to my questions: A. How does a bundle $p:E\to X$ of discrete abelian groups give rise to a functor $\mathcal{L}:\Pi_1(X)\to\textbf{Ab}$? Edit: So that the resulting homology groups $H_*(X;E)$ and $H_*(X;\mathcal{L})$ are isomorphic? Here is my guess: for a bundle $p:E\to X$ we set $\mathcal{L}(x) = p^{-1}(x)$, and for a homotopy class of paths $[\omega:I\to X]$ (a morphism from $\omega(0)=x_0$ to $\omega(1)=x_1$) we set $\mathcal{L}[\omega]$ to be the map $p^{-1}(x_0)\to p^{-1}(x_1)$ built by using the homotopy lifting property on $$h:p^{-1}(x_0)\times I\to X,\quad h(e,t) = \omega(t)$$ (lift this to $H:p^{-1}(x_0)\times I\to E$, and then $H(-,1):p^{-1}(x_0)\to p^{-1}(x_1)$ is the map I'm referring to). However, I'm having a hard time showing that this is a homomorphism.
This is probably not a good approach, since there is no canonical identification of each fiber with $G$. B. How does a local system $\mathcal{L}:\Pi_1(X)\to\textbf{Ab}$ give rise to a bundle $p:E\to X$ of discrete abelian groups? (or a $\mathbb{Z}\pi_1$-module?) Edit: So that the resulting homology groups $H_*(X;E)$ and $H_*(X;\mathcal{L})$ are isomorphic? I gather from this discussion that it is possible in this case, but I'm not sure how that would work.

References
[1] Hatcher, Algebraic Topology, p. 330.
[2] Whitehead, Elements of Homotopy Theory, p. 257. (Note: he calls functors $\mathcal{L}:\Pi_1(X)\to\textbf{Ab}$ "bundles of groups".)
Great question. I assume that you mean the classical Heisenberg model (whose spins are just arrows). Jump down to the last sentence if you only want the actual answer. The Heisenberg model does not have symmetry group $\mathrm{O}(3)$ - that's just the spin part of the symmetry group. The full symmetry group is $\mathrm{O}(3) \times S$, where $S$ is the space group of the lattice on which the model is defined. (The most common choice is a $d$-dimensional hypercubic lattice, for which $S$ is a hyperoctahedral group.) This is important, because while the spin and spatial symmetries act independently on the Hamiltonian, it's possible for the system's symmetry to be spontaneously broken down to a smaller symmetry group that combines them. This is sometimes called "spontaneously induced spin-orbit coupling". I'll give an example below. In the decomposition $\mathrm{O}(3) \cong \mathbb{Z}_2 \times \mathrm{SO}(3)$ of the spin-space symmetry, the $\mathrm{SO}(3)$ corresponds to a rigid rotation of all the spins in spin space, and the $\mathbb{Z}_2$ corresponds to time reversal (TR) - i.e. simultaneously flipping all the spins. It's impossible to spontaneously break the $\mathbb{Z}_2$ TR symmetry while leaving the $\mathrm{SO}(3)$ (and, implicitly, the space group $S$) unbroken. That's because an individual spin (which is the fundamental atomic unit as long as the space symmetry remains unbroken) is just an arrow, and transforms in the dipolar representation of $\mathrm{SO}(3)$: rotating a spin about its own axis doesn't do anything. Therefore, at the level of a single spin, the $\mathbb{Z}_2$ time-reversal transformation that flips the spin doesn't actually add anything, because you can do the same thing just using the $\mathrm{SO}(3)$. (E.g. if the spin starts out pointing in the $+\hat{z}$ direction, you can reverse it to $-\hat{z}$ via the time-reversal operator, but also by rotating by $180^\circ$ about the $x$- or $y$-axes.
Mathematically, we say that the dipolar representations of $\mathrm{O}(3)$ and $\mathrm{SO}(3)$ are isomorphic.) Since the $\mathbb{Z}_2$ time-reversal symmetry doesn't do anything independently of the $\mathrm{SO}(3)$ in this case, it doesn't make any sense to break it individually. But once you factor in the space symmetry group, things get much more interesting. It can spontaneously break along with magnetic symmetry in a way that, roughly speaking, spontaneously enlarges the magnetic unit cell. And if the enlarged magnetic unit cell contains multiple spins, then it transforms under spin-space rotations as a multipolar (rather than dipolar) representation of $\mathrm{SO}(3)$. In these higher representations, the $\mathbb{Z}_2$ and $\mathrm{SO}(3)$ operators are not equivalent, because the enlarged magnetic unit cell can be chiral. For example, if you have three labeled spins, then any $\mathrm{O}(3)$ transformation of all three spins together will either be chiral and require a time-reversal spin-space inversion, or not, and the two cases can't be related by a simple $\mathrm{SO}(3)$ rotation. In this case you can indeed just break the $\mathbb{Z}_2$ while preserving the $\mathrm{SO}(3)$. But you can't see the effects by just considering a single spin, which will remain $\mathrm{SO}(3)$ symmetric and therefore "not point anywhere in particular". You need to look at the correlated behavior of several spins simultaneously to observe the TR-symmetry breaking. The most natural context in which this arises is when the lattice of spins contains triangles (i.e. either the triangular or kagome lattice). In this case we can consider each triangle's "scalar chirality" $\langle S_A \cdot (S_B \times S_C) \rangle$, where the $S_i$ are the spins at the three corners of the triangle. 
This collective order parameter is a pseudoscalar triple product, invariant under $\mathrm{SO}(3)$ but changing sign under TR, so it becomes nonzero if TR is spontaneously broken, even if the $\mathrm{SO}(3)$ symmetry is preserved. (If the Hamiltonian only has $\mathrm{O}(2) \cong \mathbb{Z}_2 \times \mathrm{U}(1)$ symmetry, like the $XY$ model, then a simpler order parameter is the signed angle $\theta_i - \theta_j$ between two adjacent spins.) Such phases do indeed occur in the Heisenberg model on the kagome lattice with both first- and second-nearest-neighbor couplings, as described in https://journals.aps.org/prb/abstract/10.1103/PhysRevB.72.024433.
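The transformation behavior of the scalar chirality can be illustrated with a quick computation. This is my own toy example (three mutually orthogonal unit spins, which maximize $S_A \cdot (S_B \times S_C)$), checking that the quantity flips sign under time reversal but not under a global spin rotation:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def chirality(SA, SB, SC):
    # scalar chirality S_A . (S_B x S_C)
    return dot(SA, cross(SB, SC))

SA, SB, SC = (1, 0, 0), (0, 1, 0), (0, 0, 1)
chi = chirality(SA, SB, SC)                       # +1 for this triad

# time reversal flips every spin: three sign flips, so chirality is odd
flip = lambda s: tuple(-c for c in s)
chi_tr = chirality(flip(SA), flip(SB), flip(SC))  # -1

# a global SO(3) rotation (90 degrees about z) leaves it invariant
rot = lambda s: (-s[1], s[0], s[2])
chi_rot = chirality(rot(SA), rot(SB), rot(SC))    # +1

print(chi, chi_tr, chi_rot)  # prints: 1 -1 1
```

This is exactly the sense in which the order parameter detects broken TR while remaining blind to the preserved $\mathrm{SO}(3)$.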
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Instead of arguing with other people's answers in the comments I thought it might be more productive to present my own point of view. I find myself completely unable to understand why anyone would take off points for this student's answer. Just to be clear, this isn't because I'm being somehow lax or generous as a grader. My opinion is that this is a model solution to the problem, written clearly and well, and I can imagine writing exactly what this student wrote as part of a homework or exam solution that I distribute to a class. In the context of Calculus I, it's also how I would do this problem on the board during class if a student asked me about it. On the Status of Infinity Some of the other calculus teachers here have mentioned that they teach their students that "infinity isn't a number". I find this statement very strange, and I suppose that my position is that infinity is a number. It certainly isn't a real number, since it's not included in the usual real number system. But neither is the imaginary unit $i$, and I don't think many people would argue that $i$ isn't a number. The number $i$ is included in the system of complex numbers, and the number $\infty$ is included in the system of extended real numbers, which is the set $\mathbb{R}\cup\{-\infty,\infty\}$. I don't see the difference. Of course, there's no standard definition of "number" in mathematics, so there's no objective truth either way. This is part of why it strikes me as so odd that a teacher would say that "$\infty$ isn't a number". It's possible that what they mean is that "you can't do arithmetic with $\infty$". But of course you can do arithmetic with $\infty$. For example,$$\infty + \infty = \infty,\qquad \infty \cdot \infty = \infty,\qquad\text{and}\qquad 3\cdot \infty = \infty.$$These definitions are absolutely standard in mathematics, and I would feel free to use them in a conference talk or journal article without comment.
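Incidentally, IEEE 754 floating-point arithmetic mirrors this extended-real arithmetic, so the defined and undefined cases can be demonstrated in a few lines (my own illustration of the rules above):

```python
import math

inf = math.inf

# the defined operations from the extended reals
print(inf + inf)   # inf
print(inf * inf)   # inf
print(3 * inf)     # inf

# the indeterminate forms come out as NaN ("not a number")
print(inf - inf)   # nan
print(inf / inf)   # nan
print(0 * inf)     # nan
```

The hardware designers made the same choices: the well-defined extended-real operations propagate infinity, and the indeterminate forms are flagged rather than assigned a value.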
I would hope that most calculus students would know how to do basic arithmetic with $\infty$ by the end of a first calculus course, but apparently this varies by instructor. There are also arithmetic operations involving $\infty$ that are undefined, such as$$\infty - \infty,\qquad \frac{\infty}{\infty},\qquad\text{and}\qquad 0\cdot\infty.$$The last is sometimes defined to be zero (e.g. in the theory of Lebesgue integration), but in the context of calculus it's better to leave it undefined. As far as I know, all of this is completely standard, and in my experience arithmetic involving $\infty$ and $-\infty$ is commonly used by mathematicians without further explanation or comment. I've seen lots of examples of this, but to cite a specific one it's certainly the case that Rudin's Real & Complex Analysis textbook (an extremely standard choice for a graduate analysis course) uses the extended real number system throughout. On the Student's Answer The student's answer depends primarily on the following theorem. Theorem. Let $f\colon \mathbb{R}\to\mathbb{R}$ and $g\colon\mathbb{R}\to\mathbb{R}$ be functions, and let $a\in [-\infty,\infty]$. If$$\lim_{x\to a} f(x) = L\qquad\text{and}\qquad \lim_{x\to a} g(x) = M$$ for some $L,M\in[-\infty,\infty]$ and the product $LM$ is defined, then$$\lim_{x\to a} f(x)\,g(x) = LM.$$ This is a well-known and standard theorem in analysis. In the context of this theorem, the student's work constitutes a perfectly good proof of the fact that$$\lim_{x\to\infty} \bigl(x-\sqrt{x}\bigr) = \infty.$$It is no more or less correct than something like$$\lim_{x\to 0} \frac{x\sin x + 2 \sin x}{x} = \lim_{x\to 0} \,\bigl(x+2\bigr)\!\left(\frac{\sin x}{x}\right) = (2)(1) = 2.$$I don't see why this proof would require any more explanation or rigor, in either a calculus or real analysis course, and I feel the same way about the student's proof.
I suppose it might be reasonable for an analysis professor to always require students to cite the theorems that they are using, as opposed to using theorems implicitly as part of a calculation. I certainly don't think this would be a reasonable requirement for student answers in a calculus course.

Should we teach arithmetic with infinity to calculus students?

I do, and I would certainly hope that most other calculus instructors do as well. Dealing with the concept of infinity is a major theme of calculus, and the rules for arithmetic involving infinity ultimately derive from the idea of a limit. How does it help to avoid talking about this? Actually, it seems to me that it would be difficult to cover the idea of an "indeterminate form" without covering this material. I guess at least some of the teachers here manage to avoid saying that "infinity plus infinity equals infinity" by always saying "the sum of two quantities that are both approaching infinity again approaches infinity", but what's the purpose of being so obtuse? If there's a simple way to say something, just say it that way. And in any case, the reality is that you can do arithmetic with infinity. Saying that $\infty+\infty$ is undefined or indeed anything other than $\infty$ is just wrong, both at an intuitive level and from the point of view of standard notation and terminology. Students will figure out that it's true on their own, and will try to guess what other arithmetic rules you're not telling them. If you tell students that $\infty + \infty$ isn't $\infty$, you lose your credibility, and they won't believe you later when you tell them that $\infty - \infty$ isn't $0$.

Okay, but should we mark the student wrong?

Even if you don't talk about arithmetic involving infinity in your calculus class, the fact remains that it is absolutely standard mathematical notation.
Students often seek help from mathematics tutors, other math professors, online videos, and so forth, and any one of those sources might be teaching your students about how to use infinity in this fashion. Can you really justify deducting points from students who don't write their mathematics the way that you want it written? I feel like one of the most basic principles of grading is that correct answers should receive full credit, unless the answer explicitly violates the instructions for the question. This student's answer is completely correct, and in my opinion giving it anything less than 5/5 is just arbitrary and unfair.
STA301 Assignment no. 1 (Lessons 1–15)

Question 1: Marks: 3+2=5
a) The mean age of a group of 100 persons was found to be 32.02. Later it was discovered that the age 57 was misread as 27. Find the corrected mean.
b) The sum of deviations from an arbitrary value A = 15 for 10 values is 25. Find the A.M.

Question 2: Marks: 3+3=6
a) The mean of 200 items is 48 and their S.D. is 3. Find the values of $\sum X$ and $\sum X^2$.
b) A vehicle going up from Islamabad to Murree consumes petrol at the rate of one litre per 8 km, while going down from Murree to Islamabad one litre per 12 km. Find the average rate of consumption between the two cities.

Question 3: Marks: 4
The S.D. of a symmetrical distribution is 5. What must be the value of $m_4$ so that the distribution will be (i) mesokurtic, (ii) platykurtic, (iii) leptokurtic? Explain.

A commenter asked whether all the answers in this 2015 STA301 assignment solution are correct ("Is this the correct solution?").

Solution to Question 1(a):
Mean = $\sum x/n$, so the total age = 32.02 × 100 = 3202. The age 57 was misread as 27, so subtract 27 (3202 − 27 = 3175) and add the correct reading 57 (3175 + 57 = 3232). The new corrected mean = 3232/100 = 32.32.

For Question 2(b) ("A vehicle going up from Islamabad to Murree consumes petrol at the rate of one litre per 8 km, while going down from Murree to Islamabad one litre per 12 km. Find the average rate of consumption between the two cities."), a commenter asked what the answer would be. The reply given was "10 — just find the simple mean"; another commenter asked for an explanation:
$$\bar X = \frac{\sum X}{n} = \frac{20}{2} = 10$$
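The arithmetic in these questions can be checked with a few lines of code. This is a sketch with variable names of my own choosing; note that for Question 2(b), alongside the quoted simple-mean reply, the harmonic mean is the usual textbook answer when equal distances are travelled at different rates:

```python
# Q1(a): corrected mean after a misread value
n = 100
reported_mean = 32.02
total = reported_mean * n                 # 3202.0
corrected_total = total - 27 + 57         # remove the misread 27, add the true 57
corrected_mean = corrected_total / n      # 32.32

# Q1(b): A.M. recovered from the sum of deviations about an arbitrary value A
A, sum_dev, n_b = 15, 25, 10
arithmetic_mean = A + sum_dev / n_b       # mean = A + (sum of deviations)/n = 17.5

# Q2(b): the quoted reply takes the simple mean of 8 and 12 km/litre;
# for equal distances travelled, the harmonic mean is the standard answer
simple_mean = (8 + 12) / 2                # 10.0
harmonic_mean = 2 / (1 / 8 + 1 / 12)      # 9.6
```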
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping): $$ \dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2} {\Gamma \vdash M : T_1 \wedge T_2} (\wedge I) \qquad\qquad \dfrac{} {\Gamma \vdash M : \top} (\top I) $$ Intersection types have interesting properties with respect to normalization: A lambda-term can be typed without using the $\top I$ rule iff it is strongly normalizing. A lambda-term admits a type not containing $\top$ iff it has a normal form. What if instead of adding intersections, we add unions? $$ \dfrac{\Gamma \vdash M : T_1} {\Gamma \vdash M : T_1 \vee T_2} (\vee I_1) \qquad\qquad \dfrac{\Gamma \vdash M : T_2} {\Gamma \vdash M : T_1 \vee T_2} (\vee I_2) $$ Does the lambda-calculus with simple types, subtyping and unions have any interesting similar property? How can the terms typable with union be characterized?
One way to prove this is to first show that $\kappa +\mu =\max\{\kappa ,\mu \}$ when either $\kappa$ or $\mu$ are infinite cardinals. This is assumed in the proof below. I searched the web for this approach and found it here - theorem B3 in the appendix combines both, showing first that $\kappa +\mu =\max\{\kappa ,\mu \}$ and then that $\kappa \times \mu =\max\{\kappa ,\mu \}$. We begin with a lemma. Lemma 1: Let $B$ be a subset of an infinite set $A$ and $f: B \to B \times B$ a surjective function. Then $|B| \le |B \times B| \le |B| \le |A|$. Moreover, if $|B|$ is indeed less than $|A|$, then $f$ can be extended to a surjective function $D \to D \times D$, with $B$ a proper subset of $D$. Proof: For the first part, apply elementary cardinality theory. For the second part, we can find an infinite set $U$ that is disjoint from $B$, so that $|U| = |B|$; we also have the identity $\tag 1 (B \cup U) \times (B \cup U) = (B \times B) \cup (B \times U) \cup (U \times B) \cup (U \times U)$ a disjoint union of four pieces all having a cardinality of $|B|$. The function $f$ takes care of the first piece, and a cardinality argument allows us to surjectively cover the remaining three pieces with a function operating on the set $U$ as a domain. So we can extend $f$ to $D = B \cup U$. $\quad \blacksquare$ We are now ready to prove the main result: Proposition 2: For any infinite set $A$, $\tag 2 | A \times A | = |A|$ Proof We only have to show that $|A| \ge |A \times A|$. Consider the collection of all $(B,\phi)$ where $B \subseteq A$ and $\phi : B \to B \times B$ is a surjection. This collection is nonempty since there is a surjection $ \mathbb N \to \mathbb N \times \mathbb N$. This collection can be partially ordered by $(B,\phi) < (C,\psi)$ if $B \subseteq C$ and $\psi|_B = \phi$. Every chain has an upper bound; simply take the union of the graphs of the functions in the chain, defining a surjective function $D \to D \times D$. 
By Zorn's lemma there is a maximal element $(\hat B,\hat \phi)$. By lemma 1, we can proceed under the assumption that $|\hat B| \lt |A|$, since otherwise we can use $\hat \phi$ to establish (2). But then lemma 1 also provides a surjective extension of $\hat \phi$, contradicting that $(\hat B,\hat \phi)$ was a maximal element, i.e. no such extension can be found. $\quad \blacksquare$ This proof was arrived at by 'lifting' the proof that $|A \times \mathbb N| = |A|$, found here.
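The base case invoked above — the existence of a surjection $\mathbb N \to \mathbb N \times \mathbb N$ — can be made concrete with the inverse Cantor pairing function. A finite sketch (only an initial segment can be checked, of course):

```python
import math

def cantor_unpair(z):
    """Inverse Cantor pairing: a bijection (hence surjection) N -> N x N."""
    w = (math.isqrt(8 * z + 1) - 1) // 2   # largest w with w*(w+1)/2 <= z
    t = w * (w + 1) // 2                   # start of the w-th diagonal
    y = z - t
    x = w - y
    return (x, y)

# walking z = 0, 1, 2, ... enumerates the pairs diagonal by diagonal,
# hitting every pair (x, y) exactly once
seen = [cantor_unpair(z) for z in range(10)]
```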
Choose $n$ points randomly from a circle; how do we calculate the probability that all the points are in one semicircle? Any hint is appreciated.

A variation on @joriki's answer (and edited with help from @joriki): Suppose that point $i$ has angle $0$ (angle is arbitrary in this problem) -- essentially this is the event that point $i$ is the "first" or "leading" point in the semicircle. Then we want the event that all of the points are in the same semicircle -- i.e., that the remaining points end up all in the upper halfplane. That's a coin-flip for each remaining point, so you end up with $1/2^{n-1}$. There's $n$ points, and the event that any point $i$ is the "leading" point is disjoint from the event that any other point $j$ is, so the final probability is $n/2^{n-1}$ (i.e. we can just add them up). A sanity check for this answer is to notice that if you have either one or two points, then the probability must be 1, which is true in both cases. See elsewhere for the general problem (when the points have any distribution that is invariant w.r.t. rotation about the origin) and for a nice application. As a curiosity, this answer can also be expressed as a product of sines.

Here's another way to do this: Divide the circle into $2k$ equal sectors. There are $2k$ contiguous stretches of $k$ sectors each that form a semicircle, and $2k$ slightly shorter contiguous stretches of $k-1$ sectors that almost form a semicircle. The number of the semicircles containing all the points minus the number of slightly shorter stretches containing all the points is $1$ if the points are contained in at least one of the semicircles and $0$ otherwise; that is, it's the indicator variable for the points all being contained in at least one of the semicircles. 
The probability of an event is the expected value of its indicator variable, which in this case is $$2k\left(\frac k{2k}\right)^n-2k\left(\frac{k-1}{2k}\right)^n=\frac k{2^{n-1}}\left(1-\left(1-\frac1k\right)^n\right)\;.$$ The limit $k\to\infty$ yields the desired probability: $$ \lim_{k\to\infty}\frac k{2^{n-1}}\left(1-\left(1-\frac1k\right)^n\right)=\lim_{k\to\infty}\frac k{2^{n-1}}\cdot\frac nk=\frac n{2^{n-1}}\;. $$ Bull, 1948, Mathematical Gazette, Vol 32 No 299 (Dec), pp. 87-88 solves this problem in the context of the broken stick problem (he uses polytopes and relative volumes in his argument). Rushton, 1949, Mathematical Gazette, Vol 33 No 306 (May), pp. 286-288 points out that the problem can be re-stated in terms of placing points at random on the circumference of a circle. Rushton's answer is the clearest I have seen. Place $n$ points randomly on the circumference. Label them $X_1, X_2, ..., X_n$. Open up the circle at $X_n$ and produce a straight line. Label the line $OX_n$ (where $O$ is the part of the circle previously immediately adjacent to $X_n$). There are $n$ line segments: $OX_1, X_1X_2, ..., X_{n-1}X_n$. Each segment is equally likely to be longer than half the length of $OX_n$ (and thus correspond to greater than a semi-circle of the original circle). The probability that the first segment fulfils this condition is the probability that the remaining $n-1$ points lie upon the second half of the line $OX_n$. That is $(\frac{1}{2})^{(n-1)}$. The probability that there is one segment (note there can be at most one) greater than half the length of the circumference is the sum of the probabilities that each particular segment could be so (because these are mutually exclusive): $n(\frac{1}{2})^{(n-1)}$. So the probability that all the points lie in one semicircle is $n(\frac{1}{2})^{(n-1)}$. 
Another, simpler approach:

1) Randomly pick $1$ out of the $n$ points and call it $A$: $\binom n1$ ways.
2) Starting from $A$, mark another point $B$ on the circumference such that $\mathrm{length}(AB) = \frac12(\text{circumference})$ [so that $AB$ and $BA$ are two semi-circles].
3) Now, of the remaining $(n-1)$ points, each point can lie on either $AB$ or $BA$ with probability $\frac12$.
4) For ALL the remaining $(n-1)$ points to lie on $AB$ (i.e., all $(n-1)$ on the same semi-circle, with $A$ as the leading point), the joint probability is $\underbrace{\tfrac12\cdot\tfrac12\cdots\tfrac12}_{(n-1)\text{ times}} = (\frac12)^{(n-1)}$.

Since the events corresponding to the $n$ possible choices of the leading point $A$ are mutually exclusive, the expression in #4 adds up $\binom n1$ times $\implies$ the required probability is $\binom n1(\frac12)^{(n-1)}$ $=$ $n(\frac12)^{(n-1)}$.
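A quick Monte Carlo check of the formula $n/2^{n-1}$. This sketch uses the standard criterion that all points lie in some closed semicircle iff some circular gap between consecutive sorted angles is at least $\pi$:

```python
import math, random

def all_in_semicircle(angles):
    """True iff all points lie in one closed semicircle: some circular gap >= pi."""
    a = sorted(angles)
    gaps = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    gaps.append(2 * math.pi - (a[-1] - a[0]))   # wrap-around gap
    return max(gaps) >= math.pi

def estimate(n, trials=20000, seed=1):
    """Monte Carlo estimate of P(all n uniform points in one semicircle)."""
    rng = random.Random(seed)
    hits = sum(
        all_in_semicircle([rng.uniform(0, 2 * math.pi) for _ in range(n)])
        for _ in range(trials)
    )
    return hits / trials

# theory: n / 2**(n-1), e.g. 3/4 for n = 3 and 1/2 for n = 4
```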
I am trying to get an intuitive understanding and feel for the difference and practical difference between the terms consistent and asymptotically unbiased. I know their mathematical/statistical definitions, but I'm looking for something intuitive. To me, looking at their individual definitions, they almost seem to be the same thing. I realize the difference must be subtle but I just don't see it. I'm trying to visualize the differences, but just can't. Can anyone help? They are similar; a consistent estimator that's biased must nevertheless be asymptotically unbiased (otherwise it could not be consistent), but an asymptotically unbiased estimator doesn't have to be consistent. For example, imagine an i.i.d. sample of size $n$ ($X_1, X_2, ..., X_n$) from some distribution with mean $\mu$ and variance $\sigma^2$. As an estimator of $\mu$ consider $T = X_1 + 1/n$. The bias is $1/n$ so $T$ is asymptotically unbiased, but it is not consistent. There are "unbiased but not consistent" estimators as well as "biased but consistent" estimators: So, they are not the same thing. Also, there is a long discussion about this topic here: Asymptotically unbiased: As $n \rightarrow \infty$, bias converges to $0$. Consistent: As $n \rightarrow \infty$, variance of the estimator converges to $0$. I would like to clarify that consistency in general does not imply asymptotic unbiasedness. Consider an estimator for $0$ taking value $0$ with probability $(n-1)/n$ and value $n$ with probability $1/n$. It is a biased estimator since the expected value is always equal to $1$ and the bias does not disappear even if $n\to\infty$. However, it is a consistent estimator since it converges to $0$ in probability as $n\to\infty$. Asymptotic unbiasedness does not imply consistency either as it is mentioned in other answers. For example, the periodogram is an asymptotically unbiased estimator of the spectral density, but it is not consistent. 
Roughly speaking, consistency means that for large values of $n$ we are going to be close to the true value of the parameter with a high probability, i.e. estimates are going to be close to the true value of the parameter. Asymptotic unbiasedness means that for large values of $n$ on average we are going to be close to the true value of the parameter, i.e. the average of estimates is going to be close to the true value of the parameter, but not necessarily the estimates themselves.
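The contrast can be seen numerically. In this sketch the sample mean of standard normal data is consistent (the spread of the estimates shrinks with $n$), while the estimator $T = X_1 + 1/n$ from the first answer is asymptotically unbiased (its bias $1/n$ vanishes) but not consistent (its spread stays near $1$ no matter how large $n$ gets):

```python
import random, statistics

def simulate(estimator, n, reps=2000, seed=7):
    """Return (mean, stdev) of the estimator over many replicated samples."""
    rng = random.Random(seed)
    vals = []
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]   # true mean is 0
        vals.append(estimator(x))
    return statistics.mean(vals), statistics.stdev(vals)

mean_est = lambda x: sum(x) / len(x)    # consistent (and unbiased)
t_est = lambda x: x[0] + 1 / len(x)     # asymptotically unbiased, NOT consistent

m_small = simulate(mean_est, 10)
m_large = simulate(mean_est, 1000)      # spread shrinks ~ 1/sqrt(n)
t_small = simulate(t_est, 10)
t_large = simulate(t_est, 1000)         # bias -> 0 but spread stays near 1
```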
Each person might be allotted more than one share of the secret. Let $G$, $C$ and $D$ denote the number of shares allotted to a General, a Colonel, and a Desk Clerk respectively, and let $T$ denote the Threshold of the secret sharing scheme. Then, we have that\begin{align}T &\leq 5G,\\T &\leq 4C + 3D,\\T &\leq 3G + 3D.\end{align}Can you find suitable values for $G, C, D$, and $T$? The total number of shares is$N = 6G + 5C + 4D$ and we have a $(T,N)$ secret sharing scheme. $D=2, C=3, G=4, T=18$, and $N = 47$ seems to work. There are, of course, other combinations that would result in $18$ or more shares, e.g., $5C+2D$ or $4G+D$ but nothing in the problem statement says that this is not to be allowed. Note that the $5$ colonels (or the $4$ Desk Clerks for that matter) cannot stage a coup by themselves; the colonels have to have at least two Desk Clerks (or a General) as co-conspirators. Addendum: in the spirit of @IlmariKaronen's answer, Divide the secret $S$ into 6 shares in a $(5,6)$ secret-sharing scheme and give each of the six generals a share. Any 5 of them can reconstruct $S$. Create a random binary vector $X$ as long as the secret $S$, and then make five shares of $S\oplus X$ in a $(4,5)$ secret-sharing scheme, giving one share to each colonel. Any four of them can reconstruct $S\oplus X$. Create $4$ shares of $X$ in a $(3,4)$ secret-sharing scheme and hand one share to each desk clerk. Any $3$ desk clerks can reconstruct $X$, and together with $S\oplus X$ from the $4$ colonels, $S$ can be reconstructed. Create $6$ shares of $S\oplus X$ in a $(3,6)$ secret-sharing scheme and hand a share to each general. Any three generals can reconstruct $S\oplus X$, and together with $X$ from three desk clerks, can recreate $S$. Note that this has fewer secret-sharing schemes to implement than Ilmari's method, and the desk clerks and colonels have only one share to have and hold. 
Only the generals have two different shares and must remember to use the correct one in the two different cases when they are acting by themselves and when in conjunction with three desk clerks. Also, the desk clerks, by themselves, can only reconstruct $X$.
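The multi-share solution ($G=4$, $C=3$, $D=2$, $T=18$, $N=47$) can be sanity-checked with a toy Shamir-style implementation, one standard way to realize a $(T,N)$ threshold scheme. The prime modulus and the secret below are illustrative choices of mine:

```python
import random

P = 2_147_483_647   # Mersenne prime 2^31 - 1, illustrative field modulus
T, N = 18, 47       # 6 generals x 4 + 5 colonels x 3 + 4 clerks x 2 = 47 shares

def make_shares(secret, t, n, seed=42):
    """Shares (x, p(x)) of a random degree t-1 polynomial with p(0) = secret."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P); needs >= T correct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, T, N)
# five generals hold 5 * 4 = 20 >= 18 shares, so any 18 of theirs suffice
recovered = reconstruct(shares[:T])
colonel_shares = 5 * 3          # five colonels: 15 < 18, below threshold
```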
Problem Description Given an array $A$ of $n$ integers, find the minimum number of operations to turn it into a new array $\widehat{A}$ with a (weakly) descending order: we require that $\hat{a}_i \geq \hat{a}_j$ for all $1\leq i<j\leq n$. Here an operation means either increasing or decreasing an element by $1$. For example, it requires at least $4$ operations to turn the array $[1,2,3,4]$ into descending order. One solution is increasing the first element by $2$, increasing the second element by $1$, and decreasing the last element by $1$ (then the new array is $[3,3,3,3]$, which is in descending order). My questions: Is the problem NP-hard? If not, what algorithm solves this problem with the best time complexity? My efforts We can write this problem as an integer linear programming. Say the elements in the array are $a_1,a_2,\ldots,a_n$, and those in the new array are $\hat{a}_1,\ldots,\hat{a}_n$ then the problem is essentially an integer program: $$ \begin{align*} \text{minimize}\quad &\left|\hat{a}_1-a_1\right|+\cdots+\left|\hat{a}_n-a_n\right| \\ \text{subject to}\quad &\hat{a}_1\ge\cdots\ge\hat{a}_n \end{align*} $$ or equivalently $$ \begin{align*} \text{minimize}\quad &t_1+\cdots+t_n \\ \text{subject to}\quad &\hat{a}_i-a_i\le t_i, &i=1,\ldots,n\\ &-\left(\hat{a}_i-a_i\right)\le t_i, &i=1,\ldots,n\\ &\hat{a}_1\ge\cdots\ge\hat{a}_n \end{align*} $$ But it seems not to help because the coefficient matrix is not totally unimodular, thus we cannot relax it to linear programming. I also find a property of optimum solutions: If some consecutive elements are originally in non-descending order, then they must be the same in an optimum solution. I don't know whether this property helps.
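As a sanity check on the example, here is a brute-force sketch (exponential, only for tiny inputs). It relies on the standard fact that some optimal solution takes only values that already occur in the input array, so searching over those values suffices:

```python
from itertools import product

def min_ops_bruteforce(a):
    """Minimum total |changes| to make the array weakly descending.

    Searches all weakly descending candidate arrays whose entries are
    drawn from the input values (some optimum has this form).
    Exponential time; only suitable for tiny arrays."""
    vals = sorted(set(a))
    best = None
    for cand in product(vals, repeat=len(a)):
        if all(cand[i] >= cand[i + 1] for i in range(len(a) - 1)):
            cost = sum(abs(x - y) for x, y in zip(a, cand))
            best = cost if best is None else min(best, cost)
    return best
```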
Your intuition is correct: regular languages can't count. Indeed, a quick way to prove that $L$ isn't regular is to observe that $L\cap \{a^mb^n\mid m,n\geq 0\} = L'\cup\{a,b\}$. Since the intersection of any two regular languages is regular, and $\{a^mb^n\mid m,n\geq 0\}$ is certainly regular, it must be that $L$ is not – because if it was regular, then its intersection with $\{a^mb^n\mid m,n\geq 0\}$ would have to be regular, and it isn't. To prove with the pumping lemma, you need to show that, for every $p>0$, there is a string $s$ of length at least $p$ such that every way of writing $s=xyz$ with $|y|\geq 1$ and $|xy|\leq p$ has $xy^nz\notin L$ for some $n\geq 0$. (I originally wrote "just need to show" but deleted "just" because the statement that follows is such a mouthful!) Your string $s$ works just fine. If we rewrite it in the required form $s=xyz$, the fact that $|xy|\leq p$ means that $x=a^k$, $y=a^\ell$, with $1\leq\ell\leq p$ and $0\leq k\leq p-\ell$. And $z$ is the rest of the string. But, now, for any $n\neq 1$, we have\begin{align*} xy^nz &= a^ka^{n\ell}a^{p-k-\ell}(aa)^pb^p(bb)^p\\ &= a^{3p+(n-1)\ell}b^{3p}\,,\end{align*}and this string is not in $L$, since the number of $a$s at the front is different from the number of $b$s at the back, so the number of $aa$ substrings is different from the number of $bb$s. Insofar as that proof works, there's no problem with your choice of $s$. But, if you read through the proof, you'll see that there would have been a bit less writing if you'd just chosen $a^pb^p$. The point is that, when something in the $a^p$ part gets "pumped", it breaks the required property of having the same number of $aa$s as $bb$s. It's hard to give general advice, because proof is a creative act and you can't just follow recipes. The general scheme is to pick a string where repeating any sequence of characters from the first $p$ will break the property that defines the language.
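The counting argument can be checked mechanically. This sketch assumes, as the discussion suggests, that membership in $L$ requires equally many (overlapping) $aa$ and $bb$ substrings; it pumps every admissible decomposition of $s = a^p(aa)^pb^p(bb)^p$ and confirms the pumped strings fall outside $L$:

```python
def count_overlapping(s, sub):
    """Count overlapping occurrences of sub in s."""
    return sum(1 for i in range(len(s) - len(sub) + 1) if s.startswith(sub, i))

def in_L(s):
    # assumed membership test: equally many aa and bb substrings
    return count_overlapping(s, "aa") == count_overlapping(s, "bb")

p = 5
s = "a" * p + "aa" * p + "b" * p + "bb" * p   # the string a^{3p} b^{3p}
assert in_L(s)                                # 3p-1 occurrences of each

# every split s = xyz with |xy| <= p, |y| >= 1 has x = a^k, y = a^l;
# pumping with any n != 1 leaves the bb count fixed but changes the aa count
for l in range(1, p + 1):
    for k in range(0, p - l + 1):
        z = s[k + l:]
        for n in (0, 2, 3):
            assert not in_L("a" * k + "a" * (l * n) + z)
```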
In response to a Quora question, I wrote the following: Given two Lagrangians that differ only by a full time derivative, i.e., $$L'=L+\frac{d}{dt}F(q,\dot{q},t),$$ the difference in their variation will be given by: $$\delta L'-\delta L=\delta\frac{d}{dt}F(q,\dot{q},t),$$ which, as the variation and the derivative operator commute, is equal to $$\delta L'-\delta L=\frac{d}{dt}\delta F(q,\dot{q},t).$$ The variation of the action is given by $\delta S=\int\delta L~dt$. The difference in the variation of the action, then, will be $$\delta S'-\delta S=\int_{t_1}^{t_2}\frac{d}{dt}\delta F(q,\dot{q},t)~dt.$$ Given that the integrand is a full time derivative, the integration can be trivially carried out: $$\delta S'-\delta S=\delta F(q,\dot{q},t)\bigg|_{t_1}^{t_2}.$$ The nature of the variational problem is such that the variation vanishes at the endpoints of the integration by definition. So $\delta F(q(t_1),\dot{q}(t_1),t_1)=\delta F(q(t_2),\dot{q}(t_2),t_2)=0.$ Therefore, $$\delta S'-\delta S=0,$$ and the Euler-Lagrange equations that correspond to $L$ and $L'$ will be the same. I.e., the two Lagrangians lead to identical equations of motion. (Strictly speaking, this last step requires that $F$ depend only on $q$ and $t$: the variation $\delta q$ vanishes at the endpoints by definition, but $\delta\dot{q}$ need not, so a genuine $\dot{q}$-dependence of $F$ would spoil the boundary term.)
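The key step — that $S'-S$ depends only on the endpoint values of $F$ — can be checked numerically. This sketch uses an illustrative harmonic-oscillator $L$ and $F(q)=q^2$ (so $dF/dt = 2q\dot q$); with a midpoint discretization the difference $S'-S$ telescopes exactly to $F(q(t_2))-F(q(t_1))$ for every path with the same endpoints:

```python
import math

def action(L, qs, dt):
    """Discretized action: sum of L(q_midpoint, qdot) * dt over each interval."""
    S = 0.0
    for i in range(len(qs) - 1):
        q_mid = 0.5 * (qs[i] + qs[i + 1])
        q_dot = (qs[i + 1] - qs[i]) / dt
        S += L(q_mid, q_dot) * dt
    return S

L = lambda q, qd: 0.5 * qd**2 - 0.5 * q**2   # harmonic oscillator (m = k = 1)
Lp = lambda q, qd: L(q, qd) + 2 * q * qd     # L' = L + dF/dt with F(q) = q^2

n, dt = 200, 0.01
# two different paths sharing the same endpoint values
path1 = [math.sin(i * dt) for i in range(n + 1)]
path2 = [math.sin(i * dt) + 0.3 * math.sin(math.pi * i / n) for i in range(n + 1)]

dS1 = action(Lp, path1, dt) - action(L, path1, dt)
dS2 = action(Lp, path2, dt) - action(L, path2, dt)
endpoint_term = path1[-1] ** 2 - path1[0] ** 2   # F(q(t2)) - F(q(t1))
```

Since $S'-S$ is path-independent, its variation vanishes and both Lagrangians yield the same equations of motion.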
I would like to estimate the phase delay accurately for any random FIR filter. The definition of the phase delay is the continuous phase divided by the angular frequency (with a sign change). That means that you can't simply use the wrapped phase, between $-\pi$ and $+\pi$, you can get with a FFT to obtain the phase delay estimation. At least, you need to use an unwrapped version of the phase. For example, with a linear phase FIR filter, the phase delay is supposed to be a constant, so the phase you have to use to do the calculation has to be monotonic and decreasing. However, by doing some tests in MATLAB, I have been able to see that the unwrapped phase doesn't have this property everywhere. For a halfband lowpass filter for example, there are a few "jumps" in the stopband with a length not related to $\pi$, so they are untouched in the unwrapped phase, which gives me a wrong estimation of the phase delay, featuring jumps as well instead of a constant value. Indeed, the "continuous phase" I need isn't the same thing as the unwrapped phase, and is associated with the zero-phase amplitude of the FIR filter. I have not found yet a way to get it, and MATLAB with its zerophase function is giving wrong results as well, because it cheats! If the filter is linear phase, it uses the formula we all know for the phase delay. If it is a FIR linear phase transformed into a minimum-phase filter for example, then the phase delay displayed again features these small jumps. I'm thinking about doing something like detecting the jump length, assuming the phase is supposed to be smooth everywhere, and using the derivative at the sample before the jump to do something which might look good... But I would like to know if someone knows a rigorous way to get the continuous phase. EDIT : here is an example. You can find there the phase for a FIR linear phase filter, wrapped and unwrapped. As you can see on the unwrapped, some phase jumps are still here... 
EDIT 2 : my problem can be redefined this way. Most of the time, any FIR filter can be studied with its frequency response H($\omega$) = |H($\omega$)| $e^{j \phi( \omega)}$ with |H($\omega$)| the magnitude, always positive or zero, and $\phi(\omega)$ the wrapped phase. There is an alternative notation : H($\omega$) = A($\omega$) $e^{j \theta( \omega)}$ with A($\omega$) the amplitude or zero-phase amplitude and $\theta( \omega)$ the continuous phase representation. Its principal value features only 2$\pi$ jumps, so it can be made fully continuous with a standard unwrap algorithm. That's because A($\omega$) can be positive or negative. Each time there is a zero in the FIR filter, A($\omega$) changes its sign. It is these zeros which cause the small jumps. Since A($\omega$) contains the sign information, the small jumps are no longer present in the new phase representation $\theta( \omega)$. So, what I need now, is in fact a way to calculate accurately $\theta( \omega)$, maybe by calculating first A($\omega$). I might detect the small jumps easily and change the sign of |H($\omega$)|, then use it to get the remaining $e^{j \phi( \omega)}$. But there is maybe a better way of doing this, and I'd like to know if someone familiar with this notation has already done that.
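For the linear-phase case, A($\omega$) can be computed directly by removing the known group delay $M$ from H($\omega$): the product H($\omega$)e^{+j\omega M} is purely real (possibly negative), and the continuous phase is then exactly $\theta(\omega) = -M\omega$, with no spurious jumps. A sketch with a toy 3-tap symmetric filter of my own choosing:

```python
import cmath, math

h = [1.0, 1.0, 1.0]          # toy linear-phase (symmetric) FIR; group delay M = 1
M = (len(h) - 1) / 2

def freq_resp(w):
    """H(w) = sum_n h[n] exp(-j w n)."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

# zero-phase amplitude: A(w) = H(w) * exp(+j w M) is real but may be negative;
# where it changes sign, |H(w)| would instead show a pi jump in the phase
ws = [k * math.pi / 100 for k in range(101)]
A = [(freq_resp(w) * cmath.exp(1j * w * M)).real for w in ws]
resid = max(abs((freq_resp(w) * cmath.exp(1j * w * M)).imag) for w in ws)

# with A(w) carrying the sign, theta(w) = -M*w is exactly linear, so the
# phase delay -theta(w)/w = M is constant over the whole band
```

Here A($\omega$) = 1 + 2cos $\omega$, which goes negative past $\omega = 2\pi/3$: exactly the sign change that produces the small jumps when the magnitude is forced to be nonnegative.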
I am interested in proving or finding a counter example for the following statement $$ \lim_{n \to \infty}{\frac{\ln{f(n)}}{\ln{g(n)}}} = \infty \implies \lim_{n \to \infty}{\frac{f(n)}{g(n)}} = \infty $$ It seems to make a lot of sense, but it isn't very clear how to prove this statement. If it isn't true, what if both $f(n)$ and $g(n)$ are monotonically increasing? I've tried looking at the contrapositive, as well as Taylor series expansions, but I'm not able to come to a complete conclusion. I would also imagine that this holds for any monotonic function, not just the logarithm (if it holds at all).
I have the following problem in $x \in \mathbb C^{205}$ $$\displaystyle\min_{x}x^HAx$$ subject to the following constraints $$x^HBx = 1$$ $$x^HC_ix = 0$$ for $i \in \{0,1,\dots,203\}$, where $A$ and $B$ are complex $205 \times 205$ matrices and can be assumed to be positive definite. The $C_i$'s are rank-$1$ matrices (each $C_i$ matrix actually only has a single row which is non-zero, namely row $i$) but there are $204$ of them and they are not definite. I know there is likely not a single best algorithm for dealing with this type of problem, but any suggestions for things to try out would be much appreciated!
I would like to know how Dirichlet conditions are normally applied when using the finite volume method on a cell-centered non-uniform grid. My current implementation simply imposes the boundary condition by fixing the value of the first cell, $$ \phi_1 = g_D(x_L) $$ where $\phi$ is the solution variable and $g_D(x_L)$ is the Dirichlet boundary condition value at the l.h.s. of the domain ( NB $x_L \equiv x_{1/2}$). However this is incorrect because the boundary condition should fix the value of the cell face not the value of the cell itself. What I should really apply is, $$ \phi_{L} = g_D(x_L) $$ For example, lets solve the Poisson equation, $$ 0 = (\phi_x)_x + \rho(x) $$ with source term and boundary conditions, $$\rho=-1\\ g_D(x_L)=0 \\ g_N(x_R)=0$$ (where $g_N(x_R)$ is a Neumann boundary condition on the right hand side). Notice how the numerical solution has fixed the value of the cell variable to the boundary condition value ($g_D(x_L)=0$) at the left hand side. This has the effect of shifting the whole solution upwards. The effect can be minimized by using a large number of mesh points but that is not a good solution to the problem. Question In what ways are Dirichlet boundary conditions applied when using the finite volume method? I assume I need to fix the value of $\phi_1$ by interpolating or extrapolating using $\phi_0$ (a ghost point) or $\phi_2$ such that the straight line going through these points has the desired value at $x_L$. Can you provide any guidance or an example of how to do this for a non-uniform cell-centered mesh? Update Here is my attempt at using a ghost cell approach you suggested, does it look reasonable? 
The equation for cell $\Omega_1$ is (where $\mathcal{F}$ represents the flux of $\phi$), $$ \mathcal{F}_{3/2} - \mathcal{F}_{L} = \bar{\rho}$$ We need to write $\mathcal{F}_{L}$ in terms of the boundary condition using a ghost cell $\Omega_0$, $$\mathcal{F}_{L} = \frac{\phi_1 - \phi_0}{h_{-}} \quad\quad \text{[1]} $$ But we ultimately need to eliminate the $\phi_0$ term from the equation. To do this we write a second equation which is the linear interpolation from the centre of cell $\Omega_0$ to the centre of cell $\Omega_1$. Conveniently this line passes through $x_L$, so this is how the Dirichlet conditions enters the discretisation (because the value at this point is just $g_D(x_L)$), $$g_D(x_L) = \frac{h_1}{2h_{-}}\phi_0 + \frac{h_0}{2h_{-}}\phi_1 \quad\quad \text{[2]}$$ Combining equations 1 and 2 we can eliminate $\phi_0$ and find an expression for $\mathcal{F}_L$ in terms of $\phi_1$ and $g_D(x_L)$, $$\mathcal{F}_L = \frac{1}{h_{{-}}} \left(\phi_{1} - \frac{1}{h_{1}} \left(2 g_{D} h_{{-}} - h_{0} \phi_{1}\right)\right)$$ Assuming that we are free to choose the volume of the ghost cell we can set $h_0 \rightarrow h_1$ to give, $$ \mathcal{F}_L = -\frac{2 g_{D}}{h_{1}} + \frac{2 \phi_{1}}{h_{{-}}} $$ This can be simplified further because if the cells $\Omega_0$ and $\Omega_1$ are the same volume then we can set $h_{-}\rightarrow h_1$ finally giving, $$ \mathcal{F}_L = \frac{2}{h_{1}} \left( \phi_1 - g_D \right) $$ However, this approach has recovered the definition that is unstable so I'm not too sure how to proceed? Did I interpret your advice incorrectly (@Jan)? The strange thing is that it seems to work, see below.
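The ghost-cell flux $\mathcal{F}_L = \frac{2}{h}(\phi_1 - g_D)$ can be exercised on the model problem above. A sketch on a uniform mesh (non-uniform spacing only changes the $h$ factors), solved with a Thomas tridiagonal sweep; the exact solution of $0 = \phi_{xx} + \rho$ with $\rho=-1$, $\phi(0)=0$, $\phi_x(1)=0$ is $\phi = x^2/2 - x$:

```python
# Cell-centered FV for 0 = phi_xx + rho on (0,1), rho = -1:
# Dirichlet phi(0) = g_D imposed via the ghost-cell flux 2*(phi_0 - g_D)/h,
# Neumann phi_x(1) = g_N on the right face.
n = 40
h = 1.0 / n
g_D, g_N, rho = 0.0, 0.0, -1.0

# rows encode F_{i+1/2} - F_{i-1/2} = -rho*h for each cell
sub = [0.0] + [1.0 / h] * (n - 1)
diag = [-3.0 / h] + [-2.0 / h] * (n - 2) + [-1.0 / h]
sup = [1.0 / h] * (n - 1) + [0.0]
rhs = [-rho * h - 2.0 * g_D / h] + [-rho * h] * (n - 2) + [-rho * h - g_N]

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    m = len(d)
    c2, d2 = c[:], d[:]
    c2[0] /= b[0]
    d2[0] /= b[0]
    for i in range(1, m):
        denom = b[i] - a[i] * c2[i - 1]
        c2[i] = c2[i] / denom
        d2[i] = (d2[i] - a[i] * d2[i - 1]) / denom
    x = [0.0] * m
    x[-1] = d2[-1]
    for i in range(m - 2, -1, -1):
        x[i] = d2[i] - c2[i] * x[i + 1]
    return x

phi = thomas(sub, diag, sup, rhs)
centers = [(i + 0.5) * h for i in range(n)]
err = max(abs(p - (x * x / 2 - x)) for p, x in zip(phi, centers))
```

The boundary flux keeps the face value (not the first cell value) pinned to $g_D$, and the solution no longer shifts: the error here works out to $h^2/8$ uniformly.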
I've just finished learning the physics behind the problem and would like to write a program in C++ that can solve the problem. I'm actually stuck at the start. I've done quite a bit of research; the problem is there are not too many examples of code used to solve the problem. I'm going to solve the problem using finite-difference form. $$\dfrac{d^2\psi}{dx^2}\approx \dfrac{\psi_{n+1}+\psi_{n-1}-2\psi_{n}}{(\Delta x)^2}$$ Which allows us to rearrange in the form $$\psi_{n+1}=2\psi_{n} - \psi_{n-1}-2(\Delta x)^2(E-V_n)\psi_n$$ Using the even-parity solution, we have $$\psi(0)=1 \quad \quad \psi'(0)=0$$ at $n=0 \implies x=0$ Letting $m=1$ and $\hbar = 1$ I know that at the ground state $E_g =\frac{\pi^2}{8}$ I'm struggling to write code that can help me find $\psi_n$ for values of $n$. Can I assume that $V_0 = 0$ since we want the wave function to be inside the well? I'm not asking for someone to write the code for me, I'd just like tips on what I need to define and which method I should use. Thanks in advance EDIT: This is what I have so far

#include <iostream>
#include <vector>

int main() {
    std::cout << "1D Particle-in-a-Box\n";

    // parameters (even-parity solution: psi(0) = 1, psi'(0) = 0)
    double psi0 = 1.0, E, x_end;
    int number_steps;
    std::cout << "E = ";            std::cin >> E;            // trial energy, e.g. pi^2/8
    std::cout << "x_end = ";        std::cin >> x_end;        // edge of the well
    std::cout << "number_steps = "; std::cin >> number_steps;

    double dx = x_end / number_steps;
    std::vector<double> x(number_steps + 1), psi(number_steps + 1), V(number_steps + 1, 0.0);

    x[0] = 0.0;
    psi[0] = psi0;
    // even parity with psi'(0) = 0 implies psi[-1] = psi[1], so the first step is
    // psi[1] = psi[0] - dx^2 * (E - V[0]) * psi[0]
    x[1] = dx;
    psi[1] = psi[0] - dx * dx * (E - V[0]) * psi[0];

    // finite-difference recurrence: psi[n+1] = 2 psi[n] - psi[n-1] - 2 dx^2 (E - V[n]) psi[n]
    for (int nr = 1; nr < number_steps; ++nr) {
        x[nr + 1] = dx * (nr + 1);
        psi[nr + 1] = 2.0 * psi[nr] - psi[nr - 1] - 2.0 * dx * dx * (E - V[nr]) * psi[nr];
    }

    for (int nr = 0; nr <= number_steps; ++nr)
        std::cout << x[nr] << " " << psi[nr] << "\n";
    return 0;
}
I suspect there is in general not much difference between GMRES and CG for an SPD matrix. Let's say we are solving $ Ax = b $ with $ A $ symmetric positive definite and the starting guess $ x_0 = 0 $ and generating iterates with CG and GMRES, call them $ x_k^c $ and $ x_k^g $. Both iterative methods will be building $ x_k $ from the same Krylov space $ K_k = \{ b, Ab, A^2b, \ldots \} $. They will do so in slightly different ways. CG is characterized by minimizing the error $ e_k^c = x - x_k^c $ in the energy norm induced by $ A $, so that\begin{equation} (A e_k^c, e_k^c) = (A (x - x_k^c), x - x_k^c) = \min_{y \in K_k} (A (x-y), x-y).\end{equation} GMRES minimizes instead the residual $ r_k = b - A x^g_k $, and does so in the discrete $ \ell^2 $ norm, so that\begin{equation} (r_k, r_k) = (b - A x_k^g, b - A x_k^g) = \min_{y \in K_k} (b - Ay, b - Ay).\end{equation}Now using the error equation $ A e_k = r_k $ we can also write GMRES as minimizing\begin{equation} (r_k, r_k) = (A e_k^g, A e_k^g) = (A^2 e_k^g, e_k^g)\end{equation}where I want to emphasize that this only holds for an SPD matrix $ A $. Then we have CG minimizing the error with respect to the $ A $ norm and GMRES minimizing the error with respect to the $ A^2 $ norm. If we want them to behave very differently, intuitively we would need an $ A $ such that these two norms are very different. But for SPD $ A $ these norms will behave quite similarly. To get even more specific, in the first iteration with the Krylov space $ K_1 = \{ b \} $, both CG and GMRES will construct an approximation of the form $ x_1 = \alpha b $. CG will choose\begin{equation} \alpha = \frac{ (b,b) }{ (Ab,b) }\end{equation}and GMRES will choose\begin{equation} \alpha = \frac{ (Ab,b) }{ (A^2b,b) }.\end{equation}If $ A $ is diagonal with entries $ (\epsilon,1,1,1,\ldots) $ and $ b = (1,1,0,0,0,\ldots) $ then as $ \epsilon \rightarrow 0 $ the first CG step becomes twice as large as the first GMRES step. 
Probably you can construct $ A $ and $ b $ so that this factor of two difference continues throughout the iteration, but I doubt it gets any worse than that.
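The factor-of-two claim in the diagonal example is easy to check numerically; the sketch below (plain NumPy, with the small diagonal entry set to a hypothetical $\epsilon = 10^{-6}$) compares the two one-dimensional minimizers directly.

```python
import numpy as np

eps = 1e-6  # small diagonal entry; eps -> 0 is the interesting regime
A = np.diag([eps, 1.0, 1.0, 1.0])
b = np.array([1.0, 1.0, 0.0, 0.0])

# The first iterate of each method lives in K_1 = span{b}: x_1 = alpha * b.
# CG minimizes the A-norm of the error, GMRES the 2-norm of the residual.
alpha_cg = (b @ b) / (b @ A @ b)              # (b,b)/(Ab,b)
alpha_gmres = (b @ A @ b) / (b @ A @ A @ b)   # (Ab,b)/(A^2 b,b)

print(alpha_cg / alpha_gmres)  # approaches 2 as eps -> 0
```

Here $(b,b)=2$, $(Ab,b)=1+\epsilon$, and $(A^2b,b)=1+\epsilon^2$, so the ratio is $2(1+\epsilon^2)/(1+\epsilon)^2 \to 2$, matching the limit stated above.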
Onsager's regression hypothesis “…the average regression of fluctuations will obey the same laws as the corresponding macroscopic irreversible process" comes vividly to life when experimentalists observe the Brownian motion $q(t)$ of a damped oscillator (as nowadays they commonly do). Setting $\qquad q(t)= x(t) \cos(\omega_0 t) - y(t) \sin(\omega_0 t)$ for $\omega_0$ the resonant frequency of the oscillator and $x(t),\,y(t)$ the (slowly varying) in-phase and quadrature amplitudes, these amplitudes are observed to satisfy $\displaystyle\qquad \langle x(t) x(t+\tau)\rangle = \langle y(t) y(t+\tau)\rangle = \left[\frac{k_\text{B}T}{m \omega_0^2}\right]\,e^{-\omega_0|\tau|/(2 Q)}$ where $m$ is the mass of the oscillator and $Q$ is its mechanical quality. This example illustrates Onsager's regression principle as follows “…the average regression of fluctuations (in the above oscillator example, the autocorrelation $\langle x(t) x(t+\tau)\rangle$) will obey the same laws (in the example, exponential decay of fluctuations with rate constant $\Gamma = \omega_0/(2 Q)$) as the corresponding macroscopic irreversible process (in the example, macroscopic damping of the oscillator motion with the same rate constant $\Gamma$)" It is common experimental practice to deduce $Q$ not from observations of macroscopic damping, but rather by statistical analysis of the observed regression of Brownian motion fluctuations. Thus, in this practical sense, Onsager's regression hypothesis nowadays is universally accepted. By a similar analysis of coupled fluctuations in larger-dimension dynamical systems, Onsager deduced certain reciprocity relations that bear his name (and for which he received the Nobel Prize in Chemistry in 1968). Accessible discussions of the Onsager relations in textbooks include Charles Kittel's Elementary statistical physics (see Ch. 
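The experimental practice described above — deducing the damping rate from the regression of fluctuations rather than from macroscopic ring-down — can be sketched numerically. Below, a minimal simulation assuming the quadrature $x(t)$ is an Ornstein–Uhlenbeck process with regression rate $\Gamma=\omega_0/(2Q)$; the numerical values of $\Gamma$, the time step, and the sample count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented parameters: quadrature amplitude regresses at rate Gamma = omega_0/(2Q),
# with the stationary variance kB*T/(m*omega_0^2) normalized to 1.
gamma = 0.5            # "macroscopic" damping rate Gamma
dt, n = 0.01, 200_000

# Exact Ornstein-Uhlenbeck update: x_{k+1} = x_k e^{-Gamma dt} + noise
decay = np.exp(-gamma * dt)
noise = np.sqrt(1.0 - decay**2)
xi = rng.standard_normal(n)
x = np.empty(n)
x[0] = xi[0]
for k in range(n - 1):
    x[k + 1] = decay * x[k] + noise * xi[k + 1]

# "Regression of fluctuations": fit Gamma from the autocorrelation decay,
# as an experimentalist would deduce Q from Brownian-motion records.
lag = 100                                  # tau = 1.0
c0 = np.mean(x * x)
c1 = np.mean(x[:-lag] * x[lag:])
gamma_est = -np.log(c1 / c0) / (lag * dt)
print(gamma_est)  # statistically close to the macroscopic rate 0.5
```

The recovered rate agrees with the "macroscopic" one to within sampling error, which is the content of the regression hypothesis in this setting.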
33, "Thermodynamics of Irreversible Processes and the Onsager Reciprocal Relations") and Landau and Lifshitz' Statistical Physics: Part 1 (see Ch. 122, "The Symmetry of the Kinetic Coefficients"). In the context of separative transport (where these relations find common application) Onsager's principle demonstrates from general thermodynamic arguments that if an imposed current $j_\text{A}$ of conserved quantity $\text{A}$ induces a current $j_\text{B}$ of conserved quantity $\text{B}$ via $j_\text{B} = L_\text{BA}\,j_\text{A}$, then a reciprocal flow induction occurs with $j_\text{A} = L_\text{AB}\,j_\text{B}$ and $L_\text{AB}=L_\text{BA}$. As Kittel and Landau/Lifshitz both discuss, this principle follows by considering the temporal decay of microscopic fluctuations (assuming local thermodynamic equilibrium). Physically speaking, if a flow of $A$ linearly induces a flow of $B$, then the reciprocal induction occurs too, with equal constant of proportionality. This relation applies in a great many physical systems, including for example (and non-obviously) the coupled transport of electrolytes and nutrients across cell membranes. Whether Onsager's dynamical assumptions hold in a given instance has to be carefully analyzed on a case-by-case basis. That is why Kittel's text cautions, prior to working through an example involving thermoelectric coupling (Chapters 33 and 34): It is rarely a trivial problem to find the correct choice of (generalized) forces and fluxes applicable to the Onsager relation. In consequence of this necessary admixture of physical reasoning in applying the Onsager relations in particular cases, it sometimes happens that practical applications of Onsager's formalism are accompanied by lively theoretical and/or experimental controversies, which are associated not to the Onsager formalism itself, but to the applicability (or not) of various microscopic dynamical models that justify its use.
We thus see that the Onsager relations are not rigorous constraints in the sense of the First and Second Laws, but rather describe simplifying symmetries that emerge in a broad range of idealized (chiefly, linearized and spatially localized) descriptions of dynamical behavior, with these symmetries providing a vital key to the general description of a large set of transport processes that have great practical importance. Perhaps I should mention that I would myself be very interested in any references that generalize Onsager's relation to the coupled dynamical flow of symbol-function measures; this is associated with the practical challenge of generating quantum spin hyperpolarization via separative transport processes.

This post has been migrated from (A51.SE)
Quasirandomness Introduction Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma. In general, one has some kind of parameter associated with a set, which in our case will be the number of combinatorial lines it contains, and one would like a deterministic definition of the word "quasirandom" with the following key property. Every quasirandom set [math]\mathcal{A}[/math] has roughly the same value of the given parameter as a random set of the same density. Needless to say, this is not the only desirable property of the definition, since otherwise we could just define [math]\mathcal{A}[/math] to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this. Every set [math]\mathcal{A}[/math] that fails to be quasirandom has some other property that we can exploit. These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem. A possible definition of quasirandom subsets of [math][3]^n[/math] As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function.
Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math]) As ever, it is easy to prove positivity. To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect). 
Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined).
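The sampling procedure described above — a random permutation of [math][n][/math], then four random wrap-around intervals — is concrete enough to implement. Below is a hypothetical Python sketch of the sampler together with a Monte Carlo estimate of the quadruple average for a toy function f (the choice of f, the interval-length distribution, and the sample counts are all invented for illustration; several variants of the sampler are possible, as noted above).

```python
import random

def random_interval(pi, n, rng):
    """A random interval in pi([n]), allowed to wrap around mod n."""
    start = rng.randrange(n)
    length = rng.randrange(n + 1)  # empty through full intervals allowed
    return frozenset(pi[(start + j) % n] for j in range(length))

def box_average(f, n, samples, seed=0):
    """Monte Carlo estimate of E f(A,B) f(A,B') f(A',B) f(A',B')."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        pi = list(range(1, n + 1))
        rng.shuffle(pi)              # random permutation of [n]
        A, A2, B, B2 = (random_interval(pi, n, rng) for _ in range(4))
        total += f(A, B) * f(A, B2) * f(A2, B) * f(A2, B2)
    return total / samples

# Toy function: +1/-1 according to the parity of |A ∩ B|.
f = lambda A, B: (-1) ** len(A & B)
print(box_average(f, n=6, samples=20000))
```

Positivity of the average (conditionally on the permutation, B and B' are independent, so the expectation is a mean of squares) shows up here as the estimate staying non-negative up to sampling noise.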
Below is an outline of the notes I wrote up on the basic theory of the Hardy–Littlewood maximal function and its variants for a seminar. The notes assume familiarity with measure theory. Download the notes here: MaximalFunctionTheory.pdf (29 pages)

Let \(f:[a,b] \to \mathbb{R}\) be continuous. The fundamental theorem of calculus states that \[F(x) = \int_a^x f(y) \, dy\] is differentiable in \((a,b)\) and \(F'(x) = f(x)\) for all \(x \in (a,b)\). Could we generalize this result to a larger class of functions? Note that the above result is equivalent to \[(1) \hspace{2em} \lim_{r \to 0} \frac{1}{2r} \int_{x-r}^{x+r} f(y) \, dy = f(x)\] for all continuous functions \(f\). In this form, the fundamental theorem of calculus is a statement about the behavior of the integral mean value \[(2) \hspace{2em} (\mathcal{A}_{r} f)(x) = \frac{1}{2r} \int_{x-r}^{x+r} f(y) \, dy\] as the length of the interval \((x-r,x+r)\) centered at \(x\) decreases to 0. Since \((\mathcal{A}_rf)(x)\) is well-defined for all \(f \in L^1([x-r,x+r])\), it makes sense to try and prove (1) for all \(f \in L^1_{\mbox{loc}}(\mathbb{R})\), the space of measurable functions on \(\mathbb{R}\) whose integral is finite on each compact subset of \(\mathbb{R}\). We also consider \(d\)-dimensional generalizations of (1). To this end, we must determine what we wish to take as a \(d\)-dimensional generalization of intervals. We abstract three properties of intervals: compactness, convexity, and symmetry.

Definition. A nonempty set \(B \subseteq \mathbb{R}^d\) is convex if, for each pair of points \(x_1\) and \(x_2\) in \(B\), the convex combination \((1-\lambda) x_1 + \lambda x_2\) is in \(B\) for all \(0 \leq \lambda \leq 1\).

Definition. A nonempty set \(B \subseteq \mathbb{R}^d\) is centrally symmetric with respect to \(p \in B\) if \(B\) is invariant under the affine transform \(x \mapsto 2p - x\). This is equivalent to saying that \(p + h \in B\) if and only if \(p - h \in B\).

We write centrally symmetric convex body to refer to a subset of \(\mathbb{R}^d\) that is compact, convex, and centrally symmetric with its center \(p\) at the origin. Since we can rewrite (2) as \[(\mathcal{A}_r f)(x) = \frac{1}{2r} \int_{-r}^r f(x+y) \, dy,\] it suffices to consider centrally symmetric convex bodies with respect to the origin in asking the following question:

Question. Given a centrally symmetric convex body \(B \subseteq \mathbb{R}^d\), does the integral mean value \[(\mathcal{A}_{rB}f)(x) = \frac{1}{m(rB)} \int_{rB} f(x+y) \, dy\] of \(f \in L^1_{\mbox{loc}}(\mathbb{R}^d)\) converge pointwise to \(f(x)\) as \(r \to 0\)? Here we have used \(rB\) to denote the scaled set \(rB = \{ry : y \in B\}\).

Whenever \(f\) is continuous, the argument for the one-dimensional fundamental theorem of calculus can be applied with minor modification to answer the Question in the affirmative. If \(f\) is merely \(L^1(\mathbb{R}^d)\), then, for each \(\varepsilon > 0\), we can find a \(g \in \mathscr{C}_c(\mathbb{R}^d)\) such that \(\| f-g \|_1 < \varepsilon\).
We can then rewrite \((\mathcal{A}_{rB}f)(x) - f(x)\) as \[\mathcal{A}_{rB}(f-g)(x) + (\mathcal{A}_{rB}g)(x) - g(x) + g(x) - f(x).\] By continuity, we have \((\mathcal{A}_{rB}g)(x) \to g(x)\) as \(r \to 0\), whence we have the estimate \[\begin{align*} &\limsup_{r \to 0} \vert (\mathcal{A}_{rB}f)(x) - f(x) \vert \\ \leq& \limsup_{r \to 0} \vert \mathcal{A}_{rB}(f-g)(x) \vert \\ &+ \limsup_{r \to 0} \vert(\mathcal{A}_{rB}g)(x) - g(x) \vert \\ &+ \vert f(x) - g(x) \vert \\ \leq& \limsup_{r \to 0} \mathcal{A}_{rB}(\vert f - g \vert)(x) + \vert f(x) - g(x) \vert.\end{align*}\] Therefore, the study of the integral mean values of \(f\) at \(x \in \mathbb{R}^d\) depends crucially on the quantity \[\limsup_{r \to 0} \mathcal{A}_{rB}(\vert f - g \vert)(x),\] which is bounded from above by the maximal function \[\sup_{r > 0} \mathcal{A}_{rB}(\vert f - g \vert)(x).\] This motivates us to introduce our main object of study:

Definition. The Hardy–Littlewood maximal function of \(f \in L^1_{\mbox{loc}}(\mathbb{R}^d)\) over a centrally symmetric convex body \(B \subseteq \mathbb{R}^d\) is \[\begin{align*}(\mathcal{M}_Bf)(x) &= \sup_{r > 0}(\mathcal{A}_{rB}\vert f \vert)(x) \\ &= \sup_{r > 0} \frac{1}{m(rB)} \int_{rB} \vert f(x+y) \vert \, dy.\end{align*}\]

What we have discussed so far assures us that if \(f \in L^1(\mathbb{R}^d)\), then, for each \(\varepsilon > 0\), there exists a \(g \in L^1(\mathbb{R}^d)\) that yields the estimate \[(3) \hspace{3em} \limsup_{r \to 0} \vert (\mathcal{A}_{rB}f)(x) - f(x) \vert \leq \mathcal{M}_B(f-g)(x) + \vert f(x) - g(x)\vert.\] Ideally, we would have liked to show that \[\mathcal{M}_B(f-g)(x) + \vert f(x) - g(x) \vert \leq C \varepsilon\] for some constant \(C\) independent of \(\varepsilon\), thus proving that \(\mathcal{A}_{rB} f \to f\) pointwise everywhere.
But this is too much to hope for, as \[(\mathcal{A}_{(-r,r)}\chi_{(0,1)})(0) = \frac{1}{2r} \int_0^r \, dy = \frac{1}{2}\] for all \(0 < r < 1\), which does not converge to \(\chi_{(0,1)}(0) = 0\) as \(r \to 0\). So then, if we hope to obtain an affirmative answer to the above Question, then we must settle for an almost-everywhere statement. This is equivalent to the statement that the set \[\left\{x \in \mathbb{R}^d : \limsup_{r \to 0} \vert (\mathcal{A}_{rB}f)(x) - f(x) \vert > 0 \right\}\] is of Lebesgue measure zero. Since the above set is the union of the sets \[(4) \hspace{3em} E_k = \left\{x \in \mathbb{R}^d : \limsup_{r \to 0} \vert (\mathcal{A}_{rB}f)(x) - f(x)\vert > \frac{1}{k}\right\},\] it suffices to show that \(m(E_k) = 0\) for all \(k \in \mathbb{N}\). Now, Estimate (3) implies that \[(5) \hspace{3em} \begin{align*} m(E_k) \leq& m \left( \left\{ x : \mathcal{M}_B(f-g)(x) > \frac{1}{2k}\right\}\right) \\ &+ m \left( \left\{ x : \vert f(x) - g(x) \vert > \frac{1}{2k} \right\}\right). \end{align*}\] Observe that \[\begin{align*} m\left(\left\{ x : \vert f(x) - g(x) \vert > \frac{1}{2k} \right\}\right) &= \int_{\{x : \vert f(x) - g(x) \vert > \frac{1}{2k}\}} 1 \, dy \\ &\leq \int_{\{x : \vert f(x) - g(x) \vert > \frac{1}{2k}\}} \frac{\vert f(y) - g(y)\vert}{1/2k} \, dy \\ &\leq \int_{\mathbb{R}^d} \frac{\vert f(y) - g(y)\vert}{1/2k} \, dy \\ &= 2k \|f-g\|_1. \end{align*}\] It is therefore natural to hope for a bound of the form \[(6) \hspace{3em} m \left(\left\{x : \mathcal{M}_B(f-g)(x) > \frac{1}{2k}\right\}\right) \leq 2kA \|f-g\|_1\] for some constant \(A\) independent of \(f-g\), so that (5) can be written as \[m(E_k) \leq 2k(A+1) \|f-g\|_1 < 2k(A+1)\varepsilon.\] Since \(\varepsilon\) was arbitrary, we can then conclude that \(m(E_k) = 0\). Estimate (6) is established by the following foundational result in maximal function theory: Theorem (Weak-type \((1,1)\)-bound on the maximal function).
If \(B \subseteq \mathbb{R}^d\) is a centrally symmetric convex body, then there exists a constant \(A_{d,1,B}\) such that \[m(\{x \in \mathbb{R}^d : \mathcal{M}_Bf(x) > \alpha\}) \leq \frac{A_{d,1,B}}{\alpha}\|f\|_1\] for each \(\alpha > 0\) and every \(f \in L^1(\mathbb{R}^d)\). The constant \(A_{d,1,B}\) depends only on the dimension \(d\) and the body \(B\). The weak-type bound now implies the below pointwise (almost everywhere) convergence result for the \(L^1\) case. The general case of the theorem follows easily from the \(L^1\) case by considering the compact cutoff function \(f\chi_{B(0;k)}\) with respect to closed balls \(B(0;k)\) of radius \(k \in \mathbb{N}\) centered at the origin. Theorem(Lebesgue differentiation theorem). If \(f \in L^1_{\textrm{loc}}(\mathbb{R}^d)\), and if \(B \subseteq \mathbb{R}^d\) is a centrally symmetric convex body, then \[\lim_{r \to 0}(\mathcal{A}_{rB}f)(x) = f(x)\] for almost every \(x \in \mathbb{R}^d\). The method of establishing a weak-type bound to prove a pointwise convergence result turns out to be extremely powerful. In fact, we cannot do better: Theorem(Stein's \(L^1\) maximal principle). Let \(G\) be a compact, Hausdorff, abelian topological group equipped with the Haar measure \(\mu\). If \((\varphi_n)_{n=1}^\infty\) is a sequence of operators in \(L^\infty(G)\) such that, for each \(f \in L^1(G)\), we have the "pointwise convergence criterion" \[\limsup_{n \to \infty} \vert (f \ast \varphi_n)(x) \vert < \infty\] on a set \(E_f\) of positive measure, then the maximal operator \[Mf(x) = \sup_{n \in \mathbb{N}} \vert (f \ast \varphi_n)(x) \vert\] satisfies the weak-type \((1,1)\)-bound. The weak-type bound of a general maximal operator is typically established through a judicious use of the Hardy–Littlewood maximal operator. 
It is thus of interest to tighten the bound \[m(\{x \in \mathbb{R}^d : \mathcal{M}_Bf(x) > \alpha\}) \leq \frac{A_{d,1,B}}{\alpha}\|f\|_1\] by reducing the size of the constant \(A_{d, 1, B}\) as much as possible. The classical proof yields \(A_{d,1,B} = O(5^d)\) when \(B\) is the Euclidean ball. This is a consequence of the following lemma: Lemma (Infinitary Vitali covering lemma). If \(\{B(x_\beta,r_\beta)\}_{\beta}\) is a collection of Euclidean balls in \(\mathbb{R}^d\) whose radii are uniformly bounded, then there exists a pairwise-disjoint countable subcollection \(\{B(x_n,r_n)\}_n\) such that \[\bigcup_{\beta} B(x_\beta,r_\beta) \subseteq \bigcup_n B(x_n,5r_n).\] A 1988 result of Stein and Strömberg reduces the constant to \(O(d)\) for the Euclidean ball and \(O(d \log d)\) for an arbitrary centrally symmetric convex body. Naor and Tao showed in 2011 that \(d \log d\) is, in fact, optimal for a broad class of metric measure spaces. The question of finding a tight bound for the Euclidean-space case remains open. In 2003, Melas showed that \(\frac{11+\sqrt{61}}{12}\) is the optimal constant for the weak-type bound of the Hardy–Littlewood maximal function on the one-dimensional Euclidean ball. This remains the only known tight bound. Stein and Strömberg conjectured that the bound may, in fact, be \(O(1)\). While the conjecture has not been settled for the Euclidean ball case, Aldaz showed in 2011 that the constant grows without bound on the \(l^\infty\) ball as the dimension increases. Much work has been done on establishing dimension-independent bounds for \(L^p\) when \(p > 1\).
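To make the weak-type inequality concrete, here is a small numerical sketch (pure NumPy; the grid spacing and truncated domain are invented) that computes the one-dimensional centered maximal function of \(\chi_{[0,1]}\) by brute force and evaluates \(\sup_\alpha \alpha\, m(\{\mathcal{M}f > \alpha\})\). For an indicator in one dimension this supremum comes out at essentially \(\|f\|_1\), well inside any admissible constant.

```python
import numpy as np

dx = 0.01
x = np.arange(-20.0, 21.0 + dx / 2, dx)   # truncated domain; f = 0 outside
f = ((x >= 0.0) & (x <= 1.0)).astype(float)
n = len(f)

# cumulative integral: F[i] = integral of f up to x[i-1]
F = np.concatenate([[0.0], np.cumsum(f)]) * dx

# centered maximal function: sup over discrete radii of window averages
M = f.copy()
idx = np.arange(n)
for k in range(1, n):
    lo = np.clip(idx - k, 0, n)
    hi = np.clip(idx + k + 1, 0, n)
    avg = (F[hi] - F[lo]) / ((2 * k + 1) * dx)   # full-width denominator
    M = np.maximum(M, avg)

# weak-type functional: sup_alpha alpha * m({M f > alpha})
alphas = np.linspace(0.01, 1.0, 200)
weak = max(a * (M > a).sum() * dx for a in alphas)
norm1 = f.sum() * dx
print(weak / norm1)  # close to 1 for an indicator function
```

The brute-force double loop is \(O(N^2)\), which is fine at this grid size; the clipped indices implement the convention that \(f\) vanishes outside the truncated domain while the denominator keeps the full window measure.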
A classical proof using real interpolation methods gives us the bound \[\|\mathcal{M}_Bf\|_p \leq A_{d, p, B} \|f\|_p\] for all \(p > 1\) with constant \[A_{d, p, B} = 2^{\frac{p-1}{p}} A^{\frac{1}{p}}_{d, 1, B} \left( \frac{p}{p-1} \right)^{1/p}.\] Stein showed in 1982 that, given a fixed centrally symmetric convex body \(B\), the constant can be made \(O(1)\) with respect to the dimension \(d\). Moreover, Bourgain's far-reaching generalization in 1986 shows that, for each \(p > 3/2\), \[\sup_{d, B} A_{d, p, B} < \infty,\] where the supremum is taken over all \(d \geq 1\) and centrally symmetric convex bodies \(B \subseteq \mathbb{R}^d\). In 2014, Bourgain brought \(p\) down to \(p > 1\) for the \(l^\infty\) ball. The question of establishing the above uniform bound for a large class of centrally symmetric convex bodies in the range \(p > 1\) remains open.
If $z = 3 - 4\iota$ is turned $\dfrac{\pi}{2}$ in the anti-clockwise direction, then the new position of $z$ is?

Details: $\iota = \sqrt{-1}$
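Rotating a complex number anticlockwise through $\pi/2$ is just multiplication by $\iota$, which a one-line check confirms:

```python
z = 3 - 4j
rotated = z * 1j   # multiplying by i rotates by pi/2 anticlockwise
print(rotated)     # (4+3j)
```

So $z$ moves to $4 + 3\iota$.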
If you solve a given PDE (Navier–Stokes, Euler, heat equation, advection equation, etc.) using FVM, is this PDE supposed to be valid at every cell in the discretized domain, or only in the global domain as a whole?

As with many questions that are more about philosophy than science, there are at least three different ways of answering your question, each with a fairly solid argument: Given that the finite volume method requires discretising the problem, there is nowhere where the exact continuum PDE holds. However, we do create one equation representing a discretised version of the original PDE for each cell, so in this sense there is an equation "valid" in each cell. Unless your original system is very simple, or you make some very unusual choices, then the final system will couple the degrees of freedom representing each cell. It's in this sense, when attempting to solve the problem, that there are global solutions only. Let's work through producing a finite volume method and see when each bit kicks in. Since we're choosing to use the FVM, I'm going to assume that the original PDE looks like $$\frac{\partial \tau }{\partial t} +\nabla\cdot \mathbf{F}(\tau) = 0.$$ Fortunately all the equations you ask about can be written in this conservative form. Now for each cell we write an integral equation, $$\int_{e_i}\left(\frac{\partial \tau }{\partial t} +\nabla\cdot \mathbf{F}(\tau)\right)d\mathbf{x} = 0.$$ Applying the divergence theorem, and pulling the time derivative outside the integral, gives $$\frac{d}{dt} \int_{e_i}\tau \,d\mathbf{x} = -\int_{\partial e_i} \mathbf{F}(\tau)\cdot d\mathbf{S}.$$ This is the last time we have an (integral) equation morally the same as the original PDE (q.v. answer [1]). From this point onwards we choose to express the integral of the flux, $$\int_{f(e_i,e_j)} \mathbf{F}(\tau)\cdot d\mathbf{S},$$ directed through a facet, $f(e_i,e_j)$, bordering cells $e_i$ and $e_j$, in terms of our cell integrals, $\int_{e_i}\tau d\mathbf{x}$ and $\int_{e_j}\tau d\mathbf{x}$.
Now the equation has been discretised. If we choose (weirdly) to have the terms in $f(e_i,e_j)$ given only as functions of $\int_{e_i}\tau d\mathbf{x}$, then the individual degrees of freedom are uncoupled, and we just have one independently soluble equation for each of the $\int_{e_i}\tau d\mathbf{x}$ (strongly suggesting answer [2] for your original question). However, this means giving up local conservation of $\tau$, one of the desirable properties of the FVM, which needs the terms in $f(e_i,e_j)$ and $f(e_j,e_i)$ to match. Once we do this, the variables are coupled, and we can only find a solution by attacking the global system. At this point we could argue for answer [3] instead.
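A minimal illustration of the cell-coupling point: a first-order upwind finite-volume scheme for the 1-D advection equation $\tau_t + (a\tau)_x = 0$ on a periodic domain (all parameter values invented). Each cell update uses only its own and its neighbour's averages, yet because each face flux enters its two neighbouring cells once with each sign, total $\tau$ is conserved to machine precision.

```python
import numpy as np

n, a, dt, dx = 100, 1.0, 0.004, 0.01     # CFL = a*dt/dx = 0.4
x = (np.arange(n) + 0.5) * dx
tau = np.exp(-100 * (x - 0.5) ** 2)      # cell averages of a smooth bump

mass0 = tau.sum() * dx
for _ in range(200):
    # upwind flux through the left face of each cell (a > 0, periodic wrap)
    flux = a * np.roll(tau, 1)
    # F_{i+1/2} - F_{i-1/2}: each face flux appears once with each sign,
    # so the sum over cells telescopes -> local and global conservation
    tau = tau - dt / dx * (np.roll(flux, -1) - flux)

print(tau.sum() * dx - mass0)  # conserved up to rounding
```

Dropping the flux-matching (computing each cell's faces from its own average alone) would decouple the cells but destroy exactly this telescoping.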
This question already has an answer here: In a few of the kinematic equations there is a $2$ or a $0.5$ coefficient. Why is this? For example the kinematic equation for distance is: $$\text{previous velocity} * \text{time} + \frac{1}{2} * \text{acceleration} * \text{time}^2$$ But why the $\frac{1}{2}$? If I use the equation for acceleration to get to that equation I don't have a $\frac{1}{2}$? Here: $$\frac{\Delta v}{t} = a | \cdot t$$ $$v_{new} - v_{old} = a \cdot t | + v_{old}$$ $$\frac{s}{t} = a \cdot t + v_{old} | \cdot t$$ $$s = a \cdot t^2 + v_{old} \cdot t$$ Are my calculations wrong? If so, could someone please show me where I went wrong or explain to me how the $\frac{1}{2}$ comes into play?
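The $\frac{1}{2}$ appears because distance is the integral of a linearly growing velocity, not (final velocity) × time; the slip in the derivation above is replacing $v_{new}$ by $s/t$, which is actually the average velocity $(v_{old}+v_{new})/2$. A quick symbolic check (sympy; symbol names are arbitrary):

```python
import sympy as sp

t, a, v0 = sp.symbols('t a v0')

v = v0 + a * t                    # velocity under constant acceleration
s = sp.integrate(v, (t, 0, t))    # distance = area under the velocity curve
print(s)                          # a*t**2/2 + t*v0: the 1/2 comes from integrating a*t
```

Integrating $a t$ gives $\frac{1}{2}a t^2$, which is where the coefficient comes from.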
Difference between revisions of "Kakeya problem"

Latest revision as of 00:35, 5 June 2009

A Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.

Basic Estimates

Trivially, we have [math]k_n\le k_{n+1}\le 3k_n[/math]. Since the Cartesian product of two Kakeya sets is another Kakeya set, the upper bound can be extended to [math]k_{n+m} \leq k_m k_n[/math]; this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.

Lower Bounds

To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence [math]k_n\ge 3^{(n+1)/2}.[/math] One can derive essentially the same conclusion using the "bush" argument, as follows.
Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math]. The better estimate [math]k_n\ge (9/5)^n[/math] is obtained in a paper of Dvir, Kopparty, Saraf, and Sudan. (In general, they show that a Kakeya set in the [math]n[/math]-dimensional vector space over the [math]q[/math]-element field has at least [math](q/(2-1/q))^n[/math] elements). A still better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus, [math]k_n \ge 3^{6(n-1)/11}.[/math] Upper Bounds We have [math]k_n\le 2^{n+1}-1[/math] since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. 
This estimate can be improved using an idea due to Ruzsa (seems to be unpublished). Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]). Putting all this together, we seem to have [math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math] or [math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math]
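The small values quoted at the start ([math]k_1=3[/math], [math]k_2=7[/math]) are within reach of exhaustive search; a brute-force sketch (naive subset enumeration, fine only for tiny [math]n[/math]):

```python
from itertools import combinations, product

def min_kakeya(n):
    """Smallest size of a subset of F_3^n containing a line in every direction."""
    pts = list(product(range(3), repeat=n))

    def shift(e, d, k):
        return tuple((e[i] + k * d[i]) % 3 for i in range(n))

    # one representative per direction: d and 2d give the same set of lines
    dirs = {min(d, tuple((2 * c) % 3 for c in d)) for d in pts if any(d)}

    # all lines {e, e+d, e+2d}, grouped by direction
    lines = {d: [frozenset({e, shift(e, d, 1), shift(e, d, 2)}) for e in pts]
             for d in dirs}

    for size in range(1, 3 ** n + 1):
        for E in combinations(pts, size):
            S = set(E)
            if all(any(l <= S for l in lines[d]) for d in dirs):
                return size
```

With 512 subsets of [math]{\mathbb F}_3^2[/math] in total, this confirms [math]k_2=7[/math] in a fraction of a second; already [math]n=3[/math] needs the smarter search alluded to in the text.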
The other day, I asked myself the question: How would you write down Newtonian gravity as a classical field theory? The goal, of course, is to write down a Lagrangian density that yields, as the corresponding Euler-Lagrange equation, Poisson's equation for gravity: \begin{align} \nabla^2\phi=4\pi G\rho, \end{align} where $\phi$ is the gravitational potential field and $\rho$ is the mass density. The seemingly obvious choice for a Lagrangian is given by \begin{align} {\cal L}=\frac{1}{8\pi G}(\nabla\phi)^2+\phi\rho, \end{align} as variation of this Lagrangian with respect to $\phi$ indeed yields Poisson's equation. This Lagrangian is even presented in some Wikipedia articles on the topic. Unfortunately, it is not the complete picture: since $\rho$ is also a dynamical field, it must also be varied, and variations with respect to $\rho$ yield the rather nonsensical equation, \begin{align} \phi=0. \end{align} Oops. Edit (Nov 11, 2015): Consider what follows below a whimsical flight of fancy that is really misguided: by trying to construct a "matter" part for the gravitational Lagrangian, I made the mistake of assuming that I in fact have a theory of matter. The proper thing to do would have been to leave the matter part unspecified: \({\cal L}=\frac{1}{8\pi G}(\nabla\phi)^2+{\cal L}_M\), prescribing only that its variation with respect to \(\phi\) must yield \(\rho\): \(\delta{\cal L}_M/\delta\phi=\rho\). Poisson's equation automatically follows. How can this be fixed? Obviously, the second equation should capture the dynamics of how matter responds to gravity, just as the first equation told us what gravitational field is induced by matter. Now we know that matter actually responds to the gradient of the gravitational field. 
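As a sanity check on the claim that varying the naive Lagrangian with respect to \(\phi\) yields Poisson's equation, one can discretize the action in one dimension and compare a numerical gradient with the Euler-Lagrange expression. This is only an illustrative sketch; the grid, the trial fields, and the tolerances are arbitrary choices, not from the post:

```python
import math

# 1D toy check: discretize S[phi] = sum_j [ (phi'_j)^2/(8 pi G) + phi_j rho_j ] dx
# and compare dS/dphi_j with the Euler-Lagrange expression
# ( -phi''_j/(4 pi G) + rho_j ) * dx, whose vanishing is Poisson's equation.
G = 1.0
N, dx = 50, 0.1

phi = [math.sin(0.3 * j) for j in range(N)]  # arbitrary trial field
rho = [math.cos(0.2 * j) for j in range(N)]  # arbitrary density

def action(p):
    s = 0.0
    for j in range(N - 1):  # gradient-energy term
        s += ((p[j + 1] - p[j]) / dx) ** 2 / (8 * math.pi * G) * dx
    for j in range(N):      # coupling term phi * rho
        s += p[j] * rho[j] * dx
    return s

# numerical gradient of the action at an interior grid point j
j, h = 25, 1e-6
bumped = list(phi)
bumped[j] += h
grad_num = (action(bumped) - action(phi)) / h

# Euler-Lagrange expression at j (times the cell volume dx)
lap = (phi[j + 1] - 2 * phi[j] + phi[j - 1]) / dx ** 2
grad_el = (-lap / (4 * math.pi * G) + rho[j]) * dx

assert abs(grad_num - grad_el) < 1e-4
print("dS/dphi matches -phi''/(4 pi G) + rho")
```

Setting the gradient to zero at every grid point is exactly the discrete Poisson equation, confirming the variation quoted in the text.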
In fact, what we seek (for collisionless dust in the absence of pressure, viscosity, and other nasty things that would require the full machinery of fluid dynamics) is the equation \begin{align} \vec{a}=-\nabla\phi, \end{align} i.e., the standard acceleration law in the presence of gravity for the acceleration $\vec{a}$. This requirement immediately suggests, however, a Lagrangian density in the form \begin{align} {\cal L}=\frac{\rho_0}{8\pi G\rho}(\nabla\phi)^2+\phi\rho_0+\frac{\rho_0}{8\pi G}\int\frac{a^2}{\rho^2}\dot{\rho}dt, \end{align} where $t$ is time and the overdot is differentiation with respect to time. Now a few paragraphs above, I insisted that $\rho$ must be treated as a dynamical field and varied accordingly. Why is the same thing not true for $a$? Because it is not a new degree of freedom: it is just the acceleration associated with $\rho$. In other words, as we derive the Euler-Lagrange equation from this Lagrangian, we treat $a$ as though it were a function of $\rho$ (to be precise, a multivalued relation or mapping rather than a function, but the point is that it is not an independent degree of freedom). Variation of this Lagrangian with respect to $\rho$ yields the equation \begin{align} -\frac{\rho_0}{8\pi G}\frac{(\nabla\phi)^2}{\rho^2}+\frac{\rho_0}{8\pi G}\frac{a^2}{\rho^2}=0, \end{align} or \begin{align} |\vec{a}|=|\nabla\phi|, \end{align} which is almost the equation we sought. It is consistent with $\vec{a}=-\nabla\phi$, though other directions for $\vec{a}$ are not excluded so long as its magnitude remains the same. In principle, we could ensure that these quantities remain parallel by introducing a constraint through a Lagrange-multiplier term, such as $\lambda(\vec{a}\times\nabla\phi)^2$, which does not alter the preceding two field equations but does enforce the parallel constraint; but this goes beyond the point I am trying to get to here, which is that there is a reason why I was interested in this approach.
I was studying non-local Lagrangians. Non-local Lagrangians tend to have terms that contain integrals that are taken over all of space. The Green's function solution to Poisson's equation is just such an integral. What if we were to use it in place of $\phi$ in the above Lagrangian? The solution to Poisson's equation is given by \begin{align} \phi(\vec{x})=-G\int\frac{\rho(\vec{x}')}{|\vec{x}-\vec{x}'|}d^3\vec{x}', \end{align} so the Lagrangian then reads \begin{align} {\cal L}=\frac{G\rho_0}{8\pi\rho}\left[\nabla\int\frac{\rho(\vec{x}')}{|\vec{x}-\vec{x}'|}d^3\vec{x}'\right]^2-G\rho_0\int\frac{\rho(\vec{x}')}{|\vec{x}-\vec{x}'|}d^3\vec{x}'+\frac{\rho_0}{8\pi G}\int\frac{a^2}{\rho^2}\dot{\rho}dt. \end{align} The second term in this Lagrangian density is obviously a surface term and thus can be omitted, leaving us with \begin{align} {\cal L}=\frac{G\rho_0}{8\pi\rho}\left[\nabla\int\frac{\rho(\vec{x}')}{|\vec{x}-\vec{x}'|}d^3\vec{x}'\right]^2+\frac{\rho_0}{8\pi G}\int\frac{a^2}{\rho^2}\dot{\rho}dt, \end{align} and variation with respect to $\rho$ yields, after getting rid of further surface terms, the equation \begin{align} |\vec{a}|=\left|G\nabla\int\frac{\rho(\vec{x}')}{|\vec{x}-\vec{x}'|}d^3\vec{x}'\right|, \end{align} which is the non-local, "action-at-a-distance" version of Newton's gravity. This now is a non-local Lagrangian of only one dynamical field. What this suggests is that it is possible to replace a dynamical field in a Lagrangian with a non-local term. Conversely, I guess it is also possible that formally non-local terms in a Lagrangian can be made strictly local, if the non-local term is the solution of a field equation, by introducing a new field variable.
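As a check on the Green's-function solution quoted above, one can integrate it numerically for a uniform sphere and compare with the shell-theorem result $-GM/r$. A sketch in arbitrary units (the quadrature parameters are arbitrary choices, not from the post):

```python
import math

# phi(r) = -G * integral rho(x') / |x - x'| d^3x' for a uniform unit sphere
# (rho = 1, R = 1, G = 1), evaluated at r = 2; exact answer is -GM/r = -2*pi/3.
G, rho, R, r = 1.0, 1.0, 1.0, 2.0
ns, nt = 400, 400  # quadrature points in radius s and polar angle theta

phi = 0.0
for i in range(ns):
    s = (i + 0.5) * R / ns            # midpoint rule in s
    for j in range(nt):
        th = (j + 0.5) * math.pi / nt  # midpoint rule in theta
        dist = math.sqrt(r * r + s * s - 2 * r * s * math.cos(th))
        phi += s * s * math.sin(th) / dist * (R / ns) * (math.pi / nt)
# the azimuthal integral contributes a factor of 2*pi by symmetry
phi *= -2 * math.pi * G * rho

exact = -G * (4 / 3) * math.pi * R**3 * rho / r  # shell theorem: -GM/r
assert abs(phi - exact) / abs(exact) < 1e-3
print(phi, exact)
```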
Difference between revisions of "Capillary waves"

Three contributions to the energy are involved: the [[surface tension]], gravity, and hydrodynamics. The part due to gravity is the simplest: integrating the potential energy density due to gravity, <math>\rho g z</math>, from a reference height to the position of the surface, <math>z=h(x,y)</math>:

:<math>E_\mathrm{g}= \int dx\, dy\, \int_0^h dz\, \rho g z = \frac{\rho g}{2} \int dx\, dy\, h^2.</math>

(For simplicity, we are neglecting the density of the fluid above, which is often acceptable.)

An increase in area of the surface causes a proportional increase of energy:

:<math>E_\mathrm{st}= \sigma \int dx\, dy\, \sqrt{1+\left( \frac{dh}{dx} \right)^2+\left( \frac{dh}{dy} \right)^2} \approx \frac{\sigma}{2} \int dx\, dy\, \left[ \left( \frac{dh}{dx} \right)^2+\left( \frac{dh}{dy} \right)^2 \right],</math>

where the first equality is the area in this ([[de Monge]]) representation, and the second applies for small values of the derivatives (surfaces not too rough); an irrelevant constant term, the energy of the flat surface, has been dropped.
The last contribution involves the [[kinetic energy]] of the fluid:

:<math>T= \frac{\rho}{2} \int dx\, dy\, \int_{-\infty}^h dz\, v^2,</math>

where <math>v</math> is the modulus of the velocity field <math>\vec{v}</math>.

====Wave solutions====

Revision as of 14:11, 14 April 2008

Thermal capillary waves

Thermal capillary waves are oscillations of an interface which are thermal in origin. These take place at the molecular level, where only the surface tension contribution is relevant. Capillary wave theory (CWT) is a classic account of how thermal fluctuations distort an interface (Ref. 1). It starts from some intrinsic surface that is distorted. By performing a Fourier analysis treatment, normal modes are easily found. Each contributes an energy proportional to the square of its amplitude; therefore, according to classical statistical mechanics, equipartition holds, and the mean energy of each mode will be <math>k_\mathrm{B}T/2</math>. Surprisingly, this result leads to a divergent surface (the width of the interface is bound to diverge with its area) (Ref. 2). This divergence is nevertheless very mild: even for displacements on the order of meters the deviation of the surface is comparable to the size of the molecules. Moreover, the introduction of an external field removes the divergence: the action of gravity is sufficient to keep the width fluctuation on the order of one molecular diameter for areas larger than about 1 mm² (Ref. 2). Recently, a procedure has been proposed to obtain a molecular intrinsic surface from simulation data (Ref. 3). The density profiles obtained from this surface are, in general, quite different from the usual mean density profiles.

Gravity-capillary waves

These are ordinary waves excited in an interface, such as ripples on a water surface.
Their dispersion relation reads, for waves on the interface between two fluids of infinite depth:

:<math>\omega^2=\frac{(\rho-\rho')\,g k+\sigma k^3}{\rho+\rho'}.</math>

Derivation

This is a sketch of the derivation of the general dispersion relation; see Ref. 4 for a more detailed description. The problem is unfortunately a bit complex. As Richard Feynman put it (Ref. 6): ...[water waves], which are easily seen by everyone and which are used as an example of waves in elementary courses... are the worst possible example... they have all the complications that waves can have.

Defining the problem

Three contributions to the energy are involved: the surface tension, gravity, and hydrodynamics. The part due to gravity is the simplest: integrating the potential energy density due to gravity, <math>\rho g z</math>, from a reference height to the position of the surface, <math>z=h(x,y)</math>:

:<math>E_\mathrm{g}= \int dx\, dy\, \int_0^h dz\, \rho g z = \frac{\rho g}{2} \int dx\, dy\, h^2.</math>

(For simplicity, we are neglecting the density of the fluid above, which is often acceptable.)

An increase in area of the surface causes a proportional increase of energy:

:<math>E_\mathrm{st}= \sigma \int dx\, dy\, \sqrt{1+\left( \frac{dh}{dx} \right)^2+\left( \frac{dh}{dy} \right)^2} \approx \frac{\sigma}{2} \int dx\, dy\, \left[ \left( \frac{dh}{dx} \right)^2+\left( \frac{dh}{dy} \right)^2 \right],</math>

where the first equality is the area in this (de Monge) representation, and the second applies for small values of the derivatives (surfaces not too rough); an irrelevant constant term has been dropped.

The last contribution involves the kinetic energy of the fluid:

:<math>T= \frac{\rho}{2} \int dx\, dy\, \int_{-\infty}^h dz\, v^2,</math>

where <math>v</math> is the modulus of the velocity field <math>\vec{v}</math>.

Wave solutions

Let us try separation of variables:

:<math>h(\vec{r};t)=h_k(t)\, e^{i\vec{k}\cdot\vec{r}},</math>

where <math>\vec{k}</math> is a two-dimensional wave number vector and <math>\vec{r}</math> the position. In this case the energies above become quadratic in the amplitude <math>h_k</math> (a constant factor that would appear in every integration is dropped for convenience). To tackle the kinetic energy, suppose the fluid is incompressible and its flow is irrotational (often, sensible approximations); the flow will then be potential: <math>\vec{v}=\nabla\phi</math>, where <math>\phi</math> is a potential (scalar field) which must satisfy Laplace's equation <math>\nabla^2\phi=0</math>.
If we try separation of variables with the potential,

:<math>\phi=f(t)\, g(z)\, e^{i\vec{k}\cdot\vec{r}},</math>

with <math>f</math> some function of time and <math>g</math> some function of the vertical component (height) <math>z</math>, Laplace's equation then requires <math>g''=k^2 g</math> of the latter. This equation can be solved with the proper boundary conditions: first, <math>g</math> must vanish well below the surface (in the "deep water" case, which is the one we consider; otherwise a more general relation holds, which is also well known in oceanography). Therefore <math>g(z)=C e^{k z}</math>, with <math>C</math> some constant. The less trivial condition is the matching between <math>\phi</math> and <math>h</math>: the potential field must correspond to a velocity field that is adjusted to the movement of the surface, <math>\partial \phi/\partial z=\partial h/\partial t</math> at the surface. This fixes the amplitude of the potential in terms of <math>\dot{h}_k</math>. We may now find the kinetic energy <math>T</math>. Performing the <math>z</math> integration first, we are left with

:<math>T=\frac{\rho}{2k}\,\dot{h}_k^2,</math>

where we have dropped the same constant factor as before. The problem is thus specified by just a potential energy involving the square of <math>h_k</math> and a kinetic energy involving the square of its time derivative: a regular harmonic oscillator. Its equation of motion will be

:<math>\frac{\rho}{k}\,\ddot{h}_k=-\left(\rho g+\sigma k^2\right) h_k,</math>

whose oscillatory solution, <math>\omega^2=g k+\sigma k^3/\rho</math>, is the same dispersion as above if the density <math>\rho'</math> of the upper fluid is neglected.

References

1. F. P. Buff, R. A. Lovett, and F. H. Stillinger, Jr., "Interfacial density profile for fluids in the critical region", Physical Review Letters 15 pp. 621-623 (1965)
2. J. S. Rowlinson and B. Widom, "Molecular Theory of Capillarity", Dover 2002 (originally: Oxford University Press 1982) ISBN 0486425444
3. E. Chacón and P. Tarazona, "Intrinsic profiles beyond the capillary wave theory: A Monte Carlo study", Physical Review Letters 91 166103 (2003)
4. Samuel Safran, "Statistical thermodynamics of surfaces, interfaces, and membranes", Addison-Wesley 1994
5. P. Tarazona, R. Checa, and E. Chacón, "Critical Analysis of the Density Functional Theory Prediction of Enhanced Capillary Waves", Physical Review Letters 99 196101 (2007)
6. R. P. Feynman, R. B. Leighton, and M. Sands, "The Feynman lectures on physics", Addison-Wesley 1963. Section 51-4.
Here's an example of why your question is difficult and depends on the framework. Case 1: Take the classical OLS case $Y = \beta X + \epsilon$. In this case, $\epsilon$ can be thought of as an error term which represents noise around the response. So, the mean of the response is $\beta X$, but any particular response is not going to be exactly equal to that mean because of $\epsilon$, which represents the variation around that mean. Case 2: Take a simple dynamic model in econometrics such as the ADL(1,0), so that $Y_t = \rho Y_{t-1} + \beta X_{t} + \epsilon_t$. In this case, $\epsilon_t$ is not really noise, because it stays in $Y_{t}$ after that period is finished, so it's actually part of the model. It's almost like an exogenous variable rather than noise, so, to me, the best term in this case for $\epsilon$ would probably be "innovation". It has what economists refer to as a "permanent" effect on the response. So, my point is that each case can be different. This paper by Qin explains all of this in much more detail and more clearly.
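The persistence point can be illustrated with a tiny simulation (the numbers are made up for illustration): in the ADL(1,0) model, a one-period innovation keeps feeding into later responses through the lagged term, with impulse response $\rho^k$ after $k$ periods (and a literally permanent effect in the unit-root case $\rho = 1$).

```python
# y_t = rho * y_{t-1} + beta * x_t + eps_t ; compare a path with a unit
# innovation at t = 10 against the same path without it.
rho, beta = 0.9, 1.0
x = [1.0] * 50                  # fixed regressor path
eps_base = [0.0] * 50           # no noise, to isolate the shock
eps_shock = list(eps_base)
eps_shock[10] = 1.0             # a single one-period innovation

def simulate(eps):
    y, path = 0.0, []
    for t in range(50):
        y = rho * y + beta * x[t] + eps[t]
        path.append(y)
    return path

diff = [a - b for a, b in zip(simulate(eps_shock), simulate(eps_base))]
# the innovation stays in the series: its effect k periods later is rho**k
for k in range(5):
    assert abs(diff[10 + k] - rho ** k) < 1e-12
print(diff[10], diff[15])  # 1.0, then about rho**5 = 0.59
```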
The Hahn-Banach Theorem is stated as follows: Let $X$ be a real vector space and $p$ a sublinear functional on $X$. Furthermore, let $f$ be a linear functional which is defined on a subspace $Z$ of $X$ and satisfies $f(x) \leq p(x)$ for all $x \in Z$. Then $f$ can be extended to the whole of $X$. Now here is the question: I cannot understand the following application of the Hahn-Banach theorem: Let $X$ be a normed space and let $x_{0} \neq 0$ be any element of $X$. Then there exists a bounded linear functional $\tilde{f}$ such that $||\tilde{f}|| = 1$, $\tilde{f}(x_{0}) = ||x_{0}||$. The proof is stated as follows: We consider the subspace $Z$ of $X$ consisting of all elements $x = \alpha x_{0}$ where $\alpha$ is a scalar. On $Z$ we define a linear functional $f$ by $f(x) = f(\alpha x_{0}) = \alpha ||x_{0}||$. $f$ is bounded and has norm $||f|| = 1$ because $|f(x)| = |f(\alpha x_{0})| = |\alpha|||x_{0}|| = ||\alpha x_{0}|| = ||x||$. Then, based on some extension of the Hahn-Banach Theorem, $f$ has a linear extension $\tilde{f}$ from $Z$ to $X$ fulfilling the conditions. I think I do not fully understand the Hahn-Banach theorem. From my understanding, the functional $f$ in the proof is not a linear functional. Let $x = (-1 + 1) x_{0}$; then $f(x) = f(0) = ||0|| = 0$ instead of $f(-x_{0} + x_{0}) = f(-x_{0}) + f(x_{0}) = ||x_{0}|| + ||x_{0}|| = 2||x_{0}||$. Where did I go wrong?
In the paper I'm writing I often encounter expressions like $$\int dx_1 \int dx_2 \ldots \int dx_N \mathrm{<something>}.$$ In this form it is not that cumbersome, but things get worse when the indices are not just successive integers but elements of some given set. Is there some mathematical notation to write this in a form like $$\left( \prod_{n \in S} \int dx_n \right)\mathrm{<something>}?$$ I am currently using $\prod$, as shown above, but I have to explain it at the beginning of the paper, and if there is some traditional notation for this, I would prefer to use it instead.
I have been reading about Big O notation. People writing about Big O often use the terms $f(x)$ and $g(x)$. For instance, I often see people write things like $f(x) = O(g(x))$ or $f(x) \in O(g(x))$. Obviously $f(x)$ is the running time of the given algorithm, whose growth we are comparing as the input goes to infinity, but what does $g(x)$ represent? Summary: $f$ and $g$ are typically functions; $f$ is typically the runtime, and $g$ is typically the asymptotic complexity of $f$. But if any of this is unclear from the description, I think it is best to either find another explanation, or to forget about application domains and intuitions for a second and go back to the definitions. They're both functions. The fact that they are named "f" and "g" is probably due to the usual naming convention for functions, just like "x", "y" and "z" are usual names given to variables. In this context, $f(x)$ is often the runtime of the algorithm under question, while $g(x)$ is used to denote the asymptotic complexity of $f(x)$; so $f(x) \in O(g(x))$ (or, $f(x) = O(g(x))$). But, convention or not, what $f$ and $g$ are, and their relation to each other (if any), should be clear and unambiguous from the explanation, not something that the reader has to guess based on what it usually means. (And the reader should have the mathematical maturity that is necessary to understand the concept.) Personally, when I am confused about terms and names that are being used, I like to try to forget everything that the terms and names are ultimately used to represent. In computer programming, big-O is often used to describe algorithms. But big-O notation itself abstracts such things away; it is only about functions and their relation to each other. Forget about "runtime", "complexity" or anything else that has to do with the intended application of the notation. Here is an example: Let $f$ and $g$ be two functions defined on some subset of the real numbers.
One writes $f(x) = O(g(x))$ as $x \to \infty$, if and only if there is a positive constant $M$ such that for all sufficiently large values of $x$, the absolute value of $f(x)$ is at most $M$ multiplied by the absolute value of $g(x)$. That is, $f(x) = O(g(x))$ if and only if there exists a positive real number $M$ and a real number $x_0$ such that $|f(x)| \leq M \cdot |g(x)|$ for all $x \geq x_0$. We see that $f$ and $g$ are functions. Together with all the variables used, and the relations between them, what $f(x) = O(g(x))$ means is as clear as you might reasonably expect from a Wikipedia article on a mathematical topic. We don't have to consider what this notation is supposed to be used for, nor what those functions are even supposed to represent/model. Now, going back to the application domain of algorithm analysis: can we use this notation to describe algorithms? Yes, since runtimes of algorithms are just functions on real numbers. Then what does it mean when $f(x) = O(g(x))$, in the context of algorithm analysis? If $f(x)$ is the runtime of the algorithm, then this notation means that $f(x)$ (a runtime) does not grow more quickly than a multiple of $g(x)$, for sufficiently large $x$. That's all from reading the definition, and interpreting $f(x)$ as a runtime. Pitfalls of convention Sometimes, conventions can be misleading. In fact, we've seen a "violation" of a usual mathematical convention in this post. $f(x) = O(g(x))$ Here, the $=$ sign is used. At least for us non-mathematicians, we are used to this sign denoting a sort of equality between two things. This usually means that the two things are of the same type, e.g. they are both real numbers, and that they are equal in some sense. In particular, $=$ is often an equivalence relation. But this is not the case for this particular use of the $=$ sign.
First of all, $f(x) = O(g(x))$ does not mean that $f(x)$ and $g(x)$ are equal: the closest "normal" relation to this is "$f(x)$ is less than or equal to $g(x)$", which obviously has nothing to do with equality (except as a special case). Secondly, $f(x)$ and $O(g(x))$ are not even of the same type: $f(x)$ is a function, while $O(g(x))$ is a class of functions. (An alternative notation to $f(x) = O(g(x))$ is $f(x) \in O(g(x))$, which I used in the beginning. It is arguably more "honest".) A statement like "$f \in O(g)$" types and binds the identifiers $f$ and $g$ by the definition of $O$. So you really have to look at the definition of $O$, e.g. here. You'll see that both the argument of $O$ and its elements (since $O(.)$ is a set) are functions. Once you agree that this makes sense, it no longer matters whether you write $f \in O(g)$, $u \in O(a)$ or $\mathrm{Jack} \in O(\mathrm{Martha})$. Using $f$, $g$ and $h$ for "generic" functions is a convention carried over from mathematics; in CS you also find $T$ in the context of algorithm analysis (representing a runtime function). Granted, disregarding conventions tends to confuse people. $f(x)$ and $g(x)$ are functions. $f$ typically models the time an algorithm takes to work on an input of size $x$. When we say $f(x) = O(g(x))$, what we mean is approximately "the time taken by the algorithm to work on an input of size $x$ grows no more quickly than a multiple of $g(x)$ for sufficiently large $x$."
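To make the definition concrete, here is a toy check with hypothetical functions (not from the answer above): $f(x) = 3x^2 + 5x$ is $O(x^2)$, witnessed by $M = 4$ and $x_0 = 5$.

```python
# f(x) = 3x^2 + 5x is O(g) for g(x) = x^2: take M = 4 and x0 = 5,
# since 3x^2 + 5x <= 4x^2 exactly when 5x <= x^2, i.e. x >= 5.
f = lambda x: 3 * x * x + 5 * x
g = lambda x: x * x
M, x0 = 4, 5

assert all(abs(f(x)) <= M * abs(g(x)) for x in range(x0, 10_000))

# the witness genuinely needs x0: the inequality fails just below it
assert f(4) > M * g(4)
print("f(x) = O(g(x)) with M = 4, x0 = 5")
```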
First, a few things that may or may not help: Def: A language $L\subseteq \Sigma ^*$ is Recursive (R) if there is some algorithm that, when given $u\in \Sigma^*$, will return $1$ iff $u\in L$ and $0$ iff $u\not\in L$. Def: A language $L\subseteq \Sigma ^*$ is Recursively Enumerable (RE) if there is some algorithm that, when given $u\in \Sigma^*$, will return $1$ iff $u\in L$. (If $u\not\in L$, it may return $0$ or loop forever.) Prop: A language $L$ is RE iff there is some algorithm that enumerates $L$. One way to formalize "$\mathcal A$ enumerates $L$" would be to say that the algorithm $\mathcal A$ takes an integer $n\in \mathbb N$ and returns some $\mathcal A(n)\in L$ for each $n$, and that for any $u\in L$ there is at least one $n$ so that $\mathcal A(n)=u$ (i.e. if you look at the algorithm as a function $\mathcal A:\mathbb N \to L$, it is surjective). Remark: Even though the notions of R and RE seem to depend on the alphabet, they do not. If you have an algorithm $\mathcal A$ that takes a word $u\in \Sigma^*$ as input, you can create one that takes $u\in \Sigma'^*$ by returning $0$ if $u\not\in \Sigma^*$ (which is decidable) and running $\mathcal A$ otherwise. And in the other direction, you can just reuse the same algorithm. Now, take some alphabet $\Sigma$ so that $;\not\in\Sigma$ and let $\Sigma':=\Sigma\sqcup \{;\}$. To show that your $L'$ is indeed RE, just take the algorithm $\mathcal A$ that proves that $L$ is RE. When given $x\in \Sigma^*$, you want to know if there is some $y\in \Sigma^*$ so that $x;y\in L$. The problem with simply trying each $y\in \Sigma^*$ in turn is that your first call to $\mathcal A$ may never terminate, and you'll have looped on a possibly positive instance. What you want to do instead is this: for $n$ from $0$ to $+\infty$, for each word $y\in \Sigma^*$ of size $|y|\le n$, run $\mathcal A$ on $x;y$ for $n$ steps (or seconds) and if it returns $1$, return $1$.
So you're trying a bigger and bigger $y$ while giving more and more time to $\mathcal A$ to finish. If there is some $y$ so that $x;y\in L$, then you'll eventually try this $y$ and give $\mathcal A$ enough time to tell you it's the correct $y$. And if it answers yes, then clearly, there is some $y$ so that $x;y\in L$. So we just proved that $L'$ is RE. Using the property I gave above, there is another, simpler proof. You suppose that you can enumerate $L$ and want to enumerate $L'$. Well clearly, you just have to enumerate $L$ and drop the $;$ and whatever is after it. If you suppose that $L$ is R, then $L'$ may not be R. Recall that the halting problem $HALT=\{\langle M,w\rangle|M\text{ halts on input }w\}$ is undecidable (i.e. $HALT\not\in R$). Let $BHALT:=\{\langle M,w\rangle;n|M\text{ halts on input }w\text{ in at most }n\text{ steps}\}$. Clearly $BHALT\in R$: you can just simulate $M$ on $w$ for $n$ steps and then check if the last state is final. But if you take $L=BHALT\in R$, then $L'=HALT\not\in R$.
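The dovetailing loop described above can be sketched in Python. The semi-decider is simulated here by a step-bounded function, and the language, its "running times", and all names are made up for illustration:

```python
from itertools import product

# A toy RE language L over {a, b, ;} with an artificial running time per
# word; run(w, n) simulates the semi-decider on w for n steps.
L_COST = {"a;aa": 7, "ab;b": 3, "b;aba": 12}

def run(w, n):
    """True iff the simulated semi-decider accepts w within n steps."""
    return w in L_COST and n >= L_COST[w]

def words_up_to(sigma, n):
    """All words over sigma of length at most n."""
    for length in range(n + 1):
        for t in product(sigma, repeat=length):
            yield "".join(t)

def in_L_prime(x, max_n=100):
    """Dovetail: is there some y with x;y in L?  This is only a
    semi-decision procedure: on a 'no' instance the real algorithm would
    loop forever; we cut off at max_n purely so the demo terminates."""
    for n in range(max_n):
        for y in words_up_to("ab", n):
            if run(x + ";" + y, n):
                return True
    return None  # "don't know" -- a true semi-decider would keep looping

assert in_L_prime("a") is True    # witness y = "aa", found once n >= 7
assert in_L_prime("ab") is True   # witness y = "b"
print(in_L_prime("b"), in_L_prime("bb"))  # True None
```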
Fit statistics

Introduction

All functions compute per-bin statistics. If you want the summed statistics for all bins, call sum on the output array yourself. Here's an example for the cash statistic:

>>> from gammapy.stats import cash
>>> data = [3, 5, 9]
>>> model = [3.3, 6.8, 9.2]
>>> cash(data, model)
array([ -0.56353481,  -5.56922612, -21.54566271])
>>> cash(data, model).sum()
-27.678423645645118

Gaussian data

TODO

Poisson data

TODO

Poisson data with background measurement

If you not only have a measurement of counts \(n_{\mathrm{on}}\) in the signal region, but also a measurement \(n_{\mathrm{off}}\) in a background region, you can write down the likelihood formula as

where \(\mu_{\mathrm{sig}}\) is the number of expected counts in the signal region, and \(\mu_{\mathrm{bkg}}\) is the number of expected counts in the background region, as defined in the Introduction. By taking twice the negative log likelihood and neglecting model-independent and thus constant terms, we define the WStat.

In the most general case, where \(\mu_{\mathrm{sig}}\) and \(\mu_{\mathrm{bkg}}\) are free, the minimum of \(W\) is at

Profile Likelihood

Most of the time you probably won't have a model for \(\mu_{\mathrm{bkg}}\). The strategy in this case is to treat \(\mu_{\mathrm{bkg}}\) as a so-called nuisance parameter, i.e. a free parameter that is of no physical interest. Of course you don't want an additional free parameter for each bin during a fit. Therefore one calculates an estimator for \(\mu_{\mathrm{bkg}}\) by analytically minimizing the likelihood function. This is called 'profile likelihood'. This yields a quadratic equation for \(\mu_{\mathrm{bkg}}\), with the solution

where

Goodness of fit

The best-fit value of the WStat as defined now contains no information about the goodness of the fit.
We consider the likelihood of the data \(n_{\mathrm{on}}\) and \(n_{\mathrm{off}}\) under the expectation of \(n_{\mathrm{on}}\) and \(n_{\mathrm{off}}\), and add twice the log likelihood to WStat. In doing so, we are computing the likelihood ratio:

Intuitively, this log-likelihood ratio should asymptotically behave like a chi-square with \(m-n\) degrees of freedom, where \(m\) is the number of measurements and \(n\) the number of model parameters.

Final result

Special cases

The above formula is undefined if \(n_{\mathrm{on}}\) or \(n_{\mathrm{off}}\) are equal to zero, because of the \(n\log{n}\) terms that were introduced by adding the goodness-of-fit terms. These cases are treated as follows.

If \(n_{\mathrm{on}} = 0\) the likelihood formulae read

and WStat is derived by taking twice the negative log likelihood and adding the goodness-of-fit term as before.

Note that this is the limit of the original WStat formula for \(n_{\mathrm{on}} \rightarrow 0\).

The analytical result for \(\mu_{\mathrm{bkg}}\) in this case reads:

When inserting this into the WStat we find the simplified expression.

If \(n_{\mathrm{off}} = 0\) WStat becomes

and

For \(\mu_{\mathrm{sig}} > n_{\mathrm{on}} (\frac{\alpha}{1 + \alpha})\), \(\mu_{\mathrm{bkg}}\) becomes negative, which is unphysical. Therefore we distinguish two cases. The physical one, where \(\mu_{\mathrm{sig}} < n_{\mathrm{on}} (\frac{\alpha}{1 + \alpha})\), is straightforward and gives

For the unphysical case, we set \(\mu_{\mathrm{bkg}}=0\) and arrive at

Example

The following table gives an overview of the values that WStat takes in different scenarios:

>>> from gammapy.stats import wstat
>>> from astropy.table import Table
>>> table = Table()
>>> table['mu_sig'] = [0.1, 0.1, 1.4, 0.2, 0.1, 5.2, 6.2, 4.1, 6.4, 4.9, 10.2,
...                    16.9, 102.5]
>>> table['n_on'] = [0, 0, 0, 0, 0, 5, 5, 5, 5, 5, 10, 20, 100]
>>> table['n_off'] = [0, 1, 1, 10, 10, 0, 5, 5, 20, 40, 2, 70, 10]
>>> table['alpha'] = [0.01, 0.01, 0.5, 0.1, 0.2, 0.2, 0.2, 0.01, 0.4, 0.4,
...                   0.2, 0.1, 0.6]
>>> table['wstat'] = wstat(n_on=table['n_on'],
...                        n_off=table['n_off'],
...                        alpha=table['alpha'],
...                        mu_sig=table['mu_sig'])
>>> table['wstat'].format = '.3f'
>>> table.pprint()
mu_sig n_on n_off alpha wstat
------ ---- ----- ----- ------
   0.1    0     0  0.01  0.200
   0.1    0     1  0.01  0.220
   1.4    0     1   0.5  3.611
   0.2    0    10   0.1  2.306
   0.1    0    10   0.2  3.846
   5.2    5     0   0.2  0.008
   6.2    5     5   0.2  0.736
   4.1    5     5  0.01  0.163
   6.4    5    20   0.4  7.125
   4.9    5    40   0.4 14.578
  10.2   10     2   0.2  0.034
  16.9   20    70   0.1  0.656
 102.5  100    10   0.6  0.663
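The numbers above can be cross-checked without gammapy. The per-bin Cash statistic is \(2(\mu - n\ln\mu)\) up to data-only constant terms, which reproduces the array in the Introduction; and for the \(n_{\mathrm{on}} = 0\) rows the profiled WStat reduces to a closed form (re-derived here, so treat it as an assumption rather than the library's code) that matches the table:

```python
from math import log

# Per-bin Cash statistic: 2 * (mu - n * log(mu)); data-only constant
# terms are dropped, matching the array shown in the Introduction.
def cash(data, model):
    return [2 * (mu - n * log(mu)) for n, mu in zip(data, model)]

c = cash([3, 5, 9], [3.3, 6.8, 9.2])
assert abs(c[0] - (-0.56353481)) < 1e-6
assert abs(sum(c) - (-27.678423645645118)) < 1e-6

# WStat in the n_on = 0 special case; this closed form (an unofficial
# re-derivation) reproduces the first five rows of the table above.
def wstat_n_on_zero(mu_sig, n_off, alpha):
    return 2 * (mu_sig + n_off * log(1 + alpha))

rows = [(0.1, 0, 0.01, 0.200), (0.1, 1, 0.01, 0.220), (1.4, 1, 0.5, 3.611),
        (0.2, 10, 0.1, 2.306), (0.1, 10, 0.2, 3.846)]
for mu_sig, n_off, alpha, expected in rows:
    assert abs(wstat_n_on_zero(mu_sig, n_off, alpha) - expected) < 1e-3

print("cash array and n_on = 0 wstat rows reproduced")
```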