Electrochemical Impedance Spectroscopy: Experiment, Model, and App

Electrochemical impedance spectroscopy is a versatile experimental technique that provides information about the different physical and chemical phenomena in an electrochemical cell. By modeling the physical processes involved, we can constructively interpret the experiment's results and assess the magnitudes of the physical quantities controlling the cell. We can then turn this model into an app, making electrochemical modeling accessible to more researchers and engineers. Here, we will look at three different ways of analyzing EIS: experiment, model, and simulation app.

Electrochemical Impedance Spectroscopy: The Experiment

Electrochemical impedance spectroscopy (EIS) is a widely used experimental method in electrochemistry, with applications such as electrochemical sensing and the study of batteries and fuel cells. The technique works by first polarizing the cell at a fixed voltage and then applying a small additional voltage (or, occasionally, a current) to perturb the system. The perturbing input oscillates harmonically in time to create an alternating current, as shown in the figure below.

An oscillating perturbation in cell voltage gives an oscillating current response.

For a given amplitude and frequency of applied voltage, the electrochemical cell responds with a particular amplitude of alternating current at the same frequency. In real systems, the response may be complicated by components at other frequencies too; we'll return to this point below. EIS experiments typically vary the frequency of the applied perturbation across a range from mHz to kHz. The relative amplitude of the response and the time shift (or phase shift) between the input and output signals change with the applied frequency. These factors depend on the rates at which physical processes in the electrochemical cell respond to the oscillating stimulus.
Different frequencies separate processes with different timescales. At lower frequencies, there is time for diffusion or slow electrochemical reactions to proceed in response to the alternating polarization of the cell. At higher frequencies, the applied field changes direction faster than the chemistry can respond, so the response is dominated by capacitance from the charge and discharge of the double layer.

The time-domain response is not the simplest or most succinct way to interpret these frequency-dependent amplitudes and phase shifts. Instead, we define a quantity called impedance. Like resistance in a static system, impedance is the ratio of voltage to current. However, it uses the real and imaginary parts of a complex number to represent the relation of both amplitude and phase between the input signal and the output response. The mathematical tool that relates the impedance to the time-domain response is the Fourier transform, which represents the frequency components of the oscillating signal.

To explain the idea of impedance more fully for a simple case, consider the input voltage as a cosine wave oscillating at an angular frequency (ω):

V(t) = V_0 \cos(\omega t)

Then the response is also a cosine wave, but with a phase offset (φ):

I(t) = I_0 \cos(\omega t + \phi)

Compared to the time shift in the image above, the phase offset is given as \phi = -\omega \,\delta t. The magnitude of the current and its phase offset depend on the physics and chemistry in the cell. Now, let's consider the resistance from Ohm's law:

R(t) = \frac{V(t)}{I(t)} = \frac{V_0 \cos(\omega t)}{I_0 \cos(\omega t + \phi)}

This quantity varies in time with the same frequency as the perturbing signal. It equals zero whenever the numerator is zero and becomes singular whenever the denominator is zero. So unlike the resistance in a DC system, it's not a very useful quantity!
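The amplitude ratio and phase shift discussed above can be extracted numerically by Fourier transforming both time-domain signals and comparing their complex amplitudes at the drive frequency. Below is a minimal sketch in Python with NumPy; the frequency, amplitude, and phase values are illustrative assumptions, not values from the article.

```python
import numpy as np

# Illustrative (assumed) signal parameters:
f = 10.0                 # perturbation frequency, Hz
omega = 2 * np.pi * f
V0 = 0.005               # 5 mV perturbation, inside the linear regime
I0 = 2e-4                # current response amplitude, A
phi = 0.6                # current leads voltage (capacitive behavior), rad

# One second of synthetic time-domain data.
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
V = V0 * np.cos(omega * t)
I = I0 * np.cos(omega * t + phi)

# Fourier transform both signals and read off the complex
# amplitudes at the perturbation frequency.
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
k = np.argmin(np.abs(freqs - f))
V_bar = np.fft.rfft(V)[k]
I_bar = np.fft.rfft(I)[k]

Z = V_bar / I_bar        # complex ratio at this frequency
# |Z| recovers V0/I0 = 25 ohm; the phase of Z is -phi.
print(abs(Z), np.angle(Z))
```

Sweeping `f` and repeating this extraction at each frequency is, in essence, what an impedance spectrum records.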
Instead, using Euler's formula, let's express the time-varying quantities as the real parts of complex exponentials, so that:

V(t) = \mathrm{Re}\left\{ V_0 \exp(i \omega t) \right\}

and

I(t) = \mathrm{Re}\left\{ I_0 \exp(i\phi) \exp(i \omega t) \right\}

We denote the coefficients V_0 and I_0\,\exp(i\phi) as the quantities \bar{V} and \bar{I}, respectively. These complex amplitudes can be understood in terms of the Fourier transform of the original time-domain sinusoidal signals. They express the distinct amplitudes and the phase difference of the voltage and current. Because all of the quantities in the system oscillate sinusoidally, we understand the physical effects by comparing these complex quantities rather than the time-domain quantities. To describe the oscillating problem (an approach often called phasor theory), we define a complex analogue of resistance:

Z = \frac{\bar{V}}{\bar{I}} = \frac{V_0}{I_0}\exp(-i\phi)

This is the impedance of the system and, as the name suggests, it's the quantity we measure in electrochemical impedance spectroscopy. It's a complex quantity with a magnitude and phase, representing both resistive and capacitive effects. Resistance contributes the real part of the complex impedance, which is in phase with the applied voltage, while capacitance contributes the imaginary part, which is precisely out of phase with the applied voltage.

EIS specialists look at the impedance in the form of a spectrum, normally with a Nyquist plot. This plots the imaginary component of impedance against the real component, with one data point for every frequency at which the impedance has been measured. Below is an example from a simulation; we'll discuss how it's modeled in the next section.

Simulated Nyquist plot from an electrochemical impedance spectroscopy experiment. Points toward the top right are at lower frequencies (mHz), while those toward the bottom left are at higher frequencies (>100 Hz).
In the figure above, the semicircular region toward the left side shows the coupling between double-layer capacitance and electrode kinetic effects at frequencies faster than the physical process of diffusion. The diagonal "diffusive tail" on the right comes from diffusion effects observed at lower frequencies.

EIS experiments are useful because information about many different physical effects can be extracted from a single analysis. There is a quantitative relationship between properties like diffusion coefficients and kinetic rate constants and the dimensions of the features in Nyquist plots. Often, EIS experiments are interpreted using an "equivalent circuit" of resistors and capacitors that yields a frequency-dependent impedance similar to the one shown in the Nyquist plot above. This idea was discussed in my colleague Scott's blog post on electrochemical resistances and capacitances.

When there is a linear relation between the voltage and current, only one frequency will appear in the Fourier transform. This simplifies the analysis significantly. For the simple harmonic interpretation of the experiment in terms of impedance, we need the current response to oscillate at the same frequency as the voltage input. This means that the system must respond linearly. For an electrochemical cell, we can usually accomplish this by ensuring that the applied voltage is small compared to the quantity RT/F — the ratio of the gas constant multiplied by the temperature to the Faraday constant. This is the characteristic "thermal voltage" in electrochemistry and is about 25 mV at normal temperatures. Smaller voltage changes usually induce a linear response, while larger voltage changes cause an appreciably nonlinear response. Of course, with simulation to predict the time-domain current, we can always consider a nonlinear case and perform a Fourier transform numerically to study the effect on the impedance.
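The semicircle-plus-tail shape described above can be reproduced with a simple closed-form expression. Note that the article stresses that the physical model, not an equivalent circuit, is the real object of interest; the Randles-type formula below is only an illustrative stand-in, and every parameter value here is an assumption.

```python
import numpy as np

# Assumed, illustrative parameters (not from the article):
R_s = 10.0      # series (solution) resistance, ohm
R_ct = 50.0     # charge-transfer resistance, ohm
C_dl = 20e-6    # double-layer capacitance, F
sigma = 40.0    # Warburg coefficient, ohm * s^(-1/2)

w = 2 * np.pi * np.logspace(-3, 5, 400)   # angular frequency, mHz to 100 kHz

# Warburg (semi-infinite diffusion) element and a Randles-type impedance:
# the double layer C_dl sits in parallel with the kinetic + diffusive branch.
Z_w = sigma * (1 - 1j) / np.sqrt(w)
Z = R_s + (R_ct + Z_w) / (1 + 1j * w * C_dl * (R_ct + Z_w))

# Nyquist convention plots -Im(Z) against Re(Z): high frequencies trace the
# kinetic/capacitive semicircle near R_s, low frequencies the 45-degree
# diffusive tail, matching the features discussed in the text.
```

Varying `R_ct`, `C_dl`, or `sigma` and replotting shows how each physical effect reshapes the spectrum.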
In practice, the interpretation in terms of impedance illustrated above is best suited to the harmonic assumption. Impedance measurements are therefore often used in a complementary manner with transient techniques, such as amperometry or voltammetry, which are better suited to investigating nonlinear or hysteretic effects. Let's look at a simple example of the physical theory that underpins these ideas to see how the impedance spectrum relates to the real controlling physics.

Electrochemical Impedance Spectroscopy: The Model

To model an EIS experiment, we must describe the key underlying physical and chemical effects: the electrode kinetics, the double-layer capacitance, and the diffusion of the electrochemical reactants. In electroanalytical systems, a large quantity of artificially added supporting electrolyte keeps the electric field low, so that solution resistance can be neglected. In this case, we can describe the mass transport of chemical species in the system using the diffusion equation (Fick's laws) with suitable boundary conditions for the electrode kinetics and capacitance. In the COMSOL Multiphysics® software, we use the Electroanalysis interface together with an Electrode Surface boundary feature to describe these equations. For more details about how to set up this model, you can download the Electrochemical Impedance Spectroscopy tutorial example from the Application Library.

Model tree for the Electroanalysis interface in an EIS model.

Under Transport Properties, we can specify the diffusion coefficients of the redox species under consideration. We need at least the reduced and oxidized species of a single redox couple, such as the common ferro-/ferricyanide couple, to use as an analytical reference. The Concentration boundary condition defines the fixed bulk concentrations of these species.
The Electrode Reaction and Double Layer Capacitance subnodes of the Electrode Surface boundary feature contribute the Faradaic and non-Faradaic currents, respectively. For the double-layer capacitance, we typically use an empirically measured equivalent capacitance, and we specify the electrode reaction according to a standard kinetic expression such as the Butler-Volmer equation. Note that we're not referring to equivalent circuit properties at all here. In COMSOL Multiphysics, all of the inputs in the description of the electrochemical problem are physical or chemical quantities, while the output is a Nyquist plot. When analyzing the problem in reverse, we can use an observed Nyquist plot from our experiments to make inferences about the real values of these physical and chemical inputs.

In the settings for the Electrode Surface feature, we represent the impedance experiment by applying a Harmonic Perturbation to the cell voltage.

Settings for the Electrode Surface boundary feature in an EIS model.

Here, the quantity V_app is the applied voltage. The harmonic perturbation is applied with respect to a resting steady voltage (or current) on the cell. In this case, we have set this to a reference value of zero volts. In more advanced models, we might use the results of another COMSOL Multiphysics model, one that is significantly nonlinear for example, to find the resting conditions to which the perturbation is applied. If you're interested in understanding the mathematics of the harmonic perturbation in greater detail, my colleague Walter discussed it in a previous blog post.

When studying lithium-ion batteries, for example, we can perform a time-dependent analysis of the cell's discharge, studying its charge transport, the diffusion and migration of the lithium electrolyte, and the electrode kinetics and diffusion of the intercalated lithium atoms. We can pause this simulation at various times to consider the impedance measured from a rapid perturbation.
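For reference, the Butler-Volmer equation mentioned above is conventionally written in its standard textbook form (this general expression, with exchange current density i_0, anodic and cathodic transfer coefficients α_a and α_c, and overpotential η, is not taken from the article itself):

```latex
i = i_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right)
             - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right]
```

For small η compared to RT/F, linearizing the exponentials recovers the linear current-voltage response that the harmonic analysis relies on.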
For further insight into the physics involved, you can read my colleague Tommy's blog post on modeling electrochemical impedance in a lithium-ion battery.

Electrochemical Impedance Spectroscopy: The Simulation App

A frequent demand for electrochemical simulations is that they "fit" experimental data in order to determine unknown physical quantities or, more generally, to interpret the data at all. Even for experienced electroanalytical chemists, it can be difficult to intuitively "see" the physics and chemistry in underlying graphs like the Nyquist plot. However, by simulating the plots under a range of conditions, the influence of different effects on the overall graph is revealed.

Simulation is helpful for analyzing EIS, but it can also be time consuming for the experts involved. As was the case in my old research group, these experts can spend more time writing programs and running models to fit experimental data than on the science itself. Wouldn't it be nice if all electrochemical researchers could load experimental data into a simple interface, simulate impedance spectra for a given physical model and inputs, and even perform automatic parameter fitting?

The good news is that we can! With the Application Builder in COMSOL Multiphysics, we can create an easy-to-use EIS app based on an underlying model. Since a model can contain any level of physical detail, the app provides direct access to the physical data and isn't confined to simple equivalent circuits. To highlight this, we have an EIS demo app based on the model available in the Application Library. The app user can set concentrations for the electroactive species and tune the diffusion coefficients as well as the electrode kinetic rate constant and double-layer capacitance. After clicking the Compute button, the app generates results that can be visualized through Nyquist and Bode plots.

The EIS simulation app in action.
As well as enabling physical parameter estimation, this app is very helpful for teaching, since we can quickly change inputs and visualize the results that would occur in the experiment. A natural extension for the app is to import experimental data into the same Nyquist plot for direct comparison. We could also build up the underlying physical model to consider the influence of competing electrochemical reactions or follow-up homogeneous chemistry from the products of an electrochemical reaction.

Concluding Thoughts

Here, we've introduced electrochemical impedance spectroscopy and discussed some methods used to model it. We also saw how a simulation app built from a simple theoretical model can provide greater insight into the relationship between the theory of an electrochemical system and its behavior as observed in an experiment.

Further Reading

Explore other topics related to electrochemical simulation on the COMSOL Blog
Mathematical Control & Related Fields
ISSN: 2156-8472, eISSN: 2156-8499
March 2015, Volume 5, Issue 1

Abstract: We consider single-observed cascade systems of hyperbolic equations. We first consider the class of bounded operators that satisfy a nonnegativity property $(NNP)$. Within this class, we give a necessary and sufficient condition for observability of the cascade system by a single observation. We further show that if the coupling operator does not satisfy $(NNP)$ (in contrast to [5], or also, e.g., [3,4] for symmetrically coupled systems), the usual observability inequality through a single component may still occur in a general framework, under some smallness conditions, but it may also be violated. When the coupling operator is a multiplication operator, $(NNP)$ is violated whenever the coupling coefficient changes sign in the spatial domain. We give explicit constructive examples of such coupling operators for which unique continuation may fail for an infinite-dimensional set of initial data, which we characterize explicitly. We also exhibit examples of couplings and initial data for which the observability inequality holds, but in weaker norms. These examples extend to parabolic systems. Finally, we show that the two-level energy method [1,2], which involves different levels of energies for the observed and unobserved components, may involve the same levels of energies of these respective components if the differential order of the coupling is higher (operating here through velocities instead of displacements). We further give an application to controlled systems coupled in velocities.
This shows that the answer to observability and unique continuation questions for single-observed cascade systems is much more involved in the case of coupling operators that violate $(NNP)$, or of higher-order coupling operators, and that the mathematical properties of the coupling operator greatly influence the dynamics of the observed system even though it operates through lower-order differential terms. We indicate several extensions and future directions of research.

Abstract: In this paper, necessary and sufficient conditions for $L^\infty$-controllability and approximate $L^\infty$-controllability are obtained for the control system $ w_{tt}=\frac{1}{\rho} (k w_x)_x+\gamma w$, $w(0,t)=u(t)$, $x>0$, $t\in(0,T)$. Here $\rho$, $k$, and $\gamma$ are given functions on $[0,+\infty)$; $u\in L^\infty(0,\infty)$ is a control; and $T>0$ is a constant. These problems are considered in special modified spaces of Sobolev type, introduced and studied in the paper. The growth of distributions from these spaces is associated with the equation data $\rho$ and $k$. Using a transformation operator introduced and studied in the paper, we see that this control system replicates the controllability properties of the auxiliary system $ z_{tt}=z_{xx}-q^2z$, $z(0,t)=v(t)$, $x>0$, $t\in(0,T)$, and vice versa. Here $q\ge0$ is a constant and $v\in L^\infty(0,\infty)$ is a control. Necessary and sufficient conditions of controllability for the main system are obtained from those for the auxiliary system.

Abstract: The paper gives a characterization of the uniform robust domain of attraction for a finite-dimensional nonlinear controlled system subject to perturbations and state constraints. We extend the Zubov approach to characterize this domain by means of the value function of a suitable infinite-horizon state-constrained control problem, which at the same time is a Lyapunov function for the system.
We provide the associated Hamilton-Jacobi-Bellman equations and prove existence and uniqueness of the solutions of these generalized Zubov equations.

Abstract: In this paper we study an optimal control problem (OCP) associated to a linear elliptic equation on a bounded domain $\Omega$. The matrix-valued coefficient $A$ of such systems is our control in $\Omega$ and will be taken in $L^2(\Omega;\mathbb{R}^{N\times N})$, which in particular may comprise the case of unboundedness. Concerning the boundary value problems associated to equations of this type, one may exhibit non-uniqueness of weak solutions --- namely, approximable solutions as well as another type of weak solutions that cannot be obtained through the $L^\infty$-approximation of the matrix $A$. Following the direct method in the calculus of variations, we show that the given OCP is well-posed and admits at least one solution. At the same time, optimal solutions to such problems may have a singular character in the above sense. In view of this, we indicate two types of optimal solutions to the above problem, the so-called variational and non-variational solutions, and show that some of these optimal solutions cannot be attained through the $L^\infty$-approximation of the original problem.

Abstract: A linear-quadratic (LQ, for short) optimal control problem is considered for mean-field stochastic differential equations with constant coefficients in an infinite horizon. The stabilizability of the control system is studied, followed by a discussion of the well-posedness of the LQ problem. The optimal control can be expressed as a linear state feedback involving the state and its mean, through the solutions of two algebraic Riccati equations. The solvability of such Riccati equations is investigated by means of a semi-definite programming method.
Abstract: We construct a patchy feedback for a general control system on $\mathbb{R}^d$ which realizes practical stabilization to a target set $\Sigma$ when the dynamics is constrained to a given set of states $S$. The main result is that $S$-constrained asymptotic controllability to $\Sigma$ implies the existence of a discontinuous practically stabilizing feedback. Such a feedback can be constructed in ``patchy'' form, a particular class of piecewise constant controls which ensure the existence of local Carathéodory solutions to any Cauchy problem of the control system and which enjoy good robustness properties with respect to both measurement errors and external disturbances.

Abstract: This paper addresses a quantitative internal unique continuation property for stochastic parabolic equations; i.e., we show that each of their solutions can be determined by observation on any nonempty open subset of the whole region in which the equations evolve. The proof is based on a global Carleman estimate.

Abstract: In this paper we establish a global Carleman estimate for the fourth-order Schrödinger equation with potential posed on a $1$-d finite domain. The Carleman estimate is used to prove Lipschitz stability for an inverse problem consisting in recovering a stationary potential in the Schrödinger equation from boundary measurements.
Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, let $\{X_n: n\ge 1\}$ be a sequence of square integrable random variables, i.e., $X_n \in L^2(\Omega, \mathcal{F}, \mathbb{P})$ for each $n\ge 1$. Further assume that $\mathbb{E}[X_i X_j] = 0$ whenever $i\neq j$ and $\sup_n \mathbb{E}[X_n^2] < \infty$. For each $n \ge 1$ set $S_n := \sum_{j=1}^n X_j$. I am trying to show that: (i) for each $\alpha > \frac{1}{2}$, $\frac{S_n}{n^{\alpha}}\rightarrow 0$ in probability. (ii) for each $\alpha > 1$, $\frac{S_n}{n^{\alpha}}\rightarrow 0$ a.s. (iii) $\{\frac{S_n}{n}: n \in\mathbb{N}\}$ is uniformly integrable. My questions: In parts (i) and (ii), is the assumption "$\mathbb{E}[X_n] = 0$ for each $n$" missing, or do these results hold without this extra assumption? Note that the said assumption, along with $\mathbb{E}[X_i X_j] = 0$, implies that the $X_j$'s are uncorrelated, and hence one can use an argument similar to the proof of the WLLN (Chebyshev) to show $\mathbb{P}\left(\frac{|S_n|}{n^\alpha}\ge \epsilon\right) \le \frac{n \sup_n \mathbb{E}[X_n^2]}{n^{2\alpha}\epsilon^2}, \alpha > 1/2$ for part (i), and an argument similar to the proof of the SLLN (Rajchman) to show $\mathbb{P}\left(\frac{|S_n|}{n^\alpha}\ge \epsilon \text{ infinitely often}\right) \le \mathbb{P}\left(\frac{|S_n|}{n}\ge \epsilon \text{ infinitely often}\right) = 0, \alpha >1$. If the mentioned assumption is indeed not necessary, how can one go about solving this problem? Also, any hint for part (iii) is appreciated.
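The key computation behind the displayed Chebyshev bound is the orthogonal expansion of the second moment, which uses only the stated assumption $\mathbb{E}[X_i X_j] = 0$ for $i \neq j$ (via Markov's inequality applied to $S_n^2$, so no mean-zero assumption enters this step):

```latex
\mathbb{E}[S_n^2]
  = \sum_{i,j=1}^{n} \mathbb{E}[X_i X_j]
  = \sum_{j=1}^{n} \mathbb{E}[X_j^2]
  \le n \,\sup_{k} \mathbb{E}[X_k^2],
\qquad
\mathbb{P}\!\left( \frac{|S_n|}{n^{\alpha}} \ge \epsilon \right)
  \le \frac{\mathbb{E}[S_n^2]}{n^{2\alpha}\epsilon^2}.
```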
Short answer: no, different margins can produce the same odds ratio and confidence interval. Some examples follow.

Here is a brief sketch of how to find the minimum possible N for the table. Note that, as per your linked site, the standard error can be related to the cell contents by: $$\text{SE} = \sqrt{\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}}$$ Ignoring the actual odds ratio, this value is minimized when $a = b = c = d$. This then produces the relationship: $$\text{SE}^2 = \frac{1}{n/4} + \frac{1}{n/4} + \frac{1}{n/4} + \frac{1}{n/4} = \frac{4}{n/4} = \frac{16}{n}$$ So we can estimate the minimum possible N for a table as $16/\text{SE}^2$. In your example, the standard error can be recovered from the 95% confidence interval: $$[\log(\text{High}) - \log(\text{Low})]/[2 \cdot 1.96]$$ This is just over $0.45$, so the minimum possible N a table can have with that standard error is $16/0.45^2 = 80$ (taking the ceiling of this value).

This logic, though, won't help with finding the maximum. Say one row of the table, $c$ and $d$, has really large values. Then $(1/c + 1/d) \approx 0$, and so we just have: $$\text{SE}^2 = \frac{1}{a} + \frac{1}{b}$$ Say $c/d = 4/9$; then to get an odds ratio of 2.25 we just need $a = b$. So for this example $\text{SE}^2 = 2/a$, and so $a \approx 10$. So the table below:

              positive   negative
  exposed           10         10
  not exposed      4e6        9e6

produces an odds ratio of 2.25 and a 95% CI of 0.9365 to 5.4058.

Finally, even if you had the total N for the table, there is a symmetry in the standard error: you can simply swap the row totals and recalculate the cells to obtain the same odds ratio. In many situations this will produce approximately the same standard error. So we could rewrite your original table:

              positive   negative
  exposed           14         39
  not exposed       11         69

as:

              positive   negative
  exposed           23         57
  not exposed        8         45

which produces an odds ratio of 2.2697 and a 95% confidence interval of 0.9280 to 5.5516.
Not exactly the same, but to a certain extent this identification depends on the amount of rounding in the reporting, so I would be hesitant to rely on it, and with larger row totals it will be progressively harder to find the exact N. If you know the N, you could always do a grid search, but I do not believe it will always result in a unique solution.

I even forgot the most obvious symmetry: you can simply flip the numbers on the diagonals of the table and still obtain the exact same odds ratio and standard error. E.g.,

              positive   negative
  exposed           69         11
  not exposed       39         14

results in the same summary statistics as your original table.
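The calculations above can be checked with a few lines of Python; this is just a sketch, with the helper name my own, implementing the standard-error and Wald-interval formulas quoted earlier:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald confidence interval for the 2x2 table
    [[a, b], [c, d]] (rows: exposed / not exposed;
    columns: positive / negative)."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(odds_ratio) - z * se)
    hi = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, se, (lo, hi)

# The original table and its diagonal flip share OR and SE exactly:
print(odds_ratio_ci(14, 39, 11, 69))
print(odds_ratio_ci(69, 11, 39, 14))

# The constructed large-row example: OR 2.25, CI roughly 0.94 to 5.41.
print(odds_ratio_ci(10, 10, 4e6, 9e6))
```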
Math
Revision as of 06:01, 18 May 2006

Contents

Introduction

TeX was designed for ease of typesetting books that contain mathematics. As ConTeXt is built on top of TeX, it inherits all those features. In addition, ConTeXt adds a lot of macros to make the typesetting of mathematics easier. Because typesetting mathematics follows different rules than typesetting normal text, TeX uses something called "math mode", where some characters get a different meaning to enable a simple syntax for complicated formulas.

Simple Math

Typesetting mathematics can be divided into two parts: inline math (mathematical formulas set within ordinary paragraphs as part of the text) and display math (mathematics set on lines by themselves, often with equation numbers). Inline math consists of maths that is typed in a sentence. There are two ways of typing inline math. The TeX way is to surround what you want to type with $...$. Thus, the example would be typed as

Pythagoras' formula, stating $a^2 + b^2 = c^2$, was one of the first trigonometric results

ConTeXt also provides an alternative way of typing the same result. Instead of dollars, you can write the material for maths inside \mathematics. Thus, an alternate way to type the above is

Pythagoras' formula, stating \mathematics{a^2 + b^2 = c^2}, was one of the first trigonometric results

Choose the method that suits your style. ((I do not know if there are pros and cons of $..$ vs \mathematics{}. If someone knows, then please elaborate -- aditya))

The famous result (once more) is given by

\startformula c^2 = a^2 + b^2. \stopformula

This, when typeset, produces the following:

Numbering Formulae

The famous result (once more) is given by

\placeformula \startformula c^2 = a^2 + b^2.
\stopformula

This, when typeset, produces the following:

The \placeformula command is optional and produces the equation number; leaving it off produces an unnumbered equation.

Not so Simple Maths

ConTeXt's base mathematics support is built on the mathematics support in plain TeX, thus allowing quite complicated formulas. (There are also some additional macros, such as the \text command for text-mode notes within math.) For instance:

A more complicated equation: \placeformula \startformula {{\theta_{\text{\CONTEXT}}}^2 \over x+2} = \pmatrix{a_{11}&a_{12}&\ldots&a_{1n}\cr a_{21}&a_{22}&\ldots&a_{2n}\cr \vdots&\vdots&\ddots&\vdots\cr a_{n1}&a_{n2}&\ldots&a_{nn}\cr} \pmatrix{b_1 \cr b_2 \cr \vdots \cr b_n} + \sum_{j=1}^\infty z^j \left( \sum_{\scriptstyle n=1 \atop \scriptstyle n \ne j}^\infty Z_j^n \right) \stopformula

which produces

ConTeXt provides a wrapper around TeX's \pmatrix. The above can be typeset in a more ConTeXt-ish way as

A more complicated equation: \definemathmatrix[pmatrix][left={\left(\,},right={\,\right)}] \placeformula \startformula {{\theta_{\text{\CONTEXT}}}^2 \over x+2} = \startpmatrix \NC a_{11} \NC a_{12} \NC \ldots \NC a_{1n} \NR \NC a_{21} \NC a_{22} \NC \ldots \NC a_{2n} \NR \NC \vdots \NC \vdots \NC \ddots \NC \vdots \NR \NC a_{n1} \NC a_{n2} \NC \ldots \NC a_{nn} \NR \stoppmatrix \startpmatrix b_1 \NR b_2 \NR \vdots \NR b_n \NR \stoppmatrix + \sum_{j=1}^\infty z^j \left( \sum_{\scriptstyle n = 1 \atop \scriptstyle n \ne j}^\infty Z_j^n \right) \stopformula

MathAlignment is covered on a separate page.

Sub-Formula Numbering

As mentioned above, formulas can be numbered using the \placeformula command. This (and the related \placesubformula command) has an optional argument which can be used to produce sub-formula numbering. For example:

Examples: \placeformula{a} \startformula c^2 = a^2 + b^2 \stopformula \placesubformula{b} \startformula c^2 = a^2 + b^2 \stopformula

What's going on here is simpler than it might appear at first glance.
Both \placeformula and \placesubformula produce equation numbers with the optional tag added at the end; the sole difference is that the former increments the equation number first, while the latter does not (and thus can be used for the second and subsequent formulas that share the same formula number but presumably have different tags). This is sufficient for cases where the standard ConTeXt equation numbers suffice and where only one equation number is needed per formula. However, there are many cases where this is insufficient, so ConTeXt defines the \formulanumber and \subformulanumber commands, which provide hooks to allow the use of ConTeXt-managed formula numbers with plain TeX equation numbering. These, when used within a formula, simply return the formula number in properly formatted form, as can be seen in this simple example with plain TeX's \eqno. Note that the optional tag is inherited from \placeformula.

More examples: \placeformula{c} \startformula \let\doplaceformulanumber\empty c^2 = a^2 + b^2 \eqno{\formulanumber} \stopformula

In order for this to work properly, we need to turn off ConTeXt's automatic formula number placement; hence the \let command that empties \doplaceformulanumber, which must be placed after the start of the formula. In many practical examples, however, this is not necessary; ConTeXt redefines \displaylines and \eqalignno to do this automatically. For more control over sub-formula numbering, \formulanumber and \subformulanumber take an optional argument parallel to that of \placeformula, as demonstrated in this use of plain TeX's \eqalignno, which places multiple equation numbers within one formula.

Yet more examples: \placeformula \startformula \eqalignno{c^2 &= a^2 + b^2 &\formulanumber{a} \cr a^2 + b^2 &= c^2 &\subformulanumber{b} \cr d^2 &= e^2 &\formulanumber\cr} \stopformula

Note that both \formulanumber and \subformulanumber can be used within the same formula, and the formula number is incremented as expected.
Also, if an optional argument is specified in both \placeformula and \formulanumber, the latter takes precedence.

More examples for a left-located equation number: \setupformulas[location=left] \placeformula{d} \startformula \let\doplaceformulanumber\empty c^2 = a^2 + b^2 \leqno{\formulanumber} \stopformula and \placeformula \startformula \leqalignno{c^2 &= a^2 + b^2 &\formulanumber{a} \cr a^2 + b^2 &= c^2 &\subformulanumber{b} \cr d^2 &= e^2 &\formulanumber\cr} \stopformula

-- 23:46, 15 Aug 2005 (CEST) Prinse Wang

List of Formulas

You can have a list of the formulas contained in a document by using \placenamedformula instead of \placeformula. Only the formulas written with \placenamedformula are put in the list, so you can control precisely the content of the list. Example:

\subsubject{List of Formulas} \placelist[formula][criterium=text,alternative=c] \subsubject{Formulas} \placenamedformula[one]{First listed Formula} \startformula a = 1 \stopformula \endgraf \placeformula \startformula a = 2 \stopformula \endgraf \placenamedformula{Second listed Formula}{b} \startformula a = 3 \stopformula \endgraf

Gives:

Other Methods

There are two different math modules on CTAN, nath and amsl, and there is a new math module in the distribution. ConTeXt now has built-in support for Math_structures. It is also possible to use most LaTeX equations in ConTeXt with a relatively small set of supporting definitions. The "native" ConTeXt way of math is MathML, an application of XML -- rather verbose but mighty.
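Putting the basic pieces from this page together, a minimal compilable document might look like the following sketch (the file name is arbitrary):

```tex
% minimal.tex -- compile with: context minimal.tex
\starttext

Pythagoras' formula states that $a^2 + b^2 = c^2$.

\placeformula
\startformula
c^2 = a^2 + b^2
\stopformula

\stoptext
```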
The Collatz Conjecture is well known; it concerns the sequence $$f(n) = \begin{cases} n/2 & \text{if } n \equiv 0 \pmod{2}\\ k\,n+1 & \text{if } n\equiv 1 \pmod{2} \end{cases}$$ with $k=3$: the sequence converges to $1$ (so-called oneness). Is there any conjecture/theorem on whether the sequence would converge for any other value of $k$, or could it be shown that the sequence diverges for values of $k$ other than $3$? By convergence here I mean that the sequence, after finitely many steps, ends at a stable fixed number, as happens in the Collatz case with the number $1$. Append: In the meantime I wrote out a conjecture on this over here >>>, for those who might be interested.
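A quick way to experiment with this question is to iterate the map directly. Below is a small Python sketch (my own illustration, not from the question; the name trajectory is made up). It stops when the orbit reaches $1$ or revisits a value, so cycles that avoid $1$ - such as the known cycle $13 \to 66 \to 33 \to \cdots \to 13$ for $k=5$ - are detected:

```python
def trajectory(n, k=3, max_steps=100_000):
    """Iterate n -> n/2 (n even), n -> k*n + 1 (n odd).

    Returns 1 if the orbit reaches 1, the first revisited value if it
    enters a cycle, or the current value if max_steps is exhausted.
    """
    seen = set()
    for _ in range(max_steps):
        if n == 1 or n in seen:
            return n
        seen.add(n)
        n = n // 2 if n % 2 == 0 else k * n + 1
    return n

print(trajectory(27))        # k=3: this famous long orbit reaches 1
print(trajectory(13, k=5))   # k=5: cycles back to 13 without reaching 1
```

Running this for many starting values of $n$ and various $k$ quickly shows that $k=3$ is special among small odd multipliers.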
Latest (and likely final) edit: Fixing a typo, rewording the introduction Multiples of $2$ are placed in a checkerboard fashion: This is the parity condition, and is essential if there are an even number of total lattice sites. This answer sketches a proof that it is possible to place multiples of $3$ in all rectangular arrays in arbitrary dimension, with the exception of $2^4$ and $2^6$, and argues that higher primes can also be incorporated. Thus I conjecture that these two examples are the only cases where a solution is impossible. For lower dimensions there are many explicit solutions. 2D: All $(k,l)$ with $kl\leq 50$. These are from a numerical search (using parity but no other symmetries), except for $(21,2)$ and $(24,2)$ for which Masked Avenger (MA) found a Hamiltonian path in the comments below (in "first three exceptions" of previous version). All squares up to 25 numerically, except for 24. nsrt's answer gives an example for all odd primes. 3D: All $(k,l,m)$ with $klm\leq 50$ with $m=2$, numerically except for MA's final "exception" $(12,2,2)$. Explicit non-Hamiltonian solution for $(3,3,3)$: $\left(\begin{array}{ccc}27&10&9\\20&3&8\\21&4&15\end{array}\right)\left(\begin{array}{ccc}14&13&22\\11&2&1\\16&7&26\end{array}\right)\left(\begin{array}{ccc}5&6&23\\12&25&24\\17&18&19\end{array}\right)$ A numerical solution for $(4,4,3)$. Peter Mueller's answer gives solutions for $(4,3,3)$, $(5,3,3)$ and $(4,4,4)$. 4D: $(2,2,2,2)$ is impossible as per Zack Wolske's comment. 
$(3,2,2,2)$ has a solution: $\left(\begin{array}{ccc}1&6&13\\8&11&24\end{array}\right)\left(\begin{array}{ccc}4&7&8\\21&10&23\end{array}\right)$ $\left(\begin{array}{ccc}2&5&12\\15&14&19\end{array}\right)\left(\begin{array}{ccc}3&22&17\\16&9&20\end{array}\right)$ 5D: $(2,2,2,2,2)$ has a solution: $\left(\begin{array}{cc}23&14\\24&19\end{array}\right)\left(\begin{array}{cc}28&3\\25&4\end{array}\right)\qquad\left(\begin{array}{cc}22&15\\7&16\end{array}\right)\left(\begin{array}{cc}27&32\\26&21\end{array}\right)$ $\left(\begin{array}{cc}12&5\\1&6\end{array}\right)\left(\begin{array}{cc}17&2\\18&11\end{array}\right)\qquad\left(\begin{array}{cc}29&8\\30&13\end{array}\right)\left(\begin{array}{cc}20&9\\31&10\end{array}\right)$ 6D: $(2,2,2,2,2,2)$ is impossible, also due to placing multiples of $3$. There are $11$ odd multiples and $10$ even multiples to place. So if the domain is split in half, at least one half must have $11$ or more multiples of $3$. Now for a 5D domain: either (a) there are none of a given parity, in which case there can be up to $16$ of the other, or (b) there is one of a given parity, which has $5$ neighbours, hence restricting the other parity to $11$, or (c) the total of both parities is restricted to $10$ or less (by detailed checking). Thus for each of the six ways in which the 6D domain can be split, there are zero or one of one parity and $10$ or more of the other. Losing at most one site at each of the remaining five splittings, we find at least $5$ in the intersection of the halves, a single site, which is a contradiction. General idea: For the remaining lattices, if one of the lengths is at least $3$ use checkerboard arrangements for all odd and even multiples of $3$ on opposite sides, as in the $(3,3,3)$ example above. For larger examples, these fill several layers on each side (total filling $2/3$ of the volume).
If multiples of $15$ are placed as close together as possible, the remaining $13/15$ of the volume is available for other multiples of $5$, amongst the multiples of $3$ or in the empty space in the middle. Similarly with $7$ and higher primes, which become increasingly easy to add. So I conjecture that the only impossible cases are of the form $2^d$. But which $d$? Rather than opposite faces we now concentrate on corners. If we place odd multiples of 3 at distances $\{o_i\}$ from a specified corner and even multiples of 3 at distances $\{e_i\}$ from that corner, such that all $\{o_i\}$ are odd and all $\{e_i\}$ are even, it is possible to ensure that none are adjacent if $d\not\in\{4,6,8,10,12\}$. For example, the solution for $d=14$ is $o_i=1,3,5,11,13$ with a total of (binomial coefficients) $14+364+2002+364+14=2758$ locations, and $e_i=8$ with $3003$ locations. In this case we need $2731$ (roughly $2^{14}/6$) odd and $2730$ even. The remaining cases $8,10,12$ can be filled with multiples of 3 using the following construction: Choose a $2\times 2$ block, then odd multiples of 3 are at odd locations (required by parity) a distance $\delta=0,1,\ldots (d/2)-2$ from this block, and even multiples of 3 are at even locations at distances $\delta=(d/2),(d/2)+1,\ldots,d-2$. In each case the number of such locations is $2\sum_{\delta=0}^{d/2-2} \left(\begin{array}{c}d-2\\\delta\end{array}\right)$ so for example for $d=8$ we have $2+12+30=44$ for $\delta=0,1,2$ which is slightly more than sufficient as we need to place $43$ odd multiples of $3$ and $42$ even multiples. Thus, assuming that multiples of higher primes can be incorporated as above, the only counterexamples are $2^4$ and $2^6$.
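The parity counts used in the $2^6$ argument above (11 odd and 10 even multiples of $3$ among $1,\ldots,64$) are easy to verify mechanically; a throwaway Python check:

```python
# Multiples of 3 in 1..64, split by parity, as used in the 2^6 argument.
mults = [n for n in range(1, 65) if n % 3 == 0]
odd = [n for n in mults if n % 2]
even = [n for n in mults if not n % 2]
print(len(odd), len(even))  # 11 10
```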
You can obtain the $G=KAK$ decomposition from a decomposition of the type $F=UR$. To avoid unnecessary complications, let's assume that our reductive group $G$ is a self-adjoint subgroup of $\operatorname{GL}(n,\mathbb{R})$. Then the map $g \mapsto g^{-t}$ is an involution of $G$, which is called the Cartan involution and is typically denoted by $\theta$. The first observation to make is that the fixed-point set $K = \{ g \in G \colon \theta(g)=g \}$ of $\theta$ is a maximal compact subgroup of $G$. For example, if $G=\operatorname{GL}(n,\mathbb{R})$, then $K=\operatorname{O}(n)$. Next we observe that $\theta$ induces an involution (also denoted by $\theta$) at the Lie algebra level: explicitly, this is the map $X \mapsto -X^t$. If $\mathfrak{p}$ denotes the $-1$-eigenspace of this latter involution, then one has the following result. The map $K \times \mathfrak{p} \to G$ given by $(k, X) \mapsto k e^X$ is a diffeomorphism. In particular, every $g \in G$ can be expressed as $k e^X$ for some $k \in K$ and $X \in \mathfrak{p}$. This decomposition is known as the Cartan decomposition; it is a generalization of the polar decomposition to $G$ (and is, I presume, the $F=UR$ decomposition stated in the OP). Indeed, if $G = \operatorname{GL}(n,\mathbb{R})$, then $\mathfrak{p}$ is just the set of symmetric matrices, and thus the set $\exp \mathfrak{p}$ consists of symmetric, positive-definite matrices. Now let $\mathfrak{a}$ denote a maximal abelian subspace of $\mathfrak{p}$. Then it can be shown that $A = \exp \mathfrak{a}$ is a closed abelian subgroup of $G$ with Lie algebra $\mathfrak{a}$. It can also be shown that $\mathfrak{a}$ is unique up to conjugacy via an element of $K$. That is to say, if $\mathfrak{a}'$ is another maximal abelian subspace of $\mathfrak{p}$, then there is a $k \in K$ such that $\text{Ad}(k) \mathfrak{a} = \mathfrak{a}'$.
With this information we can obtain the decomposition $G=KAK$: given $g \in G$, one observes that $p=gg^t \in \exp \mathfrak{p}$, say $p=e^X$. Thus there is a $k \in K$ such that $\text{Ad}(k)X \in \mathfrak{a}$, and then $e^{-\text{Ad}(k)X/2}kg \in K$ (because it is fixed by $\theta$), whence $g \in KAK$. This hopefully alleviates your 3-terms-vs-2-terms issue. I'm not aware of any relationship between the Iwasawa decomposition and the $KAK$ (polar) decomposition.
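For $G=\operatorname{GL}(n,\mathbb{R})$, with $K=\operatorname{O}(n)$ and $A$ the positive diagonal matrices, the $KAK$ decomposition is exactly the singular value decomposition, and the Cartan (polar) decomposition follows from it. A short NumPy sketch (my own illustration under those identifications, not part of the answer above):

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((3, 3))  # a generic element of GL(3, R)

# SVD: g = U diag(s) V^T with U, V^T in K = O(3) and diag(s) in A
U, s, Vt = np.linalg.svd(g)
assert np.allclose(U @ np.diag(s) @ Vt, g)
assert np.allclose(U.T @ U, np.eye(3))

# Cartan (polar) decomposition g = e^X k: here e^X = sqrt(g g^T) is
# symmetric positive-definite (X in p), and k = U V^T is orthogonal (in K)
p = U @ np.diag(s) @ U.T
k = U @ Vt
assert np.allclose(p @ k, g)
assert np.allclose(k @ k.T, np.eye(3))
```

Note how the single SVD yields both decompositions: the two orthogonal factors give $KAK$, while regrouping them gives the two-factor polar form.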
An example of methylation analysis with simulated datasets

Part 2: Potential DMPs from the methylation signal

Methylation analysis with Methyl-IT is illustrated on simulated datasets of methylated and unmethylated read counts with relatively high average methylation levels: 0.15 and 0.286 for the control and treatment groups, respectively. In this part, potential differentially methylated positions (DMPs) are estimated following different approaches.

1. Background

Only a signal-detection approach can detect real DMPs with high probability. Any statistical test not based on signal detection (e.g. Fisher's exact test) requires further analysis to distinguish the DMPs that occur naturally in the control group from those induced by a treatment. The analysis here is a continuation of Part 1.

2. Potential DMPs from the methylation signal using the empirical distribution

As suggested by the empirical density graphics (above), the critical values $H_{\alpha=0.05}$ and $TV_{d_{\alpha=0.05}}$ can be used as cutpoints to select potential DMPs. After setting dist.name = "ECDF" and tv.cut = 0.926 in the Methyl-IT function getPotentialDIMP, potential DMPs are estimated using the empirical cumulative distribution function (ECDF) and the critical value $TV_{d_{\alpha=0.05}}=0.926$.

DMP.ecdf <- getPotentialDIMP(LR = divs, div.col = 9L, tv.cut = 0.926,
                             tv.col = 7, alpha = 0.05, dist.name = "ECDF")

3. Potential DMPs detected with Fisher's exact test

In Methyl-IT, Fisher's exact test (FT) is implemented in the function FisherTest. In the current case, a pairwise group application of FT to each cytosine site is performed. The differences between the group means of read counts of methylated and unmethylated cytosines at each site are used for testing (pooling.stat = "mean"). Notice that only cytosine sites with critical values $TV_d > 0.926$ are tested (tv.cut = 0.926).
ft = FisherTest(LR = divs, tv.cut = 0.926, pAdjustMethod = "BH",
                pooling.stat = "mean", pvalCutOff = 0.05,
                num.cores = 4L, verbose = FALSE, saveAll = FALSE)

ft.tv <- getPotentialDIMP(LR = ft, div.col = 9L, dist.name = "None",
                          tv.cut = 0.926, tv.col = 7, alpha = 0.05)

There is not a one-to-one mapping between $TV$ and $HD$. However, at each cytosine site $i$, these information divergences satisfy the inequality $TV(p^{tt}_i,p^{ct}_i)\leq \sqrt{2}H_d(p^{tt}_i,p^{ct}_i)$ [1], where $H_d(p^{tt}_i,p^{ct}_i) = \sqrt{\frac{H(p^{tt}_i,p^{ct}_i)}{w}}$ is the Hellinger distance and $H(p^{tt}_i,p^{ct}_i)$ is given by Eq. 1 in Part 1. So, potential DMPs detected with FT can be constrained with the critical value $H^{TT}_{\alpha=0.05}\geq114.5$.

4. Potential DMPs detected with the Weibull 2-parameter model

Potential DMPs can be estimated using the critical values derived from the fitted Weibull 2-parameter models, which are obtained after the non-linear fit of the theoretical model on the genome-wide $HD$ values for each individual sample using the Methyl-IT function nonlinearFitDist [2]. As before, only cytosine sites with critical values $TV > 0.926$ are considered DMPs. Notice that it is always possible to use other values of $HD$ and $TV$ as critical values, but whatever value is chosen will affect the final accuracy of the classification of DMPs into two groups, DMPs from control and DMPs from treatment (see below). So it is important to make a good choice of the critical values.

nlms.wb <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L)
# Potential DMPs from 'Weibull2P' model
DMPs.wb <- getPotentialDIMP(LR = divs, nlms = nlms.wb, div.col = 9L,
                            tv.cut = 0.926, tv.col = 7, alpha = 0.05,
                            dist.name = "Weibull2P")
nlms.wb$T1

##       Estimate   Std.Error    t value  Pr(>|t|) Adj.R.Square
## shape 0.5413711  0.0003964435 1365.570 0        0.991666592250838
## scale 19.4097502 0.0155797315 1245.833 0
##       rho               R.Cross.val       DEV
## shape 0.991666258901194 0.996595712743823 34.7217494754823
## scale
##       AIC               BIC               COV.shape     COV.scale
## shape -221720.747067975 -221694.287733122 1.571674e-07  -1.165129e-06
## scale                                     -1.165129e-06 2.427280e-04
##       COV.mu n
## shape NA     50000
## scale NA     50000

5. Potential DMPs detected with the Gamma 2-parameter model

As in the case of the Weibull 2-parameter model, potential DMPs can be estimated using the critical values derived from the fitted Gamma 2-parameter models, and only cytosine sites with critical values $TV_d > 0.926$ are considered DMPs.

nlms.g2p <- nonlinearFitDist(divs, column = 9L, verbose = FALSE,
                             num.cores = 6L, dist.name = "Gamma2P")
# Potential DMPs from 'Gamma2P' model
DMPs.g2p <- getPotentialDIMP(LR = divs, nlms = nlms.g2p, div.col = 9L,
                             tv.cut = 0.926, tv.col = 7, alpha = 0.05,
                             dist.name = "Gamma2P")
nlms.g2p$T1

##       Estimate   Std.Error    t value  Pr(>|t|) Adj.R.Square
## shape 0.3866249  0.0001480347 2611.717 0        0.999998194156282
## scale 76.1580083 0.0642929555 1184.547 0
##       rho               R.Cross.val       DEV
## shape 0.999998194084045 0.998331895911125 0.00752417919133131
## scale
##       AIC              BIC               COV.alpha     COV.scale
## shape -265404.29138371 -265369.012270572 2.191429e-08  -8.581717e-06
## scale                                    -8.581717e-06 4.133584e-03
##       COV.mu df
## shape NA     49998
## scale NA     49998

Summary table:

data.frame(ft = unlist(lapply(ft, length)), ft.hd = unlist(lapply(ft.hd, length)),
           ecdf = unlist(lapply(DMPs.hd, length)), Weibull = unlist(lapply(DMPs.wb, length)),
           Gamma = unlist(lapply(DMPs.g2p, length)))

##    ft   ft.hd ecdf Weibull Gamma
## C1 1253 773   63   756     935
## C2 1221 776   62   755     925
## C3 1280 786   64   768     947
## T1 2504 1554  126  924     1346
## T2 2464 1532  124  942     1379
## T3 2408 1477  121  979     1354

6.
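The model-based critical value can also be reproduced outside R. A small Python/SciPy sketch (my own cross-check, not part of the Methyl-IT workflow), using the Gamma2P shape and scale estimates printed above for sample T1, recovers the 95% quantile of the fitted gamma distribution:

```python
from scipy.stats import gamma

# Gamma2P parameter estimates for sample T1, taken from the fit above
shape, scale = 0.3866249, 76.1580083

# 95% quantile of the fitted gamma: the model-based critical value
# H_alpha for alpha = 0.05 (compare with the value quoted in the text)
q95 = gamma.ppf(0.95, a=shape, scale=scale)
print(round(q95, 1))
```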
Density graphic with a new critical value

The graphics for the empirical (black) and Gamma (blue) density distributions of the Hellinger divergence of methylation levels for sample T1 are shown below. The 2-parameter gamma model is built using the parameters estimated in the non-linear fit of the $H$ values from sample T1. The critical value estimated from the 2-parameter gamma distribution, $H^{\Gamma}_{\alpha=0.05}=124$, is more 'conservative' than the critical value based on the empirical distribution, $H^{Emp}_{\alpha=0.05}=114.5$. That is, according to the empirical distribution, for a methylation change to be considered a signal its $H$ value must satisfy $H\geq114.5$, while according to the 2-parameter gamma model any cytosine carrying a signal must satisfy $H\geq124$.

suppressMessages(library(ggplot2))

# Some information for the graphic
dt <- data[data$sample == "T1", ]
coef <- nlms.g2p$T1$Estimate # Coefficients from the non-linear fit
dgamma2p <- function(x) dgamma(x, shape = coef[1], scale = coef[2])
qgamma2p <- function(x) qgamma(x, shape = coef[1], scale = coef[2])

# 95% quantiles
q95 <- qgamma2p(0.95) # Gamma model based quantile
emp.q95 <- quantile(divs$T1$hdiv, 0.95) # Empirical quantile

# Density plot with ggplot
ggplot(dt, aes(x = HD)) +
  geom_density(alpha = 0.05, bw = 0.2, position = "identity",
               na.rm = TRUE, size = 0.4) +
  xlim(c(0, 150)) +
  stat_function(fun = dgamma2p, colour = "blue") +
  xlab(expression(bolditalic("Hellinger divergence (HD)"))) +
  ylab(expression(bolditalic("Density"))) +
  ggtitle("Empirical and Gamma density distributions of Hellinger divergence (T1)") +
  geom_vline(xintercept = emp.q95, color = "black", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = emp.q95 - 20, y = 0.16, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Emp==114.5)',
           family = "serif", color = "black", parse = TRUE) +
  geom_vline(xintercept = q95, color = "blue", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = q95 + 9, y = 0.14, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Gamma==124)',
           family = "serif", color = "blue", parse = TRUE) +
  theme(axis.text.x = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(1, 0, 1, 0, unit = "pt")),
        axis.text.y = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(0, 0.1, 0, 0, unit = "mm")),
        axis.title.x = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        axis.title.y = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        legend.title = element_blank(),
        legend.margin = margin(c(0.3, 0.3, 0.3, 0.3), unit = 'mm'),
        legend.box.spacing = unit(0.5, "lines"),
        legend.text = element_text(face = "bold", size = 12, family = "serif"))

References

Steerneman, Ton, K. Behnen, G. Neuhaus, Julius R. Blum, Pramod K. Pathak, Wassily Hoeffding, J. Wolfowitz, et al. 1983. "On the total variation and Hellinger distance between signed measures; an application to product measures." Proceedings of the American Mathematical Society 88 (4): 684–84. doi:10.1090/S0002-9939-1983-0702299-0.

Sanchez, Robersy, and Sally A. Mackenzie. 2016. "Information Thermodynamics of Cytosine DNA Methylation." Edited by Barbara Bardoni. PLOS ONE 11 (3): e0150427. doi:10.1371/journal.pone.0150427.
THE FRAMEWORK: Let $X_1$ be an observation from a normal random variable with mean zero and variance $\sigma^2$, and let's call its PDF $f(x)$. I want to minimize the Kullback-Leibler information criterion between the PDF $g(x, \beta)$ of a zero-mean normal random variable with variance $\beta \sigma^2$ and $f(x)$. The minimization is with respect to $\beta$. The Kullback-Leibler information criterion is defined as $$I(f: g, \beta):= E [ \log(f(X_1)/g(X_1, \beta) )]$$ THE MOTIVATION: The motivation for doing this is that Akaike showed that the maximum likelihood estimator $\hat{\beta}$ of a model that assumes $g$ as the parametric distribution generating the observations is a natural estimator for the value $\beta^*$ that minimizes the Kullback-Leibler information criterion. THE PROBLEM: I wanted to do this simple calculation because I expected $\beta^* = 1$ (in this way the two distributions would be equal, so their "distance" is minimized). But performing the computations I obtain $$ E \left[ \frac{1}{2} \log \beta X_1^2 - \frac{1}{2} \log \sigma^2 - \frac{\beta X_1^2 + \sigma^2}{2 \sigma^2 \beta} \right] $$ and it seems to me that the $\beta^*$ that minimizes this tends to minus infinity. Where is my mistake?
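For reference, carrying the expectation through carefully gives (a derivation of my own, not the computation in the question) $I(f\colon g,\beta) = \tfrac{1}{2}\left(\log\beta + \tfrac{1}{\beta} - 1\right)$, which is indeed minimized at $\beta^* = 1$. A quick numerical check of this closed form against a Monte Carlo estimate of the expectation:

```python
import numpy as np

# KL between f = N(0, s2) and g = N(0, beta*s2), in closed form and by
# Monte Carlo, to confirm that the minimizer is beta = 1.
rng = np.random.default_rng(1)
s2 = 2.0
x = rng.normal(0.0, np.sqrt(s2), size=200_000)  # draws from f

betas = np.linspace(0.2, 5.0, 481)
closed = 0.5 * (np.log(betas) + 1.0 / betas - 1.0)

def kl_mc(b):
    # Monte Carlo estimate of E[log f(X) - log g(X)] with X ~ f
    log_f = -0.5 * np.log(2 * np.pi * s2) - x**2 / (2 * s2)
    log_g = -0.5 * np.log(2 * np.pi * b * s2) - x**2 / (2 * b * s2)
    return np.mean(log_f - log_g)

assert abs(betas[np.argmin(closed)] - 1.0) < 1e-6  # minimum at beta = 1
assert abs(kl_mc(1.0)) < 1e-3                      # KL vanishes at beta = 1
```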
I would like to show that for $t > 0$, $$\int_{1}^{\infty}\frac{3e^{yt}}{y^4}\text{ d}y$$ diverges. [This is equivalent to showing that the MGF for $Y$, with pdf $$f_{Y}(y) = \dfrac{3}{y^4}\text{, } y \in (1, \infty)$$ does not exist.] My work: \begin{equation*} \int_{1}^{\infty}e^{yt} \cdot \dfrac{3}{y^4}\text{ d}y = \lim_{u \to \infty}3\int_{1}^{u}\dfrac{e^{yt}}{y^4}\text{ d}y\text{.} \end{equation*} Thus, it suffices to show that $\int_{1}^{\infty}\dfrac{e^{yt}}{y^4}\text{ d}y = \infty$. Recall that since $t > 0$ \begin{equation*} \lim_{y \to \infty}\dfrac{e^{yt}}{y^4} = \infty\text{.} \end{equation*} This means, by definition, that $\forall \alpha \in \mathbb{R}$, $\exists K > 0$ such that $\forall y > K$, \begin{equation*} \dfrac{e^{yt}}{y^4} > \alpha\text{.} \end{equation*} Thus, $\forall y > K$, obviously $\dfrac{e^{yt}}{y^4} > 1$. We may assume $K > 1$. Therefore, $\forall u \geq K$, \begin{align*} \int_{1}^{u}\dfrac{e^{yt}}{y^4}\text{ d}y &= \int_{1}^{K}\dfrac{e^{yt}}{y^4}\text{ d}y + \int_{K}^{u}\dfrac{e^{yt}}{y^4}\text{ d}y \\ &\geq \int_{1}^{K}\dfrac{e^{yt}}{y^4}\text{ d}y + \int_{K}^{u}1\text{ d}y \\ &\geq 0 + u - K \\ &= u - K\text{.} \end{align*} As $u \to \infty$, obviously $u - K \to \infty$. Hence, $\displaystyle\int_{1}^{u}\dfrac{e^{yt}}{y^4}\text{ d}y \to \infty$ as well, and the integral diverges. I'm just nervous about the sudden $K > 1$ assumption. Is this valid? Obviously $K$ must be quite large.
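The divergence can also be seen numerically (a sketch of my own, with the arbitrary choice $t=1$): the truncated integrals $\int_1^u e^{yt}/y^4\,\mathrm{d}y$ grow without bound as $u$ increases, consistent with the argument above.

```python
import numpy as np
from scipy.integrate import quad

t = 1.0
integrand = lambda y: np.exp(y * t) / y**4

# Truncated integrals for increasing upper limits u: they keep growing,
# consistent with divergence of the improper integral.
vals = [quad(integrand, 1, u)[0] for u in (10, 20, 30)]
assert vals[0] < vals[1] < vals[2]
assert vals[2] > 1e5
```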
Unfortunately it is not necessary to invoke group selection to answer this question. This is one of the reasons that Dawkins likes this discussion so much - he does not believe in group selection, and so the discussion in SG does not invoke group selection. ESSs are described in the book as the product of direct competition or interaction between genes....

It basically comes down to a question of the unit of selection. From the common viewpoint, in which natural selection is seen as acting on individual organisms, it's almost a tautology that the organisms favored by selection are those that maximize their own reproductive fitness. Thus, the possibility that some organisms might engage in acts that help ...

In an infinite, well-mixed population with single pairwise encounters, Grudger is indeed not an ESS. In fact, as you correctly note, in such a model the Grudger and Sucker strategies are indistinguishable, as the probability of anyone encountering the same individual twice is zero. To make it possible for the Grudger strategy to survive against invasion by ...

The field most closely associated with game-theoretic models in biology is evolutionary game theory. If modeling is required, then the typical paradigm is agent-based modeling, and a good introductory book is: Yoav Shoham and Kevin Leyton-Brown [2009], "Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations", Cambridge University ...

The particular language a biologist uses depends on the trade-offs between speed and ease of programming. Many models are written in C or Fortran if speed is paramount. On the other hand, people will write models in higher-level languages if speed is less important. These would be Python, R, MATLAB, etc... In my models, which are written mostly in Python, ...

Your question is quite broad and asks for explanations for various behaviours which can lead to self-sacrifice. Religious reasons: The genetic influence here may be a predisposition to let others influence you.
This is what gives rise to culture in the first place, in other words: the predisposition to at some point maybe sacrifice yourself because you are ...

There is an effect called "indirect reciprocity" where individuals just give to everyone they meet without any direct requirement of reciprocity. This sort of benefit to others is common - hospitality to strangers, general politeness, good customer service all fall along these lines. You hope they will come back and benefit you again, but maybe they will ...

First of all, there is a very heated debate about this in the field of social evolution at present, and you aren't likely to get a conclusive answer. One theorist may give you one answer, but another will vehemently disagree. I'll start by logically answering your questions in reverse order! Question 2: Can you please provide an intuitive explanation of why ...

I'll try to beat @Remi.b to the suggestion that you review Understanding Evolution as a general overview of evolutionary topics. For a quick answer: no. Sometimes people confuse the great importance of natural selection in evolution with an equivalency between natural selection and evolution. However, there are many, many contributors to evolution, many of ...

What you are describing usually falls under the category of computational biology or just mathematical biology. Unfortunately, the biggest part of this field is bioinformatics, or the application of statistical and/or dynamic-programming techniques to sequence data. You exclude this in your question, and I would agree with you that it is a "boring" topic ...

It sounds like what you may be referring to is the Fermi paradox: The Fermi paradox — or Fermi's paradox — is the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilizations, such as in the Drake equation, and the lack of evidence for such civilizations. The basic points of the argument, made by ...
Joan Strassmann's work is probably the route to go for this. The short of your answer is that several things mediate who ends up where in the slug: cheaters are limited from exploiting other clones by high relatedness, kin discrimination, pleiotropy, noble resistance, and lottery-like role assignment. Here's the most relevant paper: Strassmann, J. E....

Nothing is at a genome-wide local equilibrium. Graham Bell wrote fairly extensively on this (IIRC). Some loci
- will be at what are likely global optimums (e.g. cytochrome oxidase)
- will be at local but not global optimums (e.g. low-fitness malaria resistance vs. high-fitness malaria resistance: for the extremely cool story check out this page)
- will not be ...

The $sR$ that you're looking at is the average relatedness of the next generation. This assumes that the new immigrants into the population are completely unrelated. So if the population is completely viscous ($s=1$), the average relatedness of the next generation equals that of the current generation. On the other hand, if the population is not viscous ($...

Rephrasing the question: Does evolution only give rise to traits that confer fitness? The phrasing is actually a little nonsensical, but it is easy to understand what you mean. The reason is that "fitness" is not a characteristic of individuals but a measure (a variable if you wish) of a characteristic. Imagine you are talking about Shaquille O'Neal and ...

The cat is probably just having fun. So, we could ask "What is the evolutionary benefit of having fun?" Through play, animals learn (Fagen 1974, Spinka et al. 2001, Pellegrini 2007, Hirsh-Pasek and Golinkoff, 2008). This is particularly true for juveniles. Adult play is not uncommon though and might follow from the same principles (Bekoff and Byers, 1998). ...
Tracing it backwards, this was the earliest reference I found via Google Scholar; it's from 1988 and uses the $rBC$ notation in the format we are used to. Hamilton's rule states that for a social action to be favored under natural selection, $rb - c > 0$, where $c$ is the cost to the actor in terms of the effect on (usually a reduction in) his expected ...

Have you looked into "Fundamentals of Molecular Evolution" by Dan Graur & Li? Another suggestion in the line of population genetics and different evolutionary theories would be Evolutionary Genetics: Concepts and Case Studies (multi-author book, editors Fox & Wolf).

Actually the derivation is pretty straightforward. It's easier to use the fact that $Cov(X,Y) = E(XY) - E(X)E(Y)$ to derive this result. Suppose $x_{j} = \sum_{i} s_{ij}$. \begin{align*}Cov (x_j, q_{j}) &= E (x_{j}q_{j}) - E (x_{j}) E (q_{j}) \\&= \frac{1}{n}\sum x_{j} q_{j} - \frac{1}{n}\sum x_{j} q \\...

Are kin selection and group selection the same thing? Yes and no. Yes: These days people tend to use the "direct fitness approach" (Taylor and Frank JTB 1996). It turns out that this is based on EXACTLY the same equation as is contextual analysis, which is the currently favored approach for measuring multilevel selection in natural populations (...

The prototypical example of this is t, whose existence was predicted by Robert Trivers and featured prominently in the Selfish Gene. The current dominant point of view in evolutionary biology is that genes act in their own interest and even the 'self' is just a manifestation of the gene's reproductive properties. t is a locus in some male mice. The t ...

He defines lineage selection as selection for traits which increase the fitness of a group of plasmids, rather than an individual plasmid within a cell or a particular cell containing plasmids. He says that the unit of selection is "plasmid-host clades": in other words, the unit of selection is the group of closely related plasmids in separate cells. It is ...
The writings of Samir Okasha (philosopher of biology/science) could be a good starting point. In his book Evolution and the Levels of Selection, he explicitly uses the Price equation to discuss selection at multiple levels (e.g. chapter 2.3: Price's equation in a hierarchical setting), and also derives a multi-level version of the Price equation: $$\bar{w}\...

It is just worded a little weirdly, in my opinion. The key line in the paper is: 'Fitness components are also defined for all individuals; for example, $C$ is defined, even for a non-altruist, as the cost it would incur if it were altruistic.' Essentially, if it doesn't matter what individual you are - you always pay the same cost - then $C$ is a constant. This is ...

I don't know much about this particular species of snake, but here is some information that may help. What do we mean by right-handed snakes? I doubt that in this context a snake could be ambidextrous. What they call handedness in these snakes (pareatids) concerns the density and size of teeth on each side of the jaw (pictures from Hoso et al. (2010)). This ...

Selfish behaviour is not necessarily preferred. It depends on the game (game theory). For example, in the absence of relatedness and reciprocity, we would expect:
- all of the population to defect in the Prisoner's dilemma
- some to defect and some to cooperate in the snowdrift game
- either all individuals to defect or all individuals to cooperate (depending on initial conditions, because it is an ...

I think the jury is out on this one; there are examples of evidence both for and against reachability of local equilibrium, and even these examples can be interpreted in many ways. I present three pieces of evidence and some interpretations. In general, my feeling after reading about this is that the assumption of equilibrium is ingrained in mathematical ...

What I think the question is asking is "Why wouldn't an organism be more efficient from selection than the environment demands?"
Please let me know if I'm hitting the mark here. A scenario suggested by the question is this: if there is selection pressure on, say, an animal to resist a disease, and it then evolves two resistance systems to the disease. This could be ...

Is it interesting? Perhaps, but "complexity" is a vague notion. If you want to simplify and just say "variation", then sure, sex increases variation. But so does that random mutation you brought in. Really, all you need for increased variability is some difference between generations, and genetic drift will take care of the rest. Mutation is enough, which ...
Deriving the heat diffusion equation was one of the most interesting things I learned about in my undergraduate degree. The fact that you can sit in your chair and discover something about how the world works with nothing more than your brain, a pen, and some paper astonishes me. The heat diffusion equation describes how heat diffuses through objects in time and space. In this post I will be deriving the 1-D heat diffusion equation, a partial differential equation.

The first thing we must consider is the system we want to describe: a material in which heat diffuses. We want to be able to describe the transient nature of heat diffusion as well as its direction. Consider a small 1-D control volume into which heat enters on one side and leaves on the other. Some heat will accumulate within the control volume over time. We can therefore write a balance equation for this system: for an infinitesimally small period of time $\delta t$, for a material with no internal heat generation or consumption, we can say that

$$\text{heat in} - \text{heat out} = \text{heat accumulated}$$

(I will cover internal heat generation and consumption in another post.) So what is the amount of heat in? And what is the amount of heat out? Recall Fourier's law for heat flux:

$$q = -k \frac{\partial T}{\partial x}$$

where $q$ is the heat flux ($W/m^2$), $k$ is the thermal conductivity ($W/mK$) and $\frac{\partial T}{\partial x}$ is the temperature gradient in the $x$ direction at the point of the heat flux in question. Why the partial derivatives, I hear you ask? Because temperature is not solely a function of $x$; it is also a function of time, since we want to describe the transient nature of heat diffusion. Note the units of heat flux ($q$): these can be converted into an amount of heat in/out by multiplying by time ($\delta t$) and area ($A$).
The resulting quantity will simply be in joules:

$$Q = q \, A \, \delta t$$

Whether this is heat in or heat out will depend on $\frac{\partial T}{\partial x}$, which will be either positive or negative, indicating the direction of heat transfer (remember, heat flows down a temperature gradient, from hot to cold!). Now let us consider the temperature gradients at the beginning and end of our control volume. The temperature gradient in $x$ at the beginning of our control volume is simply $\frac{\partial T}{\partial x}$. The temperature gradient at the end of the control volume is the temperature gradient at the beginning plus some change in the temperature gradient with $x$. Let's unpack that: what we are saying is that the temperature gradient itself changes across the control volume (the change in temperature with $x$ is itself changing with $x$). The rate of change of the temperature gradient is the second derivative of temperature:

$$\frac{\partial}{\partial x}\left(\frac{\partial T}{\partial x}\right) = \frac{\partial^2 T}{\partial x^2}$$

Note that the units of this second derivative are $K/m^2$. Therefore the temperature gradient at the end is the temperature gradient at the beginning plus the change in the temperature gradient multiplied by some small change in $x$ (the size of our control volume):

$$\left.\frac{\partial T}{\partial x}\right|_{x+\delta x} = \frac{\partial T}{\partial x} + \frac{\partial^2 T}{\partial x^2}\,\delta x$$

We can now plug the temperature gradients at the inlet and the outlet into Fourier's law to calculate the heat flux in and out, and subsequently multiply by area and time to find the total heat in and out, and hence the accumulation. So what is the accumulated heat term? The amount of heat a material accumulates is proportional to the change in temperature it undergoes and its heat capacity. You might be familiar with this in the form of the equation $q = m \, c \, \Delta T$, where $m$ is mass and $c$ is heat capacity. This can be rewritten as $q = \rho V c \, \delta T$, and noting that the volume of our control volume is $V = A \, \delta x$, we see that:

$$\text{heat accumulated} = \rho \, c \, A \, \delta x \, \delta T$$

where $\delta T$ is the change in temperature of the control volume over the time period $\delta t$. This equation describes the amount of heat accumulated within a material given a specific change in temperature.
Substituting this into the original energy balance, we obtain our full balance: $$-kA\,\frac{\partial T}{\partial x}\,\delta t + kA\left(\frac{\partial T}{\partial x} + \frac{\partial^2 T}{\partial x^2}\,\delta x\right)\delta t = \rho\,c\,A\,\delta x\,\Delta T.$$ Note that area is a factor of every term in this equation and drops out. Cancelling and collecting terms: $$k\,\frac{\partial^2 T}{\partial x^2} = \rho\,c\,\frac{\Delta T}{\delta t}.$$ Taking the limit as $\delta t$ goes to zero, we see a derivative appear on the right-hand side and thus we get: $$k\,\frac{\partial^2 T}{\partial x^2} = \rho\,c\,\frac{\partial T}{\partial t}.$$ The constant terms on the left-hand side can be grouped into a factor called the thermal diffusivity, $\alpha = \frac{k}{\rho c}$, which has units $m^2/s$, and so the final 1-D heat diffusion equation with no heat generation is: $$\frac{\partial T}{\partial t} = \alpha\,\frac{\partial^2 T}{\partial x^2}.$$ Okay, that wasn't too difficult, was it? We applied a balance equation, also known as conservation of energy, to our system and we ended up with a second-order partial differential equation describing how temperature changes in space and time. How do we solve this equation, though? By applying boundary conditions, of course! Notice that this PDE is 2nd order in $x$ and 1st order in time. Therefore we need 2 spatial boundary conditions and 1 temporal boundary condition (also known as an initial condition) to fully find a solution to this equation. The types of boundary conditions we apply to this equation will affect the types of solutions that we find. We will explore these boundary conditions in the next posts and see how changing them also changes the type of solution we find.
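As a quick numerical illustration of the equation we just derived (not part of the derivation itself), here is a minimal explicit finite-difference sketch. The rod length, diffusivity, and boundary temperatures are arbitrary choices of mine for the example:

```python
# Explicit (FTCS) finite-difference sketch of dT/dt = alpha * d2T/dx2.
# All parameters are illustrative; stability requires alpha*dt/dx**2 <= 0.5.
alpha = 1e-4               # thermal diffusivity, m^2/s (illustrative value)
nx, dx = 51, 0.01          # 51 grid points over a 0.5 m rod
dt = 0.4 * dx**2 / alpha   # time step chosen inside the stability limit

T = [20.0] * nx            # initial condition: uniform 20 C
T[0], T[-1] = 100.0, 0.0   # fixed-temperature (Dirichlet) boundary conditions

for _ in range(2000):
    Tn = T[:]              # copy the previous time level
    for i in range(1, nx - 1):
        # discrete form of dT/dt = alpha * d2T/dx2
        T[i] = Tn[i] + alpha * dt / dx**2 * (Tn[i+1] - 2*Tn[i] + Tn[i-1])

# Over time the profile relaxes toward the linear steady state between 100 and 0.
print(T[nx // 2])
```

Changing the boundary values or the initial profile is enough to explore the different solution types discussed above.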
@Secret et al how's this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a... @NeuroFuzzy awesome what have you done with it? how long have you been using it? it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game As far as I recall, being a long-term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and are subject to gravity @Secret I mean more along the lines of the fluid dynamics in that kind of game @Secret Like how in the dan-ball one air pressure looks continuous (I assume) @Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A. I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
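The two-state rule sketched in that last message is easy to prototype. Here is a toy cellular-automaton version; the grid size, age threshold, seeding density, and 4-neighbor wraparound neighborhood are all my own choices for the sketch, not anything from OE-Cake or powder game:

```python
import random

# Toy cellular automaton for the rule above: type-A particles (1) turn into
# type B (2) after AGE_LIMIT steps; a B particle turns back into A if any of
# its 4 neighbors (with wraparound) is A. Empty cells are 0.
AGE_LIMIT = 10
N = 32

grid = [[1 if random.random() < 0.2 else 0 for _ in range(N)] for _ in range(N)]
age = [[0] * N for _ in range(N)]

def step(grid, age):
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            if grid[i][j] == 1:                      # type A: age, maybe flip to B
                age[i][j] += 1
                if age[i][j] >= AGE_LIMIT:
                    new[i][j], age[i][j] = 2, 0
            elif grid[i][j] == 2:                    # type B: revert next to an A
                neighbors = [grid[(i+di) % N][(j+dj) % N]
                             for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                if 1 in neighbors:
                    new[i][j], age[i][j] = 1, 0
    return new

for _ in range(50):
    grid = step(grid, age)
```

Plotting `grid` every few steps is the quickest way to see whether anything reaction-diffusion-like emerges from such a rule.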
(Those that don't understand cricket, please ignore this context, I will get to the physics...) England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ... Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like? As some/many/most people are aware, we are in the midst of a... Hi, I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex. I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl... @ACuriousMind Guten Tag! :-) Dark Sun also has a lot of frightening characters. For example, Borys, the 30th-level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-) What is the maximum distance for eavesdropping on pure sound waves? And what kind of device do I need to use for eavesdropping? A microphone with a parabolic reflector or laser listening devices are available on the market, but are there any other devices on the planet which would allow ... and endless whiteboards get doodled with boxes and grids, circled red markers and some scribbles The documentary then showed a bird's-eye view of the farmlands (pardon my sketchy drawing skills...) Most of the farmland is tiled into grids Here there are two distinct columns and rows of tiled farmland to the left and top of the main grid.
They are the index arrays and they notate the range of indices of the tensor array In some tiles there's a swirl of dirt mound, representing components with nonzero curl, and in others grass grew Two blue steel bars were visible laying across the grid, holding up a triangular pool of water Next, in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway; occasionally, mishaps can happen, such as too much force being applied and the sign snapping in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it At the end of the documentary, near a university lodge area, I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends Reality check: I have been to London, but not Belgium Idea extraction: The tensor array mentioned in the dream is a multi-index object where each component can be a tensor of different order Presumably one can formulate it (using an example of a 4th-order tensor) as follows: $$A^{\alpha\beta}{}_{\gamma\delta\epsilon}$$ and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array, while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of that range.
For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$ However, even if the indices are allowed to take only certain values, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers @DavidZ in the recent meta post about the homework policy there is the following statement: > We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems. This is an interesting statement. I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully about whether there is another more specific reason for closure, e.g. "unclear what you're asking". I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea. I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments). @DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic. @peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive. @DanielSank No, the site mods could have caged him only on PSE, and only for a year. That he got. After that his cage was extended to a 10-year network-wide one; that couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds. @EmilioPisanty Yes, but I would have liked to talk to him here. @DanielSank I am only curious what he did. Maybe he attacked the whole network?
Or did he take a site-level conflict to the IRL world? As far as I know, network-wide bans happen for such things. @peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck. Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful. @EmilioPisanty Although it is already not about Ron Maimon, I can't see the meaning of "campaign" defined well enough here. And yes, it is a bit of a source of fear for me that maybe my behavior could also be judged as campaigning for my own caging.
First of all, there's no reason to restrict the definition of finite additivity to finite spaces. It's just that if a finitely additive measure is defined on a finite space, then, trivially, it's countably additive. We can extend finite additivity as you've defined it by induction. Your axiom implies that, for any $n \in \mathbb{N}$, if $A_1,\ldots,A_n$ is a sequence of pairwise disjoint events, then $$\sum_{i=1}^n P(A_i) = P\left(\bigcup_{i=1}^n A_i\right).$$ But finite additivity does not imply countable additivity. Indeed, consider the "fair integer lottery" (the uniform distribution over the integers) that was of interest to de Finetti. This is a finitely additive probability measure defined on all subsets of $\mathbb{Z}$ such that $P(\{ z\})=0$ for all $z \in \mathbb{Z}$. Countable additivity fails because $$P(\mathbb{Z}) = 1 \neq 0 = \sum_z P(\{z \}).$$ Note that this is analogous to Lebesgue measure on $[0,1]$, for which we have countable additivity but not uncountable additivity. Now, it's a little bit difficult to show rigorously that the measure described above actually exists. A relatively easy way to do it uses an ultrafilter. The existence of the required ultrafilter is usually established by using the axiom of choice or its equivalent, Zorn's lemma. (I wrote a little bit about this here.) In fact, any ultrafilter $\mathcal{U}$ defines a finitely additive measure by setting $P(U)=1$ if $U \in \mathcal{U}$ and $P(U) = 0$ if $U \notin \mathcal{U}$. See this for example. On the other hand, it has been shown that any purely finitely additive measure is non-constructible: the existence of such a measure cannot be proved with the ZF axioms of set theory alone. See this paper.
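To spell out why the 0-1 set function of an ultrafilter is finitely additive (a standard argument, written out here for completeness): for disjoint $A$ and $B$, an ultrafilter contains $A \cup B$ exactly when it contains one of them, and it cannot contain both, since their intersection is empty. Hence

```latex
% P is the 0-1 set function of an ultrafilter U; suppose A \cap B = \emptyset.
\begin{aligned}
A \cup B \in \mathcal{U}
  &\implies \text{exactly one of } A, B \text{ lies in } \mathcal{U}
  \implies P(A \cup B) = 1 = P(A) + P(B),\\
A \cup B \notin \mathcal{U}
  &\implies A \notin \mathcal{U} \text{ and } B \notin \mathcal{U}
  \implies P(A \cup B) = 0 = P(A) + P(B).
\end{aligned}
```

Taking $\mathcal{U}$ to be a non-principal ultrafilter is what makes the resulting measure purely finitely additive.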
Let us consider the surface $\mathbb{A}^{2}/\mu_{6}$ where the action is given by $$\begin{array}{ccc}\mu_{6}\times\mathbb{A}^{2} & \longrightarrow & \mathbb{A}^{2}\\(\epsilon,(x_{1},x_{2})) & \longmapsto & (\epsilon^{2}x_{1},\epsilon^{4}x_{2})\end{array}$$ The invariant polynomials with respect to this action are $x_{1}^{3},x_{2}^{3},x_{1}x_{2}$. Therefore we can interpret the surface $\mathbb{A}^{2}/\mu_{6}$ as $$S = \{f(x,y,z) = z^{3}-xy = 0\}\subset\mathbb{A}^{3}.$$ We have $$\mathrm{Ext}^{1}(\Omega_{S},\mathcal{O}_{S})\cong k[x,y,z]/(f,\tfrac{\partial f}{\partial x},\tfrac{\partial f}{\partial y},\tfrac{\partial f}{\partial z}) = k[x,y,z]/(z^{3}-xy,-y,-x,3z^{2})\cong k[z]/(z^{2}).$$ Therefore $\mathrm{Ext}^{1}(\Omega_{S},\mathcal{O}_{S})\neq 0$ and $S$ admits non-trivial infinitesimal first-order deformations. Is any deformation in $\mathrm{Ext}^{1}(\Omega_{S},\mathcal{O}_{S})$ locally trivial? This is an $A_2$ surface singularity; in fact it is isomorphic to the quotient $\mathbb{A}^{2}/\mu_{3}$ where the action is given by $$ \begin{array}{ccc} \mu_{3}\times\mathbb{A}^{2} & \longrightarrow & \mathbb{A}^{2}\\ (\epsilon,(x_{1},x_{2})) & \longmapsto & (\epsilon x_{1},\epsilon^{2}x_{2}).
\end{array} $$ The explicit expression of its versal deformation is given by $$a+xy-3bz+z^3=0,$$ where $(a, \, b) \in \mathbb{C}^2$. Taking derivatives, one easily checks that if $a= \pm 2 b \sqrt{b}$ and $(a, b) \neq (0,0)$, the corresponding germ has an ordinary double point at $P_{\pm}=(0, \, 0, \, \pm \sqrt{b})$ and no further singularities, whereas if $a^2-4b^3 \neq 0$ the germ is smooth. So we have an $A_2$-singularity if and only if $(a, \,b)=(0, \, 0)$, which corresponds to the trivial deformation. In other words, there are no equisingular deformations apart from the trivial one, hence no locally trivial deformations.
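For the reader who wants the derivative check spelled out (a routine computation on the versal family $f = a + xy - 3bz + z^3$, written out here for convenience):

```latex
% Singular points of the fibre f = a + xy - 3bz + z^3 = 0:
\partial_x f = y, \qquad \partial_y f = x, \qquad \partial_z f = -3b + 3z^2,
```

so a singular point forces $x = y = 0$ and $z^2 = b$; substituting into $f = 0$ gives $0 = a - 3bz + z^3 = a - 3bz + bz = a - 2bz$, i.e. $a = \pm 2b\sqrt{b}$, which is exactly the locus $a^2 = 4b^3$.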
Difference between revisions of "Group cohomology of dihedral group:D8" (→Over the integers) Revision as of 04:50, 15 January 2013 Contents This article gives specific information, namely, group cohomology, about a particular group, namely: dihedral group:D8. View group cohomology of particular groups | View other specific information about dihedral group:D8 Homology groups for trivial group action FACTS TO CHECK AGAINST (homology group for trivial group action): First homology group: first homology group for trivial group action equals tensor product with abelianization Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization | Hopf's formula for Schur multiplier General: universal coefficients theorem for group homology | homology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group homology Over the integers The homology groups over the integers are given as follows: $$H_q(D_8;\mathbb{Z}) = \left\lbrace \begin{array}{rl} \mathbb{Z}, & q = 0 \\ (\mathbb{Z}/2\mathbb{Z})^{(q + 3)/2}, & q \equiv 1 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{(q + 1)/2} \oplus \mathbb{Z}/4\mathbb{Z}, & q \equiv 3 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{q/2}, & q \mbox{ even}, q > 0 \\ \end{array}\right.$$ The first few homology groups are given below: Over an abelian group The first few homology groups with coefficients in an abelian group are given below:
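Evaluating the closed-form formula above at small $q$ gives the first few integral homology groups (my own evaluation of the displayed formula; the wiki's table did not survive extraction):

```latex
H_0 \cong \mathbb{Z}, \quad
H_1 \cong (\mathbb{Z}/2\mathbb{Z})^{2}, \quad
H_2 \cong \mathbb{Z}/2\mathbb{Z}, \quad
H_3 \cong (\mathbb{Z}/2\mathbb{Z})^{2} \oplus \mathbb{Z}/4\mathbb{Z}, \quad
H_4 \cong (\mathbb{Z}/2\mathbb{Z})^{2}, \quad
H_5 \cong (\mathbb{Z}/2\mathbb{Z})^{4}.
```

Note that $H_1 \cong (\mathbb{Z}/2\mathbb{Z})^2$ is the abelianization and $H_2 \cong \mathbb{Z}/2\mathbb{Z}$ is the Schur multiplier, consistent with the facts listed above.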
Cohomology groups for trivial group action FACTS TO CHECK AGAINST (cohomology group for trivial group action): First cohomology group: first cohomology group for trivial group action is naturally isomorphic to group of homomorphisms Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization In general: dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficients to homology with coefficients in the integers | Cohomology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group cohomology Over the integers The first few cohomology groups are given below: Over an abelian group The first few cohomology groups with coefficients in an abelian group are given below: Cohomology ring with coefficients in integers PLACEHOLDER FOR INFORMATION TO BE FILLED IN Second cohomology groups and extensions Schur multiplier This has implications for the projective representation theory of dihedral group:D8. Schur covering groups The three possible Schur covering groups for dihedral group:D8 are: dihedral group:D16, semidihedral group:SD16, and generalized quaternion group:Q16. For more, see second cohomology group for trivial group action of D8 on Z2, where these correspond precisely to the stem extensions.
Second cohomology groups for trivial group action

| Group acted upon | Order | Second part of GAP ID | Second cohomology group for trivial group action (as an abstract group) | Order of second cohomology group | Extensions | Number of extensions up to pseudo-congruence (i.e., number of orbits under automorphism group actions) | Cohomology information |
| --- | --- | --- | --- | --- | --- | --- | --- |
| cyclic group:Z2 | 2 | 1 | elementary abelian group:E8 | 8 | direct product of D8 and Z2, SmallGroup(16,3), nontrivial semidirect product of Z4 and Z4, dihedral group:D16, semidihedral group:SD16, generalized quaternion group:Q16 | 6 | second cohomology group for trivial group action of D8 on Z2 |
| cyclic group:Z4 | 4 | 1 | elementary abelian group:E8 | 8 | direct product of D8 and Z4, nontrivial semidirect product of Z4 and Z8, SmallGroup(32,5), central product of D16 and Z4, SmallGroup(32,15), wreath product of Z4 and Z2 | 6 | second cohomology group for trivial group action of D8 on Z4 |
| Klein four-group | 4 | 2 | elementary abelian group:E64 | 64 | | 11 | second cohomology group for trivial group action of D8 on V4 |

Baer invariants

| Subvariety of the variety of groups | General name of Baer invariant | Value of Baer invariant for this group |
| --- | --- | --- |
| abelian groups | Schur multiplier | cyclic group:Z2 |
| groups of nilpotency class at most two | 2-nilpotent multiplier | cyclic group:Z2 |
| groups of nilpotency class at most three | 3-nilpotent multiplier | trivial group |
| any variety of groups containing all groups of nilpotency class at most three | -- | trivial group |
I have some true-or-false questions and would like your help to check them. A) In a ring $R$, if $x^2=x$ for all $x\in R$, then $R$ is commutative. For (A), looking at $(x+y)^2$: we have $x+y=(x+y)^2=x^2+xy+yx+y^2$, and then $yx+xy=0$; and from $2x=(2x)^2=4x$, therefore $2x=0$. How does this play a role here? B) In an integral domain, if $\exists m\in \mathbb{N}$ s.t. $mx=0$ $\forall x\in R$, then it's a finite integral domain. I think this one is correct by the definition of characteristic for a ring/field. C) A commutative ring with unity that has at least two elements, in which cancellation holds, is an integral domain. I think it's correct: first, it's a commutative ring with unity; second, it has at least two elements, and there's a property saying that an integral domain must have at least two elements; and third, cancellation holding implies it has no zero divisors. So the three above fit the profile of an integral domain. D) If $f$ is a homomorphism from group $G$ to group $H$, then $f(G)$ is a normal subgroup of $H$. I'm not quite sure about this one; I kind of remember that a normal subgroup of $G$ maps to a normal subgroup under a homomorphism, but I don't know if $f(G)$ will be normal in $H$. Are these answers and arguments correct? Thanks for your help.
If the six-pointed star is regular, then the answer is $r^2(\pi-\sqrt{3})$. If it is not, then the answer can be larger, up to a limit of $r^2\big(\pi-\frac{3}{4}\sqrt{3}\big)$. Proof The required area is the area of the circle ($\pi r^2$) minus the area of the star. The area of the star is the area of a large equilateral triangle ($A$) plus the area of three small ones. Each small one has a side-length $\frac{1}{3}$ of the large triangle's and therefore an area $\frac{1}{9}$ of its area. So the area we want is $\pi r^2 - A(1+\frac{3}{9}) = \pi r^2 - \frac{4A}{3}$. $A$ is equal to 6 times the area of the right-angled triangle shown. Stand it on its short side. Its area is $B=\frac{\text{base} \times \text{height}}{2} = \frac{r^2\sin{30^\circ}\cos{30^\circ}}{2}=r^2 \cdot \frac{1}{2}\cdot\frac{\sqrt{3}}{2}\cdot\frac{1}{2} = \frac{\sqrt{3}}{8}r^2$. So $A=6B=\frac{3\sqrt{3}}{4}r^2$. So the required area is $\pi r^2 - \big(\frac{3\sqrt{3}}{4}r^2\big)\frac{4}{3} = r^2(\pi-\sqrt{3})$. However, all you say is that the triangles overlap to make a star; you do not define a star. If we allow the overlap such that the points on the circle are unequally spaced, then we might get a shape like this: At the limit, the area of the star is the area of a large triangle, $A$, giving required area $\pi r^2 - A = r^2\big(\pi-\frac{3}{4}\sqrt{3}\big)$.
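The regular-hexagram case can be sanity-checked numerically with the shoelace formula. The outline of a regular six-pointed star inscribed in a unit circle has 12 vertices alternating between the outer radius $1$ and the inner radius $1/\sqrt{3}$ (the inner radius follows from intersecting the two triangles' edges):

```python
import math

# Numerical check of the r^2*(pi - sqrt(3)) result for r = 1: build the
# 12-vertex outline of a regular hexagram and apply the shoelace formula.
outer, inner = 1.0, 1.0 / math.sqrt(3.0)
verts = []
for k in range(12):
    r = outer if k % 2 == 0 else inner   # alternate star tips and inner corners
    theta = math.pi / 2 + k * math.pi / 6  # 30-degree steps around the circle
    verts.append((r * math.cos(theta), r * math.sin(theta)))

# Shoelace formula for the area of the star polygon.
star = 0.5 * abs(sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1])))

print(star)             # ~ 1.7320508, i.e. sqrt(3)
print(math.pi - star)   # ~ 1.4095418, i.e. pi - sqrt(3)
```

The star's area comes out as $\sqrt{3}$, so the leftover area in the unit circle is $\pi - \sqrt{3}$, matching the computation above.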
If a function is a combination of other functions whose derivatives are known, via composition, addition, etc., the derivative can be calculated using the chain rule and the like. But even the integral of a product can't be expressed in general in terms of the integrals of the factors, and forget about composition! Why is this? Here is an extremely generic answer. Differentiation is a "local" operation: to compute the derivative of a function at a point you only have to know how it behaves in a neighborhood of that point. But integration is a "global" operation: to compute the definite integral of a function on an interval you have to know how it behaves on the entire interval (and to compute the indefinite integral you have to know how it behaves on all intervals). That is a lot of information to summarize. Generally, local things are much easier than global things. On the other hand, if you can do the global things, they tend to be useful because of how much information goes into them. That's why theorems like the fundamental theorem of calculus, the full form of Stokes' theorem, and the main theorems of complex analysis are so powerful: they let us calculate global things in terms of slightly less global things. The family of functions you generally consider (e.g., elementary functions) is closed under differentiation, that is, the derivative of such a function is still in the family. However, the family is not in general closed under integration. For instance, even the family of rational functions is not closed under integration, because $\int \frac{1}{x}\,dx = \log x$. Answering an old question just because I saw it on the main page. From Roger Penrose (Road To Reality): ... there is a striking contrast between the operations of differentiation and integration, in this calculus, with regard to which is the 'easy' one and which is the 'difficult' one.
When it is a matter of applying the operations to explicit formulae involving known functions, it is differentiation which is 'easy' and integration 'difficult', and in many cases the latter may not be possible to carry out at all in an explicit way. On the other hand, when functions are not given in terms of formulae, but are provided in the form of tabulated lists of numerical data, then it is integration which is 'easy' and differentiation 'difficult', and the latter may not, strictly speaking, be possible at all in the ordinary way. Numerical techniques are generally concerned with approximations, but there is also a close analogue of this aspect of things in the exact theory, and again it is integration which can be performed in circumstances where differentiation cannot. I guess the OP asks about symbolic integration. Other answers already dealt with the numeric case, where integration is easy and differentiation is hard. If you recall the definition of the derivative, you can see it's just a subtraction and a division by a constant. Even if you can't make any algebraic simplifications, it won't get any more complex than that. But usually you can simplify a lot thanks to the zero limit, as many terms fall out for being too small. From this definition it can be shown that if you know the derivatives of $f(x)$ and $g(x)$, then you can use these derivatives to express the derivatives of $f(x) \pm g(x)$, $f(x)g(x)$ and $f(g(x))$. This makes symbolic differentiation easy, as you just need to apply the rules recursively. Now about integration. Integration is basically an infinite sum of small quantities. So if you see $\int f(x)\,dx$, you can imagine it as an infinite sum $(f_1 + f_2 + \ldots)\,dx$, where the $f_i$ are consecutive values of the function. This means if you need to calculate the integral $\int (a f(x) + b g(x))\,dx$, then you can imagine the sum $((af_1 + bg_1) + (af_2 + bg_2) + \ldots)\,dx$.
Using associativity and distributivity, you can transform this into $a(f_1 + f_2 +\ldots)\,dx + b(g_1 + g_2 + \ldots)\,dx$. So this means $\int (a f(x) + b g(x))\,dx = a \int f(x)\,dx + b \int g(x)\,dx$. But if you have $\int f(x) g(x)\,dx$, you have the sum $(f_1 g_1 + f_2 g_2 + \ldots)\,dx$, from which you cannot factor out the sums of the $f$s and $g$s. This means there is no recursive rule for multiplication. The same goes for $\int f(g(x))\,dx$: you cannot extract anything from the sum $(f(g_1) + f(g_2) + \ldots)\,dx$ in general. So far, only linearity is a useful property. What about the analogues of the differentiation rules? We have the product rule: $$\frac{d (f(x)g(x))}{dx} = f(x) \frac{d g(x)}{dx} + g(x) \frac{d f(x)}{dx}.$$ Integrating both sides and rearranging the terms, we get the well-known integration by parts formula: $$\int f(x) \frac{d g(x)}{dx}\,dx = f(x)g(x) - \int g(x) \frac{d f(x)}{dx}\,dx.$$ But this formula is only useful if $\frac{d f(x)}{dx} \int g(x)\,dx$ or $\frac{d g(x)}{dx} \int f(x)\,dx$ is easier to integrate than $f(x)g(x)$. And it's often hard to see when this rule is useful. For example, when you try to integrate $\ln(x)$, it's not obvious to see it as $1 \cdot \ln(x)$. The integral of $1$ is $x$ and the derivative of $\ln(x)$ is $\frac{1}{x}$, which leads to a very simple integral of $x \cdot \frac{1}{x} = 1$, whose integral is again $x$. Another well-known differentiation rule is the chain rule: $$\frac{d f(g(x))}{dx} = \frac{d f(g(x))}{d g(x)} \frac{d g(x)}{dx}.$$ Integrating both sides, you get the reverse chain rule: $$f(g(x)) = \int \frac{d f(g(x))}{d g(x)} \frac{d g(x)}{dx}\,dx.$$ But again it's hard to see when it is useful. For example, what about the integration of $\frac{x}{\sqrt{x^2 + c}}$? Is it obvious to you that $\frac{x}{\sqrt{x^2 + c}} = 2x \cdot \frac{1}{2\sqrt{x^2 + c}}$ and that this is the derivative of $\sqrt{x^2 + c}$?
I guess not, unless someone showed you the trick. For differentiation, you can mechanically apply the rules. For integration, you need to recognize patterns and even need to introduce cancellations to bring the expression into the desired form, and this requires a lot of practice and intuition. For example, how would you integrate $\sqrt{x^2 + 1}$? First you turn it into a fraction: $$\frac{x^2 + 1}{\sqrt{x^2+1}}$$ Then multiply and divide by 2: $$\frac{2x^2 + 2}{2\sqrt{x^2+1}}$$ Separate the terms like this: $$\frac{1}{2}\left(\frac{1}{\sqrt{x^2+1}}+\frac{x^2+1}{\sqrt{x^2+1}}+\frac{x^2}{\sqrt{x^2+1}} \right)$$ Play with the 2nd and 3rd terms: $$\frac{1}{2} \left( \frac{1}{\sqrt{x^2+1}}+ 1\cdot\sqrt{x^2+1}+ x\cdot 2x\cdot\frac{1}{2\sqrt{x^2+1}} \right)$$ Now you can see that the first bracketed term is the derivative of $\mathrm{arsinh}(x)$, while the second and third terms together are the derivative of $x\sqrt{x^2+1}$. Thus the integral will be: $$\frac{\mathrm{arsinh}(x)}{2} + \frac{x\sqrt{x^2+1}}{2} + C$$ Were these transformations obvious to you? Probably not. That's why differentiation is mechanics while integration is an art. In the MIT lecture 6.001 "Structure and Interpretation of Computer Programs" by Sussman and Abelson, this contrast is briefly discussed in terms of pattern matching. See the lecture video (at 3:56) or alternatively the transcript (p. 2, or see the quote below). The book used in the lecture does not provide further details. Edit: Apparently, they discuss the Risch algorithm. It might be worthwhile to have a look at the same question on mathoverflow.SE: Why is differentiating mechanics and integration art? And you know from calculus that it's easy to produce derivatives of arbitrary expressions. You also know from your elementary calculus that it's hard to produce integrals. Yet integrals and derivatives are opposites of each other. They're inverse operations. And they have the same rules.
What is special about these rules that makes it possible to produce derivatives easily while integrals are so hard? Let's think about that very simply. Look at these rules. Every one of these rules, when used in the direction for taking derivatives, which is in the direction of this arrow, the left side is matched against your expression, and the right side is the thing which is the derivative of that expression. The arrow is going that way. In each of these rules, the expressions on the right-hand side of the rule that are contained within derivatives are subexpressions, are proper subexpressions, of the expression on the left-hand side. So here we see the derivative of the sum, which is the expression on the left-hand side, is the sum of the derivatives of the pieces. So the rules, moving to the right, are reduction rules. The problem becomes easier. I turn a big complicated problem into lots of smaller problems and then combine the results, a perfect place for recursion to work. If I'm going in the other direction like this, if I'm trying to produce integrals, well, there are several problems you see here. First of all, if I try to integrate an expression like a sum, more than one rule matches. Here's one that matches. Here's one that matches. I don't know which one to take. And they may be different. I may get to explore different things. Also, the expressions become larger in that direction. And when the expressions become larger, then there's no guarantee that any particular path I choose will terminate, because we will only terminate by accidental cancellation. So that's why integrals are complicated searches and hard to do. I will try to bring this to you in another way. Let us start by thinking in terms of something as simple as a straight line. If I give you the equation of a line $y = mx + c$, its slope can be easily determined, which in this case is nothing but $m$.
Now let me make the question a bit trickier. Let me say that the line given above intersects the x and y axes at some points. I ask you to give me the area between the line, the abscissa, and the ordinate. This is obviously not as easy as finding the slope. You shall have to find the intersections of the line with the axes, get two points of intersection, and then, taking the origin as a third point, find the area. This is not the only method of finding the area, as we know there are loads of formulas for finding the area of a triangle. Let us now view this in terms of curves. If the simple process of finding the slope of a line is translated to curves, we get differential calculus, which is a bit more complicated than the method of finding slopes of straight lines. Add finding the area under the curve to that, and you get integral calculus, which, by our experience from straight lines, we know should be much harder than finding the slope, i.e., differentiation. Also, there is no one fixed method for finding the area of a figure; hence the many methods of integration.
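The "reduction rules applied recursively" point from the lecture transcript can be made concrete with a toy symbolic differentiator. This is my own minimal sketch (expressions as nested tuples, only sums and products), not code from the lecture:

```python
# A toy recursive symbolic differentiator. Every rule rewrites an expression
# into derivatives of strictly smaller subexpressions, so plain recursion
# terminates -- exactly the "reduction rule" property discussed above.
# Expressions are tuples: ('const', c), ('x',), ('+', a, b), ('*', a, b).
def d(expr):
    op = expr[0]
    if op == 'const':
        return ('const', 0)
    if op == 'x':
        return ('const', 1)
    if op == '+':                      # sum rule: (a + b)' = a' + b'
        return ('+', d(expr[1]), d(expr[2]))
    if op == '*':                      # product rule: (ab)' = a'b + ab'
        a, b = expr[1], expr[2]
        return ('+', ('*', d(a), b), ('*', a, d(b)))
    raise ValueError(op)

# d/dx (x*x + 3) = 1*x + x*1 (+ 0), i.e. 2x before simplification.
print(d(('+', ('*', ('x',), ('x',)), ('const', 3))))
```

Running the rules in reverse, as the transcript notes, would instead be a search: several rules match, and the intermediate expressions grow rather than shrink.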
Tokyo Journal of Mathematics Tokyo J. Math. Volume 24, Number 1 (2001), 291-308. Lévy Processes with Negative Drift Conditioned to Stay Positive Abstract Let $X$ be a Lévy process with negative drift starting from $x>0$, and let $\tau$ and $\tau_s$ be the first passage times to $(-\infty,0]$ and $(s,\infty)$, respectively. Under appropriate exponential moment conditions of $X$, we show that, for every $A\in\mathcal{F}_t$, the conditional laws $P_x(X\in A | \tau>s)$ and $P_x(X\in A | \tau>\tau_s)$ converge to different distributions as $s\rightarrow\infty$. Both of them can be regarded as the laws of $X$ conditioned to stay positive. We characterize these limit laws in terms of $h$-transforms, by the renewal functions, of some Lévy processes killed at the entrance time into $(-\infty,0]$. Article information Source Tokyo J. Math., Volume 24, Number 1 (2001), 291-308. Dates First available in Project Euclid: 19 October 2009 Permanent link to this document https://projecteuclid.org/euclid.tjm/1255958329 Digital Object Identifier doi:10.3836/tjm/1255958329 Mathematical Reviews number (MathSciNet) MR1844435 Zentralblatt MATH identifier 1020.60040 Citation HIRANO, Katsuhiro. Lévy Processes with Negative Drift Conditioned to Stay Positive. Tokyo J. Math. 24 (2001), no. 1, 291--308. doi:10.3836/tjm/1255958329. https://projecteuclid.org/euclid.tjm/1255958329
If we have an urn with $N$ balls of two colours ($D$ red and $N-D$ black, respectively), then the probability of getting $k$ red out of $n$ balls drawn at once without replacement follows the hypergeometric distribution: $Pr(X = k) = \dfrac{\binom{D}{k} \binom{N - D}{n-k}}{\binom{N}{n}}$ Now assume we have some a priori distribution on the balls: $p_i$ is the probability of drawing ball $i$, $i \in \{1, \ldots, N \}$. (Note that it's unrelated to the colours.) Let's run the ball-drawing experiment again. As a result of the experiment we have the following: $P_n = \{p_{i_1}, \ldots, p_{i_n}\}$, the probabilities of the $n$ drawn balls, and $P_k = \{p_{j_1}, \ldots, p_{j_k}\}$, the probabilities of the $k$ red drawn balls, $P_k \subset P_n$. Let $\mathrm{Pr}(P_k \mid P_n)$ be the probability of this result, i.e. of having $k$ balls out of $n$ drawn whose probabilities turned out to be exactly $P_k$ and $P_n$, respectively. The exact form of $\mathrm{Pr}(P_k \mid P_n)$ can be written based on the probability of drawing $m$ balls with certain probabilities out of an urn with $M$ balls: $$\mathrm{Pr}(\text{pull k balls with certain probabilities out of M balls}) = \frac{\prod_{i \in \text{drawn}} p_i \times \prod_{i \notin \text{drawn}} (1 - p_i)}{\sum_{\text{all subsets S of size m from M}} \prod_{i \in S} p_i \times \prod_{i \notin S} (1 - p_i)}$$ But this formula takes exponential computation time, so it doesn't fit the problem at the scale we will most likely work with: $$N \sim 6000, D \sim 1000, n \sim 2000$$ We are therefore interested in finding a function $f$ such that: $$\mathrm{Pr}(P_{k_1} \mid P_{n_1}) > \mathrm{Pr}(P_{k_2} \mid P_{n_2}) \Leftrightarrow f(\mathbf{k_1}, \mathbf{n_1}) > f(\mathbf{k_2}, \mathbf{n_2})$$ where $\mathbf{k_i}, \mathbf{n_i}$ are the corresponding sets of balls. In other words, we are trying to reduce that scary formula to a simpler one while retaining this «comparator» property.
Note that the absolute value doesn't matter: only the comparison is required. Do you have any idea of how to reach this goal? I'd appreciate any thoughts about the solution.
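The classical hypergeometric baseline in the question is easy to check numerically. A minimal sketch with Python's stdlib (the parameters N = 10, D = 4, n = 3 are made up for illustration, far below the problem's real scale):

```python
from math import comb

def hypergeom_pmf(k, N, D, n):
    """P(X = k): probability of k red among n balls drawn without
    replacement from an urn with D red and N - D black balls."""
    return comb(D, k) * comb(N - D, n - k) / comb(N, n)

# Made-up small parameters for illustration.
N, D, n = 10, 4, 3
pmf = [hypergeom_pmf(k, N, D, n) for k in range(n + 1)]
print(pmf)   # [0.1666..., 0.5, 0.3, 0.0333...]
```

At the stated scale (N ~ 6000), `math.comb` still evaluates exactly but the weighted generalization in the question remains exponential, which is exactly the motivation for seeking a comparator function f.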
Latest revision as of 05:41, 16 January 2013

This article gives specific information, namely, linear representation theory, about a particular group, namely: symmetric group:S5.
View linear representation theory of particular groups | View other specific information about symmetric group:S5

This article describes the linear representation theory of symmetric group:S5, a group of order 120. We take this to be the group of permutations on the set {1,2,3,4,5}.

Summary

Degrees of irreducible representations over a splitting field (such as $\mathbb{Q}$ or $\mathbb{C}$): 1,1,4,4,5,5,6 (maximum: 6, lcm: 60, number: 7, sum of squares: 120)
Schur index values of irreducible representations: 1,1,1,1,1,1,1 (maximum: 1, lcm: 1)
Smallest ring of realization for all irreducible representations (characteristic zero): $\mathbb{Z}$, the ring of integers
Smallest field of realization for all irreducible representations, i.e., smallest splitting field (characteristic zero): $\mathbb{Q}$; hence it is a rational representation group
Criterion for a field to be a splitting field: any field of characteristic not equal to 2, 3, or 5
Smallest size splitting field: field:F7, i.e., the field of 7 elements

Family contexts

Family name: symmetric group; parameter value: degree 5; general discussion: linear representation theory of symmetric groups
Family name: projective general linear group of degree two; parameter value: field:F5, the field of size 5; general discussion: linear representation theory of projective general linear group of degree two over a finite field

Degrees of irreducible representations

FACTS TO CHECK AGAINST FOR DEGREES OF IRREDUCIBLE REPRESENTATIONS OVER SPLITTING FIELD:
Divisibility facts: degree of irreducible representation divides group order | degree of irreducible representation divides index of abelian normal subgroup
Size bounds: order of inner automorphism group bounds square of degree of irreducible representation | degree of irreducible representation is bounded by index of abelian subgroup | maximum degree of irreducible representation of group is less than or equal to product of maximum degree of irreducible representation of subgroup and index of subgroup
Cumulative facts: sum of squares of degrees of irreducible representations equals order of group | number of irreducible representations equals number of conjugacy classes | number of one-dimensional representations equals order of abelianization

Note that the linear representation theory of the symmetric group of degree four works over any field of characteristic not equal to two or three, and the list of degrees is 1,1,2,3,3.

Interpretation as symmetric group

trivial representation: degree 1, partition 5, conjugate partition 1+1+1+1+1, representation for conjugate partition: sign representation
sign representation: degree 1, partition 1+1+1+1+1, conjugate partition 5, representation for conjugate partition: trivial representation
standard representation: degree 4, partition 4+1, conjugate partition 2+1+1+1, representation for conjugate partition: product of standard and sign representation
product of standard and sign representation: degree 4, partition 2+1+1+1, conjugate partition 4+1, representation for conjugate partition: standard representation
irreducible five-dimensional representation: degree 5, partition 3+2, conjugate partition 2+2+1, representation for conjugate partition: the other irreducible five-dimensional representation
other irreducible five-dimensional representation: degree 5, partition 2+2+1, conjugate partition 3+2, representation for conjugate partition: the other irreducible five-dimensional representation
exterior square of standard representation: degree 6, partition 3+1+1, conjugate partition 3+1+1, the same representation, because the partition is self-conjugate
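The degrees attached to these partitions follow from the hook length formula mentioned above: the degree for a partition of n is n! divided by the product of the hook lengths of its Young diagram. A quick plain-Python sketch checking this for the S5 partitions:

```python
from math import factorial

def hook_degree(partition):
    """Degree of the S_n irreducible representation labelled by a
    partition, via the hook length formula: n! / (product of hooks)."""
    n = sum(partition)
    # Column lengths of the Young diagram (conjugate partition).
    cols = [sum(1 for r in partition if r > j) for j in range(partition[0])]
    prod = 1
    for i, row in enumerate(partition):
        for j in range(row):
            # Hook length of cell (i, j): arm + leg + 1.
            prod *= (row - j - 1) + (cols[j] - i - 1) + 1
    return factorial(n) // prod

partitions = [(5,), (1, 1, 1, 1, 1), (4, 1), (2, 1, 1, 1), (3, 2), (2, 2, 1), (3, 1, 1)]
print([hook_degree(p) for p in partitions])   # [1, 1, 4, 4, 5, 5, 6]
```

The sum of squares of these degrees is 120, the order of S5, matching the cumulative fact quoted above.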
Interpretation as projective general linear group of degree two

Compare and contrast with linear representation theory of projective general linear group of degree two over a finite field. Each row lists: description of the collection of representations; parameter for describing each representation; how the representation is described; degree (general odd $q$; $q = 5$); number of representations (general odd $q$; $q = 5$); sum of squares of degrees (general odd $q$; $q = 5$); symmetric group name.

Trivial: parameter --; description --; degree 1; 1; number 1; 1; sum of squares 1; 1; trivial representation.
Sign representation: parameter --; kernel is projective special linear group of degree two (in this case, alternating group:A5), image is $\{\pm 1\}$; degree 1; 1; number 1; 1; sum of squares 1; 1; sign representation.
Nontrivial component of permutation representation on the projective line over $\mathbb{F}_q$: parameter --; description --; degree $q$; 5; number 1; 1; sum of squares $q^2$; 25; irreducible five-dimensional representation.
Tensor product of sign representation and nontrivial component of permutation representation on projective line: parameter --; description --; degree $q$; 5; number 1; 1; sum of squares $q^2$; 25; other irreducible five-dimensional representation.
Induced from one-dimensional representation of Borel subgroup: parameter ?; description ?; degree $q + 1$; 6; number ?; 1; sum of squares ?; 36; exterior square of standard representation.
Unclear: parameter is a nontrivial homomorphism $\varphi:\mathbb{F}_{q^2}^\ast \to \mathbb{C}^\ast$, with the property that $\varphi(x)^{q+1} = 1$ for all $x$, and $\varphi$ takes values other than $\pm 1$ (identify $\varphi$ and $\varphi^q$); description unclear; degree $q - 1$; 4; number $(q-1)/2$; 2; sum of squares $(q-1)^3/2$; 32; standard representation, product of standard and sign.
Total: NA; NA; NA; NA; number $q + 2$; 7; sum of squares $q^3 - q$; 120; NA.

Character table

FACTS TO CHECK AGAINST (for characters of irreducible linear representations over a splitting field):
Orthogonality relations: character orthogonality theorem | column orthogonality theorem
Separation results (basically saying rows are independent and columns are independent): splitting implies characters form a basis for space of class functions | character determines representation in characteristic zero
Numerical facts: characters are cyclotomic integers | size-degree-weighted characters are algebraic integers
Character value facts: irreducible character of degree greater than one takes value zero on some conjugacy class | conjugacy class of more than average size has character value zero for some irreducible character | zero-or-scalar lemma

Conjugacy class representatives and sizes, in the column order used below: identity (size 1); (1 2) (size 10); (1 2)(3 4) (size 15); (1 2 3) (size 20); (1 2 3)(4 5) (size 20); (1 2 3 4 5) (size 24); (1 2 3 4) (size 30).

trivial representation: 1, 1, 1, 1, 1, 1, 1
sign representation: 1, -1, 1, 1, -1, 1, -1
standard representation: 4, 2, 0, 1, -1, -1, 0
product of standard and sign representation: 4, -2, 0, 1, 1, -1, 0
irreducible five-dimensional representation: 5, 1, 1, -1, 1, 0, -1
other irreducible five-dimensional representation: 5, -1, 1, -1, -1, 0, 1
exterior square of standard representation: 6, 0, -2, 0, 0, 1, 0

Below are the size-degree-weighted characters, i.e., these are obtained by multiplying the character value by the size of the conjugacy class and then dividing by the degree of the representation. Note that size-degree-weighted characters are algebraic integers.
Size-degree-weighted characters (same class order as above):

trivial representation: 1, 10, 15, 20, 20, 24, 30
sign representation: 1, -10, 15, 20, -20, 24, -30
standard representation: 1, 5, 0, 5, -5, -6, 0
product of standard and sign representation: 1, -5, 0, 5, 5, -6, 0
irreducible five-dimensional representation: 1, 2, 3, -4, 4, 0, -6
other irreducible five-dimensional representation: 1, -2, 3, -4, -4, 0, 6
exterior square of standard representation: 1, 0, -5, 0, 0, 4, 0

GAP implementation

The degrees of irreducible representations can be computed using GAP's CharacterDegrees function:

gap> CharacterDegrees(SymmetricGroup(5));
[ [ 1, 2 ], [ 4, 2 ], [ 5, 2 ], [ 6, 1 ] ]

This means that there are 2 irreducible representations of degree 1, 2 of degree 4, 2 of degree 5, and 1 of degree 6. The characters of all irreducible representations can be computed in full using GAP's CharacterTable function:

gap> Irr(CharacterTable(SymmetricGroup(5)));
[ Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 1, -1, 1, 1, -1, -1, 1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 4, -2, 0, 1, 1, 0, -1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 5, -1, 1, -1, -1, 1, 0 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 6, 0, -2, 0, 0, 0, 1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 5, 1, 1, -1, 1, -1, 0 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 4, 2, 0, 1, -1, 0, -1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 1, 1, 1, 1, 1, 1, 1 ] ) ]
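As a quick sanity check on the character table above, the first orthogonality relation, sum over classes of |c|·χ_i(c)·χ_j(c) equalling |G|·δ_ij, can be verified directly in plain Python (character values transcribed from the table):

```python
# Conjugacy class sizes of S5 and the character table transcribed above.
sizes = [1, 10, 15, 20, 20, 24, 30]
chars = [
    [1,  1,  1,  1,  1,  1,  1],   # trivial
    [1, -1,  1,  1, -1,  1, -1],   # sign
    [4,  2,  0,  1, -1, -1,  0],   # standard
    [4, -2,  0,  1,  1, -1,  0],   # standard tensor sign
    [5,  1,  1, -1,  1,  0, -1],   # five-dimensional
    [5, -1,  1, -1, -1,  0,  1],   # other five-dimensional
    [6,  0, -2,  0,  0,  1,  0],   # exterior square of standard
]
order = sum(sizes)                 # |S5| = 120

def inner(i, j):
    """First orthogonality relation: sum over classes of |c| * chi_i(c) * chi_j(c)."""
    return sum(s * a * b for s, a, b in zip(sizes, chars[i], chars[j]))

ok = all(inner(i, j) == (order if i == j else 0)
         for i in range(7) for j in range(7))
print(order, ok)                   # 120 True
```

Note that GAP's own class ordering (visible in the Irr output) differs from the article's, so the rows above were matched by degree rather than copied from GAP.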
I am interested in computing a normalizing constant (of a Gaussian density in dimension $3$). Such normalizing constants often do not have a closed form. In dimension $2$, this normalizing constant can be computed in closed form. The integral I would like to compute is: $$ \int_{\mathbb{R}^{3}} e^{-(r_{1}^{2} + r_{2}^{2} + r_{3}^{2})/2\sigma^{2}} \sinh\Big( \frac{\vert r_{1}-r_{2} \vert}{2} \Big)\sinh\Big( \frac{\vert r_{1}-r_{3} \vert}{2}\Big)\sinh\Big( \frac{\vert r_{2}-r_{3} \vert}{2} \Big)dr_{1}dr_{2}dr_{3} $$ Using the spherical coordinates $(r_{1},r_{2},r_{3}) = (r\sin\varphi\cos\theta,r\sin\varphi\sin\theta,r\cos\varphi)$, I find myself trying to compute the following integral: $$ \int_{0}^{+\infty}\int_{0}^{2\pi}\int_{0}^{\pi} e^{-r^{2}/2\sigma^{2}} \sinh\Big( \frac{r\sin\varphi\vert \cos\theta - \sin\theta\vert}{2}\Big)\sinh\Big( \frac{r\vert\sin\varphi\cos\theta-\cos\varphi\vert}{2}\Big) \times \sinh\Big(\frac{r\vert\sin\varphi\sin\theta-\cos\varphi\vert}{2}\Big)r^{2}\sin\varphi \, dr\,d\theta\, d\varphi $$ I'm not an expert but, looking at this integral, I doubt there exists a closed-form expression.
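Even without a closed form, the constant is easy to estimate numerically for a given σ. A minimal Monte Carlo sketch (σ = 1 is an arbitrary choice here): sampling each r_i from N(0, σ²) and averaging the sinh product gives the integral after multiplying back the Gaussian normalization (2πσ²)^{3/2}.

```python
import math
import random

random.seed(0)
sigma = 1.0                      # assumed value; the post leaves sigma free
N = 100_000

acc = 0.0
for _ in range(N):
    r1, r2, r3 = (random.gauss(0.0, sigma) for _ in range(3))
    acc += (math.sinh(abs(r1 - r2) / 2)
            * math.sinh(abs(r1 - r3) / 2)
            * math.sinh(abs(r2 - r3) / 2))

# Mean of the sinh product times the Gaussian normalizer (2*pi*sigma^2)^(3/2).
estimate = (2 * math.pi * sigma**2) ** 1.5 * acc / N
print(estimate)
```

The integrand is positive and has finite moments (sinh grows like e^{|r|/2} against a Gaussian tail), so the plain Monte Carlo average converges; for large σ an importance-sampling tweak would be needed.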
For a parametric model ${\cal M} = \{p(\cdot \mid \theta, \alpha)\}$ with two parameters $\theta$ and $\alpha$, equipped with a prior distribution $\pi(\theta, \alpha)$, the ("joint") likelihood on $(\theta, \alpha)$ after $x$ has been observed is defined by $$L(\theta, \alpha \mid x) \overset{\theta,\alpha}{\propto} p(x \mid \theta, \alpha).$$ See here about my notation $\overset{\theta,\alpha}{\propto}$. The marginal likelihood on $\alpha$ is obtained by integrating the joint likelihood over the conditional prior distribution $\pi(\theta \mid \alpha)$: $$\tilde L(\alpha \mid x) \overset{\alpha}{\propto} \int L(\theta,\alpha \mid x) \pi(\theta \mid \alpha) d\theta.$$ This is nothing but the "ordinary" likelihood for a new model $\tilde{\cal M} = \{\tilde p(\cdot \mid \alpha)\}$ with parameter $\alpha$, obtained by integrating the original sampling distribution over the conditional prior distribution $\pi(\theta \mid \alpha)$: $$\tilde p(x \mid \alpha) = \int p(x \mid \theta, \alpha)\pi(\theta \mid \alpha) d\theta,$$ which is also the conditional prior predictive distribution (of $x$ given $\alpha$). Using the marginal prior distribution $\pi(\alpha)$ of $\alpha$ for this model yields exactly the same posterior distribution: $$\pi(\alpha \mid x) \overset{\alpha}{\propto} \pi(\alpha)\tilde L(\alpha \mid x).$$ To sum up, the marginal likelihood is the likelihood of the model whose sampling distribution is the conditional prior predictive distribution.
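As a toy numerical illustration of the integral defining the marginal likelihood (the model here is invented for the sketch: $x \mid \theta, \alpha \sim N(\theta, 1)$ with conditional prior $\theta \mid \alpha \sim N(0, \alpha)$, so the prior predictive is the known closed form $N(0, 1+\alpha)$):

```python
import math

def norm_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

x_obs, alpha = 1.3, 2.0          # arbitrary observation and hyperparameter

# Marginal likelihood: integrate p(x | theta) * pi(theta | alpha) over theta
# with a simple trapezoid rule on a wide grid.
lo, hi, n = -15.0, 15.0, 100_000
h = (hi - lo) / n
total = 0.0
for i in range(n + 1):
    theta = lo + i * h
    w = 0.5 if i in (0, n) else 1.0
    total += w * norm_pdf(x_obs, theta, 1.0) * norm_pdf(theta, 0.0, alpha)
marginal = total * h

closed_form = norm_pdf(x_obs, 0.0, 1.0 + alpha)   # prior predictive N(0, 1 + alpha)
print(marginal, closed_form)
```

The two numbers agree to many digits, illustrating that the marginal likelihood is exactly the likelihood of the reduced model whose sampling distribution is the conditional prior predictive.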
Abbreviation: BanSp

A Banach space is a normed vector space $\mathbf{A}=\langle A,+,-,0,s_r\ (r\in F),||\cdot||\rangle$ that is complete: any Cauchy sequence has a limit.

Remark: This is a template. If you know something about this class, click on the "Edit text of this page" link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.

Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \ldots y)=h(x) \ldots h(y)$

A is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that … $...$ is …: $axiom$ $...$ is …: $axiom$

Example 1: Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$

[[Hilbert spaces]] [[...]] expansion
[[Normed vector spaces]] supervariety
[[...]] subreduct
@Secret et al, how's this for a video game? OE Cake! A fluid dynamics simulator! I have been looking for something like this for years! Just discovered it and wanna try it out! Anyone heard of it? Anyone else wanna do some serious research on it? I think it could be used to experiment with solitons =D OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a... @NeuroFuzzy awesome, what have you done with it? How long have you been using it? It definitely could support solitons easily (because all you really need is some time dependence and discretized diffusion, right?) but I don't know if that's possible in either OE-Cake or that dust game. As far as I recall, being a long-term powder gamer myself, Powder Game does not really have a diffusion-like algorithm written into it. The liquids in Powder Game are sort of dots that move back and forth and are subject to gravity. @Secret I mean more along the lines of the fluid dynamics in that kind of game. @Secret Like how in the dan-ball one, air pressure looks continuous (I assume). @Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A. I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
(Those that don't understand cricket, please ignore this context, I will get to the physics...) England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer' (see over 1.4). It's always bothered me slightly that there seems to be a ... Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like? As some/many/most people are aware, we are in the midst of a... Hi, I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex. I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this arXiv paper; the gammas are incompl... @ACuriousMind Guten Tag! :-) Dark Sun also has a lot of frightening characters. For example, Borys, the 30th-level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-) What is the maximum distance for eavesdropping on pure sound waves? And what kind of device do I need for eavesdropping? There are parabolic-reflector microphones and laser listening devices available on the market, but is there any other device on the planet which should allow ... and endless whiteboards get doodled with boxes, grids, circled red markers and some scribbles. The documentary then showed a bird's-eye view of the farmlands (pardon my sketchy drawing skills...). Most of the farmland is tiled into grids. Here there are two distinct columns and rows of tiled farmland to the left and top of the main grid.
They are the index arrays, and they notate the range of indices of the tensor array. In some tiles there's a swirl of dirt mound; these represent components with nonzero curl, and in others grass grew. Two blue steel bars were visible lying across the grid, holding up a triangular pool of water. Next, in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway; occasionally, mishaps can happen, such as too much force being applied and the sign snapping in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it. At the end of the documentary, near a university lodge area, I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends. Reality check: I have been to London, but not Belgium. Idea extraction: The tensor array mentioned in the dream is a multi-index object where each component can be a tensor of a different order. Presumably one can formulate it (using an example with five indices) as follows: $$A^{\alpha}{}_{\beta\gamma\delta\epsilon}$$ and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array, while the indices $\gamma,\delta,\epsilon$ are taken from a subset of the range over which the $\alpha,\beta$ indices run.
For example, to encode a patch of a nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$. However, even if we take the indices to have certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers. @DavidZ in the recent meta post about the homework policy there is the following statement: > We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems. This is an interesting statement. I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully about whether there is another, more specific reason for closure, e.g. "unclear what you're asking". I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea. I did suggest that previously in chat, and as I recall there were good objections (which are echoed in the comments on @ACuriousMind's meta answer). @DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic. @peterh That said, he used to be fairly active on PhysicsOverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive. @DanielSank No, the site mods could have caged him only on PSE, and only for a year. That he got. After that his cage was extended to a 10-year, network-wide one; that couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds. @EmilioPisanty Yes, but I would have liked to talk to him here. @DanielSank I am only curious what he did. Maybe he attacked the whole network?
Or did he take a site-level conflict into the real world? As far as I know, network-wide bans happen for such things. @peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck. Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful. @EmilioPisanty Although it is already not about Ron Maimon, I can't see that the meaning of "campaign" is well-defined enough here. And yes, it is a bit of a source of fear for me that maybe my behavior could also be construed as "campaigning for my own caging".
There is a lot to say here, so I will break my answer down in stages: $(1)$. Is it possible to construct solutions to a PDE by making a sequence of changes of coordinates and then eventually writing the solution as a Fourier series? Answer: No. This technique might work for some linear PDEs, but it would not work for non-linear PDEs, even ones as simple as the non-linear transport equation (Burgers' equation) $u_t+uu_x=0$. $(2)$. Is it possible to solve PDEs without using the tools from real analysis/set theory? Answer: "No", and the exceptions are not as useful. Analysis, especially functional analysis, is extremely important to PDEs. When you want to calculate the "energy" of a solution, you need to understand how a function's energy is related to other quantities. Typically, this is based on the concept of a norm. Without analysis, we don't really have any good way of analyzing the "energy" of a solution. In addition, we could not comment on regularity/smoothness. If we get a solution to a PDE, does it have a derivative? Jump discontinuities? These questions can be answered in some cases, but not in general, and all use analysis. In addition, when studying an unknown PDE, you may not even know whether a solution exists. For some PDEs there might be some "tricks" to transform the PDE into a simpler one, but for a general non-linear PDE you may be in the dark. Before we actually try to construct a solution, sometimes we just want to show one exists. This often requires "fixed-point" methods. These take an operator $T$ from some function space to itself; if the mapping is a contraction, then it has a fixed point, and a solution is guaranteed to exist. Mathematically, suppose we wanted to find a solution to the heat equation on an infinite metal bar. Such a function must have at minimum two continuous derivatives (solutions to the heat equation smooth out, but this is another topic).
Functions which are twice continuously differentiable on the real line for times between $0$ and $T$ can all be grouped together into one set: $X_1=C^2(\mathbb{R} \times [0,T])$. All functions which obey this property live in this space, so any possible (classical) solution to the heat equation would lie in it. Another question you can ask is: does the solution lie in a more general space, one that includes $X_1$ as a subset? This has a well-developed theory as well. So to prove a solution to the heat equation exists, you want to construct a contraction mapping on $X_1$ using some suitable operator. Again, the details require norms, which is all analysis. If the operator $T:X_1\rightarrow X_1$ is contractive, then you have a fixed point, which means a solution to the PDE exists (even if we can't construct it explicitly). Next, PDEs study functions with derivatives, which is literally a concept from analysis. You might sometimes be able to solve PDEs with just advanced calculus techniques, but these techniques do not address uniqueness of the solution, regularity of the solution and other important questions. For example, $u(x,t)=x^2+2t$ is a polynomial which solves the heat equation $u_t=u_{xx}$, in the sense that differentiating it verifies the equation, but it doesn't solve it in an interesting way. This relates to the topic of well-posedness. When studying PDEs, you don't want an infinite number of solutions, which can be constructed by multiplying the polynomial I gave by suitable constants. We also want the solution $u(x,t)\rightarrow 0$ as $t\rightarrow \infty$. This means the solution will "vanish at infinity", so the value of the function decreases as time goes on. Functions that "vanish at infinity" have their own space as well, so yes, more analysis to study them. Polynomials do not vanish at infinity. Finally, sometimes it is very challenging to handle certain boundary conditions. This is especially true in higher dimensions.
Here is an example of a non-linear wave equation in three dimensions for illustration, but the example I present can be made far more complicated in higher dimensions. There is no standard way to do a coordinate transformation that will make a complicated PDE easier to solve. $\begin{cases} u_{tt}+(\lambda u^2)u_{xx}=0 \\ u(0,0,t)=\phi(x),\ u(1,0,t)=\psi(x) \\ u(x,y,0)=h(x) \end{cases}$ Let $C(x)$ denote the Cantor function. Define $\phi(x)$ by $\phi(x)=C(x)$ for $|x|\leq 1$ and $\phi(x)=0$ for $|x|>1$, and $\psi(x)=\sin(\exp(c|x|^2))$ for $|x|\leq 1$, $\psi(x)=0$ for $|x|>1$. It is very easy to cook up PDEs which are catastrophically difficult to solve even numerically, let alone analytically. Coordinate transformations wouldn't likely be of much use if your boundary conditions involve odd functions such as the ones I constructed. Sometimes you can use calculus-themed techniques and clever transforms, but no, most of PDE theory requires some use of analysis, and certainly all general properties of PDEs are understood through the lens of analysis. Also note that coordinate transformations often involve differential geometry techniques, which are equally sophisticated in their own right.
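The fixed-point idea invoked in this answer can be seen in miniature in one dimension, where the Banach fixed-point theorem already applies: the map T(x) = cos(x) is a contraction on [0, 1], so iterating it converges to the unique solution of x = cos x. This is a toy stand-in for the function-space argument, not the actual heat-equation construction:

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    """Banach-style fixed-point iteration: x_{n+1} = T(x_n).

    For a contraction T, successive iterates form a Cauchy sequence
    converging to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence")

x_star = fixed_point(math.cos, 1.0)
print(x_star)                      # ~0.7390851332 (the Dottie number)
```

The contraction constant here is |sin x| ≤ sin 1 < 1 on [0, 1]; in the PDE setting the same argument runs with the sup-norm on a function space such as $X_1$ in place of absolute value.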
So, I am having trouble (again) with the domain for a triple integral of a function, bounded by the parabolic cylinder $2y^2=x$ and the planes $x+2y+z=4$ and $z=0$. I have tried to guess the bounds for $x$, $y$ and $z$ in Cartesian coordinates with no luck, and it seems this needs to be done in cylindrical polar coordinates. The issue here is that I haven't been able to see any clear values of the bounds for either $\theta$, $r$, or $z$, so I am not sure if I am missing something else. So far, my polar conversion is: $$\begin{aligned} & 2r^2 \sin^2 \theta = r \cos \theta \\ & r \cos \theta + 2 r \sin \theta+z=4 \\ & 0 \leq z \leq 4 - r \cos \theta - 2 r \sin \theta \\ & 0 \leq r \leq \text{(?)} \\ & 0 \leq \theta \leq \text{(?)} \end{aligned}$$ Any help is welcome.
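For what it's worth, the region is easier in Cartesian coordinates than in polar: the surfaces $x = 2y^2$ and $x = 4 - 2y$ meet where $y^2 + y - 2 = 0$, i.e. at $y = -2$ and $y = 1$, so $x$ runs from $2y^2$ to $4 - 2y$ and $z$ from $0$ to $4 - x - 2y$. Assuming the integrand is 1 (the volume of the region), the iterated integral evaluates to $81/5$; a Monte Carlo sketch confirming the bounds:

```python
import random

random.seed(42)

def inside(x, y, z):
    # Region bounded by x = 2*y**2, the plane x + 2*y + z = 4, and z = 0.
    return x >= 2 * y * y and 0.0 <= z <= 4.0 - x - 2.0 * y

# Bounding box containing the region: y in [-2, 1], x in [0, 8], z in [0, 4.5].
N = 400_000
hits = sum(
    inside(random.uniform(0, 8), random.uniform(-2, 1), random.uniform(0, 4.5))
    for _ in range(N)
)
box_volume = 8.0 * 3.0 * 4.5
volume = box_volume * hits / N
print(volume)                      # ~16.2, matching the exact value 81/5
```

The polar substitution in the question is not wrong, but because the parabola is not centered at the origin, neither $r$ nor $\theta$ gets constant bounds, which is why no clear values appear.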
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah, it does seem unreasonable to expect a finite presentation. Let $(V, b)$ be an $n$-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group $O(V, b)$ is a composition of at most $n$ reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki:- The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on a chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on an $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms. This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$ Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$ So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$ Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$ But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$ For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$ You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
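The invariant formula for $d\omega$ discussed above can be spot-checked numerically. A toy example of mine on $\Bbb R^2$: take $\omega = x\,dy$, $X = \partial_x$, $Y = x\,\partial_y$, so $[X,Y] = \partial_y$ and $d\omega = dx \wedge dy$, and both sides of $d\omega(X,Y) = X\omega(Y) - Y\omega(X) - \omega([X,Y])$ equal $x$:

```python
# Numerical check of d(omega)(X,Y) = X w(Y) - Y w(X) - w([X,Y]) on R^2,
# with toy choices: w = x dy, X = d/dx, Y = x d/dy, so [X,Y] = d/dy.
H = 1e-5

def X(p): return (1.0, 0.0)
def Y(p): return (0.0, p[0])
def w(p, v): return p[0] * v[1]          # omega = x dy

def ddir(f, p, v):                       # central-difference directional derivative
    q1 = (p[0] + H * v[0], p[1] + H * v[1])
    q2 = (p[0] - H * v[0], p[1] - H * v[1])
    return (f(q1) - f(q2)) / (2 * H)

def bracket(p):                          # [X,Y]^i = X(Y^i) - Y(X^i)
    return tuple(ddir(lambda q, i=i: Y(q)[i], p, X(p))
                 - ddir(lambda q, i=i: X(q)[i], p, Y(p)) for i in range(2))

p = (0.7, -0.3)
lhs = (ddir(lambda q: w(q, Y(q)), p, X(p))
       - ddir(lambda q: w(q, X(q)), p, Y(p))
       - w(p, bracket(p)))
print(lhs)   # should be x at p, i.e. 0.7, the value of (dx ^ dy)(X, Y) there
```

Nothing deep here, just the formula evaluated with finite differences instead of limits.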
You can take the directional derivative of a function at a point in the direction of a single vector at that point Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not $s$) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle-homs $TM \to E$) Then this is the $0$-level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a $0$-level exterior derivative of a bundle-valued theory of differential forms So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking the derivative of a section of $E$ w.r.t. a vector field on $M$ means: applying the connection Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$ Voila, Riemann curvature tensor Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. This is the bundle curvature Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$.
We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form (The cotangent bundle is naturally a symplectic manifold) Yeah So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$ Is $\mathbb{E}\left[\frac{\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ? Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty @Ultradark I don't know what you mean, but you seem down in the dumps champ.
Remember, girls are not as significant as you might think; design an attachment for a cordless drill and a fleshlight that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; $d=12$, take $A_4$; $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group Everything about $S_4$ is encoded in the cube, in a way The same can be said of $A_5$ and the dodecahedron, say
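The case-by-case argument above can also be checked by machine (my own sketch, not from the discussion): enumerate the subgroups of $S_4$ generated by pairs of elements and collect their orders. Every divisor of $24$ shows up, and each such subgroup happens to need at most two generators:

```python
# Brute-force check: every divisor d of 24 occurs as the order of a
# subgroup of S4 generated by at most two elements.
from itertools import permutations

def compose(p, q):                 # (p.q)(i) = p[q[i]], permutations as tuples
    return tuple(p[i] for i in q)

def closure(gens):                 # smallest set containing gens closed under composition
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems           # finite and closed under products => a subgroup
        elems |= new

s4 = list(permutations(range(4)))
orders = {len(closure({g, h})) for g in s4 for h in s4}
print(sorted(orders))  # → [1, 2, 3, 4, 6, 8, 12, 24]
```

This confirms the list in the post: $\langle(12)\rangle$, $\langle(123)\rangle$, $\langle(1234)\rangle$, $\langle(12),(123)\rangle$, a 2-Sylow, $A_4$, $S_4$.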
Nonlinear Schrödinger equations on a finite interval with point dissipation Department of Mathematics, Virginia Polytechnic Institute and State University, Blacksburg, VA USA The paper concerns the nonlinear Schrödinger equation $ iu_t+u_{xx}+f(u) = 0, \quad u(x, 0) = w_0(x), \quad x\in [0, L], $ posed on a finite interval with $L>0$, subject to the boundary conditions $ u(0, t) = \beta u(L, t), \quad \beta u_x(0, t)-u_x(L, t) = i\alpha u(0, t), $ where $\alpha, \beta$ are real constants with $\alpha\beta<0$ and $\beta\neq \pm 1$, and $f(u)$ is a smooth function from $\mathbb{C}$ to $\mathbb{C}$. For $s \in \left( \frac12, 1\right]$ and initial data $w_0(x) \in H^s(0, L)$, solutions $u \in C([0, T]; H^s(0, L))$ are obtained in $L^2$-based Sobolev spaces and their behavior as $t \rightarrow + \infty$ is studied. Mathematics Subject Classification: Primary: 35Q55; Secondary: 35Q93. Citation: Jing Cui, Shu-Ming Sun. Nonlinear Schrödinger equations on a finite interval with point dissipation. Mathematical Control & Related Fields, 2019, 9 (2) : 351-384. doi: 10.3934/mcrf.2019017
Difference between revisions of "IX-6315 "Dawn" Electric Propulsion System" (Fixed up electrical consumption description, minor thrust-related updates.) m (typo) (10 intermediate revisions by 8 users not shown) Line 5: Line 5: == Usage == == Usage == − This engine has a phenomenal fuel efficiency (4200 s I<sub>sp</sub>), but very low thrust and requires a substantial amount of [[electricity]] to operate. Xenon gas is provided by xenon containers like the [[PB-X50R_Xenon_Container|PB-X50R Xenon Container]], [[PB-X150 Xenon Container]] or [[PB-X750 Xenon Container]]. Electricity can be obtained using [[solar panel]]s, [[PB-NUK Radioisotope Thermoelectric Generator|radioisotope batteries (RTGs)]], and [[Fuel cell] + This engine has a phenomenal fuel efficiency (4200 s I<sub>sp</sub>), but very low thrust and requires a substantial amount of [[electricity]] to operate. Xenon gas is provided by xenon containers like the [[PB-X50R_Xenon_Container|PB-X50R Xenon Container]], [[PB-X150 Xenon Container]] or [[PB-X750 Xenon Container]]. Electricity can be obtained using [[solar panel]]s, [[PB-NUK Radioisotope Thermoelectric Generator|radioisotope batteries (RTGs)]], and [[Fuel cell]]s. Solar panels are recommended in this case, as RTGs are heavy and will weigh heavily on the craft's already small TWR. The amount of electricity needed to keep one ion engine running at full thrust is roughly equivalent to half the output of one [[Gigantor XL Solar Array]] (however one array will power two engines only near peak sun exposure around Kerbin), 12 [[PB-NUK Radioisotope Thermoelectric Generator]]s, or 25 [[OX-STAT Photovoltaic Panels]] at peak output. When pairing with solar panels, it is highly recommended to bring more capacity than needed (slightly less than 9 Ec/s). Not all panels are at peak output during operations, and maximum available power falls off with the square of the distance from the star.
− Batteries can be used to store the electricity since there may be times the solar panels will be blocked from [[Kerbol|the Sun]] by objects or the dark side of [[Celestial body|celestial bodies]]. + Batteries can be used to store the electricity since there may be times the solar panels will be blocked from [[Kerbol|the Sun]] by objects or the dark side of [[Celestial body|celestial bodies]]. − The ion engine is good for fine tuning of orbits. It was also a popular propulsion method for planes on planets on which [[jet engine]]s don't work, though its current thrust falloff reduces the value there. Due to its great fuel efficiency it is also well-suited for interplanetary travel, but maneuvers tend to take a long time to complete due to its very low [[thrust-to-weight ratio]] -- it is advised to use it for very small craft, and to use [[Time warp#Physical Time Warp|physics-warp]] while propelling with it. Usually the engine is used on long range + The ion engine is good for fine tuning of orbits. It was also a popular propulsion method for planes on planets on which [[jet engine]]s don't work, though its current thrust falloff reduces the value there. Due to its great fuel efficiency it is also well-suited for interplanetary travel, but maneuvers tend to take a long time to complete due to its very low [[thrust-to-weight ratio]] -- it is advised to use it for very small craft, and to use [[Time warp#Physical Time Warp|physics-warp]] while propelling with it. Usually the engine is used on long range craft due to its high efficiency. But when less delta-v is required overall, it can easily be surpassed by smaller liquid-fueled engines such as the [[48-7S "Spark" Liquid Fuel Engine|48-7S "Spark"]] engine with much better TWR. It is impossible to build an ion-rocket which can defeat gravity on [[Kerbin]], because the engine isn't even strong enough to lift itself against gravity, let alone itself and its fuel, a battery and a probe core.
But when on a low-gravity moon like [[Minmus]] or [[Gilly]] it is possible to land, start, enter orbit and reach escape velocity with ion-propulsion alone. Since 0.23.5, it is technically possible to create an ion-powered probe, albeit with a minimum of parts, which will be able to defy [[Mun]] gravity. With that in mind, it is also possible to resist gravity with lightweight Ion craft on Duna, Moho, Dres, Eeloo, and every in-game moon with two exceptions: Laythe and Tylo have too high gravity for a single ion thruster, xenon tank, probe core, and battery. It is impossible to build an ion-rocket which can defeat gravity on [[Kerbin]], because the engine isn't even strong enough to lift itself against gravity, let alone itself and its fuel, a battery and a probe core. But when on a low-gravity moon like [[Minmus]] or [[Gilly]] it is possible to land, start, enter orbit and reach escape velocity with ion-propulsion alone. Since 0.23.5, it is technically possible to create an ion-powered probe, albeit with a minimum of parts, which will be able to defy [[Mun]] gravity. With that in mind, it is also possible to resist gravity with lightweight Ion craft on Duna, Moho, Dres, Eeloo, and every in-game moon with two exceptions: Laythe and Tylo have too high gravity for a single ion thruster, xenon tank, probe core, and battery. Line 15: Line 15: While it is possible to build an airplane powered solely by this engine, the ion engine's efficiency is awful in the [[atmosphere]]. Unless you are going very far from the KSC, jet planes are much cheaper and more efficient. While it is possible to build an airplane powered solely by this engine, the ion engine's efficiency is awful in the [[atmosphere]]. Unless you are going very far from the KSC, jet planes are much cheaper and more efficient. 
− Ion-powered "ferries" may also be useful for moving fuel, oxidizer and/or kerbonauts between two larger vessels, by keeping the large craft at such range that only one of them is within draw distance from the "ferry" at any moment, performance loss can be avoided. It is generally more fuel-efficient to move fuel and oxidizer between two ships using a ferry than it would be to dock the larger + Ion-powered "ferries" may also be useful for moving fuel, oxidizer and/or kerbonauts between two larger vessels; by keeping the large craft at such range that only one of them is within draw distance of the "ferry" at any moment, performance loss can be avoided. It is generally more fuel-efficient to move fuel and oxidizer between two ships using a ferry than it would be to dock the larger craft together using their own engines and RCS. Because it uses only about 0.485 units of xenon per second, one [[PB-X50R Xenon Container]] with 400 units of xenon can supply the engine for almost 14 minutes. The other, larger tank, the [[PB-X150 Xenon Container]] with 700 units of xenon, has enough to supply the engine for more than 24 minutes. Because it uses only about 0.485 units of xenon per second, one [[PB-X50R Xenon Container]] with 400 units of xenon can supply the engine for almost 14 minutes. The other, larger tank, the [[PB-X150 Xenon Container]] with 700 units of xenon, has enough to supply the engine for more than 24 minutes.
Line 24: Line 24: Calculating the mass and unit flow from specific impulse and thrust: Calculating the mass and unit flow from specific impulse and thrust: <math>I_{sp} = \frac{F}{\dot m \cdot g_0} \Rightarrow \frac{F}{I_{sp} \cdot g_0} = \dot m \Rightarrow \frac{2000 N}{4200 s \cdot 9.80665 \frac{m}{s^2}} = 0.04856 \frac{kg}{s}</math> <math>I_{sp} = \frac{F}{\dot m \cdot g_0} \Rightarrow \frac{F}{I_{sp} \cdot g_0} = \dot m \Rightarrow \frac{2000 N}{4200 s \cdot 9.80665 \frac{m}{s^2}} = 0.04856 \frac{kg}{s}</math> − <math>Xe flow = \ + <math>Xe flow = \frac{\dot m}{\rho} = \frac{0.04856\frac{kg}{s}}{0.1\frac{kg}{l}} = 0.4856\frac{l}{s}</math> <math>Electricity flow = Xe * ratio = 0.4856 * 18 = 8.740 zaps/s</math> <math>Electricity flow = Xe * ratio = 0.4856 * 18 = 8.740 zaps/s</math> Line 48: Line 48: == Changes == == Changes == + + ;[[0.25]] ;[[0.25]] * Moved to Propulsion from Utility. * Moved to Propulsion from Utility. Latest revision as of 12:35, 14 May 2018 IX-6315 "Dawn" Electric Propulsion System Ion engine by Ionic Symphonic Protonic Electronics Radial size Tiny Cost (total) 8 000.00 Mass (total) 0.25 t Drag 0.2 Max. Temp. 2000 K Impact Tolerance 7 m/s Research Ion Propulsion Unlock cost 16 800 Since version 0.18 Part configuration ionEngine Ion engine Maximum thrust (1 atm) 0.05 kN (vacuum) 2.00 kN I sp (1 atm) 100 s (vacuum) 4200 s Fuel consumption 0.486 /s Thrust vectoring × No Electricity required 367.098 ⚡/s 8.740 ⚡/s Testing Environments On the surface × No In the ocean × No On the launchpad × No In the atmosphere × No Sub orbital ✓ Yes In an orbit ✓ Yes On an escape ✓ Yes Docked × No Test by staging ✓ Yes Manually testable ✓ Yes Contents Usage This engine has a phenomenal fuel efficiency (4200 s I sp), but very low thrust and requires a substantial amount of electricity to operate. Xenon gas is provided by xenon containers like the PB-X50R Xenon Container, PB-X150 Xenon Container or PB-X750 Xenon Container.
Electricity can be obtained using solar panels, radioisotope batteries (RTGs), and fuel cells. Solar panels are recommended in this case, as RTGs are heavy and will weigh heavily on the craft's already small TWR. The amount of electricity needed to keep one ion engine running at full thrust is roughly equivalent to half the output of one Gigantor XL Solar Array (however one array will power two engines only near peak sun exposure around Kerbin), 6 Fuel Cells, 12 PB-NUK Radioisotope Thermoelectric Generators, or 25 OX-STAT Photovoltaic Panels at peak output. When pairing with solar panels, it is highly recommended to bring more capacity than needed (slightly less than 9 Ec/s). Not all panels are at peak output during operations, and maximum available power falls off with the square of the distance from the star. Batteries can be used to store the electricity since there may be times the solar panels will be blocked from the Sun by objects or the dark side of celestial bodies. For extended burns in darkness, fuel cells happen to be a good choice. When the power is provided by fuel cells, the majority of the mass flow (about 69.2%) is liquid fuel and oxidizer used by the fuel cell. Thus the ion engine powered by the fuel cell may be seen as having a much more modest but still impressive effective Isp of 1293 s. However, if the burn takes no more than a couple of hours, the stack of RTGs providing the same amount of power (and thus thrust) tends to be heavier than the fuel cell array and its fuel tank. The RTGs should be reserved for very long low-thrust burns in deep space. The ion engine is good for fine tuning of orbits. It was also a popular propulsion method for planes on planets on which jet engines don't work, though its current thrust falloff reduces the value there.
Due to its great fuel efficiency it is also well-suited for interplanetary travel, but maneuvers tend to take a long time to complete due to its very low thrust-to-weight ratio -- it is advised to use it for very small craft, and to use physics-warp while propelling with it. Usually the engine is used on long range craft due to its high efficiency. But when less delta-v is required overall, it can easily be surpassed by smaller liquid-fueled engines such as the 48-7S "Spark" engine with much better TWR. It is impossible to build an ion-rocket which can defeat gravity on Kerbin, because the engine isn't even strong enough to lift itself against gravity, let alone itself and its fuel, a battery and a probe core. But when on a low-gravity moon like Minmus or Gilly it is possible to land, start, enter orbit and reach escape velocity with ion propulsion alone. Since 0.23.5, it is technically possible to create an ion-powered probe, albeit with a minimum of parts, which will be able to defy Mun gravity. With that in mind, it is also possible to resist gravity with a lightweight ion craft on Duna, Moho, Dres, Eeloo, and every in-game moon with two exceptions: Laythe and Tylo have too high gravity for a single ion thruster, xenon tank, probe core, and battery. While it is possible to build an airplane powered solely by this engine, the ion engine's efficiency is awful in the atmosphere. Unless you are going very far from the KSC, jet planes are much cheaper and more efficient. Ion-powered "ferries" may also be useful for moving fuel, oxidizer and/or kerbonauts between two larger vessels; by keeping the large craft at such range that only one of them is within draw distance of the "ferry" at any moment, performance loss can be avoided. It is generally more fuel-efficient to move fuel and oxidizer between two ships using a ferry than it would be to dock the larger craft together using their own engines and RCS.
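The effective-Isp figure quoted above for fuel-cell power can be reproduced with a quick calculation (a sketch of mine; the 69.2% mass-flow split is the figure stated in the text):

```python
# Effective Isp when the electricity comes from fuel cells: if 69.2% of the
# total mass flow is the fuel cell's liquid fuel + oxidizer (figure from the
# text above), xenon is only 30.8% of it, and Isp scales down by that factor.
G0 = 9.80665                         # m/s^2
isp_vac = 4200.0                     # ion engine vacuum Isp, seconds
mdot_xe = 2000.0 / (isp_vac * G0)    # kg/s of xenon at full thrust (F = 2 kN)
mdot_total = mdot_xe / (1 - 0.692)   # include fuel-cell propellant mass flow
isp_eff = 2000.0 / (mdot_total * G0)
print(round(isp_eff, 1))             # ≈ 1293.6, matching the "1293 s" above
```

Equivalently, isp_eff is just 4200 × 0.308; the script spells out where the factor comes from.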
Because it uses only about 0.485 units of xenon per second, one PB-X50R Xenon Container with 400 units of xenon can supply the engine for almost 14 minutes. The other, larger tank, the PB-X150 Xenon Container with 700 units of xenon, has enough to supply the engine for more than 24 minutes. Electrical vs Xenon consumption While the consumption ratios listed in the part.cfg files are normally relative mass flows (1.8 electricity and 0.1 xenon here), this breaks down somewhat with massless resources like electricity. Rather, the entire mass flow goes to the xenon, with the relative ratio (1.8 to 0.1, or 18 overall) creating a seemingly disproportionate drain. Calculating the mass and unit flow from specific impulse and thrust: Product description “ By emitting ionized xenon gas through a small thruster port, Dawn can produce incredibly efficient propulsion, but with a downside of very low thrust and high energy usage. According to ISP Electronics sales reps, the rumours of this engine being powered by "dark magic" are largely exaggerated. — ” “ By emitting ionized xenon gas through a small thruster port, the PB-ION can produce incredibly efficient propulsion, but with a downside of very low thrust and high expense. The general perception of this engine as being powered by "witchcraft" has unfortunately given it a sour reputation. — ” Trivia Although the 2 kN IX-6315 "Dawn" Electric Propulsion System is considered to have very low thrust in KSP, real-life Hall effect thrusters typically have orders of magnitude less thrust, usually below 1 N (0.001 kN). They make up for this by having a service life of thousands of hours of continuous operation and consuming fuel extremely slowly, which would have been impractical in the game. The name "Dawn" is a reference to NASA's Dawn spacecraft, NASA's first interplanetary space probe to use an ion engine for propulsion. Gallery The Ion-Powered Space Probe stock craft, powered by an ion engine.
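The consumption figures above follow directly from the part stats (F = 2 kN in vacuum, Isp = 4200 s, xenon density 0.1 kg per unit, electricity:xenon resource ratio 1.8 : 0.1 = 18); a short script of mine reproduces them:

```python
# Reproducing the consumption figures from the part stats quoted above.
G0 = 9.80665                      # m/s^2
mdot = 2000.0 / (4200.0 * G0)     # mass flow in kg/s, from Isp = F / (mdot * g0)
xe_flow = mdot / 0.1              # xenon units (litres) per second
ec_flow = xe_flow * 18            # electric charge per second (ratio 1.8 : 0.1)
print(round(mdot, 5), round(xe_flow, 3), round(ec_flow, 2))
# → 0.04856 0.486 8.74

burn_400 = 400 / xe_flow / 60     # minutes on a PB-X50R (400 units)
burn_700 = 700 / xe_flow / 60     # minutes on a PB-X150 (700 units)
print(round(burn_400, 1), round(burn_700, 1))   # "almost 14" and "more than 24"
```

The burn-time figures match the "almost 14 minutes" and "more than 24 minutes" quoted in the Usage section.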
Changes Moved from Utility to Engines Moved to Propulsion from Utility. Ionic Protonic Electronics renamed Ionic Symphonic Protonic Electronics, description changed, thrust increased from 0.5 to 2, relative electricity reduced from 12 to 1.8. Initial Release
Astérisque Volume: 341; 2012; 113 pp; Softcover MSC: Primary 35; 37; Print ISBN: 978-2-85629-335-5 Product Code: AST/341 List Price: $45.00 AMS Member Price: $36.00 A Quasi-Linear Birkhoff Normal Forms Method. Application to the Quasi-Linear Klein-Gordon Equation on \(\mathbb{S}^{1}\) J.-M. Delort A publication of the Société Mathématique de France Consider a nonlinear Klein-Gordon equation on the unit circle, with smooth data of size \(\epsilon \to 0\). A solution \(u\) which, for any \(\kappa \in \mathbb{N}\), may be extended as a smooth solution on a time-interval \(]-c_\kappa \epsilon ^{-\kappa },c_\kappa \epsilon ^{-\kappa }[\) for some \(c_\kappa >0\) and for \(0<\epsilon <\epsilon _\kappa \), is called an almost global solution. It is known that when the nonlinearity is a polynomial depending only on \(u\), and vanishing at order at least \(2\) at the origin, any smooth small Cauchy data generate, as soon as the mass parameter in the equation stays outside a subset of zero measure of \(\mathbb{R}_+^*\), an almost global solution, whose Sobolev norms of higher order stay uniformly bounded. The goal of this book is to extend this result to general Hamiltonian quasi-linear nonlinearities. These are the only Hamiltonian nonlinearities that depend not only on \(u\) but also on its space derivative. To prove the main theorem, the author develops a Birkhoff normal form method for quasi-linear equations. A publication of the Société Mathématique de France, Marseilles (SMF), distributed by the AMS in the U.S., Canada, and Mexico. Orders from other countries should be sent to the SMF. Members of the SMF receive a 30% discount from list. Readership Graduate students and research mathematicians interested in Birkhoff normal forms, quasi-linear Hamiltonian equations, almost global existence, and Klein-Gordon equations.
To put things in context I'll first present a straightforward method inspired by the classical evaluation of square roots (in short: "if we know that $a^2 \le N <(a+1)^2$ then the next digit $d$ will have to verify $(10a+d)^2 \le 10^2 N <(10a+d+1)^2$, which means that we want the largest digit $d$ such that $(20a+d)d\le 10^2(N-a^2)$"): To evaluate the cube root of $N$, suppose that $a^3 \le N <(a+1)^3$; then the next digit $d$ will have to verify $(10a+d)^3 \le 10^3 N <(10a+d+1)^3$, so we want the largest digit $d$ such that $\left(30a(10a+d)+d^2\right)d \le 10^3(N-a^3)$. To get a feeling for this method, let's evaluate $\sqrt[3]{2}$ starting with $N=2,\ a=1$: $\begin{array} {r|l}2.000.000 & 1\\\hline \\-1.000.000 & 1.25\\1.000.000 & \\-728.000 & \\272.000 & \\-225.125 & \\46.875 & \\\end{array}$ $a=1$, so the first decimal must verify $(30(10+d)+d^2)d \le 1000$, that is $d=2$. $a=12$, and the second decimal must verify $(360(120+d)+d^2)d \le 272000$, so that $d=5$. (Notice that this is 'nearly' $360\cdot 120\cdot d \le 272000$, so that $d=5$ or $d=6$: we don't really need to try all the digits!) I could have continued, but observed that for $d=6$ the evaluation returned $272376$, so that the relative error on $d$ is $\epsilon_1 \approx \frac{376}{272376+360\cdot 6^2}\approx 0.001318$, giving $d\approx 5.9921$ and the solution $\sqrt[3]{2}\approx 1.259921$. Now let's give a chance to Nirbhay Singh Nahar's method exposed here. Consider $N=2000$; then $x=1\cdot 10=10$. The NAHNO approximate formula is: $$A= \frac 12\left[x+\sqrt{\frac{4N-x^3}{3x}}\right]= \frac 12\left[10+\sqrt{\frac{4\cdot 2000-10^3}{3\cdot 10}}\right]\approx 12.6376$$ Doesn't look very good...
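The digit-by-digit recipe above is easy to mechanize. Here is a minimal Python sketch (the function name `cube_root_digits` is mine, not from the post); instead of the rearranged test $\left(30a(10a+d)+d^2\right)d \le 10^3(N-a^3)$ it compares $(10a+d)^3 \le N$ directly after scaling $N$ by $1000$ at each step, which is the same inequality before expansion:

```python
def cube_root_digits(N, ndigits):
    """Digit-by-digit cube root of the integer N, as a decimal string."""
    a = 0
    while (a + 1) ** 3 <= N:           # integer part: largest a with a^3 <= N
        a += 1
    result = str(a) + "."
    for _ in range(ndigits):
        N *= 1000                      # shift in three more digits of N
        d = 0
        for cand in range(9, 0, -1):   # largest digit d with (10a+d)^3 <= N
            if (10 * a + cand) ** 3 <= N:
                d = cand
                break
        a = 10 * a + d
        result += str(d)
    return result

print(cube_root_digits(2, 6))  # 1.259921
```

Because everything stays in exact integer arithmetic, this produces correct digits indefinitely, unlike the floating-point shortcuts compared below.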
Let's give the formula a second chance by providing a much better value $x=12.5$; then the formula returns $A=12.5992125$, not so far from $\sqrt[3]{2000}= 12.59921049894873\cdots$. But $x=12.5$ is really near the solution, so let's compare this method with Newton's iterations $\displaystyle x'=x-\frac{x^3-N}{3x^2}$: $x_0=12.5\to x_1=12.6\to x_2=12.599210548\cdots \to x_3=12.5992104989487318\cdots$ EDIT: I missed the 'Precise Value of Cube Root' using the following formula: $$P=A\frac{4N-A^3}{3N}$$ (I updated the picture and added this formula as well as the third Newton iteration.) The NAHNO approximate formula is better than the first Newton iteration but weaker than the second. The precise NAHNO formula is beaten only by the third Newton approximation, as you may see in this picture (the curves are, from top to bottom: first Newton iteration, approximate NAHNO, second Newton iteration, precise NAHNO, third Newton iteration; the NAHNO curves are darker, the vertical scale is logarithmic, and 'lower is better'): The vertical axis shows $\ \log \left| \frac {A(N)}{N^{\frac 13}}-1\right|$ for $N$ in $(1000,50000)$. The vertical lines are values of $N$ such that $2\sqrt[3]{N}$ is an integer (where the initial estimation is nearly the solution). So, considered as approximate formulas, the NAHNO formulas are rather good and could be made more precise with a better first approximation (especially for $x$ between $1$ and $2.5$, more values should be provided in the table). Avoiding extravagant claims could be an advantage too! :-)
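For reference, the Newton iteration used in the comparison is a one-liner; this sketch (function name mine) reproduces the iterates quoted for $N=2000$, $x_0=12.5$:

```python
def cube_root_newton(N, x0, steps):
    """Newton's method for x^3 = N: x' = x - (x^3 - N) / (3 x^2)."""
    x = x0
    for _ in range(steps):
        x -= (x ** 3 - N) / (3 * x ** 2)
    return x

# Iterates for N = 2000 starting from x0 = 12.5; the error roughly
# squares at each step (quadratic convergence).
for k in range(1, 4):
    print(cube_root_newton(2000, 12.5, k))
```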
I was recently helping a college math student with her homework. Her teacher had offered an extra-credit question: Find two alternating series $\sum_{n=1}^\infty (-1)^{n-1}a_n$ such that $a_{n+1} \leq a_n$ for all $n$, but $\lim_{n\to\infty} a_n \neq 0$. One of the provided series should converge, and the other should diverge. A divergent series was easy to find: $\sum_{n=1}^\infty (-1)^{n-1} \left(1+\frac{1}{n}\right)$. I'm having a much harder time coming up with a convergent series, though. In fact, I suspect there isn't one. Informally (since it's been many years since I myself studied this topic): Since $\lim_{n\to\infty}a_n \neq 0$, the sequence either diverges or converges to some other number. Since the sequence is nonnegative and monotone nonincreasing, it cannot diverge, so let $L > 0$ be the number to which it converges. Then the odd-indexed terms of the alternating series converge to $L$ from above, and the even-indexed terms converge to $-L$ from below. Each partial sum then differs from the previous one by at least $L$ in absolute value, so the partial sums are not Cauchy and the series does not converge. So... Did the teacher offer an impossible problem on purpose, or is there a flaw in my reasoning?
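As a quick numerical sanity check on the divergence argument (a sketch; the helper name is mine): consecutive partial sums of the divergent example differ by $1+\frac{1}{n+1}>1$, so they can never form a Cauchy sequence:

```python
def partial_sums(N):
    """Partial sums of sum_{n>=1} (-1)^(n-1) * (1 + 1/n)."""
    s, out = 0.0, []
    for n in range(1, N + 1):
        s += (-1) ** (n - 1) * (1 + 1 / n)
        out.append(s)
    return out

S = partial_sums(10)
gaps = [abs(S[i + 1] - S[i]) for i in range(len(S) - 1)]
print(min(gaps))  # every gap exceeds 1, so the partial sums never settle
```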
I don't claim to have a full answer (yet! I hope to update this, as it's an interesting issue to try and explain well). But let me start with a few clarifying comments... But if it really is just constructive interference of complicated states, why not just perform this interference with classical waves? The glib answer is that it's not just interference. I think what it really comes down to is that quantum mechanics uses different axioms of probability (probability amplitudes) to classical physics, and these are not reproduced in the wave scenario. When someone writes about "waves", I naturally think about water waves, but that may not be the most helpful picture to have. Let's think instead about an ideal guitar string. On a string of length $L$ (pinned at both ends), this has wavefunctions$$y_n(x,t)=A_n\sin\left(\omega_nt\right)\sin\left(\frac{n\pi x}{L}\right).$$Let's define the concept of a w-bit ("wave bit"). We can limit ourselves to, say, 4 modes on the string, and associate$$|00\rangle\equiv y_1 \qquad |01\rangle\equiv y_2\qquad |10\rangle\equiv y_3 \qquad |11\rangle\equiv y_4$$Now since we can prepare the initial shape of the string to be anything we want (subject to the boundary conditions), we can create any arbitrary superposition of those 4 states. So, the theory certainly includes things that look like superposition and entanglement. However, they are not superposition and entanglement as we understand them in quantum theory. A key feature of quantum theory is that it contains indeterminism - that the outcomes of some measurements are inherently unpredictable. We don't start or end our computation from these points, but we must go through them somewhere during the computation$^*$. For example, experimental tests of Bell's Theorem have shown that the world is not locally deterministic (no local hidden-variable theory reproduces its statistics, and, so far, everything conforms to what quantum theory predicts).
The wave-bit theory is entirely deterministic: I can look at the string of my guitar, whatever weird shape it might be in, and my looking at it does not change its shape. Moreover, I can even determine the values of the $\{A_n\}$ in a single shot, and therefore know what shape it will be in at all later times. This is very different to quantum theory, where there are different bases that can give me different information, but I can never access all of it (indeterminism). $^*$ I don't have a complete proof of this. We know that entanglement is necessary for quantum computation, and that entanglement can demonstrate indeterminism, but that's not quite enough for a precise statement. Contextuality is a similar measure of indeterminism but for single qubits, and results along those lines have started to become available recently, see here, for broad classes of computations. Another way to think about this might be to ask: what computational operations can we perform with these waves? Presumably, even if you allow some non-linear interactions, the operations can be simulated by a classical computer (after all, classical gates include non-linearity). I assume that the $\{A_n\}$ function like classical probabilities, not probability amplitudes. This might be one way of seeing the difference (or at least heading in the right direction). There's a way of performing quantum computation called measurement-based quantum computation. You prepare your system in some particular state (which, we've already agreed, we could do with our w-bits), and then you measure the different qubits. Your choice of measurement basis determines the computation. But we can't do that here because we don't have that choice of basis. And on that matter, if the figure-of-merit is simply how few steps something can be calculated in, why not start with a complicated dynamical system that has the desired computation embedded in it. (i.e., why not just create "analog simulators" for specific problems?)
This is not the figure of merit. The figure of merit is really "How long does it take to perform the computation" and "how does that time scale as the problem size changes?". If we choose to break everything down in terms of elementary gates, then the first question is essentially how many gates are there, and the second is how does the number of gates scale. But we don't have to break it down like that. There are plenty of "analog quantum simulators". Feynman's original specification of a quantum computer was one such analogue simulator. It's just that the time feature manifests in a different way. There, you're talking about implementing a Hamiltonian evolution $H$ for a particular time $t_0$, $e^{-iHt_0}$. Now, sure, you could implement $2H$, and replace $t_0$ with $t_0/2$, but practically, the coupling strengths in $H$ are limited, so there's a finite time that things take, and we can still ask how that time scales with the problem size. Similarly, there's adiabatic quantum computation. There, the time required is determined by the energy gap between the ground and the first excited state. The smaller the gap, the longer your computation takes. We know that all 3 models are equivalent in the time they take (up to polynomial conversion factors, which are essentially irrelevant if you're talking about an exponential speed-up). So, analog quantum simulators are certainly a thing, and there are those of us who think they're a very sensible thing at least in the short-term. My research, for example, is very much about "how do we design Hamiltonians $H$ so that their time evolution $e^{-iHt_0}$ creates the operations that we want?", aiming to do everything we can in a language that is "natural" for a given quantum system, rather than having to coerce it into performing a whole weird sequence of quantum gates.
There's a 59.5125% chance of survival. Naively, we might have thought there'd be a 55% chance of survival as 55% of the roll results are good. But a 20 is slightly better for you than a 1 is bad, which pushes the probability up a bit. Let's see how. The approach The simplest way to tackle this is to look at the probabilities of surviving in exclusive ways, then combine those probabilities. Remember the rules of combining probabilities of multiple events: If we want to know the probability of (A or B), when A and B don't overlap, we sum the probability of A with the probability of B. (We'll be constructing all of our scenarios below without overlap.) If we want to know the probability of (A and B), we multiply the probability of A with the probability of B. Notation We'll use a number or a range of numbers in brackets to indicate the probability of that result on a d20. That is, \$[4]=\frac 1 {20}\$, \$[7]=\frac 1 {20}\$, and \$[10-20]=\frac {11} {20}\$. In this manner we'll generally be concerned with four different results: \$[1]\$ (\$\frac 1 {20}\$ chance), \$[2-9]\$ (\$\frac 8 {20}\$), \$[10-19]\$ (\$\frac {10} {20}\$), and \$[20]\$ (\$\frac 1 {20}\$ again). When we want to indicate a particular sequence of rolls, we'll list them in order: \$[1][1-9]\$ would be read as "the probability of rolling a 1, then some single-digit number." When we want to indicate a particular selection of rolls, but don't care about their order * we'll group the unordered results in curly-brackets. Thus \$\{[2-9][10-19][2-9]\}[20]\$ would be read as "the probability of rolling two simple failures and a simple success in any order, followed by a 20." Ways to stabilize after... The probabilities of stabilizing on the first, second, third, &c. rolls are exclusive: one can stabilize on the first or on the second, but not both. So we'll determine each of these probabilities and sum them up according to our "or-rule" from above.
1 roll: $$[20]=\frac 1 {20}$$ 2 rolls: $$[1-19][20]= \frac{19}{20} \times \frac 1 {20}= \frac{19}{400}$$ 3 rolls: $$\begin{align} [10-19][10-19][10-19] = \frac {10} {20} \times \frac{10}{20} \times \frac{10}{20} &= \frac 1 8 \\\text{or} \quad \{[1][10-19]\}[20] = \left\{2 \times \left(\frac 1 {20} \times \frac{10}{20}\right)\right\} \times \frac 1 {20} &= \frac 1 {400} \\\text{or} \quad [2-19][2-19][20] = \frac{18}{20} \times \frac{18}{20} \times \frac 1{20} &= \frac{81}{2000}\end{align}$$ $$\left(\text{total: } \frac 1 8 + \frac 1 {400} + \frac{81}{2000} = \frac{21}{125}\right)$$ 4 rolls: $$\begin{align} \{[10-19][10-19][1-9]\}[10-20]=\left\{3 \times \left( \frac{10}{20} \times \frac{10}{20} \times \frac 9 {20}\right)\right\} \times \frac{11}{20} &= \frac {297}{1600}\\\text{or} \quad \{[2-9][2-9][10-19]\}[20]=\left\{3 \times \left(\frac{8}{20} \times \frac{8}{20} \times \frac {10}{20}\right)\right\} \times \frac {1}{20} &= \frac{3}{250}\end{align}$$ $$\left(\text{total: } \frac {297}{1600} + \frac{3}{250} = \frac{1581}{8000}\right)$$ 5 rolls: $$\{[2-9][2-9][10-19][10-19]\}[10-20]=\left\{6 \times \left(\frac{8}{20} \times \frac{8}{20} \times \frac{10}{20} \times \frac{10}{20}\right)\right\} \times \frac{11}{20}=\frac{33}{250}$$ Summing it all up: $$\frac 1 {20} + \frac{19}{400} + \frac{21}{125} + \frac{1581}{8000} + \frac{33}{250} = \frac{4761}{8000} = 0.595125$$ Bonus for reading this far: expected time to recovery With these probabilities in hand it's easy to give an expected time-of-stabilization for those that do stabilize: $$t_\text{stable} = \dfrac{\left(\begin{align} 1 & \times \frac 1 {20} \\+2 & \times \frac{19}{400} \\+3 & \times \frac{21}{125} \\+4 & \times \frac{1581}{8000} \\+5 & \times \frac{33}{250}\end{align}\right)}{\frac{4761}{8000}} = 3.5278 \, \text{rounds, or about 21 sec}$$ * - Order is really important. Sometimes. 
For instance, if we want to know the probability of the first two rolls being a success and a failure, we have to count all the ways that we could get a success then a failure, and add on all the ways we could get a failure then a success. But some of our rolls are indistinguishable events, and that's where it gets fun. If we want to count the ways of our first three rolls being two failures and a success we only have three orders we can consider (FFS, FSF, SFF), not the six one might expect (ABC, ACB, BAC, BCA, CAB, CBA). The count of possible arrangements of n objects with multiplicities \$m_1, m_2, \dots\$ is \$N = \dfrac{n!}{m_1! \times m_2! \times \dots}\$. See if you can spot the three times this pops up!
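The whole case analysis can be cross-checked by a short recursion over (successes, failures) states, using exact rational arithmetic; here's a sketch (the function name is mine):

```python
from fractions import Fraction

def p_stabilize(successes=0, failures=0):
    """Chance of stabilizing: roll a d20 each round; 1 = two failures,
    2-9 = one failure, 10-19 = one success (three successes stabilize),
    natural 20 = stabilize immediately; three failures = death."""
    if failures >= 3:
        return Fraction(0)
    if successes >= 3:
        return Fraction(1)
    return (Fraction(1, 20) * p_stabilize(successes, failures + 2)    # rolled a 1
          + Fraction(8, 20) * p_stabilize(successes, failures + 1)    # rolled 2-9
          + Fraction(10, 20) * p_stabilize(successes + 1, failures)   # rolled 10-19
          + Fraction(1, 20))                                          # rolled a 20

print(p_stabilize())  # 4761/8000 = 0.595125
```

The recursion explores exactly the same exclusive scenarios as the hand count above, and lands on the same 4761/8000.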
To Xi'an's first point: When you're talking about $\sigma$-algebras, you're asking about measurable sets, so unfortunately any answer must focus on measure theory. I'll try to build up to that gently, though. A theory of probability admitting all subsets of uncountable sets will break mathematics. Consider this example: Suppose you have a unit square in $\mathbb{R}^2$, and you're interested in the probability of randomly selecting a point that is a member of a specific set in the unit square. In lots of circumstances, this can be readily answered based on a comparison of the areas of the different sets. For example, we can draw some circles, measure their areas, and then take the probability as the fraction of the square falling in the circle. Very simple. But what if the area of the set of interest is not well-defined? If the area is not well-defined, then we can reason to two different but completely valid (in some sense) conclusions about what the area is. So we could have $P(A)=1$ on the one hand and $P(A)=0$ on the other hand, which implies $0=1$. This breaks all of math beyond repair. You can now prove $5<0$ and a number of other preposterous things. Clearly this isn't too useful. $\sigma$-algebras are the patch that fixes math What is a $\sigma$-algebra, precisely? It's actually not that frightening. It's just a definition of which sets may be considered as events. Elements not in $\mathscr{F}$ simply have no defined probability measure. Basically, $\sigma$-algebras are the "patch" that lets us avoid some pathological behaviors of mathematics, namely non-measurable sets. A $\sigma$-field is a collection of sets with three properties: closure under countable unions, closure under countable intersections, and closure under complements. These requirements can be considered as consequences of what we would like to do with probability. The countable unions and countable intersections components are direct consequences of the non-measurable set issue.
Closure under complements is a consequence of the Kolmogorov axioms: if $P(A)=2/3$, then $P(A^c)$ ought to be $1/3$. But without (3), it could happen that $P(A^c)$ is undefined. That would be strange. Closure under complements and the Kolmogorov axioms let us say things like $P(A\cup A^c)=P(A)+1-P(A)=1$. Finally, we are considering events in relation to $\Omega$, so we further require that $\Omega\in\mathscr{F}$. Good news: $\sigma$-algebras are only strictly necessary for uncountable sets But! There's good news here, also. Or, at least, a way to skirt the issue. We only need $\sigma$-algebras if we're working in a set with uncountable cardinality. If we restrict ourselves to countable sets, then we can take $\mathscr{F}=2^\Omega$, the power set of $\Omega$, and we won't have any of these problems because for countable $\Omega$, $2^\Omega$ consists only of measurable sets. (This is alluded to in Xi'an's second comment.) You'll notice that some textbooks will actually commit a subtle sleight-of-hand here, and only consider countable sets when discussing probability spaces. Additionally, in geometric problems in $\mathbb{R}^n$, it's perfectly sufficient to only consider $\sigma$-algebras composed of sets for which the $\mathcal{L}^n$ measure is defined. To ground this somewhat more firmly, $\mathcal{L}^n$ for $n=1,2,3$ corresponds to the usual notions of length, area and volume. So what I'm saying in the previous example is that the set needs to have a well-defined area for it to have a geometric probability assigned to it. And the reason is this: if we admit non-measurable sets, then we can end up in situations where we can assign probability 1 to some event based on some proof, and probability 0 to the same event based on some other proof. But don't let the connection to uncountable sets confuse you! A common misconception is that $\sigma$-algebras are countable sets. In fact, they may be countable or uncountable.
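For a finite $\Omega$ the power-set claim is easy to verify by brute force; here's a small sketch (helper names mine), where closure under countable unions reduces to closure under finite unions:

```python
from itertools import chain, combinations

def power_set(omega):
    """All subsets of omega, as frozensets."""
    s = list(omega)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

omega = frozenset({1, 2, 3})
F = power_set(omega)

assert omega in F                                 # contains the whole space
assert all(omega - A in F for A in F)             # closed under complements
assert all(A | B in F for A in F for B in F)      # closed under unions
assert all(A & B in F for A in F for B in F)      # closed under intersections
print(len(F))  # 8 = 2^3 events, every one of them measurable
```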
Consider this illustration: as before, we have a unit square. Define $$\mathscr{F}=\text{All subsets of the unit square with defined $\mathcal{L}^2$ measure}.$$ You can draw a square $B$ with side length $s$ for any $s \in (0,1)$, and with one corner at $(0,0)$. It should be clear that this square is a subset of the unit square. Moreover, all of these squares have defined area, so these squares are elements of $\mathscr{F}$. But it should also be clear that there are uncountably many such squares $B$, each with defined Lebesgue measure, so this $\mathscr{F}$ is uncountable. As a practical matter, simply declaring that you only consider Lebesgue-measurable sets is often enough to gain headway on the problem of interest. But wait, what's a non-measurable set? I'm afraid I can only shed a little bit of light on this myself. But the Banach-Tarski paradox (sometimes the "sun and pea" paradox) can help us some: Given a solid ball in 3‑dimensional space, there exists a decomposition of the ball into a finite number of disjoint subsets, which can then be put back together in a different way to yield two identical copies of the original ball. Indeed, the reassembly process involves only moving the pieces around and rotating them, without changing their shape. However, the pieces themselves are not "solids" in the usual sense, but infinite scatterings of points. The reconstruction can work with as few as five pieces. A stronger form of the theorem implies that given any two "reasonable" solid objects (such as a small ball and a huge ball), either one can be reassembled into the other. This is often stated informally as "a pea can be chopped up and reassembled into the Sun" and called the "pea and the Sun paradox". So if you're working with probabilities in $\mathbb{R}^3$ and you're using the geometric probability measure (the ratio of volumes), you want to work out the probability of some event.
But you'll struggle to define that probability precisely, because you can rearrange the sets of your space to change volumes! If probability depends on volume, and you can change the volume of the set to be the size of the sun or the size of a pea, then the probability will also change. So no event will have a single probability ascribed to it. Even worse, you can rearrange $S\subseteq\Omega$ so that $V(S)>V(\Omega)$, which implies that the geometric probability measure reports a probability $P(S)>1$, in flagrant violation of the Kolmogorov axioms, which require $P(\Omega)=1$ (so that no event can have probability greater than $1$). To resolve this paradox, one could make one of four concessions: The volume of a set might change when it is rotated. The volume of the union of two disjoint sets might be different from the sum of their volumes. The axioms of Zermelo–Fraenkel set theory with the axiom of Choice (ZFC) might have to be altered. Some sets might be tagged "non-measurable", and one would need to check whether a set is "measurable" before talking about its volume. Option (1) doesn't help us define probabilities, so it's out. Option (2) violates the second Kolmogorov axiom, so it's out. Option (3) seems like a terrible idea because ZFC fixes so many more problems than it creates. But option (4) seems attractive: if we develop a theory of what is and is not measurable, then we will have well-defined probabilities in this problem! This brings us back to measure theory, and our friend the $\sigma$-algebra.
What is the correct formula to transform AC current from a Wye connection to a Delta connection? I am not an electrical engineer, and this is my first question on this part of StackExchange. I hope it is understandable, and a "good" question. Please let me know what can be done better. In the following question, I will use the notation of the following picture. Background I am not an expert on this field, so I would like to add some background. Maybe there is a much better solution that I don't see at the moment. The basic equation of an electrical machine is given by: $$u^k(t) = Ri^k(t) + \frac{\mathrm{d}\Psi^k(t)}{\mathrm{d}t}=Ri^k(t)+u^k_{ind}(t), \tag{1}\label{1}$$ with the induced voltage \$u_{ind}^k\$. This equation holds for each phase \$k∈\{1,2,3\}\$ in the wye connection; by symmetry it does not matter which phase we use. The reason this equation is important is the computation of the maximum torque curve, defined as the maximum torque the machine can produce at each rotational speed. There are two restrictions: \$i_{max}\$ from the power supply and \$u_{max}\$ from the converter. Computing the rms values of both current and voltage (Eq \eqref{1}), one can compute the curve by $$\begin{align*} \text{for each rot-speed } n \text{ find} &&\max T(i,γ) \\ \text{such that}&& i_{rms}^k<i_{max} \\ &&u^k_{rms} <u_{max} \end{align*}$$ with the electrical displacement angle γ in the region of field weakening. The result will be a graph like this. The software I am using to simulate electrical machines uses the wye connection to compute all results. Having a motor with a delta connection, I need to transform: $$\begin{align*} u_{ind}^{\star,1} &→ u_{ind}^{Δ,12} \\ i^{\star,1} &→ i^{Δ,12} \\ R^{\star} &→ R^Δ \end{align*} $$ so that it is possible to evaluate \eqref{1} in the delta scheme. The resistance can be transformed quite simply using the rules written on Wikipedia.
Problem The current is an ideal sinusoidal value, so one can just apply the √3 factor to transform from wye to delta. If the induced voltage were sinusoidal, you could transform it as well. But it is not, as you can see in the following picture of the three induced voltages. This picture is just the computation of one specific configuration, so the "sinusoidalness" can be better, but also worse. Solution (Approach) So far I think the transformation for the induced voltage is: $$\begin{align*} u_{ind}^{Δ,12}(t) &=u_{ind}^{\star,1}(t)+u_{ind}^{\star,2}(t) \\ u_{ind}^{Δ,23}(t) &=u_{ind}^{\star,2}(t)+u_{ind}^{\star,3}(t) \\ u_{ind}^{Δ,31}(t) &=u_{ind}^{\star,3}(t)+u_{ind}^{\star,1}(t) \end{align*}$$ because of Kirchhoff's voltage law applied to the corresponding loop. So far so good. Now I know the induced voltage in the delta scheme. But I can't simply do this: $$u^{Δ,12}(t) = R^{Δ}i^1(t)\sqrt{3}+u^{Δ,12}_{ind}(t)$$ as these two values would be out of phase. For the current, I want to use Kirchhoff's current law on each corner node, to get: $$\begin{align*} i^{\star,1} &= i^{Δ,12}(t) - i^{Δ,31}(t) \\ i^{\star,2} &= i^{Δ,23}(t) - i^{Δ,12}(t) \\ i^{\star,3} &= i^{Δ,31}(t) - i^{Δ,23}(t) \\ \end{align*} ⇔ \begin{pmatrix}i^{\star,1}(t) \\ i^{\star,2}(t) \\ i^{\star,3}(t)\end{pmatrix}= \underbrace{\begin{pmatrix} 1 & 0 & -1 \\ -1 & 1 & 0 \\ 0 & -1 & 1\end{pmatrix}}_{=:T^{Δ→\star}}\begin{pmatrix}i^{Δ,12}(t) \\ i^{Δ,23}(t) \\ i^{Δ,31}(t)\end{pmatrix} $$ using counter-clockwise currents in the delta connection. Now I would like to do this transformation in the other direction, as I am interested in the delta values, but the problem is that \$T^{Δ→\star}\$ is not invertible, so the above transformation is not uniquely solvable. There has to be a mistake?! I think it has to do with the fact that all three currents add up to 0. Maybe there is a different way. I also thought about doing some FFT on \$u_{ind}\$, but I don't know how to proceed.
Probably there is something completely wrong with my approach, as I couldn't find anything about this topic. (Most websites only consider DC values, or sinusoidal values, or only the resistance.) Any help, ideas, etc. are highly appreciated.
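One concrete check of the non-invertibility (a plain-Python sketch; the helper name is mine): the determinant of $T^{\Delta\to\star}$ is zero, and its kernel is spanned by $(1,1,1)^T$. Physically, a circulating current flowing around the delta loop adds nothing to the line currents, and that is exactly the degree of freedom the transformation loses:

```python
# Star <- Delta current map from the question, as plain lists.
T = [[ 1,  0, -1],
     [-1,  1,  0],
     [ 0, -1,  1]]

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(T))                  # 0 -> T is singular, so it cannot be inverted

# Applying T to the circulating current (1, 1, 1) gives zero line currents:
print([sum(row) for row in T])  # [0, 0, 0]
```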
Now that we've got enriched profunctors up and running, let's see how to compose them! We've already thought about it in some examples. In Lecture 58 we saw how to compose \(\mathbf{Bool}\)-enriched profunctors, also known as feasibility relations: Here we have three \(\mathbf{Bool}\)-enriched categories \(X,Y,\) and \(Z\), also known as preorders, and feasibility relations between these: $$ \Phi \colon X \nrightarrow Y , $$ $$ \Psi \colon Y \nrightarrow Z. $$ Remember, these are really monotone functions $$ \Phi: X^{\text{op}} \times Y \to \mathbf{Bool} ,$$ $$ \Psi: Y^{\text{op}} \times Z \to \mathbf{Bool}.$$ In the pictures \(\Phi(x,y) = \text{true}\) iff there's a path from \(x\in X\) to \(y \in Y\), and \(\Psi(y,z) = \text{true}\) iff there's a path from \(y \in Y\) to \(z \in Z\). Their composite $$ \Psi\Phi \colon X \nrightarrow Z $$ is given by $$ (\Psi\Phi)(x,z) = \bigvee_{y \in Y} \Phi(x,y) \wedge \Psi(y,z). $$Remember, the join \( \bigvee \) in \(\mathbf{Bool}\) means 'there exists', while the meet \(\wedge\) means 'and'. So, this formula says that you can get from \(x \in X\) to \(z \in Z\) iff there exists \(y \in Y\) such that you can get from \(x\) to \(y\) and from \(y\) to \(z\). Now let's generalize this, replacing \(\mathbf{Bool}\) with any sufficiently nice poset \(\mathcal{V}\). Last time we saw that if \(\mathcal{V}\) is a closed commutative monoidal poset we can define \(\mathcal{V}\) -enriched profunctors between \(\mathcal{V}\) -enriched categories. But to compose these enriched profunctors, \(\mathcal{V}\) will need to be a bit nicer. The reason is not hard to see. Suppose we have three \(\mathcal{V}\)-enriched categories \(\mathcal{X},\mathcal{Y}\), and \(\mathcal{Z}\). Suppose we have \(\mathcal{V}\)-enriched profunctors between these: $$ \Phi : \mathcal{X} \nrightarrow \mathcal{Y}, $$ $$ \Psi : \mathcal{Y} \nrightarrow \mathcal{Z}. 
$$ Remember, these are really \(\mathcal{V}\)-enriched functors $$ \Phi \colon \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} , $$ $$ \Psi \colon \mathcal{Y}^{\text{op}} \times \mathcal{Z} \to \mathcal{V} . $$ Let's try to compose them using this formula: $$ (\Psi\Phi)(x,z) = \bigvee_{y \in \mathrm{Ob}(\mathcal{Y})} \Phi(x,y) \otimes \Psi(y,z). $$ Note I've replaced the meet \(\wedge\), which is the multiplication in our monoidal poset \(\mathbf{Bool}\), by \(\otimes\), which is the name of the multiplication in a general monoidal poset \(\mathcal{V}\). But I'm keeping the join \(\bigvee\). Why? The reason is that composing enriched profunctors is like matrix multiplication! First we 'multiply' the matrix entries \(\Phi(x,y)\) and \(\Psi(y,z)\), then we 'sum' over \(y\). The multiplication in \(\mathcal{V}\) is \(\otimes\), but the sum... well, in general a monoidal poset doesn't have a way to do 'sums', but if it has joins then these act like sums! So, we'd better assume \(\mathcal{V}\) has all joins. There's a name for this: Definition. A quantale is a closed monoidal poset \( \mathcal{V}\) that has all joins: that is, every subset \( S\subseteq \mathcal{V}\) has a least upper bound \(\bigvee S\). 'Quantale' may sound like a strange word, but we've seen that posets are good for studying logic, and quantales first showed up in the study of quantum logic. In quantum logic we often need noncommutative quantales, but for our work now we need the multiplication \(\otimes\) in \(\mathcal{V}\) to be commutative. So, putting it all together: Definition. If \(\mathcal{V}\) is a commutative quantale and \(\Phi \colon \mathcal{X} \nrightarrow \mathcal{Y}\), \(\Psi\colon \mathcal{Y} \nrightarrow \mathcal{Z}\) are \(\mathcal{V}\)-enriched profunctors, define their composite by $$ (\Psi\Phi)(x,z) = \bigvee_{y \in \mathrm{Ob}(\mathcal{Y})} \Phi(x,y) \otimes \Psi(y,z). $$ Great! But we still need to check that \(\Psi\Phi\) is an enriched profunctor! 
We've got a well-defined function $$ \Psi\Phi \colon \mathcal{X}^{\text{op}} \times \mathcal{Z} \to \mathcal{V} $$ but we need to check that it's a \(\mathcal{V}\)-enriched profunctor. I'll let you give it a try. This cool fact will help: Theorem. If \(\mathcal{V}\) is a monoidal poset with all joins, \(\mathcal{V}\) is a quantale if and only if $$ a \otimes \left( \bigvee_{b\in B} b\right) = \bigvee_{b \in B} (a \otimes b) $$ for every element \(a\) and every subset \(B\) of \(\mathcal{V}\). Proof Sketch. For every element \(a\) of \(\mathcal{V}\) there is a monotone function $$ a \otimes - \, \colon \mathcal{V} \to \mathcal{V} $$ sending each \(x \in \mathcal{V}\) to \(a \otimes x\). By the Adjoint Functor Theorem for Posets, this monotone function has a right adjoint iff it preserves all joins. It has a right adjoint iff \(\mathcal{V}\) is closed, since \(\mathcal{V}\) being closed says that $$ a \otimes x \le y \text{ if and only if } x \le a \multimap y $$ for all \(x, y\) in \(\mathcal{V}\), which means that $$ a \multimap - \, \colon \mathcal{V} \to \mathcal{V} $$ is a right adjoint of $$ a \otimes - \, \colon \mathcal{V} \to \mathcal{V} $$ On the other hand, \(a \otimes -\) preserves all joins iff $$ a \otimes \left( \bigvee_{b\in B} b\right) = \bigvee_{b \in B} (a \otimes b) $$ for every element \(a\) and every subset \(B\) of \(\mathcal{V}\). \( \qquad \blacksquare \) Now try this puzzle: Puzzle 198. Suppose \(\mathcal{V}\) is a commutative quantale, and suppose \( \Phi \colon \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} \) and \( \Psi \colon \mathcal{Y}^{\text{op}} \times \mathcal{Z} \to \mathcal{V} \) are \(\mathcal{V}\)-enriched functors. Show that \( \Psi\Phi \colon \mathcal{X}^{\text{op}} \times \mathcal{Z} \to \mathcal{V} \), defined by $$ (\Psi\Phi)(x,z) = \bigvee_{y \in \mathrm{Ob}(\mathcal{Y})} \Phi(x,y) \otimes \Psi(y,z),$$ is a \(\mathcal{V}\)-enriched functor. 
This means checking an inequality - you can find the definition of enriched functor in Lecture 32. Also try these:

Puzzle 199. Suppose that \(\mathcal{V}\) is a commutative quantale. Show that composition of \(\mathcal{V}\)-enriched profunctors is associative.

Puzzle 200. Suppose that \(\mathcal{V}\) is a commutative quantale and \(\mathcal{X}\) is a \(\mathcal{V}\)-enriched category. The hom-functor

$$ \mathrm{hom} \colon \mathcal{X}^{\text{op}} \times \mathcal{X} \to \mathcal{V} $$

described in Puzzle 197 gives an enriched profunctor that we call \(1_{\mathcal{X}} \colon \mathcal{X} \nrightarrow \mathcal{X}\). Show that this serves as an identity for composition of \(\mathcal{V}\)-enriched profunctors: check the left and right unit laws.

You can see what's coming! If you get stuck I'll help you next time. I'll also give more examples of commutative quantales, and say more about them!
Let $\{a_n\}$ be a sequence such that

- $a_n\geq 0$ for all $n$
- $\{a_n\}$ is monotonically decreasing
- $\sum_{n=1}^\infty a_n$ converges

Is it true that $$n\log n\; a_n\rightarrow 0$$ as $n\rightarrow\infty$?

Given the hypotheses, we can show that $n a_n\rightarrow 0$ as $n\rightarrow\infty$. This follows since $$0\leq 2na_{2n}\leq 2(S_{2n}-S_n)\quad\text{ and }\quad 0\leq(2n+1)a_{2n+1}\leq 2(S_{2n+1}-S_n)+a_{2n+1}$$ I've been trying to adapt this approach to $n\log n\; a_n$, but it has been fruitless so far. I've been working with inequalities that involve $\log$, but they each seem to be too 'weak', in that I end up with a product sequence with one part going to $0$ and the other going to $\infty$. I also tried condensation, but I can't determine whether the general term of that new series forms a monotonically decreasing sequence. I also have not been able to come up with a counterexample. Any help in resolving the question in either direction is appreciated.

UPDATE

RRL's approach settles every case where $\liminf n\log n\; a_n>0$, but we still haven't resolved the case where $\liminf n\log n\; a_n=0$ and $\limsup n\log n\; a_n>0$. On a serendipitous note, I was reading through one of my books on analysis and the result $na_n\rightarrow 0$ was posed as a problem. It came with a footnote that $1/n$ cannot be replaced by a function that approaches $0$ more quickly; I take this to include $1/(n\log n)$. However, I'm having trouble finding the exact reference the author cites. The book I found this in is "Elementary Real and Complex Analysis" by Georgi E. Shilov (first printed in 1973), and the only direction he gives is the name "A.S. Nemirovski". So, any help directing me to this reference would be a big help as well.
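Not a proof in either direction, but a quick numerical experiment with the hypothetical test sequence $a_n = 1/(n\log^2 n)$ (nonnegative, decreasing, with convergent series) shows how slowly $n\log n\, a_n$ can decay:

```python
import math

# Numerical exploration (not a proof): for the convergent monotone series
# a_n = 1/(n * log(n)^2), both n*a_n and n*log(n)*a_n tend to 0, but the
# latter equals 1/log(n) exactly, so it decays only logarithmically.
def a(n):
    return 1.0 / (n * math.log(n) ** 2)

for n in [10**2, 10**4, 10**6]:
    print(n, n * a(n), n * math.log(n) * a(n))

# Sanity checks: the sequence is decreasing and n*log(n)*a_n = 1/log(n).
assert a(100) > a(101)
assert abs(10**6 * math.log(10**6) * a(10**6) - 1 / math.log(10**6)) < 1e-12
```
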
I want to solve the following problem: find the curve satisfying these conditions.

1. It minimizes the functional $J$.
2. The coordinates of the start/end points are given.
3. The directions (tangent vectors) at the start/end points are given.
4. The length of the curve is $L$.

I parameterized the curve using $\theta(s)$, where $s$ is the arc length from the start point and $\theta$ is the angle of the tangent vector at $s$. The functional $J$ is defined as follows: $$ J[\theta] = \int_0^L f(\theta, \theta'; s)ds \qquad \left(\theta'\equiv \frac{d\theta}{ds}\right) $$ The curve $(x(s), y(s))$ can be written using $\theta$: $$ x(s) = \int_0^s \cos(\theta(\tilde{s}))d\tilde{s} \\ y(s) = \int_0^s \sin(\theta(\tilde{s}))d\tilde{s} $$ Then, the second condition becomes the following isoperimetric constraints (taking the start point as the origin): $$ x(L) = \int_0^L \cos(\theta)ds = p_x\\ y(L) = \int_0^L \sin(\theta)ds = p_y $$ Conditions 3 and 4 are already accounted for by these definitions. Therefore the problem is equivalent to the Euler–Lagrange equation with two isoperimetric constraints. I wanted to verify this solution, so I took it to an acquaintance. He said that the solution is wrong, but I can't understand why. Is any part of it wrong?
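As a sanity check on the arc-length parameterization (a sketch; $L$ and $\theta_0$ are arbitrary test values), numerically integrating the formulas for $x(L)$ and $y(L)$ with a constant $\theta$ should give a straight line of length $L$:

```python
import math

# Sanity check of the arc-length parameterization: with theta(s) = theta0
# constant, the endpoint is x(L) = L*cos(theta0), y(L) = L*sin(theta0),
# i.e. a straight segment of length L. Midpoint-rule quadrature.
def endpoint(theta, L, n=1000):
    ds = L / n
    x = sum(math.cos(theta((i + 0.5) * ds)) for i in range(n)) * ds
    y = sum(math.sin(theta((i + 0.5) * ds)) for i in range(n)) * ds
    return x, y

L, theta0 = 2.0, 0.3   # arbitrary test values
x, y = endpoint(lambda s: theta0, L)
assert abs(x - L * math.cos(theta0)) < 1e-9
assert abs(y - L * math.sin(theta0)) < 1e-9
```
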
Permanent

Latest revision as of 17:16, 19 March 2018

The permanent of an $m \times n$-matrix $A = \left\Vert a_{ij} \right\Vert$ is the function $$ \mathrm{per}(A) = \sum_\sigma a_{1\sigma(1)}\cdots a_{m\sigma(m)} $$ where $a_{ij}$ are elements from a commutative ring and summation is over all one-to-one mappings $\sigma$ from $\{1,\ldots,m\}$ into $\{1,\ldots,n\}$. If $m=n$, then $\sigma$ ranges over all permutations, and the permanent is a particular case of the Schur matrix function (cf. Immanant) $$ d_\chi^H (A) = \sum_{\sigma\in H} \chi(\sigma) \prod_{i=1}^n a_{i\sigma(i)} $$ for $H \subseteq S_n$, where $\chi$ is a character of degree 1 on the subgroup $H$ (cf. Character of a group) of the symmetric group $S_n$ (one obtains the determinant for $H=S_n$, $\chi =\pm 1$, in accordance with the parity of $\sigma$).

The permanent is used in linear algebra, probability theory and combinatorics. In combinatorics, a permanent can be interpreted as follows: The number of systems of distinct representatives for a given family of subsets of a finite set is the permanent of the incidence matrix for the incidence system related to this family. The main interest is in the permanent of a matrix consisting of zeros and ones (a $(0,1)$-matrix), of a matrix containing non-negative real numbers, in particular doubly-stochastic matrices (in which the sum of the elements in any row and any column is 1), and of a complex Hermitian matrix.

The basic properties of the permanent include a theorem on expansion (the analogue of Laplace's theorem for determinants) and the Binet–Cauchy theorem, which gives a representation of the permanent of the product of two matrices as the sum of the products of the permanents formed from the cofactors.
For the permanents of complex matrices it is convenient to use representations as scalar products in the symmetry classes of completely-symmetric tensors (see, e.g., [3]). One of the most effective methods for calculating permanents is provided by Ryser's formula: $$ \mathrm{per}(A) = \sum_{t=0}^{n-1} (-1)^t \sum_{X \in \Gamma_{n-t}} \prod_{i=1}^m r_i(X) $$ where $\Gamma_k$ is the set of submatrices of dimension $m \times k$ of the matrix $A$, $r_i = r_i(X)$ is the sum of the elements of the $i$-th row of $X$, and $i,k=1,\ldots,m$.

As it is complicated to calculate permanents, estimating them is important. Some lower bounds are given below.

a) If $A$ is a $(0,1)$-matrix with $r_i(A) \ge t$, $i=1,\ldots,m$, then $$ \mathrm{per}(A) \ge \frac{t!}{(t-m)!} $$ for $t \ge m$, and $$ \mathrm{per}(A) \ge t! $$ if $t < m$ and $\mathrm{per}(A) > 0$.

b) If $A$ is a $(0,1)$-matrix of order $n$, then $$ \mathrm{per}(A) \ge \prod_{i=1}^n \{ r_i^* + i - n \} $$ where $r_1^* \ge \cdots \ge r_n^*$ are the sums of the elements in the rows of $A$ arranged in non-increasing order and $\{ r_i^* + i - n \} = \max(0, r_i^* + i - n )$.

c) If $A$ is a positive semi-definite Hermitian matrix of order $n$, then $$ \mathrm{per}(A) \ge \frac{n!}{s(A)^n} \prod_{i=1}^n |r_i|^2 $$ where $s(A) = \sum_{i,j} a_{ij}$, provided $s(A) > 0$.

Upper bounds for permanents:

1) For a $(0,1)$-matrix $A$ of order $n$, $$ \mathrm{per}(A) \le \prod_{i=1}^n (r_i!)^{1/r_i} \ . $$

2) For a completely-indecomposable matrix $A$ of order $n$ with non-negative integer elements, $$ \mathrm{per}(A) \le 2^{s(A)-2n} + 1 \ . $$

3) For a complex normal matrix $A$ with eigenvalues $\lambda_1,\ldots,\lambda_n$, $$ |\mathrm{per}(A)| \le \frac{1}{n}\sum_{i=1}^n |\lambda_i|^n \ . 
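Ryser's inclusion-exclusion idea is easy to try out. The sketch below uses the common square-matrix form of the formula (equivalent in spirit to the submatrix version above, but stated over column subsets), checked against the brute-force definition on small matrices:

```python
from itertools import permutations, combinations

# Ryser-type inclusion-exclusion for the permanent of an n x n matrix:
# per(A) = (-1)^n * sum over column subsets S of (-1)^|S| * prod_i (row sums over S).
def per_ryser(A):
    n = len(A)
    total = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Brute-force definition, for checking on small matrices.
def per_naive(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i, j in enumerate(sigma):
            prod *= A[i][j]
        total += prod
    return total

A = [[1, 2], [3, 4]]                     # per = 1*4 + 2*3 = 10
B = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]    # (0,1)-matrix: per counts matchings
assert per_ryser(A) == per_naive(A) == 10
assert per_ryser(B) == per_naive(B) == 3
```
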
$$

The most familiar problem in the theory of permanents was van der Waerden's conjecture: The permanent of a doubly-stochastic matrix of order $n$ is bounded from below by $n!/n^n$, and this value is attained only for the matrix composed of fractions $1/n$. A positive solution to this problem was obtained in [4].

Among the applications of permanents one may mention relationships to certain combinatorial problems (cf. Combinatorial analysis), such as the "problème des rencontres" and the "problème d'attachement" (or "hook problem"), and also to the Fibonacci numbers, the enumeration of Latin squares and Steiner triple systems, and to the derivation of the number of $1$-factors and linear subgraphs of a graph, while doubly-stochastic matrices are related to certain probability models. There are interesting physical applications of permanents, of which the most important is the dimer problem, which arises in research on the adsorption of di-atomic molecules in surface layers: The permanent of a $(0,1)$-matrix of a simple structure expresses the number of ways of combining the atoms in the substance into di-atomic molecules. There are also applications of permanents in statistical physics, the theory of crystals and physical chemistry.

References

[1] H.J. Ryser, "Combinatorial mathematics", Wiley & Math. Assoc. Amer. (1963) Zbl 0112.24806
[2] V.N. Sachkov, "Combinatorial methods in discrete mathematics", Moscow (1977) (In Russian); translated by V. Kolchin: Encyclopedia of Mathematics and Its Applications 55, Cambridge University Press (1995) Zbl 0845.05003
[3] H. Minc, "Permanents", Addison-Wesley (1978)
[4] G.P. Egorichev, "The solution of van der Waerden's problem on permanents", Krasnoyarsk (1980) (In Russian); Adv. Math. 42 (1981) 299–305. Zbl 0478.15003
[5] D.I. Falikman, "Proof of the van der Waerden conjecture regarding the permanent of a doubly stochastic matrix", Math. Notes 29:6 (1981) 475–479; Mat. Zametki 29:6 (1981) 931–938.
Zbl 0475.15007

Comments

The solution of the van der Waerden conjecture was obtained simultaneously and independently in 1979 by D.I. Falikman, [5], and G.P. Egorichev, [4], [a4]. For some details cf. also [a2]–[a5].

References

[a1] D.E. Knuth, "A permanent inequality", Amer. Math. Monthly 88 (1981) 731–740
[a2] J.C. Lagarias, "The van der Waerden conjecture: two Soviet solutions", Notices Amer. Math. Soc. 29:2 (1982) 130–133
[a3] J.H. van Lint, "Notes on Egoritsjev's proof of the van der Waerden conjecture", Linear Algebra Appl. 39 (1981) 1–8
[a4] G.P. Egorychev [G.P. Egorichev], "The solution of van der Waerden's problem for permanents", Adv. in Math. 42:3 (1981) 299–305
[a5] J.H. van Lint, "The van der Waerden conjecture: Two proofs in one year", Math. Intelligencer 4 (1982) 72–77
[a6] R.M. Wilson, "Non-isomorphic triple systems", Math. Zeitschr. 135 (1974) 303–313
[a7] A. Schrijver, "A short proof of Minc's conjecture", J. Comb. Theory (A) 25 (1978) 80–83
[a8] H. Minc, "Nonnegative matrices", Wiley (1988)

How to Cite This Entry: Permanent. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Permanent&oldid=39890
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau, whe...

That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time?

Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?

@tpg2114 To reduce the number of data points when calculating the time correlation, you can run exactly the same simulation twice in parallel, separated by the time lag dt. Then there is no need to store all the snapshots and spatial points.

@DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably a historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are. The solid blue line is the abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis.

So what I don't understand:

1) Why is the correlation value negative when they look pretty positively correlated to me?
2) Why is the result from the correlation function 400 time steps long?
3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time.

Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...

So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag

I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
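On questions 2) and 3) above: a 'full' cross-correlation of two length-N signals has 2N-1 samples (one per possible shift), and the argmax index, offset by N-1, gives the lead/lag. A pure-Python sketch of the idea (toy signals; note that the sign convention for the lag differs between libraries, so check the scipy.signal.correlate docs before relying on it):

```python
# 'Full' cross-correlation of two real length-N signals: one output sample per
# possible overlap, 2N-1 in total. The index of the maximum, minus (N-1),
# is the lag of y relative to x. The toy signals below are made up.
def correlate_full(x, y):
    n = len(x)
    out = []
    for shift in range(-(n - 1), n):  # 2N-1 shifts
        s = sum(x[i] * y[i + shift]
                for i in range(n) if 0 <= i + shift < n)
        out.append(s)
    return out

def lag_of_max(x, y):
    c = correlate_full(x, y)
    return c.index(max(c)) - (len(x) - 1)

x = [0, 0, 0, 1, 2, 1, 0, 0]   # a bump centered at index 4
y = [0, 1, 2, 1, 0, 0, 0, 0]   # the same bump, arriving 2 samples earlier
assert len(correlate_full(x, y)) == 2 * len(x) - 1
assert lag_of_max(x, y) == -2  # y leads x by 2 samples
```
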
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quickly enough that no answers get posted until the question is clarified to satisfy the current HW policy.

For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar} ~p$$ $$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$ And then define the following $$K_1=\frac 14 (p^2-q^2)$$ $$K_2=\frac 14 (pq+qp)$$ $$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $$Q \...

Okay. I guess we'll have to see what people say, but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics.

Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay.

@jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." E.g.:
is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's approximation is? Some have argued that is on topic even though there's nothing really physical about it, just because it's 'graduate level'. Others would argue it's not on topic because it's not conceptual.

How can one prove that $$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$ for a sufficiently well-behaved operator $\cal{A}?$ How (mathematically) rigorous is the expression? I'm looking at the $d=2$ Euclidean case, as discuss...

I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.)

@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote" -- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce the visibility of HW, then the tag becomes less of a bone of contention.

@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework.

@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.

@Dilaton also, have a look at the topvoted answers on both.

Afternoon folks.
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no, because it's really a math question, but I figured I'd ask anyway)

@DavidZ Ya, I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.

hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.

MO is for research-level mathematics, not "how do I compute X"

user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube.

@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
Conjugate Heat Transfer

In this blog post we will explain the concept of conjugate heat transfer and show you some of its applications. Conjugate heat transfer refers to the combination of heat transfer in solids and heat transfer in fluids. In solids, conduction often dominates, whereas in fluids, convection usually dominates. Conjugate heat transfer is observed in many situations. For example, heat sinks are optimized to combine heat transfer by conduction in the heat sink with convection in the surrounding fluid.

Heat Transfer by Solids and Fluids

Heat Transfer in a Solid

In most cases, heat transfer in solids, if only due to conduction, is described by Fourier's law, defining the conductive heat flux, q, proportional to the temperature gradient: q=-k\nabla T. For a time-dependent problem, the temperature field in an immobile solid satisfies the following form of the heat equation:

\rho C_p \frac{\partial T}{\partial t} + \nabla \cdot \left( -k \nabla T \right) = Q

Heat Transfer in a Fluid

Due to the fluid motion, three contributions to the heat equation are added:

- The transport of fluid implies energy transport too, which appears in the heat equation as the convective contribution. Depending on the thermal properties of the fluid and on the flow regime, either the convective or the conductive heat transfer can dominate.
- The viscous effects of the fluid flow produce fluid heating. This term is often neglected; nevertheless, its contribution is noticeable for fast flow in viscous fluids.
- As soon as the fluid density is temperature-dependent, a pressure work term contributes to the heat equation. This accounts for the well-known effect that, for example, compressing air produces heat.

Accounting for these contributions, in addition to conduction, results in the following transient heat equation for the temperature field in a fluid:

\rho C_p \left( \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T \right) + \nabla \cdot \left( -k \nabla T \right) = \alpha_p T \left( \frac{\partial p_\mathrm{A}}{\partial t} + \mathbf{u} \cdot \nabla p_\mathrm{A} \right) + \tau : S + Q

Conjugate Heat Transfer Applications

Effective Heat Transfer

Efficiently combining heat transfer in fluids and solids is the key to designing effective coolers, heaters, or heat exchangers.
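To make the solid heat equation concrete, here is a minimal 1D transient conduction sketch (explicit finite differences; the material values are illustrative and not from the post):

```python
# Minimal 1D transient conduction sketch (explicit finite differences) for
# rho*Cp*dT/dt = k*d2T/dx2 with fixed end temperatures. Illustrative values.
rho, Cp, k = 2700.0, 900.0, 237.0   # aluminum-like properties
L, n = 0.1, 51                      # 10 cm bar, 51 nodes
dx = L / (n - 1)
alpha = k / (rho * Cp)              # thermal diffusivity
dt = 0.4 * dx * dx / alpha          # below the explicit stability limit 0.5

T = [20.0] * n
T[0], T[-1] = 100.0, 0.0            # fixed boundary temperatures

for _ in range(20000):              # march well past the diffusion time L^2/alpha
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    T = Tn

# The steady state of pure conduction is a linear profile between the ends.
assert abs(T[n // 2] - 50.0) < 1.0
```
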
The fluid usually plays the role of energy carrier over large distances. Forced convection is the most common way to achieve a high heat transfer rate. In some applications, the performance is further improved by combining convection with phase change (for example, liquid water to vapor phase change). Even so, solids are also needed, in particular to separate fluids in a heat exchanger so that the fluids exchange energy without being mixed.

Flow and temperature field in a shell-and-tube heat exchanger illustrating heat transfer between two fluids separated by a thin metallic wall.

Heat sinks are usually made of metal with high thermal conductivity (e.g. copper or aluminum). They dissipate heat by increasing the exchange area between the solid part and the surrounding fluid.

Temperature field in a power supply unit cooled by an air flow generated by an extracting fan and a perforated grille. Two aluminum fins are used to increase the exchange area between the flow and the electronic components.

Energy Savings

Heat transfer in fluids and solids can also be combined to minimize heat losses in various devices. Because most gases (especially at low pressure) have small thermal conductivities, they can be used as thermal insulators… provided they are not in motion. In many situations, gas is preferred to other materials due to its low weight. In any case, it is important to limit the heat transfer by convection, in particular by reducing natural convection effects. Judicious positioning of walls and the use of small cavities help to control natural convection. Applied at the micro scale, this principle leads to the insulation foam concept, where tiny cavities of air (bubbles) are trapped in the foam material (e.g. polyurethane), which combines high insulation performance with low weight.

Window cross section (left) and zoom-in on the window frame (right).
Temperature profile in a window frame and glazing cross section from ISO 10077-2:2012 (thermal performance of windows).

Fluid and Solid Interactions

Fluid/Solid Interface

The temperature field and the heat flux are continuous at the fluid/solid interface. However, the temperature field can vary rapidly in a fluid in motion: close to the solid, the fluid temperature is close to the solid temperature, and far from the interface, the fluid temperature is close to the inlet or ambient fluid temperature. The distance over which the fluid temperature varies from the solid temperature to the fluid bulk temperature is called the thermal boundary layer.

The relative sizes of the thermal and momentum boundary layers are reflected by the Prandtl number (Pr=C_p \mu/k): for the Prandtl number to equal 1, the thermal and momentum boundary layer thicknesses need to be the same; a thicker momentum layer corresponds to a Prandtl number larger than 1, and conversely, a Prandtl number smaller than 1 indicates that the momentum boundary layer is thinner than the thermal boundary layer. The Prandtl number for air at atmospheric pressure and at 20°C is 0.7, so for air the momentum and thermal boundary layers have similar sizes, with the momentum boundary layer slightly thinner than the thermal one. For water at 20°C, the Prandtl number is about 7, so in water the temperature changes close to a wall are sharper than the velocity changes.

Normalized temperature (red) and velocity (blue) profiles for natural convection of air close to a cold solid wall.

Natural Convection

The natural convection regime corresponds to configurations where the flow is driven by buoyancy effects. Depending on the expected thermal performance, natural convection can be beneficial (e.g. in cooling applications) or detrimental (e.g. natural convection in an insulation layer).
The Rayleigh number, denoted Ra, is used to characterize the flow regime induced by natural convection and the resulting heat transfer. The Rayleigh number is defined from the fluid material properties, a typical cavity size, L, and the temperature difference, \Delta T, usually set by the solids surrounding the fluid:

Ra = \frac{g \alpha_p \rho^2 C_p \Delta T L^3}{\mu k}

The Grashof number is another flow regime indicator, giving the ratio of buoyant to viscous forces:

Gr = \frac{g \alpha_p \rho^2 \Delta T L^3}{\mu^2}

The Rayleigh number can be expressed in terms of the Prandtl and the Grashof numbers through the relation Ra=Pr\,Gr. When the Rayleigh number is small (typically < 10^3), the convection is negligible and most of the heat transfer occurs by conduction in the fluid. For a larger Rayleigh number, heat transfer by convection has to be considered. When buoyancy forces are large compared to viscous forces, the regime is turbulent; otherwise it is laminar. The transition between these two regimes is indicated by the critical order of the Grashof number, which is 10^9. The thermal boundary layer, giving the typical distance for the temperature transition between the solid wall and the fluid bulk, can be approximated by \delta_\mathrm{T} \approx \frac{L}{\sqrt[4\,]{Ra}} when Pr is of order 1 or greater.

Temperature profile induced by natural convection in a glass of cold water in contact with a hot surface.

Forced Convection

The forced convection regime corresponds to configurations where the flow is driven by external phenomena (e.g. wind) or devices (e.g. fans, pumps) that dominate buoyancy effects. In this case the flow regime can be characterized, similarly to isothermal flow, using the Reynolds number as an indicator, Re= \frac{\rho U L}{\mu}. The Reynolds number represents the ratio of inertial to viscous forces. At low Reynolds numbers, viscous forces dominate and laminar flow is observed. At high Reynolds numbers, the damping in the system is very low, giving small disturbances the chance to grow.
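The dimensionless numbers above are straightforward to evaluate. A sketch for natural convection of air in a 10 cm cavity with a 10 K temperature difference (the property values are illustrative, roughly air at 20°C and 1 atm):

```python
# Flow-regime indicators for natural convection of air in a cavity.
# Property values are illustrative (air near 20 C, 1 atm), not from the text.
g = 9.81             # gravitational acceleration, m/s^2
rho = 1.2            # density, kg/m^3
Cp = 1005.0          # heat capacity, J/(kg K)
k = 0.026            # thermal conductivity, W/(m K)
mu = 1.8e-5          # dynamic viscosity, Pa s
alpha_p = 1 / 293.0  # thermal expansion coefficient, 1/K (ideal gas: 1/T)
L, dT = 0.1, 10.0    # cavity size (m) and temperature difference (K)

Pr = Cp * mu / k
Gr = g * alpha_p * rho**2 * dT * L**3 / mu**2
Ra = g * alpha_p * rho**2 * Cp * dT * L**3 / (mu * k)

assert abs(Ra - Pr * Gr) / Ra < 1e-12  # Ra = Pr * Gr by construction
assert 0.6 < Pr < 0.8                  # air: Pr ~ 0.7
assert Ra > 1e3                        # convection is not negligible here
print(Pr, Gr, Ra)
```
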
If the Reynolds number is high enough, the flow field eventually ends up in a turbulent regime. The momentum boundary layer thickness can be evaluated, using the Reynolds number, by \delta_\mathrm{M} \approx \frac{L}{\sqrt{Re}}.

Streamlines and temperature profile around a heat sink cooled by forced convection.

Radiative Heat Transfer

Radiative heat transfer can be combined with the conductive and convective heat transfer described above. In the majority of applications, the fluid is transparent to heat radiation and the solid is opaque. As a consequence, heat transfer by radiation can be represented as surface-to-surface radiation transferring energy between solid walls through transparent cavities. The radiative heat flux emitted by a diffuse gray surface is equal to \varepsilon n^2 \sigma T^4. When a surface is surrounded by bodies at a homogeneous temperature T_\mathrm{amb}, the net radiative flux is q_\mathrm{r} = \varepsilon n^2 \sigma (T_\mathrm{amb}^4-T^4). When the surrounding surfaces are at different temperatures, each surface-to-surface exchange is determined by the surfaces' view factors. Nevertheless, both fluids and solids may be transparent or semitransparent, so radiation can occur in both fluids and solids. In participating (or semitransparent) media, the radiation rays interact with the medium (solid or fluid), which absorbs, emits, and scatters radiation. Whereas radiative heat transfer can be neglected in applications with small temperature differences and low emissivities, it plays a major role in applications with large temperature differences and large emissivities.

Comparison of temperature profiles for a heat sink with a surface emissivity \varepsilon = 0 (left) and \varepsilon = 0.9 (right).

Conclusion

Heat transfer in solids and heat transfer in fluids are combined in the majority of applications. This is because fluids flow around solids or between solid walls, and because solids are usually immersed in a fluid.
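The net radiative flux formula above is easy to evaluate directly. A minimal sketch (the temperatures and emissivity are illustrative values, not from the post):

```python
# Net radiative flux for a diffuse gray surface surrounded by bodies at a
# uniform ambient temperature: q_r = eps * n^2 * sigma * (T_amb^4 - T^4).
# Temperatures and emissivity below are illustrative values.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiative_flux(T, T_amb, eps, n=1.0):
    """Positive when the surface gains energy (T < T_amb)."""
    return eps * n**2 * SIGMA * (T_amb**4 - T**4)

q = net_radiative_flux(T=350.0, T_amb=293.15, eps=0.9)
assert q < 0                                # a hot surface loses energy
assert net_radiative_flux(300.0, 300.0, 0.9) == 0.0  # thermal equilibrium
```
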
An accurate description of heat transfer modes, material properties, flow regimes, and geometrical configurations enables the analysis of temperature fields and heat transfer. Such a description is also the starting point for a numerical simulation that can be used to predict conjugate heat transfer effects or to test different configurations in order, for example, to improve the thermal performance of a given application.

Notations

C_{p}: heat capacity at constant pressure (SI unit: J/(kg·K))
g: gravitational acceleration (SI unit: m/s^2)
Gr: Grashof number (dimensionless)
k: thermal conductivity (SI unit: W/(m·K))
L: characteristic dimension (SI unit: m)
n: refractive index (dimensionless)
p_\mathrm{A}: absolute pressure (SI unit: Pa)
Pr: Prandtl number (dimensionless)
q: heat flux (SI unit: W/m^2)
Q: heat source (SI unit: W/m^3)
Ra: Rayleigh number (dimensionless)
S: strain-rate tensor (SI unit: 1/s)
T: temperature field (SI unit: K)
T_\mathrm{amb}: ambient temperature (SI unit: K)
\bold{u}: velocity field (SI unit: m/s)
U: typical velocity magnitude (SI unit: m/s)
\alpha_{p}: thermal expansion coefficient (SI unit: 1/K)
\delta_\mathrm{M}: momentum boundary layer thickness (SI unit: m)
\delta_\mathrm{T}: thermal boundary layer thickness (SI unit: m)
\Delta T: characteristic temperature difference (SI unit: K)
\varepsilon: surface emissivity (dimensionless)
\rho: density (SI unit: kg/m^3)
\sigma: Stefan–Boltzmann constant (SI unit: W/(m^2·K^4))
\tau: viscous stress tensor (SI unit: N/m^2)
Condensed Matter > Materials Science Title: Temperature and high fluence induced ripple rotation on Si(100) surface (Submitted on 7 Apr 2016) Abstract: The topography evolution of the Si(100) surface due to oblique-incidence, low-energy ion beam sputtering (IBS) is investigated. Experiments were carried out at elevated temperatures from 20$^{\circ}$C to 450$^{\circ}$C, and at each temperature the ion fluence was systematically varied over a wide range, from $\sim$ 1$\times$10$^{18}$cm$^{-2}$ to 1$\times$10$^{20}$cm$^{-2}$. The ion-sputtered surface morphologies are characterized by atomic force microscopy and high-resolution cross-sectional transmission electron microscopy. At room temperature, the ion-sputtered surfaces show periodic ripple nanopatterns whose wave vector remains parallel to the ion beam projection over the entire fluence range. With increasing substrate temperature, these patterns degrade into randomly ordered mound-like structures around 350$^{\circ}$C. A further rise in temperature above 400$^{\circ}$C leads, surprisingly, to orthogonally rotated ripples beyond a fluence of 5$\times$10$^{19}$cm$^{-2}$. All results are discussed by combining the theoretical frameworks of linear, nonlinear, and recently developed mass-redistribution continuum models of pattern formation by IBS. These results are of technological importance for the control of ion-induced pattern formation and also provide useful information for further theoretical progress. Submission history: From: Debasree Chowdhury [v1] Thu, 7 Apr 2016 16:35:38 GMT (1286kb,D)
I am doing a question on finding the Pareto efficient quantity of a public good. Instead of using the condition $\sum MRS_i = c'(G)$, where $c(G)$ denotes the cost of the public good, it asks you to find the efficient quantity by maximising the sum of the agents' utilities. Apparently this is only valid if preferences are quasi-linear, so whilst I can do the question, I do not understand why this is a valid approach. Any help on this matter would be appreciated. I don't think it is true in the standard public good economy the question is referring to. Consider the following counterexample: Suppose $I = \{1,2\}$ and the utility of individual $i$ depends on his consumption of the public good $(G)$ and the private good $x_i$: $u_1(G, x_1) = 2\sqrt{G} + x_1$ and $u_2(G, x_2) = 2\sqrt{G} + x_2$. Also, the CRS technology used for production of the public good uses the private good as input: $G = f(x_0) = x_0$. If the society has only 4 units of the private good in the beginning, then the set of feasible allocations can be written as $\{(G, x_1, x_2)\in\mathbb{R}^3_+: G+x_1+x_2 = 4\}$. Notice that the allocation $a_1 = (G, x_1, x_2) = (1,3,0)$ is Pareto efficient, but does not maximize the sum of utilities. The reason is that the allocation $a_2 = (G, x_1, x_2) = (4,0,0)$ yields a higher sum: $\color{blue}{u_1(1,3) + u_2(1,0)} = 5+2 =7 \color{blue}{<} 8 = 4 + 4 = \color{blue}{u_1(4,0) + u_2(4,0)}$. The Lindahl Equilibrium $y^*$ with quasi-linear preferences is uniquely determined. That is, $y^*$ is independent of individual consumption levels of $x$
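The arithmetic of the counterexample can be checked directly. A minimal sketch using the utility functions and the two allocations from the answer above:

```python
import math

# Quasi-linear-looking utilities from the counterexample
def u1(G, x1):
    return 2 * math.sqrt(G) + x1

def u2(G, x2):
    return 2 * math.sqrt(G) + x2

# Feasible allocations (G, x1, x2) with G + x1 + x2 = 4
a1 = (1, 3, 0)   # Pareto efficient, but...
a2 = (4, 0, 0)   # ...yields a strictly higher utilitarian sum

sum1 = u1(a1[0], a1[1]) + u2(a1[0], a1[2])   # 5 + 2 = 7
sum2 = u1(a2[0], a2[1]) + u2(a2[0], a2[2])   # 4 + 4 = 8
```

So maximizing the sum of utilities would discard the Pareto efficient allocation $a_1$, which is the point of the counterexample.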
Let $n \in \mathbb{N}$ be odd. Show that: $$\Aut(\mathbb{Z}/{n\mathbb{Z}}) \cong \Aut(\mathbb{Z}/{2n\mathbb{Z}})$$ $\DeclareMathOperator{\Aut}{Aut}$ My attempt: An automorphism $f \in \Aut(\mathbb{Z}/{n\mathbb{Z}})$ is uniquely determined by $f(1)$ since $1$ generates $\mathbb{Z}/{n\mathbb{Z}}$. $f(1)$ has to be a generator of $\mathbb{Z}/{n\mathbb{Z}}$, which means $n$ and $f(1)$ are relatively prime. Thus, $\Aut(\mathbb{Z}/{n\mathbb{Z}})$ and $(\mathbb{Z}/{n\mathbb{Z}})^\times$ are actually isomorphic via the isomorphism $i \mapsto f_i$, where $f_i$ is the automorphism of $\mathbb{Z}/{n\mathbb{Z}}$ having $f_i(1) = i$. Since we can similarly deduce that $\Aut(\mathbb{Z}/{2n\mathbb{Z}}) \cong (\mathbb{Z}/{2n\mathbb{Z}})^\times$, we have reduced the problem to showing that $$(\mathbb{Z}/{n\mathbb{Z}})^\times \cong (\mathbb{Z}/{2n\mathbb{Z}})^\times$$ Since $n$ and $2$ are relatively prime, we have: $$\phi(2n) = \phi(2)\phi(n) = \phi(n)$$ Hence, both groups are of the same order, $\phi(n)$. Now, I'm aware of the fact that $(\mathbb{Z}/{p\mathbb{Z}})^\times$ is a cyclic group if $p$ is prime, but that is certainly not the case for $2n$, so we cannot easily conclude that they are isomorphic. The only thing that comes to mind is to try to find what the elementary divisors for an Abelian group of order $\phi(n)$ could be. For example, $\phi(n)$ is even for $n \ge 3$ so there exists a unique element of order $2$ in both groups, which is $-1$. So the isomorphism would send $-1$ to $-1$. How should I proceed here?
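The claimed isomorphism can be probed numerically for small odd $n$: reduction mod $n$ is a ring homomorphism $\mathbb{Z}/2n\mathbb{Z} \to \mathbb{Z}/n\mathbb{Z}$, so it automatically respects multiplication, and it suffices to check that it maps units mod $2n$ bijectively onto units mod $n$. A quick sketch:

```python
from math import gcd

def units(m):
    """Elements of (Z/mZ)^x, i.e. residues coprime to m."""
    return [x for x in range(m) if gcd(x, m) == 1]

for n in [3, 5, 9, 15, 21, 45]:   # odd n only
    U_n, U_2n = units(n), units(2 * n)
    # Reduction mod n sends units mod 2n into units mod n...
    image = sorted(u % n for u in U_2n)
    # ...and for odd n it does so bijectively (same size, same image)
    assert len(U_2n) == len(U_n)
    assert image == sorted(U_n)
```

This is only numerical evidence, of course; the proof amounts to showing the reduction map is injective on units when $n$ is odd (if $u \equiv v \pmod n$ with $u, v$ odd and $0 \le u, v < 2n$, then $u = v$).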
On the DNA Computer Binary Code In any finite set a partial order can be defined in different ways. But here, a partial order is defined on the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that captures essential properties of both set operations and logic operations. This partial order is defined on the basis of the physico-chemical properties of the DNA bases: the number of hydrogen bonds and the chemical type, purine {A, G} or pyrimidine {U, C}. This physico-mathematical description permits the study of the genetic information carried by DNA molecules as a computer binary code of zeros (0) and ones (1). 1. Boolean lattice of the four DNA bases In any four-element Boolean lattice every element is comparable to every other, except two of them that are, nevertheless, complementary. Consequently, to build a four-base Boolean lattice it is necessary for the bases with the same number of hydrogen bonds in the DNA molecule and of different chemical types to be complementary elements in the lattice. In other words, the complementary bases in the DNA molecule (G≡C and A=T, or A=U during the translation of mRNA) should be complementary elements in the Boolean lattice. Thus, there are four possible lattices, each one with a different base as the maximum element. 2. Boolean (logic) operations in the set of DNA bases The Boolean algebra on the set of elements X will be denoted by $(B(X), \vee, \wedge)$. Here the operators $\vee$ and $\wedge$ represent the classical "OR" and "AND", term by term. From the definition of a Boolean algebra it follows that this structure is (among other things) a lattice in which any two elements $\alpha$ and $\beta$ have upper and lower bounds. In particular, the least upper bound of the elements $\alpha$ and $\beta$ is the element $\alpha\vee\beta$ and the greatest lower bound is the element $\alpha\wedge\beta$. 
Such a partially ordered set is called a Boolean lattice. In every Boolean algebra (denoted by $(B(X), \vee, \wedge)$), for any two elements $\alpha,\beta \in X$ we have $\alpha \le \beta$ if and only if $\neg\alpha\vee\beta=1$, where the symbol "$\neg$" stands for logic negation. If the last equality holds, then it is said that $\beta$ is deduced from $\alpha$. Furthermore, if $\alpha \le \beta$ or $\alpha \ge \beta$, the elements $\alpha$ and $\beta$ are said to be comparable; otherwise, they are said to be not comparable. In the set of four DNA bases, we can build twenty-four isomorphic Boolean lattices [1]. Herein, we focus our attention on the one described in reference [2], where the DNA bases G and C are taken as the minimum and maximum elements, respectively, in the Boolean lattice. The logic operations in this DNA computer code are given in the following tables:

OR ($\vee$):
  $\vee$ | G A U C
  G      | G A U C
  A      | A A C C
  U      | U C U C
  C      | C C C C

AND ($\wedge$):
  $\wedge$ | G A U C
  G        | G G G G
  A        | G A G A
  U        | G G U U
  C        | G A U C

It is well known that all Boolean algebras with the same number of elements are isomorphic. Therefore, our algebra $(B(X), \vee, \wedge)$ is isomorphic to the Boolean algebra $(\mathbb{Z}_2^2(X), \vee, \wedge)$, where $\mathbb{Z}_2 = \{0,1\}$. Then, we can represent this DNA Boolean algebra by means of the correspondence: $G \leftrightarrow 00$; $A \leftrightarrow 01$; $U \leftrightarrow 10$; $C \leftrightarrow 11$. So, in accordance with the operation tables: $A \vee U = C \leftrightarrow 01 \vee 10 = 11$ $U \wedge G = G \leftrightarrow 10 \wedge 00 = 00$ $G \vee C = C \leftrightarrow 00 \vee 11 = 11$ A Boolean lattice has in correspondence a directed graph called a Hasse diagram, where two nodes (elements) $\alpha$ and $\beta$ are connected with a directed edge from $\alpha$ to $\beta$ (or from $\beta$ to $\alpha$) if, and only if, $\alpha \le \beta$ (respectively $\alpha \ge \beta$) and there is no other element between $\alpha$ and $\beta$. 3. 
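With the two-bit encoding these operations become bitwise OR and AND. A minimal sketch, using the correspondence G=00, A=01, U=10, C=11 given above:

```python
# Two-bit encoding of the four DNA bases (G is the minimum, C the maximum)
ENC = {'G': 0b00, 'A': 0b01, 'U': 0b10, 'C': 0b11}
DEC = {v: k for k, v in ENC.items()}

def base_or(a, b):
    """Join (least upper bound): bitwise OR of the encodings."""
    return DEC[ENC[a] | ENC[b]]

def base_and(a, b):
    """Meet (greatest lower bound): bitwise AND of the encodings."""
    return DEC[ENC[a] & ENC[b]]

# The worked examples from the text:
assert base_or('A', 'U') == 'C'    # 01 | 10 = 11
assert base_and('U', 'G') == 'G'   # 10 & 00 = 00
assert base_or('G', 'C') == 'C'    # 00 | 11 = 11
```

The operation tables above are exactly the tables of bitwise OR and AND under this encoding, which is the content of the isomorphism with $(\mathbb{Z}_2^2, \vee, \wedge)$.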
The Genetic code Boolean Algebras Boolean algebras of codons are, explicitly, derived as the direct product $C(X) = B(X) \times B(X) \times B(X)$. These algebras are isomorphic to the dual Boolean algebras $(\mathbb{Z}_2^6, \vee, \wedge)$ and $(\mathbb{Z}_2^6, \wedge, \vee)$ induced by the isomorphism $B(X) \cong \mathbb{Z}_2^2$, where $X$ runs over the twenty-four possible ordered sets of four DNA bases [1]. For example: CAG $\vee$ AUC = CCC $\leftrightarrow$ 110100 $\vee$ 011011 = 111111 ACG $\wedge$ UGA = GGG $\leftrightarrow$ 011100 $\wedge$ 100001 = 000000 $\neg$ (CAU) = GUA $\leftrightarrow$ $\neg$ (110110) = 001001 The Hasse diagram for the corresponding Boolean algebra derived from the direct product of the Boolean algebra of four DNA bases given in the above operation tables is: In the Hasse diagram, chains and anti-chains can be identified. A Boolean lattice subset is called a chain if any two of its elements are comparable; if, on the contrary, no two of its elements are comparable, the subset is called an anti-chain. In the Hasse diagram of codons shown in the figure, all chains of maximal length have the same minimum element GGG and the same maximum element CCC. Two codons lie in a common chain of maximal length if and only if they are comparable, for example the chain: GGG $\leftrightarrow$ GAG $\leftrightarrow$ AAG $\leftrightarrow$ AAA $\leftrightarrow$ AAC $\leftrightarrow$ CAC $\leftrightarrow$ CCC The Hasse diagram symmetry reflects the role of hydrophobicity in the distribution of codons assigned to each amino acid. In general, codons that code for amino acids with extreme hydrophobic differences are in different chains of maximal length. In particular, codons with U as a second base will appear in chains of maximal length whereas codons with A as a second base will not. 
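The direct-product structure means a codon is just a 6-bit word, with negation as bitwise complement. A minimal sketch verifying the three worked examples above:

```python
# Two-bit encoding of the bases, as in the base-level algebra
ENC = {'G': 0b00, 'A': 0b01, 'U': 0b10, 'C': 0b11}
DEC = {v: k for k, v in ENC.items()}

def enc(codon):
    """Pack three 2-bit bases into one 6-bit word (direct product B x B x B)."""
    x = 0
    for b in codon:
        x = (x << 2) | ENC[b]
    return x

def dec(x):
    """Unpack a 6-bit word back into a three-letter codon."""
    return DEC[(x >> 4) & 3] + DEC[(x >> 2) & 3] + DEC[x & 3]

codon_or  = lambda p, q: dec(enc(p) | enc(q))
codon_and = lambda p, q: dec(enc(p) & enc(q))
codon_not = lambda p: dec(~enc(p) & 0b111111)   # complement within 6 bits

# The worked examples from the text:
assert codon_or('CAG', 'AUC') == 'CCC'   # 110100 | 011011 = 111111
assert codon_and('ACG', 'UGA') == 'GGG'  # 011100 & 100001 = 000000
assert codon_not('CAU') == 'GUA'         # ~110110 = 001001
```

GGG (000000) and CCC (111111) are the bottom and top of this 64-element lattice, matching the minimum and maximum elements of the maximal chains described above.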
For that reason, it will be impossible to obtain hydrophobic amino acids with codons having U in the second position through deductions from hydrophilic amino acids with codons having A in the second position. There are twenty-four Hasse diagrams of codons, corresponding to the twenty-four genetic-code Boolean algebras. These algebras are connected by a group isomorphic to the symmetric group of degree four, $S_4$ [1]. In summary, the DNA binary code is not arbitrary, but subject to logic operations with an underlying biophysical meaning. References [1] Sánchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527-60. [2] Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull Math Biol, 2005, 67:1-14.
This is something I have been thinking about recently, allow me to complete Mariano Suárez-Alvarez' answer. First just an observation: "pick any Riemannian manifold with trivial holonomy at each point: for example, a space form of curvature zero": actually you have no choice, a connection with trivial holonomy has vanishing curvature, so the only candidate metrics are Euclidean. In the language of geometric structures, a flat torsion-free connection is equivalent to an affine structure. Now to the point. As we are going to see, the answer to your question is "generically, yes". First note that given a connection $\nabla$ (let's permanently assume that $\nabla$ is torsion-free, otherwise there's no chance of it being a Riemannian connection), by definition a metric $g$ has Levi-Civita connection $\nabla$ if and only if $g$ is $\nabla$-parallel: $\nabla g = 0$. Let us make a general observation about parallel tensor fields with respect to a given connection. If $F$ is a parallel tensor field, then $F$ is preserved by parallel transport. In particular: $F$ is completely determined by what it is at some point $x_0 \in M$ (to find $F_x$, parallel transport $F_{x_0}$ along some path from $x_0$ to $x$). $F_{x_0}$ must be invariant under the holonomy group $\operatorname{Hol}(\nabla, x_0)$. Conversely, given a tensor $F_{x_0}$ in some tangent space $T_{x_0} M$ such that $F_{x_0}$ is invariant under $\operatorname{Hol}(\nabla, x_0)$, there is a unique parallel tensor field $F$ on $M$ extending $F_{x_0}$ (obtained by parallel-transporting $F_{x_0}$). So you have the answer to your question in the following form: There are as many Riemannian metrics having Levi-Civita connection $\nabla$ as there are inner products $g$ in $T_{x_0} M$ preserved by $\operatorname{Hol}(\nabla, x_0)$. Now you may want to push the analysis further: how many is that? The answer is provided by analysing the action of the restricted holonomy group $\operatorname{Hol}_0(\nabla, x_0)$ on $T_{x_0}M$. 
Now this is just linear algebra: let's just call $G = \operatorname{Hol}_0(\nabla, x_0)$ and $V = T_{x_0} M$. Let $g$ and $h$ be two inner products in $V$ that are preserved by $G$, in other words $G \subset O(g)$ and $G \subset O(h)$. If $G$ acts irreducibly on $V$, i.e. there are no $G$-stable subspaces $\{0\} \subsetneq W \subsetneq V$, then a little exercise that I am leaving to you shows that $g$ and $h$ must be proportional. So in the generic case where $\nabla$ is irreducible, the answer to your question is yes: If $\nabla$ is irreducible, all Riemannian metrics with connection $\nabla$ must be equal up to positive scalars. NB: note that there might not be any such metrics if $G$ does not preserve any inner product on $V$; in other words, $G$ must be conjugate to a subgroup of $O(n)$. On the opposite side of the spectrum, if $G$ is trivial, i.e. $\nabla$ is flat, then $g$ and $h$ can be anything, there are no restrictions: If $\nabla$ is flat, there are as many Riemannian metrics with connection $\nabla$ as there are inner products in a $\dim M$-dimensional vector space; they are the Euclidean metrics on $M$. In the "general" case where $\nabla$ is reducible, I hope I am not mistaken (I won't write the details) in saying that you can derive from the de Rham decomposition theorem that the situation is a mix of the two previous "extreme" cases: If $\nabla$ is reducible, locally one can write $M = M_0 \times N$, such that both $g$ and $h$ split as products. The components of $g$ and $h$ on $M_0$ are both Euclidean and their components on $N$ are equal up to a scalar. If this is correct, I believe your question is answered completely. NB: In this paper (see also this one), Richard Atkins addresses this question. I haven't really looked but since it seems to me that there is not much more to say than what I've written, I have no idea what he's really doing in there.
Let A=111111 and B=142857. Find a positive integer N with six or fewer digits such that N is the multiplicative inverse of AB modulo 1,000,000. First solution: solve [111,111 x 142,857] x M mod 1,000,000 = 1 for M. GCD[111,111, 142,857] = 15,873 and LCM[111,111, 142,857] = 999,999, so M = 999,999 / 15,873 = 63 is the multiplicative inverse of AB mod 1,000,000. Second solution: \(\begin{array}{rcll} \text{Let} \\ & A=111111 \\ \text{and} \\ & B=142857 \\ \end{array}\) \(\begin{array}{rcll} A\text{ and } B \text{ are factors of } 999999. \\ & 9A=999999 \\ \text{and} \\ & 7B=999999 \\ \end{array}\) \(\begin{array}{rcll} A\text{ and } B \text{ modulo } 1000000. \\ & 9A \equiv -1 \pmod{1000000} \\ \text{and} \\ & 7B \equiv -1 \pmod{1000000} \\ \end{array}\) \(\begin{array}{rcll} \text{Multiply these equations: } \\ & (9A)(7B) &\equiv& (-1)(-1) \pmod{1000000} \\ & (AB) (9\cdot 7) &\equiv& 1 \pmod{1000000} \\ & (AB)\underbrace{(63)}_{=(AB)^{-1}} &\equiv& 1 \pmod{1000000} \\ \end{array}\) \(\text{So $N=\boxed{63}$ is the multiplicative inverse to $AB$ modulo $1000000$.} \)
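The answer is easy to sanity-check with Python's built-in modular inverse (three-argument `pow` with exponent -1, available since Python 3.8). Since AB is odd and not divisible by 5, it is coprime to 10^6 and the inverse exists and is unique:

```python
A, B = 111111, 142857

# gcd(A*B, 10**6) == 1, so the modular inverse exists and is unique
N = pow(A * B, -1, 10**6)

assert N == 63
assert (A * B * N) % 10**6 == 1
```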
Search Now showing items 1-10 of 26 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... 
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak1 min ago BTW your program looks very interesting, in particular the way to enter mathematics. One thing that seems to be missing is documentation (at least I did not find it). This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for. For example, upon entering $\frac xy$ will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$? ******* Is it possible to save a link to a particular search query? For example in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports. When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to. ******* If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. Which means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string: I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead: One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with a somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find... 
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som... @MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback, I really love your feedback and will seriously look into those points and improve approach0. Give me just some minutes, I will reply to your feedback in our chat. — Wei Zhong1 min ago I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward." BTW those animations with examples of searching look really cool. @MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong29 secs ago We are an open-source project hosted on GitHub: http://github.com/approach0 Welcome to send any feedback on our GitHub issue page! @MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users. @MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not get $x$ because approach0 considers them not structurally identical, but you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. 
As for your example, entering $\frac \qvar{x} \qvar{y} $ is enough to match it. @MartinSleziak As for the query link, it needs more explanation. Technically, the way you mentioned that Google is using is an HTTP GET method, but for mathematics a GET request may not be appropriate since the query has structure; usually a developer would alternatively use an HTTP POST request, encoded as JSON. This makes development much easier because JSON is richly structured and makes it easy to separate math keywords. @MartinSleziak Right now there are two solutions for the "query link" problem you addressed. First is to use the browser back/forward buttons to navigate among the query history. @MartinSleziak Second is to use the command line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve this later. (It just needs some extra effort though.) @MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked at the top, with different symbols such as "a", "b" ranked after the exact matches. @MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them. @MartinSleziak Yes, you can: Greek letters are tokenized to the same thing as normal alphabetic characters. @MartinSleziak As for integral upper bounds, I think it is a problem with a JavaScript plugin approach0 is using. I also observe this issue; the only thing you can do is use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit. @MartinSleziak Yes, it has a threshold now, but this is easy to adjust in the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on math stackexchange. 
This is a very small number, but I will index more posts/pages when search engine efficiency and relevance are tuned. @MartinSleziak As I mentioned, the index is currently too small. You probably will get what you want when this project develops to the next stage, which is to enlarge the index and publish. @MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published. So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar2 hours ago @GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid1 hour ago @quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak57 mins ago "What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all, it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc." while actual questions on these are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid7 mins ago @quid I will reply here, since I do not want to digress in the comments too much from the topic of that question. Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that". Book recommendations are certainly accepted on the main site, if they are formulated in the proper way. 
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly ok with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main (although there should not be). I guess some examples can be found here or here. Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed. Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously. I saw this kind of poll for the first time on TeX.SE. The poll there was concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc. Academia.SE has some questions which could be classified as "demographic" (including gender). @quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar. But that is only anecdotal. And if I am to believe Slovak Wikipedia, it should be Christus mansionem benedicat. From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov." My attempt at an English translation: The priest writes C+M+B on the door (Christus mansionem benedicat - Let Christ bless this house). However, this is often mistakenly explained as 20-G+M+B-16, after the initial letters of the alleged names of the three kings. As you can see there, Christus mansionem benedicat is translated into Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from the initial letters of the translation. It seems that they also have other interpretations in Poland. 
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants, and the initials are also believed to stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany." BTW in the village where I come from, the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question. In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3] A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance, if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar). In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing. About Slovakia specifically, it says there: The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). 
It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
According to the wikipedia article: http://en.wikipedia.org/wiki/Levenberg_Marquardt -- $S(\boldsymbol\beta+\boldsymbol\delta) \approx \|\mathbf{y} - \mathbf{f}(\boldsymbol\beta) - \mathbf{J}\boldsymbol\delta\|^2$ Taking the derivative with respect to $\boldsymbol\delta$ and setting the result to zero gives: $(\mathbf{J}^{T}\mathbf{J})\boldsymbol \delta = \mathbf{J}^{T} [\mathbf{y} - \mathbf{f}(\boldsymbol \beta)]$ -- My attempt to derive the equation: $\|\mathbf{y} - \mathbf{f}(\boldsymbol\beta) - \mathbf{J}\boldsymbol\delta\|^2 = (\mathbf{y} - \mathbf{f}(\boldsymbol\beta) - \mathbf{J}\boldsymbol\delta)^T(\mathbf{y} - \mathbf{f}(\boldsymbol\beta) - \mathbf{J}\boldsymbol\delta)$ using the product rule: $\frac{\partial \|\mathbf{y} - \mathbf{f}(\boldsymbol\beta) - \mathbf{J}\boldsymbol\delta\|^2}{\partial \boldsymbol\delta} = (-\mathbf{J}^{T})(\mathbf{y} - \mathbf{f}(\boldsymbol\beta) - \mathbf{J}\boldsymbol\delta) + (\mathbf{y} - \mathbf{f}(\boldsymbol\beta) - \mathbf{J}\boldsymbol\delta)^T(-\mathbf{J})$ The dimensions of the left and right side don't match. I believe there might be something wrong with my differentiation. There seems to be a transpose missing, but I'm not sure what would cause a transpose in the differentiation operation.
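A quick numeric sanity check (pure Python, a small 2×2 example of my own) confirms that the gradient of the linearized objective is $-2\mathbf{J}^T(\mathbf{y} - \mathbf{f} - \mathbf{J}\boldsymbol\delta)$, which vanishes exactly at the normal equations quoted from Wikipedia:

```python
# Finite-difference check of d/d(delta) ||r - J*delta||^2 = -2 J^T (r - J*delta),
# where r stands for y - f(beta).  Small 2x2 example, pure Python.

def objective(J, r, d):
    # S(delta) = ||r - J*delta||^2
    res = [r[i] - sum(J[i][j] * d[j] for j in range(2)) for i in range(2)]
    return sum(v * v for v in res)

J = [[1.0, 2.0], [3.0, 4.0]]
r = [1.0, -1.0]
d = [0.3, -0.2]
h = 1e-6

# finite-difference gradient (central differences)
fd = []
for k in range(2):
    dp, dm = d[:], d[:]
    dp[k] += h
    dm[k] -= h
    fd.append((objective(J, r, dp) - objective(J, r, dm)) / (2 * h))

# analytic gradient: -2 J^T (r - J d)
res = [r[i] - sum(J[i][j] * d[j] for j in range(2)) for i in range(2)]
grad = [-2 * sum(J[i][k] * res[i] for i in range(2)) for k in range(2)]

assert all(abs(a - b) < 1e-4 for a, b in zip(fd, grad))
```

Both terms from the product rule are transposes of each other (one is a column, one a row of the same numbers), which is why the dimensions in the attempt appear mismatched; written consistently as a column gradient they sum to $-2\mathbf{J}^T(\mathbf{r} - \mathbf{J}\boldsymbol\delta)$.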
Answer $\theta $ lies in the First Quadrant or Quadrant-I . Work Step by Step The trigonometric ratios are as follows: $\sin \theta =\dfrac{y}{r} \\ \cos \theta =\dfrac{x}{r} \\ \tan \theta =\dfrac{y}{x}\\ \csc \theta =\dfrac{r}{y} \\ \sec \theta =\dfrac{r}{x} \\ \cot \theta =\dfrac{x}{y}$ where, $ r=\sqrt {x^2+y^2}$ It has been seen that both $ x $ and $ y $ are positive; this implies that the angle $\theta $ lies in the First Quadrant or Quadrant-I .
Let's first have a look at the rectangular signal given as an example in your question. If you have a rectangle $s(t)$ in the time domain which is $1$ in the interval $[-T/2,T/2]$ and zero elsewhere, its Fourier transform is $S(f)=T\text{sinc}(Tf)$, where I use $\text{sinc}(x)=\sin(\pi x)/(\pi x)$. The value of its Fourier transform at $f=0$ equals $S(0)=T$, which corresponds to $$\int_{-\infty}^{\infty}s(t)dt=T\tag{1}$$ Its time average (or mean, or DC value) is given by $$\bar{s}=\lim_{T_0\rightarrow\infty}\frac{1}{T_0}\int_{-T_0/2}^{T_0/2}s(t)dt=0\tag{2}$$ It is clear that any function for which the integral in (1) is finite must have a DC value of zero. The integral in (1) is the value of the Fourier transform of the signal at DC, and this is probably what confuses you. The DC value of a signal and the value of its Fourier transform at DC are not the same. Any signal with a finite Fourier transform at DC has a DC value of zero, i.e. $\bar{s}=0$. Any signal with a non-zero DC value $\bar{s}\neq 0$ has a Dirac delta impulse component in its Fourier transform at DC. If you write a signal as $$s(t)=\bar{s}+\tilde{s}(t)$$ where $\bar{s}$ is the DC component as computed from (2), and, consequently, $\tilde{s}(t)$ has a DC component of zero, then its Fourier transform is $$S(f)=\bar{s}\delta(f)+\tilde{S}(f)$$ where $\tilde{S}(0)$ is finite. EDIT: Also note that when the Fourier transform of a signal $s(t)$ has a certain non-zero value at a frequency $f_0$, this does not entail that the signal has a pure sinusoidal component at that frequency. The same is true for DC. If the Fourier transform has a finite value at DC, the time-domain signal has no DC component; otherwise there would be a Dirac impulse at $f=0$, just as there would be a Dirac impulse at $f_0$ if the signal contained a sinusoid at that frequency.
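The distinction between (1) and (2) is easy to see numerically; a sketch with my own example values ($T = 2$, midpoint integration):

```python
# Rectangular pulse s(t) = 1 for |t| <= T/2, else 0 (here T = 2).
# Its integral (the Fourier transform at f = 0) is finite and nonzero,
# while its time average over a growing window tends to zero.

T = 2.0

def s(t):
    return 1.0 if abs(t) <= T / 2 else 0.0

def integral(a, b, n=200000):
    # midpoint rule
    h = (b - a) / n
    return sum(s(a + (k + 0.5) * h) for k in range(n)) * h

S0 = integral(-10, 10)             # value of the Fourier transform at DC: ~T
avg = integral(-500, 500) / 1000   # time average over window T_0 = 1000

assert abs(S0 - T) < 1e-3   # finite, equals T
assert avg < 0.01           # -> 0 as the window grows
```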
Up until a few days ago I was thinking that the following two forms of the Fisher Information are "always" equivalent: $$(1) \quad \mathcal{I(\theta)}= E_\theta [\frac{\partial \log \ell(y;\theta)}{\partial \theta} \frac{\partial \log \ell(y; \theta)}{\partial \theta'}],$$ $$(2) \quad \mathcal{I(\theta)}= E_\theta [-\frac{\partial^2 \log \ell(y;\theta)}{\partial \theta\partial \theta'} ],$$ where $\theta$ is the parameter vector, $y$ is the vector of observations, and $\ell$ is the likelihood function. I heard that (1) is the original definition of the Fisher Information; however, (2) only holds when the model is not "misspecified". I consulted some sources, among them Statistics and Econometric Models (Gourieroux and Monfort 1995, Vol 1, p83, property 3.8), but there is no mention of any regularity conditions that need to hold for the second definition of $\mathcal{I}(\theta)$ to be equivalent to the original definition (1). How does misspecification of a model cause this equivalence to break?
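A tiny worked example (my own, a Gaussian with known unit variance) illustrates how the two forms coincide under the true model and separate under misspecification; the expectations below are computed exactly, not simulated:

```python
# Gaussian model with unit variance: l(y; theta) = -(y - theta)^2 / 2 + const.
# Score: dl/dtheta = y - theta.  Negative Hessian: -d2l/dtheta2 = 1.
# If the data really have mean theta, E[(y - theta)^2] = 1 = E[-d2l/dtheta2].
# If the data actually have mean mu != theta (misspecification), the
# outer-product form picks up an extra (mu - theta)^2 and equality breaks.

theta = 0.5

def expected_score_sq(mu):
    # E_mu[(y - theta)^2] = Var(y) + (mu - theta)^2 = 1 + (mu - theta)^2
    return 1 + (mu - theta) ** 2

expected_neg_hessian = 1.0  # constant, independent of the data distribution

# correctly specified: mu == theta, forms (1) and (2) agree
assert abs(expected_score_sq(theta) - expected_neg_hessian) < 1e-12

# misspecified: mu != theta, the two "information" quantities differ
assert expected_score_sq(1.5) == 2.0
assert expected_score_sq(1.5) != expected_neg_hessian
```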
Inverses for Integer Addition Theorem $\forall x \in \Z: \exists -x \in \Z: x + \paren {-x} = 0 = \paren {-x} + x$ Proof Let us define $\eqclass {\tuple {a, b} } \boxtimes$ as in the formal definition of integers. $\boxtimes$ is the congruence relation defined on $\N \times \N$ by: $\tuple {x_1, y_1} \boxtimes \tuple {x_2, y_2} \iff x_1 + y_2 = x_2 + y_1$ In order to streamline the notation, we will use $\eqclass {a, b} {}$ to mean $\eqclass {\tuple {a, b} } \boxtimes$, as suggested. Thus:
$\eqclass {a, a + x} {} + \eqclass {a + x, a} {} = \eqclass {a + a + x, a + x + a} {}$
$= \eqclass {a, a} {}$ (by Construction of Inverse Completion: Members of Equivalence Classes)
$= \eqclass {a + x + a, a + a + x} {}$
$= \eqclass {a + x, a} {} + \eqclass {a, a + x} {}$
So $\eqclass {a, a + x} {}$ has the inverse $\eqclass {a + x, a} {}$. $\blacksquare$ Sources 1951: Nathan Jacobson: Lectures in Abstract Algebra: I. Basic Concepts ... (previous) ... (next): Introduction $\S 5$: The system of integers 1964: W.E. Deskins: Abstract Algebra ... (previous) ... (next): $\S 2.5$: Theorem $2.25$ 1982: P.M. Cohn: Algebra Volume 1 (2nd ed.) ... (previous) ... (next): Chapter $2$: Integers and natural numbers: $\S 2.1$: The integers: $\mathbf Z. \, 4$ 2008: Paul Halmos and Steven Givant: Introduction to Boolean Algebras ... (previous) ... (next): $\S 1$
Abbreviation: CanMon A cancellative monoid is a monoid $\mathbf{M}=\langle M, \cdot, e\rangle$ such that $\cdot $ is left cancellative: $z\cdot x=z\cdot y\Longrightarrow x=y$ and $\cdot $ is right cancellative: $x\cdot z=y\cdot z\Longrightarrow x=y$ Let $\mathbf{M}$ and $\mathbf{N}$ be cancellative monoids. A morphism from $\mathbf{M}$ to $\mathbf{N}$ is a function $h:M\rightarrow N$ that is a homomorphism: $h(x\cdot y)=h(x)\cdot h(y)$, $h(e)=e$ Example 1: $\langle\mathbb{N},+,0\rangle$, the natural numbers, with addition and zero. All free monoids are cancellative. All finite (left or right) cancellative monoids are reducts of groups. $\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &1\\ f(4)= &2\\ f(5)= &1\\ f(6)= &2\\ f(7)= &1\\ \end{array}$
A while back I bought a couple of PIC16F57 (DIP) chips because they were dirt cheap. I figured someday I could use these in something. Yes, I know, this is a horrible way to actually build something and a great way to accumulate junk. However, this time the bet paid off! Only about a year or two too late; but that’s beside the point. The problem I now had was that I didn’t have a PIC programmer. When I bought these chips I figured I could easily rig a board up to the chip via USB. Lo and behold, I didn’t read the docs properly; this chipset doesn’t have a USB-to-serial interface. Instead, it only supports Microchip’s In-Circuit Serial Programming (ICSP) protocol via direct serial communication. Rather than spend the $40 to buy a PIC programmer (thus accumulating even more junk I don’t need), I decided to think about how I could make this happen. Glancing at some of my extra devices lying around, I noticed an unused Arduino. This is how the idea for this project came to life. Believe me, the irony of programming a PIC chip with an ATmega is not lost on me. So for all of you asking, “why would anyone do this?” the answer is two-fold. First, I didn’t want to accumulate even more electronics I would not use often. Second, these exercises are just fun from time to time! Hardware Design My prototype’s hardware design targets an Arduino Uno (rev 3) and a PIC16F57. Assuming the protocol looks the same for other ICSP devices, a more reusable platform could emerge from a common connector interface. Likewise, for other one-offs it could easily be adapted for different pinouts. Today, however, I just have the direct design for interfacing these two devices: Overall, the design can’t get much simpler. For power I have two voltage sources. The Arduino is USB-powered, and its 5V output powers the PIC chip. Similarly, I have a separate +12V source for entering/exiting PIC programming mode.
For communication, I have tied the serial communication pins from the Arduino directly to the PIC device. The most complicated portion of this design is the transistor configuration, though even this is straightforward. I use the transistor to switch the 12V supply to the PIC chip. If I drive Arduino pin 13 high, the 12V source shunts to ground. Otherwise, 12V is supplied to the MCLR pin on the PIC chip. I make no claims that this is the most efficient design (either via layout or power consumption), but it’s my first working prototype. Serial Communication with an Arduino Arduino has made serial communication pretty trivial. The only problem is that the Arduino’s serial communication ports are UART. That is to say, the serial communication is asynchronous. The specification for programming a PIC chip with ICSP clearly states a need for a highly controlled clock for synchronous serial communication. This means that the Arduino’s Serial interface won’t work for us. As a result, we will use the Arduino to generate our own serial clock and signal the data bits accordingly. Setting the Clock Speed The first task in managing our own serial communication with the Arduino is to select an appropriate clock speed. The key to choosing this speed was finding a suitable trade-off between programming speed (i.e. a fast baud rate) and computation time on the Arduino (i.e. cycles of computation between each clock tick). Remember, the Arduino is ultimately running an infinite loop and isn’t actually doing any parallel computation. This means that the amount of time it takes to perform all of your logic for switching data bits must be negligible between clock ticks. If your computation time is longer than or close to the clock tick period, the computation will actually impact the clock’s ability to tick steadily. As a rule of thumb, you can set your clock rate to have a period that is roughly 1 to 2 orders of magnitude larger than your total computation time.
Taking these factors into account, I chose 9600 baud (a clock at 9.6 kHz). To perform all the logic required for sending the appropriate programming data bits, I estimated somewhere in the hundreds of nanoseconds to a few microseconds for computation. Giving myself some headroom, I selected a standard baud rate that was roughly two orders of magnitude larger than my computation estimate. Namely, a period of 104 microseconds corresponds to a 9.6 kHz clock. After completing the project I could have optimized my clock speed. However, that was unnecessary for this project. The clock rate I had selected worked well. The 9600 baud rate is fast enough to program the device in a timely manner because we don’t have much data to transmit. Similarly, it provides us a lot of headroom to experiment with different types of computation. Generating the Clock Signal While this discussion has primarily focused on the design decisions involved in choosing a clock signal rate, how did we generate it? The process really comes down to toggling a GPIO pin on the Arduino. In our specific implementation, I chose pin 2 on the Arduino. While you can refer to the code for more specific details, an outline of this process follows:

```cpp
inline bool clock_tick() {
  if (PORTD & _BV(SERIAL_CLOCK_PORT)) {
    // If the clock is currently high, toggle it low
    PORTD &= ~_BV(SERIAL_CLOCK_PORT);
    return false;
  }
  // Otherwise toggle it high; return true on a rising edge
  PORTD |= _BV(SERIAL_CLOCK_PORT);
  return true;
}

void loop() {
  if (clock_tick()) {
    // ... compute and control data signals
  }
  // delay for 52us (half clock period)
  waitForHalfClockPeriod();
}
```

As you can see, “ticking” the clock basically consists of toggling it and then making sure each loop iteration waits for half the clock period. The omitted section for data control is where most of the logic for the controller goes. However, it runs in a time that is far less than 52 microseconds.
As a result, the duration of each loop iteration can be considered as: $$ 52\,\mu s \gg \delta, \qquad 52\,\mu s + \delta \simeq 52\,\mu s $$ where \(\delta\) is the time required to perform the computation for data control. Consequently, the clock ticks at an appropriate rate. I have included an image taken from my oscilloscope below. This image provides some empirical evidence that what we’re doing should work. While there is no data being sent in this image (we’ll show more of that below), we can generate a nice clock signal (notice the 1/|dX| and BX-AX lines on the image) at 9.6 kHz by toggling the pin and waiting. Controlling the Data Line Now that we have a steady clock, we need to control the data line. Writing this section of code felt like I was back in my VHDL/Verilog days. The basic principle, from a signal-generation perspective, was to change the data lines only on a positive clock edge. There were minor complications for the read data command (since the pin has to go from output to input), but this was an isolated case with a straightforward solution. To actually control the signal, we manually drive the serial data pin (in our case, pin 4) high or low depending on the command and data each clock cycle. The ICSP programming protocol starts with a 6-bit command sequence. If the command requires data, then a framed 14-bit word (a total of 16 bits with the start and stop bits) is sent or received. Command and data bits are sent least significant bit first. In the case of my PIC16F57, the commands are only 4 bits, where the upper 2 bits are ignored by the PIC. Likewise, since the PIC16F57 has a 12-bit word, the upper 2 bits of the data word are also ignored while sending and receiving data. The Load Data Command Let’s first investigate the load data command. This command queues up data to write to the chip. A series of additional commands and delays are executed to flush this data to the chip.
The bits for the load command are 0bXX0010 (where X is a “don’t care” value). However, let’s take a look at it under the oscilloscope: The yellow curve is the clock and the blue curve is our data line. Starting from the left (and reading the blue curve under the yellow “high” marks) we can read our command exactly as intended: 0b0100XX. Notice that the order appears reversed, since the least significant bit is sent first. If you follow along a little bit further on the top, you’ll notice a clock-low delay. This delay allows the PIC chip to prepare for data. The data for the command immediately follows the delay. Implementation Overview Without going too deeply into the details (again, I refer to the code), the command sequences are modeled as a state machine. Generally, when executing a command, we keep track of the number of steps already taken for that command. Since each command consists of sending a finite number of bits, we know precisely what to do at each step. The other detail I mentioned earlier was about the read command. This command is sent over pin 4 in output mode, but during the delay this pin must switch to input mode. When in input mode, the PIC chip will proceed to send data at the given memory address. To accommodate this, each command starts by setting the pin to output mode. In the case of the read command, it sets the pin to input when appropriate. Conclusion I’ve enjoyed building out this project. When initially building, I really wanted to discover whether or not I could build a PIC programmer with an Arduino. This post reviews my initial prototype and gives a high-level description of the Arduino code. Unfortunately, the story doesn’t end here. Due to a variety of limitations, I had to introduce a PC-based controller to stream data to the Arduino. My finished product also removes extra elements (i.e. a second 12.5V power supply) and moves from a breadboard to a more permanent fixture. Even so, I leave these details to a part 2 of this post.
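The LSB-first ordering is easy to sketch outside the firmware; a minimal Python model (function name is mine) reproduces the bit sequence seen on the scope trace for the load command:

```python
# Emit the bits of an ICSP command LSB-first, in the order they would
# appear left-to-right on a scope trace.  The load-data command is
# 0bXX0010; with the don't-care bits taken as zero, the wire carries
# 0, 1, 0, 0, 0, 0 -- which reads back as the "0b0100XX" trace.

def lsb_first_bits(command, width=6):
    return [(command >> i) & 1 for i in range(width)]

LOAD_DATA = 0b000010  # upper two bits are don't-cares on the PIC16F57

bits = lsb_first_bits(LOAD_DATA)
assert bits == [0, 1, 0, 0, 0, 0]
```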
In any case, you can check out my code from this repo and run it today. While I work on the second part of the write-up, you can always read through what I’ve done. For now though, I will leave you with a picture of some messy breadboarding.
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero). I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, \dots, x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1} - 2$ non-constant polynomials in $R$ dividing it. But, for $n=2$, I can't find any non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$ I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ... Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!) On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case @Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question. Moreover, the title is vague and doesn't clearly ask a question. And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself. But if a title inherently states what the OP is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed. No, it was because I originally had a lot of errors in the expressions when I typed them out in latex, but I fixed them almost straight away lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre I have another problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60 mph. How long until Train B overtakes Train A? @swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying But 240 miles seems waaay too short to cross two time zones So my inclination is to say the answer key is nonsense You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
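The check from the chat can be scripted directly (distances in miles, times in hours after noon; a quick sketch):

```python
# Train A leaves at noon at 40 mph; train B leaves two hours later at 60 mph.
# B overtakes A when 60 * (t - 2) == 40 * t, i.e. t = 6 hours after noon (6pm).

def position_a(t):
    # miles from the station, t hours after noon
    return 40 * t

def position_b(t):
    return 60 * (t - 2) if t >= 2 else 0

assert position_a(6) == position_b(6) == 240  # they meet at 6pm, 240 miles out
assert position_a(4) > position_b(4)          # at 4pm, A is still ahead
```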
Hi there, I'm currently going through a proof of why all general solutions to second-order ODEs look the way they do. I have a question regarding the linked answer. Where does the term e^{(r_1-r_2)x} come from? It seems like it is taken out of the blue, but it yields the desired result.
I have been trying to determine the series expansion of the beta function, but so far I haven't been successful. The two results I wish to obtain are the following: $$ B(x,y) = \sum_{n=0}^{\infty} \frac{\binom{n-y}{n} }{x+n} $$ and \begin{equation} B(x,y) = \sum_{n=0}^{\infty} \frac{1}{n! (n+x)} \frac{\Gamma(n-y+1)}{\Gamma(1- y)} \end{equation} For the second result, it follows directly from (Eq. 16.79 of Basic Concepts of String Theory) $$B(x,y) = \frac{1}{\Gamma(1-y)} \int_{0}^{\infty} ds \int_0^1 dz \; s^{-y} e^{-(1-z)s} z^{x-1} $$ However, I don't understand what were the steps to obtain this equality. Therefore, any hints on how to proceed in obtaining the first result and this last equation would be highly appreciated.
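The two stated results are in fact the same series, since $\binom{n-y}{n} = \frac{\Gamma(n-y+1)}{n!\,\Gamma(1-y)}$. A numeric check of the series against $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ (my own parameter choices; the term ratio $a_n/a_{n-1}=(n-y)/n$ avoids evaluating large gamma factors directly):

```python
import math

# Check  B(x, y) = sum_{n>=0} a_n / (n + x)  with
# a_n = Gamma(n - y + 1) / (n! Gamma(1 - y)),  a_0 = 1,
# a_n = a_{n-1} * (n - y) / n  (recursion avoids overflow),
# against the closed form Gamma(x) Gamma(y) / Gamma(x + y).

x, y = 2.0, 1.5

total = 0.0
a = 1.0
for n in range(20000):
    if n > 0:
        a *= (n - y) / n
    total += a / (n + x)

exact = math.gamma(x) * math.gamma(y) / math.gamma(x + y)  # B(2, 1.5) = 4/15
assert abs(total - exact) < 1e-4
```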
Let $A$, $B$ and $C$ be three fields. If all elements of $A$ are algebraic over $B$ and all elements of $B$ are algebraic over $C$, prove that all elements of $A$ are algebraic over $C$. Suppose the element $\alpha\in A$ has minimal polynomial $\sum_{i=0}^{n}\beta_{i}X^{i}$ with $\beta_{i}\in B$. Then $C\subset B_{0}:=C\left(\beta_{0},\dots,\beta_{n}\right)$ and $B_{0}\subset B_{0}\left(\alpha\right)$ are finite extensions. Consequently $C\subset B_{0}\left(\alpha\right)$ is a finite (hence algebraic) extension. This tells us that $\alpha$ is algebraic over $C$. edit: Used are the following lemmas: If $K\subset L$ and $L\subset M$ are finite extensions then $K\subset M$ is a finite extension. Every finite extension is algebraic. (A proof of that can be found here). If you have an element $\alpha \in A$ which is algebraic over $B$, what can you say about the form of $\alpha$? Similarly, if you have an element $\beta \in B$ which is algebraic over $C$, what can you say about the form of $\beta$? Can you somehow write $\alpha \in A$ as an algebraic element over $C$?
This question comes from Georgi, Lie Algebras in Particle Physics. Consider the algebra generated by $\sigma_a\otimes1$ and $\sigma_a\otimes \eta_1$, where $\sigma_a$ and $\eta_1$ are Pauli matrices (so $a=1,2,3$). He claims this is "semisimple, but not simple". To me, that means we should look for an invariant subalgebra (a two-sided ideal). The multiplication table is pretty easy to figure out: $[\sigma_a,\sigma_b]=i\epsilon_{abc}\sigma_c,$ $[\sigma_a,\sigma_b\otimes\eta_1]=i\epsilon_{abc}\sigma_c\otimes\eta_1$ $[\sigma_a\otimes\eta_1,\sigma_b\otimes\eta_1]=i\epsilon_{abc}\sigma_c\otimes1$ I'm dropping the identity in all the places where it looks like it should be. So the only subalgebra is the $\mathfrak{su}(2)$ generated by $\sigma_a\otimes 1$, and that is not invariant from the second line above. So this looks like a simple algebra to me. Is there a typo somewhere I do not see?
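One way to see Georgi's claim: with the projectors $P_\pm = (1\pm\eta_1)/2$, the combinations $A_a = \sigma_a\otimes P_+$ and $B_a = \sigma_a\otimes P_-$ span the same algebra (since $A_a+B_a=\sigma_a\otimes 1$ and $A_a-B_a=\sigma_a\otimes\eta_1$), each set closes into an $\mathfrak{su}(2)$, and the two sets commute, so the algebra is $\mathfrak{su}(2)\oplus\mathfrak{su}(2)$: two proper ideals, hence semisimple but not simple. A numeric sketch of this check (pure-Python 4×4 complex matrices, helper names mine):

```python
# Verify [sigma_a (x) P+, sigma_b (x) P-] = 0 and that each sector closes,
# using explicit 4x4 Kronecker products in pure Python.

s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]
sigmas = [s1, s2, s3]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(A, B, c=1):
    # A + c*B
    return [[A[i][j] + c * B[i][j] for j in range(len(A))] for i in range(len(A))]

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)] for i in range(n * m)]

def comm(A, B):
    return add(mul(A, B), mul(B, A), -1)

def is_zero(A):
    return all(abs(A[i][j]) < 1e-12 for i in range(len(A)) for j in range(len(A)))

Pp = [[(I2[i][j] + s1[i][j]) / 2 for j in range(2)] for i in range(2)]  # (1 + eta1)/2
Pm = [[(I2[i][j] - s1[i][j]) / 2 for j in range(2)] for i in range(2)]  # (1 - eta1)/2

A = [kron(s, Pp) for s in sigmas]
B = [kron(s, Pm) for s in sigmas]

# the two sectors commute: [A_a, B_b] = 0 for all a, b
assert all(is_zero(comm(A[a], B[b])) for a in range(3) for b in range(3))

# each sector closes on itself: [A_1, A_2] = 2i A_3 (an su(2))
twoiA3 = [[2j * x for x in row] for row in A[2]]
assert is_zero(add(comm(A[0], A[1]), twoiA3, -1))
```

So the invariant subalgebras the question is looking for are spanned by the diagonal combinations $\sigma_a\otimes(1\pm\eta_1)$, not by $\sigma_a\otimes 1$ alone.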
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Definition An ordinary first order first degree differential equation is of the form \[\frac{dy}{dx}=f(x,y)……….(1)\] which can also be written as \[Mdx+Ndy=0……….(2)\] where M and N are functions of x and y, or constants. Not all first order first degree differential equations can be solved. However, in the following series of articles we shall consider the following special types of first order, first degree equations which can be solved by some standard method: 1. Equations in which the variables are separable 2. Homogeneous equations 3. Exact equations 4. Linear equations Equations in which variables are separable If equation (1) or (2) can be put in the form \[{{f}_{1}}(x)dx+{{f}_{2}}(y)dy=0\] then the equation can be solved easily by integrating each term separately. Thus the solution of the equation is \[\int{{{f}_{1}}(x)dx+\int{{{f}_{2}}(y)dy=c}}\] where c is an arbitrary constant. Example 01 \[Solve:ydx+\left( 1+{{x}^{2}} \right){{\tan }^{-1}}xdy=0\] Solution: Given, \[ydx+\left( 1+{{x}^{2}} \right){{\tan }^{-1}}xdy=0\] \[\Rightarrow ydx=-\left( 1+{{x}^{2}} \right){{\tan }^{-1}}xdy\] \[\Rightarrow \frac{dx}{\left( 1+{{x}^{2}} \right){{\tan }^{-1}}x}=-\frac{dy}{y}\] \[\Rightarrow \frac{dx}{\left( 1+{{x}^{2}} \right){{\tan }^{-1}}x}+\frac{dy}{y}=0\] Integrating, we get \[\int{\frac{dx}{\left( 1+{{x}^{2}} \right){{\tan }^{-1}}x}+\int{\frac{dy}{y}}}=\log c\] where c is an arbitrary constant. \[\Rightarrow \int{\frac{d\left( {{\tan }^{-1}}x \right)}{{{\tan }^{-1}}x}+}\int{\frac{dy}{y}}=\log c\] \[\Rightarrow \log \left( {{\tan }^{-1}}x \right)+\log y=\log c\] \[\Rightarrow \log \left( y{{\tan }^{-1}}x \right)=\log c\] \[\therefore y{{\tan }^{-1}}x=c\] is the required solution.
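The implicit solution of Example 01 can be sanity-checked numerically: along $y = c/\tan^{-1}x$ the ODE is satisfied. A sketch (my own constant and sample points):

```python
import math

# Check that y * atan(x) = c solves y dx + (1 + x^2) atan(x) dy = 0,
# i.e. dy/dx = -y / ((1 + x^2) * atan(x)), using y(x) = c / atan(x).

c = 2.0

def y(x):
    return c / math.atan(x)

for x in [0.5, 1.0, 3.0]:
    h = 1e-6
    dydx = (y(x + h) - y(x - h)) / (2 * h)        # numeric derivative
    rhs = -y(x) / ((1 + x * x) * math.atan(x))    # slope demanded by the ODE
    assert abs(dydx - rhs) < 1e-4
```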
Example 02 \[Solve:\frac{dy}{dx}+1={{e}^{x+y}}\] Solution: Given \[\frac{dy}{dx}+1={{e}^{x+y}}……….(1)\] \[Let,x+y=z\Rightarrow 1+\frac{dy}{dx}=\frac{dz}{dx}\] Therefore equation (1) becomes \[\frac{dz}{dx}={{e}^{z}}\Rightarrow {{e}^{-z}}dz=dx\] Integrating, we get \[\int{{{e}^{-z}}dz=\int{dx+c}}\] \[\Rightarrow -{{e}^{-z}}=x+c\Rightarrow -{{e}^{-\left( x+y \right)}}=x+c\] \[\therefore \frac{1}{{{e}^{\left( x+y \right)}}}+x+c=0\] is the required solution. Homogeneous differential equations A function f(x, y) is said to be a homogeneous function of degree n in x and y if it can be expressed in the form \[{{x}^{n}}\phi \left( \frac{y}{x} \right),or,{{y}^{n}}\mu \left( \frac{x}{y} \right)\] When M and N are both homogeneous functions of the same degree in x, y, the equation M dx + N dy = 0 is called a homogeneous equation. Hence a homogeneous equation can be put in the form \[\frac{dy}{dx}=F\left( \frac{y}{x} \right)\] Then putting y = vx, where v is a function of x, the above equation becomes \[v+x\frac{dv}{dx}=F(v)\] \[\Rightarrow \frac{dv}{F(v)-v}=\frac{dx}{x}\] Integrating both sides, we get the solution in terms of v and x. Finally, replacing v by (y/x), we get the required solution. Example \[Solve:\left( {{y}^{2}}-2xy \right)dx=\left( {{x}^{2}}-2xy \right)dy\] Solution: Here $y^2-2xy$ and $x^2-2xy$ are both homogeneous functions of the same degree 2.
The given equation can be written as \[\frac{dy}{dx}=\frac{\left( {{y}^{2}}-2xy \right)}{\left( {{x}^{2}}-2xy \right)}……….(1)\] \[Putting,y=vx\Rightarrow \frac{dy}{dx}=v+x\frac{dv}{dx}\] Now the equation (1) becomes \[v+x\frac{dv}{dx}=\frac{{{v}^{2}}{{x}^{2}}-2v{{x}^{2}}}{{{x}^{2}}-2v{{x}^{2}}}=\frac{{{v}^{2}}-2v}{1-2v}\] \[\Rightarrow x\frac{dv}{dx}=\frac{{{v}^{2}}-2v}{1-2v}-v\] \[\Rightarrow x\frac{dv}{dx}=\frac{{{v}^{2}}-2v-v+2{{v}^{2}}}{1-2v}=\frac{3v\left( v-1 \right)}{1-2v}\] \[\Rightarrow 3\frac{dx}{x}=\frac{1-2v}{v\left( v-1 \right)}dv\] \[\Rightarrow 3\frac{dx}{x}=-\frac{v+\left( v-1 \right)}{v\left( v-1 \right)}dv\] \[\Rightarrow 3\frac{dx}{x}=-\left\{ \frac{1}{v-1}+\frac{1}{v} \right\}dv\] Integrating we get \[3\int{\frac{dx}{x}}=-\int{\left\{ \frac{1}{v-1}+\frac{1}{v} \right\}dv+\log c}\] \[\Rightarrow 3\log x=-\log v-\log \left( v-1 \right)+\log c\] \[\Rightarrow \log \left\{ {{x}^{3}}v\left( v-1 \right) \right\}=\log c\] \[\Rightarrow {{x}^{3}}.\frac{y}{x}\left( \frac{y}{x}-1 \right)=c\] \[\therefore xy\left( y-x \right)=c\] is the required solution. Exact Differential equation A first order first degree differential equation of the form \[Mdx+Ndy=0……….(1)\] where both M and N are functions of x, y, is said to be exact if there exists a function u(x, y) such that \[Mdx+Ndy=du……….(2)\] Then equation (1) becomes du = 0, which on integration gives u(x, y) = c, c being a constant. Thus, u(x, y) = c is a solution of (1). For example, \[\log x\,dy+\frac{y}{x}dx=0\] is exact, \[\sin ce,\log x\,dy+\frac{y}{x}dx=d\left( y\log x \right)\] Hence $y\log x=c$, c being a constant, is a general solution of the equation.
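That $\log x\,dy + \frac{y}{x}dx$ really is the total differential of $u = y\log x$ can be confirmed by checking the partial derivatives numerically (a sketch, my own sample point):

```python
import math

# For u(x, y) = y * log(x), verify du/dx = y/x and du/dy = log(x),
# so that log(x) dy + (y/x) dx = du.

def u(x, y):
    return y * math.log(x)

x0, y0, h = 2.0, 1.5, 1e-6
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)

assert abs(ux - y0 / x0) < 1e-8        # du/dx = y/x
assert abs(uy - math.log(x0)) < 1e-8   # du/dy = log x
```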
Theorem: The necessary and sufficient condition for the differential equation M dx + N dy = 0 to be exact is \[\frac{\partial M}{\partial y}=\frac{\partial N}{\partial x}\] Method for solving M dx + N dy = 0 when it is exact Step 1: First integrate the terms in M with respect to x, treating y as constant. Step 2: Then integrate, with respect to y, those terms of N which do not contain x. Step 3: Lastly, equating the sum of the results of Step 1 and Step 2 to a constant gives the required solution. Thus the solution of Mdx + Ndy = 0 is ʃMdx (treating y as constant) + ʃ(terms of N not containing x)dy = c Example \[Show,\left( 3x+4y+5 \right)dx+\left( 4x-3y+3 \right)dy=0\] is an exact equation and hence solve it. Solution: \[Here,M=\left( 3x+4y+5 \right),N=\left( 4x-3y+3 \right)\] \[\therefore \frac{\partial M}{\partial y}=4;\frac{\partial N}{\partial x}=4\] \[\therefore \frac{\partial M}{\partial y}=\frac{\partial N}{\partial x}\] So, the given equation is an exact equation. Hence the solution of the equation is \[\int{\left( 3x+4y+5 \right)dx+\int{\left( -3y+3 \right)}}dy=c\] \[\Rightarrow \frac{3{{x}^{2}}}{2}+4xy+5x-\frac{3{{y}^{2}}}{2}+3y=c\] \[\therefore 3{{x}^{2}}-3{{y}^{2}}+8xy+10x+6y=2c=k\] where k = 2c is a constant. This is the required solution. Linear differential equations A first order first degree differential equation of the form \[\frac{dy}{dx}+Py=Q……….(1)\] where P and Q are functions of x alone or constants, is known as a first order linear equation.
To solve this type of equation, we use the integrating factor \[{{e}^{\int{Pdx}}}\] Multiplying both sides of (1) by the integrating factor, we get \[\frac{dy}{dx}.{{e}^{\int{Pdx}}}+P.y{{e}^{\int{Pdx}}}=Q{{e}^{\int{Pdx}}}\] \[\Rightarrow \frac{d}{dx}\left( y{{e}^{\int{Pdx}}} \right)=Q{{e}^{\int{Pdx}}}\] Integrating both sides, we get \[y{{e}^{\int{Pdx}}}=\int{Q{{e}^{\int{Pdx}}}dx+c}\] which is the required solution. Example \[Solve:{{\cos }^{2}}x\frac{dy}{dx}+y=\tan x\] Solution: The given equation can be written as \[\frac{dy}{dx}+y{{\sec }^{2}}x=\tan x{{\sec }^{2}}x……….(1)\] which is a linear equation in y. \[\therefore I.F.={{e}^{\int{{{\sec }^{2}}xdx}}}={{e}^{\tan x}}\] Multiplying both sides of (1) by the I.F. and integrating, we get \[y{{e}^{\tan x}}=\int{\tan x{{\sec }^{2}}x}{{e}^{\tan x}}dx+c……….(2)\] \[Let,\tan x=z\Rightarrow {{\sec }^{2}}xdx=dz\] \[\therefore y{{e}^{\tan x}}=\int{z{{e}^{z}}dz+c}\] \[\Rightarrow y{{e}^{\tan x}}=z\int{{{e}^{z}}dz-\int{\left\{ \frac{d}{dz}z.\int{{{e}^{z}}dz} \right\}}}dz+c\] \[\Rightarrow y{{e}^{\tan x}}=z{{e}^{z}}-{{e}^{z}}+c\] \[\Rightarrow y{{e}^{\tan x}}={{e}^{\tan x}}\left( \tan x-1 \right)+c\] is the required solution. In the upcoming articles we will learn about these methods thoroughly, with lots of solved questions. Hope to see you then. Don't forget to like, share and comment.
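To see the integrating-factor solution in action, here is a hedged numerical sketch (not part of the original article): integrate the linear equation from the last example with Runge–Kutta and compare against the closed form y = tan x − 1 + c·e^(−tan x) obtained above.

```python
import math

# Closed form from the worked example: y = tan(x) - 1 + c*exp(-tan(x)).
c = 2.0
exact = lambda x: math.tan(x) - 1 + c * math.exp(-math.tan(x))

def f(x, y):
    # dy/dx = tan(x)*sec^2(x) - y*sec^2(x), i.e. the equation in form (1)
    s2 = 1 / math.cos(x)**2
    return (math.tan(x) - y) * s2

x, y, h = 0.0, exact(0.0), 1e-4        # start from y(0) = c - 1
while x < 1.0 - 1e-12:
    k1 = f(x, y); k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2); k4 = f(x + h, y + h*k3)
    y += (h/6) * (k1 + 2*k2 + 2*k3 + k4)
    x += h
print(abs(y - exact(1.0)))             # should be tiny (RK4 accuracy)
```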
2. Series 31. Year Post deadline: 27th November 2017 Upload deadline: 28th November 2017 11:59:59 PM (3 points)1. Tooth Fairy How big would the storage facilities of the Tooth Fairy need to be to store all of the primary teeth of all of the children of the world? Or, in other words, how rapidly would they need to grow? How long would it take for the whole Earth's supply of phosphorus to be contained in those storage facilities? Karel's mind wandered to the Discworld (3 points)2. solar power plant The solar constant, or more accurately the solar irradiance, is the influx of energy coming from the Sun at the distance where Earth is. It technically doesn't have a constant value, but let's suppose it is approximately $P = 1{,}370\,\mathrm{W\cdot m^{-2}}$. Also, suppose that Earth's orbit is circular and its axis of rotation is tilted with respect to the normal of the orbital plane by $23.5\dg $. What would be the maximum power captured by a solar panel of area $S= 1\,\mathrm{m^2}$ at the summer and winter solstice, if the panel lies flat on the ground in Prague (latitude $50\dg $ N)? Ignore the effects of any obstructions or the atmosphere. Karel watched Crash Course Astronomy (6 points)3. observing What fraction of a spherical planet's surface cannot be seen from the stationary orbit above the planet? (A stationary orbit is one where the satellite stays fixed above a certain point on the planet.) The density of the planet is $\rho $ and its rotation period is $T$. Filip went through the unseen competition problems. (6 points)4. nuclear waste no more Imagine we have a thing (e.g. a nuclear waste container) and we want to get rid of it. We transfer the object to a circular orbit around the Sun at the same distance as Earth, but far enough from Earth to ignore its gravitational influence. Which of these methods of the object's disposal would require the least amount of energy and thus be the most efficient? Throw it into the Sun.
Getting it to the solar surface would be sufficient to burn the object. Transfer it to a circular orbit in the Asteroid belt (located between the orbits of Mars and Jupiter). Get it out of the Solar System completely. Karel thought about what exactly SEO is and discovered this problem. (7 points)5. raining glass A worker brought a bag of marbles to a skyscraper construction site, to show off in front of his colleagues. But, what an unlucky accident – the marbles pour out and start falling through the scaffolding towards the ground. The scaffolding consists of different levels separated by height $h$. The floor of each level is made out of an identical metal grid in which the holes constitute $k \%$ of the whole grid area. Consider a simplified model of marbles falling through the scaffolding, in which if a marble lands in a hole of the grid it passes through unobstructed, and if it lands on the solid part of the grid its velocity drops to $0$ and it starts to fall again immediately (i.e. the size of the marbles is insignificant with respect to the size of the holes in the scaffolding and the marbles don't bounce upon landing; instead they stop and immediately roll down into a hole and continue with their fall). Ignore any potential collisions between the marbles themselves. If we assume the marbles pour out of the bag with a constant mass flow of $Q$, what is the force on each level of the scaffolding when the situation comes to a steady state? Mirek wanted to transfer Ohm's law into mechanics. (10 points)P. ooh Oganesson What properties does the $118^{\rm th}$ element in the Periodic table have? Alternatively, what sort of properties would it have, had it been stable? Discuss at least three physical qualities. Karel wanted to have something on extrapolation. (10 points)S.
derivatives and Monte Carlo integration Plot the error as a function of step size for the method \[\begin{equation*} f'(x)\approx \frac {-f(x+2h)+f(x-2h)+8f(x+h)-8f(x-h)}{12h} \end {equation*}\] derived using Richardson extrapolation. What are the optimal step size and minimum error? Compare with forward and central differences. Use $\exp (\sin (x))$ at $x=1$ as the function you are differentiating. Bonus: Use the error estimate to determine the theoretical optimal step size. There is a file with experimentally determined $t$, $x$ and $y$ coordinates of a point mass on the website. Using numerical differentiation, find the time dependence of the components of velocity and acceleration and plot both functions. What is the most likely physical process behind this movement? Choose your own numerical method, but justify your choice. Bonus: Is there a better method for obtaining the velocity and acceleration than the direct application of numerical differentiation? We have an integral $\int _0^{\pi } \sin ^2 x \d x$. Find the value of the integral from a geometrical construction using Pythagoras' theorem. Find the value of the integral using a Monte Carlo simulation. Determine the standard deviation. Bonus: Solve Buffon's needle problem (an estimate of the value of $\pi $) using an MC simulation. Find the formula for the volume of a six-dimensional sphere using the Monte Carlo method. Hint: You can use Pythagoras' theorem to measure distances even in higher dimensions. Mirek and Lukáš read the Python documentation.
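For reference, here is a minimal sketch (plain Python, not a full solution to the series) of the Richardson-extrapolated five-point formula quoted above, compared with a plain central difference for f(x) = exp(sin x) at x = 1:

```python
import math

f = lambda x: math.exp(math.sin(x))
exact = math.cos(1.0) * math.exp(math.sin(1.0))   # f'(x) = cos(x)*exp(sin(x))

def d_central(f, x, h):
    # central difference, truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2*h)

def d_five_point(f, x, h):
    # Richardson-extrapolated five-point formula, truncation error O(h^4)
    return (-f(x + 2*h) + f(x - 2*h) + 8*f(x + h) - 8*f(x - h)) / (12*h)

h = 1e-2
err_c = abs(d_central(f, 1.0, h) - exact)
err_5 = abs(d_five_point(f, 1.0, h) - exact)
print(err_5 < err_c)                              # True
```

Sweeping h over several decades, as the problem asks, exposes the trade-off between truncation error (shrinks with h) and floating-point round-off (grows as h shrinks).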
Definition:Superfactorial Contents Definition Let $n \in \Z_{\ge 0}$ be a non-negative integer. The superfactorial of $n$ is defined as: $n\$ = \displaystyle \prod_{k \mathop = 1}^n k! = 1! \times 2! \times \cdots \times \left({n - 1}\right)! \times n!$ where $k!$ denotes the factorial of $k$. The sequence of superfactorials begins: $1, 2, 12, 288, 34 \, 560, 24 \, 883 \, 200, 125 \, 411 \, 328 \, 000, \ldots$ Also defined as Some sources, for example Clifford A. Pickover, define the superfactorial as: $n \$ = \underbrace{n!^{n!^{·^{·^{.^{n!}}}}} }_n$ Under this definition, the superfactorial of $1$ is given by: $1 \$ = 1! = 1$ The superfactorial of $2$ is given by: $2 \$ = 2!^{2!} = 2^2 = 4$ The superfactorial of $3$ is given by: $3 \$ = 3!^{3!^{3!} } = 6^{6^6} = 6^{46 \, 656}$ Also see Results about Superfactorials can be found here. Sources 1997: David Wells: Curious and Interesting Numbers (2nd ed.) ... (previous) ... (next): $288$ 1997: David Wells: Curious and Interesting Numbers (2nd ed.) ... (previous) ... (next): $3!^{3!^{3!} }$
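A short computational sketch of the product-of-factorials definition above (not Pickover's tower variant):

```python
from math import factorial

def superfactorial(n):
    """Product 1! * 2! * ... * n! (the empty product gives 1 for n = 0)."""
    result = 1
    for k in range(1, n + 1):
        result *= factorial(k)
    return result

print([superfactorial(n) for n in range(1, 8)])
# [1, 2, 12, 288, 34560, 24883200, 125411328000] -- matches the sequence above
```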
Abbreviation: CLRng A commutative lattice-ordered ring is a lattice-ordered ring $\mathbf{A}=\langle A,\vee,\wedge,+,-,0,\cdot\rangle$ such that $\cdot$ is commutative: $xy=yx$ Remark: This is a template. If you know something about this class, click on the ``Edit text of this page'' link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes. Let $\mathbf{A}$ and $\mathbf{B}$ be commutative lattice-ordered rings. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \vee y)=h(x) \vee h(y)$, $h(x \wedge y)=h(x) \wedge h(y)$, $h(x + y)=h(x) + h(y)$, $h(x \cdot y)=h(x) \cdot h(y)$. A is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that … $...$ is …: $axiom$ $...$ is …: $axiom$ Example 1: Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described. [[Commutative f-rings]] subvariety [[Lattice-ordered rings]] supervariety [[Abelian lattice-ordered groups]] subreduct [[Commutative rings]] subreduct
Consider the graph $(V,E)$ with vertex set $V=\{v_1,...,v_n\}$ and edge set $E\subset V\times V$. Further, assume that $\forall v_i\in V, (v_i,v_i)\in E$. Assume that each vertex has an $\textit{initial value}$ (i.e. there is a function $\phi_0:V\rightarrow\mathbb{R}$). We will think of these values as changing with time as I describe below. Also assume that each edge has an associated $\textit{weight}$. By this I mean that there is a function $\omega:E\rightarrow[0,1]$ such that 1.) $\forall$ $i\in\{1,...,n\}$ $\omega(v_i,v_i) > 0$ 2.) $\forall i$ we have that the following holds: $$\sum_{(v_i,v_j)\in E}\omega(v_i,v_j)=1$$. Finally, we define a discrete time dynamical system by letting $$\phi_k(v_i)=\sum_{(v_i,v_j)\in E}\omega(v_i,v_j)\phi_{k-1}(v_j).$$ So to sum up the situation: we have a graph with elements of $\mathbb{R}$ assigned to each vertex, and we have a dynamical system that at each time step replaces the value at a vertex with the average of the values at all adjacent vertices, where the average is a weighted average with weights given by $\omega$. Note that $\omega$ does not depend on $k$. We also assume that the vertex itself is a non-zero component of the average. I am generally interested in the behavior of this type of dynamical system. Specific types of questions that I'm interested in are: 1.) What is the long term behavior of this system? Are the functions $\phi_k$ asymptotically constant? What properties must $\omega$ or $\phi_0$ have to either guarantee this or guarantee that it does not happen (ignoring the trivial case where $\phi_0$ is constant)? 2.) What assumptions can we put on $\omega$ and $\phi_0$ such that the $\phi_k$ approach a non-constant steady state (i.e. such that $\phi_k\rightarrow \phi$ pointwise where $\phi$ is non-constant)? My feeling is that the $\phi_k$ should always converge, since we are taking averages and so the images of all $\phi_k$ lie inside some compact subset of $\mathbb{R}^n$. 3.)
Is there any way to determine the rate of either of the convergences above? I realize that the setup is fairly general, so I'm willing to add additional assumptions if needed. Any thoughts would be greatly appreciated!
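A small simulation (a sketch under the stated assumptions — self-loops with positive weight, row weights summing to 1; the graph and values are made up for illustration) suggests the typical answer to question 1 for a connected graph: the values converge to a common consensus value.

```python
# Iterate phi_k(v_i) = sum_j w[i][j] * phi_{k-1}(v_j) on a small connected
# graph whose weight matrix is row-stochastic with a positive diagonal.
w = [
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.4, 0.4, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.5, 0.5],
]
phi = [1.0, 0.0, 0.0, 2.0]        # initial values phi_0

for _ in range(500):              # discrete time steps
    phi = [sum(w[i][j] * phi[j] for j in range(4)) for i in range(4)]

print(phi)                        # all four entries are (numerically) equal
```

Questions 1 and 2 then amount to classifying the weight matrix via Perron–Frobenius theory: an irreducible, aperiodic row-stochastic matrix forces consensus, while a reducible one (a disconnected graph) can leave a non-constant steady state.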
Suppose $A\in R^{n\times n}$, where $R$ is a commutative ring. Let $p_i \in R$ be the coefficients of the characteristic polynomial of $A$: $\mathop{\mathrm{det}}(A-xI) = p_0 + p_1x + \dots + p_n x^n$. I am looking for a proof that: $-\mathop{\mathrm{adj}}(A) = p_1 I + p_2 A + \dots + p_n A^{n-1}$. In the case where $\mathop{\mathrm{det}}(A)$ is a unit, $A$ is invertible, and the proof follows from the Cayley-Hamilton theorem. But what about the case where $A$ is not invertible? Here is a direct proof along the lines of the standard proof of the Cayley–Hamilton theorem. [This works universally, i.e. over the commutative ring $R=\mathbb{Z}[a_{ij}]$ generated by the entries of a generic matrix $A$.] The following lemma combining Abel's summation and Bezout's polynomial remainder theorem is immediate.
Lemma Let $A(\lambda)$ and $B(\lambda)$ be matrix polynomials over a (noncommutative) ring $S.$ Then $A(\lambda)B(\lambda)-A(0)B(0)=\lambda q(\lambda)$ for a polynomial $q(\lambda)\in S[\lambda]$ that can be expressed as $$q(\lambda)=A(\lambda)\frac{B(\lambda)-B(0)}{\lambda}+\frac{A(\lambda)-A(0)}{\lambda}B(0)=A(\lambda)b(\lambda)+a(\lambda)B(0) \qquad (*)$$ with $a(\lambda),b(\lambda)\in S[\lambda].$ Let $A(\lambda)=A-\lambda I_n$ and $B(\lambda)=\operatorname{adj} A(\lambda)$ [viewed as elements of $S[\lambda]$ with $S=M_n(R)$], then $$A(\lambda)B(\lambda)=\det A(\lambda)=p_A(\lambda)=p_0+p_1\lambda+\ldots+p_n\lambda^n$$ is the characteristic polynomial of $A$ and $$A(0)B(0)=p_0 \text{ and } q(\lambda)=p_1+\ldots+p_n\lambda^{n-1}$$ Applying $(*),$ we get $$q(\lambda)=(A-\lambda I)b(\lambda)-\operatorname{adj} A \qquad (**) $$ for some matrix polynomial $b(\lambda)$ commuting with $A.$ Specializing $\lambda$ to $A$ in $(**),$ we conclude that $$q(A)=-\operatorname{adj} A\qquad \square$$ HINT $\;$ Work "generically", i.e. let the entries $\;\rm a_{i,j}$ of $\rm A\;$ be indeterminates and work in the matrix ring $\rm M = M_n(R)\;$ over $\;\rm R = {\mathbb Z}[a_{i,j}\:]. \;$ We wish to prove $\rm B = C$ from $\rm d\: B = d\: C$ for $\rm d = det\: A \in R, \;\; B,C \in M.$ But this is equivalent to $\rm d\: b_{i,j} = d\: c_{i,j}$ in the domain $\rm R = {\mathbb Z}[a_{i,j}\:]$ where $\;\rm d = det\: A \ne 0$, so $\rm d$ is cancelable, yielding $\;\rm b_{i,j} = c_{i,j}\;$ hence $\rm B = C$. This identity remains true over every commutative ring $\rm S$ since, by the universality of polynomial rings, there exists an evaluation homomorphism that evaluates $\;\rm a_{i,j}\;$ at any $\;\rm s_{i,j}\in S$. Notice that the crucial insight is that $\;\rm b_{i,j}\:, \; c_{i,j}\:,\; d\;$ have polynomial form in $\;\rm a_{i,j}\:$, i.e.
they are elts of the polynomial ring $\;\rm R = {\mathbb Z}[a_{i,j}\:] = {\mathbb Z}[a_{1,1},\cdots,a_{n,n}\:]$ which, being a domain, enjoys cancelation of elts $\ne 0$. Working generically allows us to cancel $\rm d$ and deduce the identity before any evaluation where $\rm d\mapsto 0.$ Such proofs by way of universal polynomial identities emphasize the power of the abstraction of a formal polynomial (vs. polynomial function). Alas, many algebra textbooks fail to explicitly emphasize this universal viewpoint. As a result, many students cannot easily resist the obvious topological temptations and instead derive hairier proofs employing density arguments (e.g. see elsewhere in this thread). Analogously, the same generic method of proof works for many other polynomial identities, e.g. $\rm\quad\; det(I-AB) = det(I-BA)\;\:$ by taking $\;\rm det\;$ of $\;\;\rm (I-AB)\;A = A\;(I-BA)\;$ then canceling $\;\rm det \:A$ $\rm\quad\quad det(adj \:A) = (det \:A)^{n-1}\quad$ by taking $\;\rm det\;$ of $\;\rm\quad A\;(adj\: A) = (det\: A) \;I\quad\;\;$ then canceling $\;\rm det \:A$ Now, for our pièce de résistance of topology, we derive the polynomial derivative purely formally. For $\rm f(x) \in R[x]$ define $\rm D f(x) = f_0(x,x)$ where $\rm f_0(x,y) = \frac{f(x)-f(y)}{x-y}.$ Note that the existence and uniqueness of this derivative follows from the Factor Theorem, i.e. $\;\rm x-y \; | \; f(x)-f(y)\;$ in $\;\rm R[x,y],\;$ and, from the cancelation law $\;\rm (x-y) g = (x-y) h \implies g = h$ for $\rm g,h \in R[x,y].$ It's clear this agrees on polynomials with the analytic derivative definition since it is linear and it takes the same value on the basis monomials $\rm x^n$.
Resisting limits again, we get the product rule for derivatives from the trivial difference product rule $$ \rm f(x)g(x) - f(y)g(y)\; = \;(f(x)-f(y)) g(x) + f(y) (g(x)-g(y))$$ $\quad\quad\quad\quad\rm\quad\quad\quad \Longrightarrow \quad\quad\quad\quad\quad\; D(fg)\quad = \quad (Df) \; g \; + \; f \; (Dg) $ by canceling $\rm x-y$ in the first equation, then evaluating at $\rm y = x$, i.e. specializing the difference "quotient" from the product rule for differences. Here the formal cancelation of the factor $\;\rm x-y\;$ before evaluation at $\;\rm y = x\;$ is precisely analogous to the formal cancelation of $\;\rm det \:A\;$ in all of the examples given above. I guess it is worth giving a fuller answer, and then Victor can tell me more precisely where I am missing some subtlety. As I said, the definition I know of the adjugate is that it is a matrix whose entries are polynomials in the entries $a_{ij}$ of $A$ and which satisfies $A \text{ adj}(A) = I \det A$ identically, e.g. over $\mathbb{Z}[a_{ij}]$. Assuming Cayley-Hamilton, we know that $p_0 I + p_1 A + ... + p_n A^n = 0$ identically and that $p_0 = \det A$, where $p_k \in \mathbb{Z}[a_{ij}]$ as well. Specializing now to $a_{ij} \in \mathbb{C}$ and supposing that $A$ is invertible, we conclude that $$A \text{ adj}(A) = - p_1 A - p_2 A^2 - ... - p_n A^n$$ implies $$\text{adj}(A) = - p_1 I - p_2 A - ... - p_n A^{n-1},$$ as you say. Lemma: The invertible $n \times n$ matrices are dense in the $n \times n$ matrices with the operator norm topology. Proof. Let $A$ be a non-invertible $n \times n$ matrix, hence $\det A = 0$. The polynomial $\det(A - xI)$ has leading term $(-1)^n x^n$, hence cannot be identically zero, so in any neighborhood of $A$ there exists $x$ such that $A - xI$ is invertible. But everything in sight is continuous in the operator norm topology, so the conclusion follows for all complex matrices, and hence identically.
(I should mention that this is not even my preferred method of proving matrix identities. Whenever possible, I try to prove them combinatorially by interpreting $A$ as the adjacency matrix of some graph. For example - confession time! - this is how I think about Cayley-Hamilton. This is far from the cleanest or the shortest way to do things, but my combinatorial intuition is better than my algebraic intuition and I think it's good to have as many different proofs of the basics as possible.) As an arithmetic geometer, I have no choice but to use topological methods hand in hand with algebraic methods. Very likely necessity has been the mother of aesthetics here, but I find proofs of linear algebra facts using genericity arguments to be beautiful and insightful. Qiaochu has shown how to answer the OP's question using these methods [he uses the "analytic" -- i.e., usual -- topology on $\mathbb{C}^n$, but close enough] assuming the Cayley-Hamilton theorem. Here I want to show that one can also prove the Cayley-Hamilton theorem quickly by these methods. Step 1: To prove C-H as a polynomial identity, it is enough to prove that it holds for all $n \times n$ matrices over $\mathbb{C}$. Proof: Indeed, to say C-H holds as a polynomial identity means that it holds for the generic matrix $A = \{a_{ij}\}_{1 \leq i,j \leq n}$ whose entries are independent indeterminates over the ring $R = \mathbb{Z}[a_{ij}]$. But this ring embeds into $\mathbb{C}$ -- indeed into any field of characteristic zero and infinite absolute transcendence degree -- and two polynomials with coefficients in a domain $R$ are equal iff they are equal in some extension domain $S$. Step 2: C-H is easy to prove for complex matrices $A$ with $n$ distinct eigenvalues $\lambda_1,\ldots,\lambda_n$. Proof: The characteristic polynomial evaluated at $A$ is $P(A) = \prod_{i=1}^n(A-\lambda_i I_n)$. 
Let $e_1,\ldots,e_n$ be a basis of $\mathbb{C}^n$ such that each $e_i$ is an eigenvector for $A$ with eigenvalue $\lambda_i$. Then -- using the fact that the matrices $A - \lambda_i I_n$ all commute with each other -- we have that for all $e_i$, $P(A)e_i = \left(\prod_{j \neq i} (A-\lambda_j I_n)\right) (A-\lambda_i I_n) e_i = 0.$ Since $P(A)$ kills each basis element, it is in fact identically zero. Step 3: The set of complex matrices with $n$ distinct eigenvalues is a Zariski-open subset of $\mathbb{C}^{n^2}$: indeed this is the locus of nonvanishing of the discriminant of the characteristic polynomial. Since we can write down diagonal matrices with distinct entries, it is certainly nonempty. Therefore it is Zariski dense, and any polynomial identity which holds on a Zariski dense subset of $\mathbb{C}^{n^2}$ holds on all of $\mathbb{C}^{n^2}$. This formula can be obtained during a proof of the Cayley-Hamilton theorem, as is indicated on its Wikipedia article. The essence of the argument is that Euclidean division by a monic polynomial (on the left, say) can be performed in the polynomial ring over any (unitary) ring, not necessarily commutative; this follows directly from consideration of what Euclidean division does, or by a simple inductive argument. Since I care about polynomials being monic, I'll define the characteristic polynomial of a matrix $A$ to be $\chi_A=\det(I_nX-A)=\sum_{i=0}^nc_iX^i$ where $c_n=1$ (and $c_0=\det(-A)$), so the result to prove becomes $\mathrm{adj}(-A)=c_1I_n+c_2A+\cdots+c_{n-1}A^{n-2}+A^{n-1}=\sum_{i=1}^nc_iA^{i-1}$ Consider the noncommutative ring $M=\mathrm{Mat}_n(R)$, and using Euclidean division in $M[X]$ (in which $R[X]$ is embedded by mapping $r$ to $rI_n$) divide $\chi_A$ on the left by $X-A$. Since we know that $(X-A)\mathrm{adj}(X-A)=\det(X-A)=\chi_A$, uniqueness of quotient and remainder in Euclidean division implies they will have to be $\mathrm{adj}(X-A)$ and $0$, respectively.
Writing the quotient $\mathrm{adj}(X-A)=\sum_{i=0}^{n-1}B_iX^i$, its coefficients $B_i\in M$ are determined in the division successively as $B_{n-1}=c_n=1$ and $B_{i-1}=c_i+AB_i$ for $i=n-1,\ldots,1$ (these are just the intermediate values while computing the evaluation of $\chi_A$ at $X=A$ using the Horner scheme), which expands to $B_{i-1}=c_iA^0+c_{i+1}A^1+\cdots+c_nA^{n-i}$. In particular the constant coefficient of the quotient $\mathrm{adj}(X-A)$ equals $B_0=\sum_{i=1}^nc_iA^{i-1}$, but this is also $\mathrm{adj}(-A)$ (by substituting $X=0$). To retrieve the Cayley-Hamilton theorem from the formula found, multiply on the left or right by $A$ and move the left hand side to the right. EDIT OF AUG. 31, 2010. The proof of the Cayley-Hamilton Theorem I like best (among the ones I know) is on page 21 (proof of Proposition 2.4) of Introduction to Commutative Algebra by Atiyah and MacDonald. The argument can be phrased as follows. Let $K$ be a commutative ring; let $n$ be a positive integer; let $A=(a_{ij})\in M_n(K)$ be an $n$ by $n$ matrix with entries in $K$; let $\chi$ be its characteristic polynomial; define $B=(b_{ij})\in M_n(K[A])$ by $b_{ij}:=\delta_{ij}\,A-a_{ij}$; let $(e_i)$ be the canonical basis of $K^n$; observe $$\sum_i\, \, b_{ij}\, e_i=0,\quad\det B=\chi(A);$$ and write $(c_{ij})$ for the adjugate of $B$. Applying (a trivial case of) Fubini's Theorem to the double sum $\sum_{i,j}\, c_{jk}\, b_{ij}\, e_i$, we get $\chi(A)=0$. Thank you very much to darij grinberg! [I'm leaving the previous edits "for the record".] END OF EDIT OF AUG. 31, 2010. PREVIOUS EDITS: Here is a proof of the Cayley-Hamilton Theorem. Let $K$ be a commutative ring, let $n$ be a positive integer, let $X$ be an indeterminate, let $A\in M_n(K)$ be an $n$ by $n$ matrix with coefficients in $K$, and let $\chi:=\det(X-A)$ be the characteristic polynomial. Equip $K^n$ with the $K[X]$-module structure induced by $A$. We must check $\chi K^n=0$.
Form the right $M_n(K[X])$-module $$H:=\mathrm{Hom}_{K[X]}(K[X]^n,K^n).$$ Let $e\in H$ be the evaluation at $A$ (note $K[X]^n=K^n[X]$). As $e$ is surjective, it suffices to show $e\chi=0$. As $X-A$ divides $\chi$ on the left, it suffices to show $e(X-A)=0$. But this is obvious. EDIT OF AUG. 1, 2010. Here is a diagrammatic rewriting of the argument. EDIT OF AUG. 30, 2010. Here is a coordinate version of the above argument. [Compare with the proof of Proposition 3, page 81 of Weil's Basic Number Theory, and with the proof of Proposition 2.4, page 21 of Introduction to Commutative Algebra by Atiyah and MacDonald]. Weil's formulation. Put $$B(X)=(b_{ij}(X)):=X-A\in M_n(K[X]),$$ and let $C(X)=(c_{ij}(X))$ be the adjugate of $B(X)$. We have $$\sum_j\ c_{jk}(X)\ b_{ij}(X)=\delta_{ik}\ \chi(X)\in K[X].$$ Replacing $X$ with $A$, evaluating on $e_i$ (the $i$-th vector of the canonical basis of $K^n$), and summing over $i$ gives $$\sum_j\ c_{jk}(A)\ \sum_i\ b_{ij}(A)\ e_i=\chi(A)\ e_k\in K^n.$$ But the second sum is 0 by definition of $b_{ij}(X)$. Atiyah-MacDonald's formulation. Put $A=(a_{ij})$ and define $B=(b_{ij})\in M_n(K[A])$ by $b_{ij}:=\delta_{ij}A-a_{ij}$; observe $$\sum_i\, b_{ij}\, e_i=0,\quad\det B=\chi(A);$$ and write $(c_{ij})$ for the adjugate of $B$. Computing $\sum_{i,j}\,c_{jk}\,b_{ij}\,e_i$ in two ways we get $\chi(A)=0$.
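As a numerical sanity check of the identity in the question, $-\operatorname{adj}(A)=p_1I+p_2A+\dots+p_nA^{n-1}$, here is a sketch for $n=3$ on a *singular* integer matrix (using the explicit cofactor formula for the adjugate and the trace/principal-minor formulas for the coefficients; this is an illustration, not taken from any of the answers above):

```python
# Verify -adj(A) = p1*I + p2*A + p3*A^2 for a singular 3x3 integer matrix,
# where det(A - xI) = p0 + p1*x + p2*x^2 + p3*x^3.
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]          # det(A) = 0, so A is not invertible

def det2(a, b, c, d):
    return a*d - b*c                           # determinant of [[a, b], [c, d]]

def adj3(A):
    C = [[0]*3 for _ in range(3)]              # cofactor matrix
    for i in range(3):
        for j in range(3):
            r = [k for k in range(3) if k != i]
            c = [k for k in range(3) if k != j]
            minor = det2(A[r[0]][c[0]], A[r[0]][c[1]], A[r[1]][c[0]], A[r[1]][c[1]])
            C[i][j] = (-1)**(i + j) * minor
    return [[C[j][i] for j in range(3)] for i in range(3)]   # adjugate = C^T

# Coefficients of det(A - xI) for n = 3: p3 = -1, p2 = tr(A),
# p1 = -(sum of principal 2x2 minors), p0 = det(A).
tr = A[0][0] + A[1][1] + A[2][2]
m2 = sum(det2(A[i][i], A[i][j], A[j][i], A[j][j])
         for i in range(3) for j in range(i + 1, 3))
p1, p2, p3 = -m2, tr, -1

rhs = [[p1*(i == j) + p2*A[i][j] + p3*sum(A[i][k]*A[k][j] for k in range(3))
        for j in range(3)] for i in range(3)]
lhs = [[-x for x in row] for row in adj3(A)]
print(lhs == rhs)                              # True
```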
I'm studying a toy theory in quantum field theory. There are two free fields: a real massive scalar field $\phi$ with mass $M$ and a complex massive scalar field $\Psi$ with mass $m$. They are coupled by $$ \mathcal{L} \subset g \Psi \Psi^\dagger \phi $$ I'm well aware that this interaction term results in a Lagrangian which is unbounded below, but it's just a toy model I'm using to try to get a grasp on the basics of quantum field theory. Now, when I go to compute the tree-level scattering amplitude for $\Psi\Psi \to \Psi\Psi$ scattering I end up with a $t$-channel diagram and a $u$-channel diagram. (In both diagrams the two $\Psi$ particles come in, exchange an off-shell $\phi$ particle, and scatter into their final states). The sum of these two diagrams gives me the total matrix element, which goes like $$ \frac1{(t-M^2)} + \frac1{(u-M^2)} $$ ignoring the factors of $i$ and the factor of $g^2$. In computing this amplitude I worked in the center of mass/momentum frame, with the initial four-momenta of the two particles being $(E(p),0,0,\pm p)$ and the final four-momenta being $(E(p),0, \pm p\sin\theta, \pm p\cos\theta)$. This ensures that four-momentum is conserved. With this parametrization, $t$ becomes $-4(p\sin(\theta/2))^2$, and $u$ becomes $-4(p\cos(\theta/2))^2$. Things get interesting in the limit as $M \to 0$, i.e., the mass of the $\phi$ particle approaches zero. In this case the scattering amplitude becomes unbounded as $\theta$ approaches zero or $\pi$. This is, to my mind, a highly unphysical result. Is this divergence a result of a fundamental issue with this toy model? How can we make sense of this result? What is the physical interpretation of a divergent scattering amplitude? Worse still is the fact that the integrated/total cross section also appears to diverge in all of these cases.
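The blow-up described in the question is easy to see numerically (a sketch in arbitrary units, with $g$ and the factors of $i$ dropped just as in the question):

```python
import math

def amplitude(theta, p=1.0, M=0.0):
    """Tree-level t- plus u-channel sum, 1/(t - M^2) + 1/(u - M^2)."""
    t = -4 * (p * math.sin(theta / 2))**2
    u = -4 * (p * math.cos(theta / 2))**2
    return 1 / (t - M**2) + 1 / (u - M**2)

# With M = 0 the magnitude grows without bound as theta -> 0 or pi
# (forward/backward scattering).
for theta in (1.0, 0.1, 0.01):
    print(theta, abs(amplitude(theta)))
```

This is the same forward divergence that appears in Rutherford scattering off a Coulomb potential, where the total cross section diverges too: a massless mediator produces a long-range $1/r$ potential, so arbitrarily small deflections accumulate an infinite cross section.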
5. Series 31. Year Post deadline: 26th March 2018 Upload deadline: 27th March 2018 11:59:59 PM (3 points)1. staircase on the Moon If we ever colonized the Moon, would it be appropriate to use stairs there? Imagine a descending staircase on the Moon. The height of one stair is $h=15 \mathrm{cm}$ and its length is $d=25 \mathrm{cm}$. Estimate the number $N$ of stairs that a person would fly over if he walked onto the staircase with a velocity $v=5{,}4 \mathrm{km\cdot h^{-1}}=1{,}5 \mathrm{m\cdot s^{-1}}$. The gravitational acceleration on the Moon's surface is six times weaker than on Earth's surface. Dodo read The Moon Is a Harsh Mistress. (3 points)2. death rays on the glass A light ray falls on a glass plate with an absolute refractive index $n = 1{,}5$. Determine its angle of incidence $\alpha _1$ if the reflected ray forms an angle $60 \dg$ with the refracted ray. The plate is surrounded by air. Danka likes solving several problems simultaneously. (5 points)3. wedge We have two wedges with the masses $m_1$, $m_2$ and the angle $\alpha $ (see figure). Calculate the acceleration of the left wedge. Assume that there is no friction anywhere. Bonus: Consider friction with the $f$ coefficient. Jáchym robbed the CTU scripts. (7 points)4. thermal losses At what temperature does the indoor environment of a flat in a block of flats stabilise? Consider that our flat is adjacent to other apartments (except at its shorter walls), in which the temperature $22 \mathrm{\C}$ is maintained. The shorter walls adjoin the surroundings where the temperature is $ - 5 \mathrm{\C}$. The inside dimensions of the flat are height $ h = 2{,}5 \mathrm{m}$, width $ a = 6 \mathrm{m}$ and length $ b = 10 \mathrm{m} $. The specific thermal conductivity of the walls is $ \lambda = 0{,}75 \mathrm{W\cdot K^{-1}\cdot m^{-1}} $. The thickness of the outer walls and the ceilings is $ D\_{out} = 20 \mathrm{cm}$, and the thickness of the inner walls is $ D\_{in} = 10 \mathrm{cm}$.
How will the result be changed if we add polystyrene insulation to the building? The thickness of the polystyrene is $ d = 5 \mathrm{cm}$, and its specific heat conductivity is $ \lambda '= 0{,}04 \mathrm{W\cdot K^{-1}\cdot m^{-1}} $. (8 points)5. sneaky dribblet Let's take a spherical drop of radius $ r_0 $ made of water of density $ \rho \_v $ which falls through mist in the homogeneous gravity field $g$. Consider a suitable mist with special assumptions: it consists of air of density $\rho \_{vzd}$ and water droplets with an average density of $\rho \_r$, and we consider that the droplets are dispersed evenly. If a drop falls through some volume of such mist, it collects all the water that is in that volume. Only air is left in this place. What is the dependence of the mass of the drop on the distance traveled in such a fog? Bonus: Solve the equations of motion. Karel wanted to assign something with changing mass.
Let's say we are given a function $f(x)$, which is not defined at the point $x_0$. How do we find a linear approximation of $f$ near $x_0$? P.S. I wrote "linear" just to make things simpler. I came across this problem while trying to approximate the following function near zero: $\frac{\ln x}{x e^x}$. My problem is that to approximate this function near zero I have to put zero in for $x$ as the first (or zeroth) term of the Maclaurin series, but this way I get zero in the denominator. Well, there are two problems here. Assuming you want to linearize this function in the usual sense of approximating it by its first derivative, then you can use a kind of limiting process. In order to evaluate a function where it is not formally defined you can use the limiting value. For example, if you want to evaluate a function $f$ at some point $x_0$ where it is not defined, you could try evaluating the function in the limit as $x \to x_0$. Now in your specific case this will not work, as the function obviously explodes as it approaches zero. The reason for this is that as $x$ goes to zero from the right, the logarithm goes to negative infinity and $\frac{1}{xe^x}$ goes to positive infinity, so the product goes to negative infinity. Further, your function is not defined to the left of zero, so there is no way to build a two-sided derivative. The best thing you can hope for is to linearize your function around $0 + \varepsilon$. Usually, even if a function is undefined at some point but is defined on both sides of the point, you can use the symmetric-difference definition of the derivative, $ \lim \limits_{h\to 0} \frac{f(x+h) - f(x-h)}{2h}$, to evaluate the derivative at that point. In your case you could say that at zero, the best "linear" approximation to your function is a vertical line through the origin.
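The symmetric-difference idea from the answer can be sketched directly. Here it is applied to a function with a *removable* singularity, $\sin(x)/x$ at $x=0$, where the trick actually works (unlike $\frac{\ln x}{xe^x}$, which genuinely blows up):

```python
import math

def sym_diff(f, x0, h=1e-5):
    """Symmetric difference quotient; never evaluates f at x0 itself."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

f = lambda x: math.sin(x) / x          # undefined at 0, but the limit there is 1
print(sym_diff(f, 0.0))                # 0.0 -- the derivative of the extended
                                       # function x -> sin(x)/x (= 1 at 0)
```

Because sin(x)/x is even, the two sample values cancel exactly, which is consistent with the extended function having a flat tangent at 0 (its Taylor series is 1 − x²/6 + …).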
6. Series 31. Year Post deadline: 7th May 2018 Upload deadline: 8th May 2018 11:59:59 PM (3 points)1. they came apart We have two point masses with the same mass $m$ at a distance $d$ from each other. They are located freely in space with no external gravitational forces. What's the minimum velocity we need to impart on one of the points, in the direction away from the other point, so that they keep flying away from each other indefinitely? Matej played with the universe (3 points)2. hot wire Calculate the current that needs to pass through a metal wire of diameter $d = 0{,}10 \mathrm{mm}$ located in a vacuum bulb so that its temperature stays at $T = 2600 \mathrm{K}$. Assume the surface of the wire radiates like an ideal black body and neglect any losses by heat conduction. The resistivity of the material of the wire at the given temperature is $\rho = 2{,}5 \cdot 10^{-4} \mathrm{\Omega \cdot cm}$. Hint: Use the Stefan–Boltzmann law. Danka was contemplating light bulb efficiency (6 points)3. non-analytic spring Imagine a pole of length $b = 5 \mathrm{cm}$ and mass $m = 1 \mathrm{kg}$ and a spring of initial length $c = 10 \mathrm{cm}$, spring constant $k = 200 \mathrm{N\cdot m^{-1}}$ and negligible mass, that are connected at one of their ends. The other ends of the spring and the pole are affixed at the same height, a distance $a = 10 \mathrm{cm}$ from each other. The spring and the pole can both freely rotate about the fixed points and their joint. Label $\phi $ the angle of the pole to the horizontal. Find all angles $\phi $ for which the system is in equilibrium. Which of these are stable and which unstable? Jachym was supposed to come up with an easy problem. (7 points)4. dimensional analysis Matej was making a gun and wanted to measure the speed of the projectiles leaving the barrel. Unfortunately, he doesn't have any measuring device other than a ruler. However, he found a block that is made half from steel, half from wood.
He lays it down at the edge of the table (of height $100 \mathrm{cm}$ and length $200 \mathrm{cm}$) and shoots at it horizontally. With the steel part of the block facing the gun, the bullet bounces off perfectly elastically and lands $50 \mathrm{cm}$ from the edge of the table. The block slides $5 \mathrm{cm}$ on the table. Then Matej turns the block around and shoots into the wooden side. This time the bullet stays in the block and the block slides only $4 \mathrm{cm}$. Help Matej calculate the speed of the bullet. It might also be helpful to know that when Matej lifts one edge of the table by at least $20 \mathrm{cm}$, a moving block won't stop sliding. Matej wanted all the variables to have the same unit. (8 points)5. jump from a plane Filip, of mass $80 \mathrm{kg}$, jumped out of an airplane that is $h_1 =500 \mathrm{m}$ above the ground. At the same time, Danka (mass $50 \mathrm{kg}$) jumps out of a different airplane, but from a height of $h_2 =569 \mathrm{m}$. Assume both of them have the same drag coefficient $C = 1{,}2$, Filip's cross-sectional area is $S_F = 2{,}2 \mathrm{m^2}$ and Danka's is $S_D=1{,}5 \mathrm{m^2}$. The density of air, $\rho =1{,}205 \mathrm{kg\cdot m^{-3}}$, stays the same at all heights. At what time will Danka be at the same height above the ground as Filip? Danka contemplated the strenuous life of a physicist and wanted to break free for a moment. (9 points)P. universe expansion compensation According to current observations and cosmological models, it seems that our Universe is expanding and the rate of expansion is accelerating. What if that wasn't the case? What if the Universe stayed the same, but the physical laws/constants were changing so that it would seem like the universe is expanding, the way we observe it? Describe as many laws as possible that would need to change. Karel was intrigued whether one can compensate for the expansion of the universe.
Using the definition of $m$ as an outer measure, there exist $A_i=(a_i,b_i]$ such that $A\subset \cup_i A_i$ and $\sum_i (b_i-a_i) \leq m(A) + \epsilon/2$. Let $b_i' = b_i + \epsilon 2^{-i-1}$. $G:=\cup_i (a_i,b_i')$ is an open set that contains $A$ and $m(G)\leq \sum_i (b_i'-a_i) =\epsilon/2 + \sum_i (b_i-a_i) \leq m(A) + \epsilon $. Since $m(A)<\infty$, $m(G \setminus A)=m(G) - m(A)\leq \epsilon$. Since $m(A)<\infty$, $A$ is approachable by a bounded set. Indeed, $m(A) = m(\cup_n (A\cap [-n,n])) = \lim_n m(A \cap [-n,n])$. There is therefore some $N$ such that $$m(A \cap [-N,N])\geq m(A)-\epsilon/2 \quad (\star)$$ Let $A' = A \cap [-N,N]$. Since $[-N,N]\setminus A'$ has finite measure, there exists some open $G'$ such that $[-N,N]\setminus A' \subset G'$ and $m(G'\setminus ( [-N,N]\setminus A'))\leq \epsilon/2$. Let us prove that the closed set $[-N,N]\setminus G'$ fits the bill. It's easy to prove $[-N,N]\setminus G'\subset A'$. Furthermore, $$\begin{aligned} m(A'\setminus ([-N,N]\setminus G')) &= m(A'\cap ([-N,N]^c \cup G'))\\ &= m(A'\cap G') \end{aligned}$$ and $$\begin{aligned} \epsilon/2 \geq m(G'\setminus ( [-N,N]\setminus A')) &= m((G'\cap [-N,N]^c) \cup (G'\cap A'))\\ &\geq m(G'\cap A')\end{aligned}$$Hence $m(A'\setminus ([-N,N]\setminus G')) \leq \epsilon/2$. Let $F = [-N,N]\setminus G'$, then $$m(A') - m(F) \leq \epsilon/2 \quad (\star \star)$$ $(\star)$ and $(\star \star)$ yield $$m(A\setminus F) = m(A) - m(F) \leq (m(A') - m(F)) + \epsilon /2 \leq \epsilon $$ Finally, $F\subset A \subset G$ and $m(G\setminus F) = m(G\setminus A) + m(A\setminus F) \leq 2\epsilon$
Expectation of Shifted Geometric Distribution Theorem Let $X$ be a discrete random variable with the shifted geometric distribution with parameter $p$. Then the expectation of $X$ is given by: $\expect X = \dfrac 1 p$ From the definition of expectation: $\expect X = \displaystyle \sum_{x \mathop \in \Omega_X} x \map \Pr {X = x}$ By definition of shifted geometric distribution: $\expect X = \displaystyle \sum_{k \mathop \in \Omega_X} k p \paren {1 - p}^{k - 1}$ Let $q = 1 - p$: \(\displaystyle \expect X\) \(=\) \(\displaystyle p \sum_{k \mathop \ge 0} k q^{k - 1}\) as $\Omega_X = \N$ \(\displaystyle \) \(=\) \(\displaystyle p \sum_{k \mathop \ge 1} k q^{k - 1}\) The term in $k = 0$ vanishes \(\displaystyle \) \(=\) \(\displaystyle p \frac 1 {\paren {1 - q}^2}\) Derivative of Geometric Progression \(\displaystyle \) \(=\) \(\displaystyle \frac p {p^2}\) as $q = 1 - p$ \(\displaystyle \) \(=\) \(\displaystyle \frac 1 p\) $\blacksquare$ From the Probability Generating Function of Shifted Geometric Distribution, we have: $\map {\Pi_X} s = \dfrac {p s} {1 - q s}$ where $q = 1 - p$. From Expectation of Discrete Random Variable from PGF, we have: $\expect X = \map {\Pi'_X} 1$ We have: \(\displaystyle \map {\Pi'_X} s\) \(=\) \(\displaystyle \map {\frac \d {\d s} } {\frac {p s} {1 - q s} }\) \(\displaystyle \) \(=\) \(\displaystyle \frac p {\paren {1 - q s}^2}\) Derivatives of PGF of Shifted Geometric Distribution Plugging in $s = 1$: \(\displaystyle \map {\Pi'_X} 1\) \(=\) \(\displaystyle \frac p {\paren {1 - q}^2}\) \(\displaystyle \) \(=\) \(\displaystyle \frac p {p^2}\) as $q = 1 - p$ \(\displaystyle \) \(=\) \(\displaystyle \frac 1 p\) Hence the result. $\blacksquare$
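The closed form can be sanity-checked by truncating the defining series numerically; a sketch (the helper `shifted_geometric_mean` is my own):

```python
def shifted_geometric_mean(p, terms=10000):
    # E[X] = sum over k >= 1 of k * p * (1 - p)^(k - 1), truncated
    q = 1 - p
    return sum(k * p * q ** (k - 1) for k in range(1, terms + 1))

for p in (0.2, 0.5, 0.9):
    # the truncated sum agrees with the closed form 1/p
    print(p, shifted_geometric_mean(p), 1 / p)
```

For any $p$ bounded away from $0$ the truncation error after ten thousand terms is far below floating-point precision.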
How come $\pi(x)\pi(x_p\mid x)=\pi(x_p)\pi(x\mid x_p)$? This is a consequence of the form of the transition kernel for the Metropolis-Hastings algorithm: the Markov transition kernel associated with this algorithm is $$K(y\mid x) = \rho(x,y) q(y\mid x) + (1-r(x)) \delta_x(y) \;,$$ where $q$ denotes the density of the proposal distribution, $r(x)=\int \rho(x,y) q(y\mid x) \,\text{d}y$, and $\delta_x$ denotes the Dirac mass at $x$. It is straightforward to verify that \begin{align*}\rho(x,y) q(y\mid x)\pi(x)&=\rho(y,x) q(x\mid y)\pi(y) \\(1-r(x)) \delta_x(y) \pi(x)&=(1-r(y)) \delta_y(x)\pi(y) \;,\end{align*} which together establish detailed balance for the Metropolis-Hastings chain. From this equality, establishing stationarity of $\pi$ is straightforward, as seen by integrating both sides in $x$. Is it true for any Markov chain which is ergodic and aperiodic? Detailed balance is a sufficient condition to establish stationarity (and, in the case of an irreducible Markov chain, ergodicity) but not a necessary condition. The Gibbs sampler is one counter-example, and MALA (the Metropolis-adjusted Langevin algorithm) is another.
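On a finite state space, both detailed balance and stationarity can be verified directly from the kernel's definition. A sketch with an arbitrary target and a uniform (symmetric) proposal, both of my choosing:

```python
import numpy as np

n = 4
pi = np.array([0.1, 0.2, 0.3, 0.4])   # target distribution
q = np.full((n, n), 1.0 / n)          # proposal densities q(y|x), symmetric here

# acceptance probabilities rho(x, y) = min(1, pi(y) q(x|y) / (pi(x) q(y|x)))
rho = np.minimum(1.0, (pi[None, :] * q.T) / (pi[:, None] * q))

# off-diagonal kernel entries: propose y, then accept with probability rho
K = rho * q
np.fill_diagonal(K, 0.0)
# rejected proposals stay put, which places the remaining mass on the diagonal
np.fill_diagonal(K, 1.0 - K.sum(axis=1))

flow = pi[:, None] * K
print(np.allclose(flow, flow.T))   # detailed balance: pi(x) K(x,y) = pi(y) K(y,x)
print(np.allclose(pi @ K, pi))     # hence stationarity: pi K = pi
```

Integrating the detailed-balance identity over $x$ is exactly the matrix statement $\pi K = \pi$ checked in the last line.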
Mine is a tad less elegant but arguably a bit clearer and assumes only the complete basics. 0. The task Prove that for $ \forall n \in \mathbb{N}$ it is true that $n! \leq (\frac{n+1}{2})^n $ : I. Base steps (I need four base steps because my final inequality works for $k \ge 4$.) $$n=0: 1 \leq (\frac{1}{2})^0 = 1 $$$$n=1: 1 \leq (\frac{2}{2})^1 = 1 $$$$n=2: 2 \leq (\frac{3}{2})^2 = \frac{9}{4} $$$$n=3: 6 \leq (\frac{4}{2})^3 = 8 $$ II. Inductive assumption $$k! \leq (\frac{k+1}{2})^k$$ III. Inductive hypothesis $$k! \leq (\frac{k+1}{2})^k \implies (k+1)! \leq (\frac{k+2}{2})^{k+1}$$ IV. Proof I am going from the assumption to the hypothesis. We will be using the fact that $(a \leq c) \land (b \geq c) \implies a \leq b$. Step 1. \begin{align} k! \leq (\frac{k+1}{2})^k && \text{multiply both sides by (k+1)} \tag 1\\ (k+1) k! \leq (k+1) (\frac{k+1}{2})^k && \tag 2\\ (k+1)! \leq (k+1) (\frac{k+1}{2})^k && \text{from the definition of the factorial} \tag 3\\ (k+1)! \leq 2(\frac{k+1}{2}) (\frac{k+1}{2})^k && \text{factoring 2 out of (k+1)} \tag 4\\ (k+1)! \leq 2(\frac{k+1}{2})^{k+1} && \tag 5\\\end{align} We have shown that (5) is true. Now, we need to show that $2(\frac{k+1}{2})^{k+1} \leq (\frac{k+2}{2})^{k+1}$, so we prove the initial inequality. Step 2.\begin{align} 2(\frac{k+1}{2})^{k+1} \leq (\frac{k+2}{2})^{k+1} && \tag 6\\\end{align} $$\begin{equation}\begin{aligned}(\frac{k+2}{2})^{k+1} &= (\frac{k+2}{2}) (\frac{k+2}{2})^k \\ &= (\frac{k+2}{k+1})^k (\frac{k+1}{2})^k (\frac{k+2}{2}) \\\end{aligned}\end{equation}\tag{7}$$ \begin{align} (\frac{k+1}{2})^k (\frac{k+2}{2}) \geq (\frac{k+1}{2})^{k+1} && \tag 8\\\end{align} Using the trick already mentioned above by Andre Nicolas we managed to show that the last two terms of equation (7) are greater than the main part of the left hand side of the inequality (6). That leaves us with having to prove that $2 \leq (\frac{k+2}{k+1})^k$. 
Step 3. The last step is to notice that $\frac{k+2}{k+1}$ is in fact equal to one plus an increment, $\frac{1}{k+1}$, which gets smaller and smaller for greater values of $k$, so the expression can be written as $(1 + \frac{1}{k+1})^k$. And indeed, $ \lim\limits_{k \to \infty} (\frac{k+2}{k+1})^k = e$, so for greater $k$s the value of the expression stays close to $e \approx 2.718$, which is, in particular, more than 2. This is where we need our four base steps, because $2 \leq (\frac{k+2}{k+1})^k$ holds only for $k \in \mathbb{N}-\{0,1,2,3\}$. The four base cases cover us here and allow the induction to work.
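Both the target inequality and the threshold $k \ge 4$ for the auxiliary bound can be confirmed by brute force (the ranges below are arbitrary):

```python
from math import factorial

# the claim itself: n! <= ((n+1)/2)^n
for n in range(0, 150):
    assert factorial(n) <= ((n + 1) / 2) ** n

# the auxiliary bound 2 <= ((k+2)/(k+1))^k holds exactly from k = 4 onward
holds = [((k + 2) / (k + 1)) ** k >= 2 for k in range(0, 150)]
print(holds.index(True))  # first k for which the bound holds: 4
```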
The electro-weak force is known to contain a chiral anomaly that breaks $B+L$ conservation. In other words, it allows for the sum of baryons and leptons to change, but still conserves the difference between the two. This means that the standard model could have a channel for protons to decay, for example into a pion and a positron. Does anyone know what the total proton decay rate through standard model channels is? Electroweak instantons violate baryon number (and lepton number) by three units (all three generations participate in the 't Hooft vertex). This is explained in 't Hooft's original paper. As a result, the proton is absolutely stable in the standard model. The lightest baryonic state that is unstable to decay into leptons is $^3$He. The deuteron is unstable with regard to decay into an anti-proton and leptons. The rate is proportional to $[\exp(-8\pi^2/g_w^2)]^2$, which is much smaller than the rates for proton decay that have been discussed in extensions of the standard model. Note that the decay $^3\mathrm{He}\to$ leptons involves virtual $(b,t)$ quarks, and the rate contains extra powers of $g_w$ in the pre-exponent (which does not matter much, given that the exponent is already very big). Just to give a rough number, the lifetime is a typical weak decay lifetime (say, $10^{-8}$ sec), multiplied by the instanton factor $$ \tau = \tau_w \exp(16\pi^2/g_w^2)=\tau_w\exp(4\pi\cdot 137\cdot\sin^2\theta_W) = \tau_w\cdot 10^{187}\sim 10^{180}\,\mathrm{sec} $$ where I have neglected many pre-exponential factors which can be calculated, in principle, in the standard model. As far as I know, the standard model is assumed to have vanishing anomalies, i.e. the proton does not decay in the standard model. See page 5 in this reference. You are asking for this calculation. I do not know if one can keep calling it "the standard model". Here is a strong statement, at the end of chapter 7.3.1: Thus all possible anomalies cancel for every generation of the standard model.
If in one generation a quark (or any other particle) were missing, one would get non-vanishing anomalies (not for $SU(3)$, but for the three other combinations). This was for the unbroken phase, but the reference continues to make the same statement for the broken phase. So the answer is that there should be an extension of the standard model to study $B+L$-violating effects. There have been several attempts to measure proton decay. So far, all have been unsuccessful. Various calculations give estimates ranging from $10^{30}$ to $10^{36}$ years. Knowing the sensitivity of the experiments, we can set limits on the proton half-life. The current best measurements indicate it is $10^{34}$ years or more. For example, a 2014 publication from the Super-Kamiokande detector in Japan gives a minimum of $5.9 \times 10^{33}$ years.
On one hand, it seems to make no sense, because of the following: When expanded, the claim $f(n,a) \in O(n/a)$ would be There exist $C > 0$, $n_0$, and $a_0$ such that if $n \geq n_0$ and $a \geq a_0$, then $f(n,a) \leq C \cdot n/a$. Now, given any $\epsilon > 0$, we can find an $a_\epsilon \geq a_0$ such that $C \cdot n_0/a_\epsilon < \epsilon$, and thus $f(n_0, a_\epsilon) < \epsilon$. So for any $\epsilon$, there is an input, with $n = n_0$ and $a = a_\epsilon$, at which the algorithm takes less than $\epsilon$ time to run. But any nontrivial algorithm performs at least one operation, regardless of the input. So this seems nonsensical. On the other hand, you can imagine someone saying that some approximation algorithm runs in time "$O(n/a)$", where $n$ is the size of the input and $a$ is the maximum multiplicative error. And in fact, there is a paper in which an algorithm is claimed to run in "$\tilde{O}(\frac{n}{A^3 \epsilon})$" time. (Yes, I know $\tilde{O}$ is different from $O$). I asked a longer question about that here. This question is an attempt to isolate the core confusion there.
UPDATE To make my question more precise, I'll define what I mean by an operator theory: An operator theory is a theory in which the dynamical objects are operators, i.e., the equations of motion are imposed on operators. A wave-function theory, on the other hand, is a theory in which the dynamical objects are functions of space-time, i.e., the equations of motion are imposed on functions. In wave-function theories we have differential operators (e.g., $-i\nabla$), but these aren't dynamical, so there is a clear distinction between an operator and a differential operator. To make the distinction more evident, I'll place hats $\hat{\square}$ over dynamical operators. At the first stages of the old (non-relativistic) quantum theory Heisenberg developed a theory in which the fundamental objects were operators, namely, the position and momentum operators: $$ \begin{align} i\frac{\mathrm d}{\mathrm dt} \hat X(t)&=[\hat X,\hat H] \\ i\frac{\mathrm d}{\mathrm dt} \hat P(t)&=[\hat P,\hat H] \end{align} \tag{QM.1} $$ More or less one year later Schrödinger published a new theory in which the fundamental object is just a function of space-time: $$ i\partial_t\psi(x)=-\frac{1}{2m}\Delta\psi(x)+V(x)\psi(x) \tag{QM.2} $$ Later on, Schrödinger proved that the two formulations are actually equivalent. Nowadays we have QFT, which is an operator theory, because the equations of motion are imposed on fields, i.e., operators. For example, a scalar field $\hat\phi$ evolves through$$\begin{align}i\frac{\mathrm d}{\mathrm dt}\hat\phi(x)=[\hat\phi,\hat H]\\i\frac{\mathrm d}{\mathrm dt}\hat\pi(x)=[\hat\pi,\hat H]\end{align} \tag{QFT.1}$$where $\hat \pi$ is the field conjugate to $\hat\phi$.
To me, it seems more or less natural to ask about a possible $(\mathrm{QFT.2})$, i.e., a formulation of QFT as a wave-function theory:$$\begin{array}{}&\text{non-relativistic} & \text{relativistic}\\\text{operator theory} & (\mathrm{QM.1}) & (\mathrm{QFT.1}) \\\text{wave-function theory} & (\mathrm{QM.2}) & \quad ??\end{array}$$ IMHO functional methods (i.e., path integrals) belong to an operator point of view. I believe it is not possible to use path integrals to calculate, for example, scattering amplitudes without using operators sooner or later, which means that path integrals don't actually fit in the $??$ slot. My question(s): Has someone done anything similar to Schrödinger, in the sense of reformulating QFT using solely functions of space-time and differential operators? Is there any theory of relativistic quantum mechanics in which the formalisms consists of PDE's? If the answer to the first question is that no wave-function theory exists as for today, then is there any reason to expect that there will never be? I mean: is there a no-go theorem or an argument that suggests that it is impossible to formulate QFT as a wave-function theory? My thoughts on this Any relativistic theory of interactions must be able to describe creation/annihilation phenomena, which operators easily do (through $a,a^\dagger$). But a single wave-function must have a fixed number of space-time arguments, so the number of particles must be fixed. This means that one wave-function will not suffice. A wave-function theory of interactions must, therefore, consist in an infinite number of wave-functions, each with a different number of space-time arguments:$$\begin{align}&\psi(x_1)\\&\psi(x_1,x_2)\\&\psi(x_1,x_2,x_3)\\&\cdots\end{align}$$which means that we must have an infinite number of PDE's, one for each $\psi$. And the solutions must be consistent with the operator formulation of QFT, i.e., we must prove that the new formulation is equivalent to the old one. 
I believe that the easiest way to achieve this is to use the correlation functions of QFT (or $n$-point functions) $$ \begin{align} \psi(x_1)&=\langle\hat\phi(x_1)\rangle\\ \psi(x_1,x_2)&=\langle\hat\phi(x_1)\hat\phi(x_2)\rangle\\ \psi(x_1,x_2,x_3)&=\langle\hat\phi(x_1)\hat\phi(x_2)\hat\phi(x_3)\rangle\\ &\cdots \end{align} $$ With this, it should be possible in principle to find the PDE's for the $\psi$'s in terms of the PDE's for $\hat\phi$. This means we should use $(\partial^2+m^2)\hat\phi=\hat j$ to get $$ \begin{align} \mathcal O_1\ \psi(x_1)&=\mathcal J(x_1)\\ \mathcal O_2\ \psi(x_1,x_2)&=\mathcal J(x_1,x_2)\\ \mathcal O_3\ \psi(x_1,x_2,x_3)&=\mathcal J(x_1,x_2,x_3)\\ &\cdots \end{align} $$ for some differential operators $\mathcal O_i$ and some sources $\mathcal J$. Once we find the appropriate $\mathcal O_i,\mathcal J$, there will be no explicit reference to any operator, and we only use wave-functions (of course, these wave-functions don't have the same probabilistic interpretation as in non-relativistic QM, but this is irrelevant: the point is to find an equivalent formulation of QFT using just functions of spacetime). I don't know if this makes much sense, or whether there is a better approach. Any comment will be appreciated.
This is an exercise from an old exam on formal languages that I don't know how to solve. Let $p \ge 5$ be a prime number and let $L_p$ be the language of words over $\{0,1\}$ that, read in binary from the right (i.e. from the least significant bit), give a number whose remainder modulo $p$ lies in the set $\{1,2, \ldots, \frac{p-1}{2}\}$. How can one show that every DFA recognizing $L_p$ has at least $2p$ states? One fact that I know of and that is somehow related (it has DFAs and primes in the statement) is: Any DFA recognizing the language $\{0^n : n \text{ is not divisible by } p\}$ has at least $p$ states. This can be seen by observing that the language is infinite, hence any DFA must have a reachable cycle from which some accepting state is reachable. And if that cycle had fewer than $p$ states, then because any number smaller than $p$ is coprime with $p$, we could loop sufficiently many times in that cycle and arrive at the aforementioned accepting state with a word $0^{kp}$ for some natural $k$ - a contradiction. Maybe it's possible to use this fact, or alter this proof somehow to make it fit the theorem with $L_p$? EDIT: I'm trying to solve it by the Myhill-Nerode theorem, as Yuval Filmus suggested. So, the goal is to find $2p$ words $w_1, \ldots, w_{2p}$ that are pairwise distinguishable. I don't have a good intuition here, but let's define $w_i$ to be $rev(bin(i))$ for $i = 1, \ldots, 2p$ ($bin(a)$ gives the binary representation of the number $a$, and $rev(w)$ reverses the word $w$). Let's take any $i \neq j$ such that both belong to $L_p$, or both don't. Now the task becomes a bit number-theoretic -- adding a common suffix $x$ to these words changes their values such that $val(w_ix) = val(w_i) + 2^{length(w_i)}val(x)$ (and similarly for $j$), where $val(\cdot)$ gives the value of a binary string read from the LSB (so e.g. $val(01) = 2$). Now the question is: can we always find an appropriate $x$ that makes one of $w_ix, w_jx$ belong to $L_p$, and the other not?
I don't know the answer to this question. Maybe I should use the fact that $2$ is a multiplicative generator modulo $p$?
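A brute-force count of Myhill–Nerode classes for a small prime supports the $2p$ bound. The sketch below (my own construction, for $p = 5$) distinguishes prefixes by their acceptance behaviour over all suffixes up to length 4, which is enough to hit every residue class:

```python
from itertools import product

p = 5
accept = set(range(1, (p - 1) // 2 + 1))  # accepted residues {1, ..., (p-1)/2}

def val(w):
    # value of a word read from the least significant bit first
    return sum(b << i for i, b in enumerate(w))

# all words of length <= 4, used both as prefixes and as suffixes
words = [w for n in range(5) for w in product((0, 1), repeat=n)]

def signature(w):
    # membership of w + x in L_p for every suffix x; distinct signatures
    # mean the prefixes are distinguishable in the Myhill-Nerode sense
    return tuple(val(w + x) % p in accept for x in words)

classes = {signature(w) for w in words}
print(len(classes))  # number of pairwise-distinguishable prefix classes found
```

Since membership of $w x$ depends only on $(val(w) \bmod p,\ 2^{|w|} \bmod p)$, the count found this way is a lower bound on the states of any DFA for $L_p$.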
Abbreviation: MetSp A metric space is a structure $\mathbf{X}=\langle X,d\rangle$, where $d:X\times X\to [0,\infty)$ is a distance metric, i.e., points zero distance apart are identical: $d(x,y)=0\iff x=y$ $d$ is symmetric: $d(x,y)=d(y,x)$ the triangle inequality holds: $d(x,z)\le d(x,y)+d(y,z)$ Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes. Let $\mathbf{X}$ and $\mathbf{Y}$ be metric spaces. A morphism from $\mathbf{X}$ to $\mathbf{Y}$ is a function $h:X\rightarrow Y$ that is continuous in the topology induced by the metric: $\forall z\in X\ \forall\epsilon>0\ \exists\delta>0\ \forall x\in X(0<d(x,z)<\delta\Longrightarrow d(h(x),h(z))<\epsilon)$ Properties: Classtype: higher-order. Amalgamation property; Strong amalgamation property; Epimorphisms are surjective. Related classes: [[Compact metric spaces]], [[Hausdorff spaces]]
The other day I was playing around in Matlab, and although I can’t remember what I set out to do I did end up making a small lossy audio compression/decompression system! It seemed like a good topic for a blog post. The discrete cosine transformation Before I show the code I’ll have to very briefly introduce the discrete cosine transform (DCT). We should be able to ignore the maths and implementation of the DCT and treat it as a magic box which comes with Matlab or Octave. If you're interested in the details (and they are interesting) this book is a great place to start if you want more depth than wikipedia offers. An audio sample is a sequence of real numbers \( X = \{x_1, \ldots x_N\} \). The DCT of this audio sample is the sequence \( DCT(X) = Y = \{y_1, \ldots, y_N \} \) such that $$ x_n = \sum_{k=1}^N y_k w(k) \cos\left( \frac{\pi(2n-1)(k-1)}{2N} \right) $$ where $$ w(k) =\cases{\frac{1}{\sqrt{N}}, & k=1 \cr \sqrt{\frac{2}{N}}, & \text{otherwise}}.$$ Don’t worry too much about that expression. We just need to note that the DCT represents the original signal as a sum of cosines, and that the coefficients specify the amplitudes of these cosines. If we have the DCT coefficients we can transform them back to the original sequence with the inverse discrete cosine transform (IDCT). This could be calculated with the above expression, but more efficient algorithms exist for both the DCT and IDCT (these algorithms are based on the fast Fourier transform, which is again an interesting topic that I won’t get into). The compression scheme So what does this have to do with audio compression? The coefficients of the DCT are amplitudes of cosines that are “within” the original signal. Small coefficients will result in cosines with small amplitudes, which we are less likely to hear. So instead of storing the original sample we could take the DCT of the sample, discard small coefficients, and keep that. We would store fewer numbers and so compress the audio data.
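The reconstruction formula can be coded directly, together with the matching forward transform, to see that the pair really is inverse. A naive $O(N^2)$ sketch in Python, not the fast algorithm; indices are 0-based here, so the cosine argument reads $(2n+1)k$:

```python
import math

def dct_ortho(x):
    # forward DCT-II with the orthonormal weights w(k)
    N = len(x)
    w = [1 / math.sqrt(N)] + [math.sqrt(2 / N)] * (N - 1)
    return [w[k] * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                       for n in range(N))
            for k in range(N)]

def idct_ortho(y):
    # the reconstruction formula from the text, 0-indexed
    N = len(y)
    w = [1 / math.sqrt(N)] + [math.sqrt(2 / N)] * (N - 1)
    return [sum(y[k] * w[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for k in range(N))
            for n in range(N)]

x = [math.sin(0.3 * n) for n in range(32)]
y = dct_ortho(x)
x_back = idct_ortho(y)
print(max(abs(a - b) for a, b in zip(x, x_back)))  # round-trip error ~ 1e-15
```

Because the weighted cosine matrix is orthonormal, the forward and inverse transforms are just a matrix and its transpose, so the round trip is exact up to floating-point error.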
The decompression algorithm would be simple: we would take the IDCT of whatever we stored and play that back. We will be missing some of the signal, but one of the properties of DCTs is that a few of the larger coefficients account for a large amount of the power in the original signal. Also, the coefficients we discard will usually be from quiet, high-frequency parts of the sound, which we hear less. These are some of the reasons why the DCT is often used in compression. Filling in the details There are a few details we are missing. When compressing with DCTs you typically compress small slices (windows) of the audio at once. This is partly so that seeking through the compressed stream is easier, but mostly because we want the coefficients in our window to represent frequencies we hear (with a large window the majority of the coefficients would represent frequencies well out of the human hearing range). In addition we need to consider the binary format of the data. We could store the results of the DCT as floating point values, but that would be 32 bits per coefficient – which seems a little high given that .wav format files are stored as 16 bit integers. So let’s instead linearly map the range of the DCT coefficients to 16 bit integers and store those instead. We’ll have to store not just the coefficients but their indices too; let’s store them as 16 bit integers as well. It may seem inefficient to do this since a few bits of our integers will never be used. This is somewhat offset by Matlab/Octave saving files with gzip compression, which will be able to compress those runs of zeros caused by using an overly large integer fairly well. This is a bit of a kludge, but we are keeping things simple, so using non-Matlab data types is out of the question.
After some testing I realised that I could map the actual coefficients to an \(n\) bit range (with \( n < 16 \)), store them in 16 bit integers, and still get a saving in space which was nearly as good as using real \(n\) bit integers! I think that pretty much covers it! Here's the code for compression:

```matlab
% Simple DCT compression.
% Works in Matlab with the signal processing toolbox, or in Octave.
% X : (audio) samples, vector with each element in [-1,1]
% window : window size, length(X) must be divisible by this.
% num_components : number of DCT components to store per window.
% coeff_bits : number of bits to use to store each coefficient.
function result = compress_dct(X, window, num_components, coeff_bits)
  num_win = length(X)/window;
  X = reshape(X, window, num_win); % reshape so each window is a column
  Y = dct(X);                      % applies the DCT to each column
  % find top components and their indices
  [a, I] = sort(abs(Y), 'descend');
  I = I(1:num_components, :);
  % build struct
  result.coeffs = int16(zeros(num_components, num_win));
  result.ind = int16(I);
  result.window = window;
  result.coeff_bits = coeff_bits;
  for i = 1:num_win
    % store each coefficient (in [-1,1]) as an integer mapped to the range
    % (-2^(coeff_bits-1), 2^(coeff_bits-1))
    result.coeffs(:,i) = int16(Y(I(:,i), i)*2^(coeff_bits-1));
  end
end
```

Here’s the decompression function:

```matlab
function X = decompress_dct(data)
  num_win = size(data.coeffs, 2);
  % Rescale coefficients back to [-1,1].
  coeffs = double(data.coeffs)/(2^(data.coeff_bits-1));
  % Construct full DCT windows from the sparse representation.
  Y = zeros(data.window, num_win);
  for i = 1:num_win
    Y(data.ind(:,i),i) = coeffs(:,i);
  end
  % Inverse DCT each window.
  X = idct(Y);
  % Stitch the windows into one long vector.
  X = reshape(X, num_win*data.window, 1);
end
```

And here’s an example of using the compression and decompression functions from Octave.
```matlab
window_size = 2048;
n_coeffs_keep = 100;
coeff_n_bits = 10;
% Load wav file; must be mono, with the number of samples divisible by the window size.
X = wavread('bach_clip.wav');
% Compress the wav.
comp = compress_dct(X, window_size, n_coeffs_keep, coeff_n_bits);
% Save the comp structure in a binary format with extra gzip compression,
% so we can see how big it really is.
save -binary -z bach.mat comp
% Decompress and write back to wav for comparison.
Xdecomp = decompress_dct(comp);
wavwrite(Xdecomp, 44100, 16, 'bach_decomp.wav');
```

What does it sound like? Here are some examples of a compressed piece of audio at various settings. I realise that soundcloud will stream in mp3, which could obscure the results, but the compression artefacts are large enough to hear through mp3, and you can download the .wavs through soundcloud if you want. EDIT: The soundcloud player seems to be very noisy for some reason – I suggest you click download and listen to the .wavs. The most annoying artefact, which occurs even in the higher bit rate example, is a slight “clicking” noise. I think this is caused by the windowing – the sample is not forced to be “continuous” over the boundaries of windows, so you hear small clicks on windows where it does not line up. Aside from that, the highest bitrate version is not totally awful to listen to, although even on a fairly poor set of headphones I can hear “garbling”. The cool thing is that even the mid bit-rate streams are fairly intelligible (e.g. you could probably understand speech), which is impressive considering the level of compression achieved and the simplicity of the code. The lowest bit rate stream is really bad, but it’s a good example of what very drastic lossy audio compression sounds like. In conclusion I think it’s rather impressive how far you can get with lossy audio compression by only using the DCT and some generic lossless compression.
A core part of MP3 audio compression is the DCT, but MP3 goes well beyond this to achieve much better results.
See edit at the end of the question All the references in this question refer to Quantum algorithm for solving linear systems of equations (Harrow, Hassidim & Lloyd, 2009). HHL algorithm consists in an application of the quantum phase estimation algorithm (QPE), followed by rotations on an ancilla qubit controlled by the eigenvalues obtained as output of the QPE. The state of the quantum registers after the rotations is $$ \sum_{j=1}^{N} \sum_{k=0}^{T-1} \alpha_{k|j}\beta_j \vert \tilde\lambda_k\rangle \vert u_j \rangle \left( \sqrt{1 - \frac{C^2}{\tilde\lambda_k^2}} \vert 0 \rangle + \frac{C}{\tilde\lambda_k}\vert 1 \rangle \right). $$ Then, the algorithm just uncomputes the first register containing the eigenvalues ($\vert \tilde\lambda_k \rangle$) to give the state $$ \sum_{j=1}^{N}\beta_j \vert u_j \rangle \left( \sqrt{1 - \frac{C^2}{\lambda_j^2}} \vert 0 \rangle + \frac{C}{\lambda_j}\vert 1 \rangle \right). $$ Here, the notation used assumes that the QPE was perfect, i.e. the approximations were the exact values. The next step of the algorithm is to measure the ancilla qubit (the right-most one in the sum above) and to select the output only when the ancilla qubit is measured to be $\vert 1 \rangle$. This process is also called "post-selection". The state of the system after post-selecting (i.e. after ensuring that the measurement returned $\vert 1 \rangle$) is written $$ \frac{1}{D}\sum_{j=1}^{N}\beta_j \frac{C}{\lambda_j} \vert u_j \rangle $$ where $D$ is a normalisation constant (the exact expression can be found in the HHL paper, page 3). My question: Why is the $\frac{C}{\lambda_j}$ coefficient still in the expression? From what I understood, measuring $$\left( \sqrt{1 - \frac{C^2}{\lambda_j^2}} \vert 0 \rangle + \frac{C}{\lambda_j}\vert 1 \rangle \right)$$should output $\vert 0 \rangle$ or $\vert 1 \rangle$ and destroy the amplitudes in front of those states. EDIT: Specifying the question. 
Following @glS' answer, here is the updated question: Why does the post-selection work as described in @glS' answer, and not as described above?
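A small numerical sketch (toy numbers of my own, not from the paper) of why the $C/\lambda_j$ coefficients survive post-selection: the measurement removes the $\vert 0\rangle$ branch, but within the $\vert 1\rangle$ branch the relative amplitudes multiplying the different $\vert u_j\rangle$ are preserved up to a single overall normalisation constant.

```python
import math

# Toy data: amplitudes beta_j of |u_j>, eigenvalues lambda_j, and C <= min|lambda_j|
betas = [0.6, 0.8]
lambdas = [1.0, 2.0]
C = 0.5

# Amplitudes of the |u_j>|1> components before measuring the ancilla
amp1 = [b * C / lam for b, lam in zip(betas, lambdas)]

# Probability of obtaining |1>, and the renormalized post-selected state
p1 = sum(a * a for a in amp1)
post = [a / math.sqrt(p1) for a in amp1]
```

Each `post[j]` is proportional to `betas[j] / lambdas[j]`: measurement collapses the ancilla, but it cannot change the ratios between the surviving amplitudes, so the $C/\lambda_j$ factors remain (absorbed into the constant $D$).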
On well-posedness of a velocity-vorticity formulation of the stationary Navier-Stokes equations with no-slip boundary conditions 1. Department of Mathematics, University of Houston, Houston, TX, 77204, USA 2. Department of Mathematical Sciences, Clemson University, Clemson, SC, 29634, USA 3. Department of Mathematics, University of Tennessee, Knoxville, TN 37996, USA We study well-posedness of a velocity-vorticity formulation of the Navier-Stokes equations, supplemented with no-slip velocity boundary conditions, a corresponding zero-normal condition for vorticity on the boundary, along with a natural vorticity boundary condition depending on a pressure functional. In the stationary case we prove existence and uniqueness of a suitable weak solution to the system under a small data condition. The topic of the paper is driven by recent developments of vorticity based numerical methods for the Navier-Stokes equations. Keywords: Navier-Stokes equations, velocity-vorticity formulation, well-posedness, no-slip boundary conditions. Mathematics Subject Classification: Primary: 35Q30, 76N10. Citation: Maxim A. Olshanskii, Leo G. Rebholz, Abner J. Salgado. On well-posedness of a velocity-vorticity formulation of the stationary Navier-Stokes equations with no-slip boundary conditions. Discrete & Continuous Dynamical Systems - A, 2018, 38 (7) : 3459-3477. doi: 10.3934/dcds.2018148
Background Proof. Suppose we have an orthogonal matrix $$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}. $$ Then $$ I = A^T A = \begin{bmatrix} a^2 +c^2 & ab + cd \\ ab + cd & b^2 + d^2 \end{bmatrix}, $$ so we have the following relations: $\Vert (a,c) \Vert_2 = 1$, $\Vert (b,d) \Vert_2 = 1$, $(a,c)(b,d)^T = 0$. The solutions to these relations are given by $$ \begin{aligned} (a,c) &= (\cos \theta, \sin \theta) \\[8pt] (b,d) &= \left(\cos (\theta \pm \pi/2), \sin (\theta \pm \pi/2) \right) \end{aligned} $$ So when 1) $ad - bc = 1$, $$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} $$ and when 2) $ad - bc = -1$, $$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin \theta \\ \sin \theta & -\cos \theta \end{bmatrix} $$ The first case, by definition, corresponds to the Givens rotation by $\theta$. We can also check that the second case corresponds to the Householder reflection with respect to $v = (-\sin(\theta/2), \cos(\theta/2))$. Question As far as I understand, in the $3 \times 3$ case a Givens rotation is a special case of a rotation, in which only two coordinate axes are rotated, and a Householder reflection is a special case of a reflection, in which a one-dimensional subspace serves as the axis of the reflection. So my question is: can we generalize this, i.e., is every $n \times n$ real orthogonal matrix $A$ either a rotation or a reflection?
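The claim about case 2) can be checked numerically. The sketch below (the helper name `householder` is mine) builds $I - 2vv^T$ for $v = (-\sin(\theta/2), \cos(\theta/2))$ and compares it entrywise with the matrix $\begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix}$; the half-angle identities $1-2\sin^2(\theta/2)=\cos\theta$ and $2\sin(\theta/2)\cos(\theta/2)=\sin\theta$ make them agree.

```python
import math

def householder(v):
    """2x2 Householder reflection I - 2 v v^T for a unit vector v = (a, b)."""
    a, b = v
    return [[1 - 2 * a * a, -2 * a * b],
            [-2 * a * b, 1 - 2 * b * b]]

theta = 0.7  # any angle works
v = (-math.sin(theta / 2), math.cos(theta / 2))
H = householder(v)

# Case 2) above, with determinant -1
R2 = [[math.cos(theta), math.sin(theta)],
      [math.sin(theta), -math.cos(theta)]]
```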
Note: Cross-posted on Physics SE. I made some circuit to prepare a 2 qubit state, but I am having trouble understanding how to measure Bell's inequality. I know the inequality is of the form $$|E(a,b)-E(a,b')+E(a',b)+E(a',b')| \leq 2$$ where for each $E$ $$E = \frac{N_{++} + N_{--} - N_{-+} - N_{+-}}{N_{++} + N_{--} + N_{-+} + N_{+-}} $$ My problem is, what would the different $a,a',b,b'$ be? With this question, I don't mean what their values would be (since IBM Q just outputs $0$ or $1$ in the $01$ basis), but how do I implement this? Can I just do $a,b$ in $01$ basis and $a',b'$ in $\pm$ basis? And if so, how do I proceed with this? Do I just apply a Hadamard gate before the measurement and take whatever $0$ or $1$ value it outputs?
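Whatever bases one settles on, each correlator $E$ is computed the same way from the four coincidence counts of one run, mapping measured bit $0 \to +1$ and $1 \to -1$. A sketch (the helper names and the example counts are mine, not IBM Q output):

```python
def correlator(counts):
    """E = (N++ + N-- - N+- - N-+) / N_total from two-bit outcome counts,
    where bit 0 is mapped to +1 and bit 1 to -1."""
    signs = {"00": +1, "11": +1, "01": -1, "10": -1}
    total = sum(counts.values())
    return sum(signs[k] * n for k, n in counts.items()) / total

def chsh(e_ab, e_abp, e_apb, e_apbp):
    """|E(a,b) - E(a,b') + E(a',b) + E(a',b')|"""
    return abs(e_ab - e_abp + e_apb + e_apbp)

# Hypothetical counts from one basis-pair run
E_ab = correlator({"00": 400, "11": 420, "01": 90, "10": 114})
```

Each of the four basis pairs is a separate circuit run; a basis change before measurement (e.g. a Hadamard for the $\pm$ basis) is applied to the relevant qubit, and the resulting counts are fed to `correlator` as above.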
This question is about fitting a multivariate linear regression by maximum likelihood, under a specific parameterization of the covariance matrix, when the number of observations is smaller than the number of responses. It arises in an applied project that I'm part of. Let $Y_i \in \mathbb R^r, i=1, \dots, n$ be independent multivariate normal with (non-stochastic) mean $\beta'X_i$ and covariance matrix $\Sigma = \Lambda (R_1 \otimes R_2)\Lambda$. $\Lambda$ is a diagonal matrix and the $R_i$ are correlation matrices of sizes $r_1$ and $r_2$, respectively, where $r_1 \times r_2 = r$. The number of predictors, $p$, is small enough that the MLE of $\beta$ may be computed as $\hat{\beta} = (X'X)^{-1}X'Y$, where $Y$ and $X$ are $n\times r$ and $n\times p$ matrices with the $Y_i$ and $X_i$ as rows. Now, after profiling out $\beta$ the profile log-likelihood is: $$ \ell(\Lambda, R_1, R_2) = -\frac{nr}{2}\log(2\pi) - n\log\vert \Lambda\vert - \frac{nr_2}{2}\log\vert R_1\vert - \frac{nr_1}{2}\log\vert R_2\vert - \frac{n}{2}\mathrm{tr}\left[\Lambda^{-1}(R_1^{-1}\otimes R_2^{-1})\Lambda^{-1}S\right], $$ where $S = (Y - X\hat{\beta})'(Y - X\hat{\beta})/n$. Question: Can this be optimized over $\Lambda, R_1, R_2$ subject to the constraint that $\Lambda$ is diagonal and $R_1,R_2$ are correlation matrices? I do have (unconstrained) gradients for all parameters.
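For concreteness, here is a sketch (toy sizes and matrices of my own) of evaluating this profile log-likelihood through the Kronecker structure, checked against the direct Gaussian expression with the full $\Sigma = \Lambda(R_1\otimes R_2)\Lambda$. It relies on $\log\vert\Sigma\vert = 2\log\vert\Lambda\vert + r_2\log\vert R_1\vert + r_1\log\vert R_2\vert$ and $(R_1\otimes R_2)^{-1} = R_1^{-1}\otimes R_2^{-1}$, and works even though $S$ is singular when $n < r$.

```python
import numpy as np

r1, r2, n = 2, 3, 4
r = r1 * r2

Lam = np.diag([0.5, 1.0, 1.5, 2.0, 0.8, 1.2])
R1 = np.array([[1.0, 0.3], [0.3, 1.0]])
R2 = np.array([[1.0, 0.2, 0.1], [0.2, 1.0, 0.2], [0.1, 0.2, 1.0]])

# A rank-deficient residual cross-product matrix S (n < r)
A = np.arange(n * r, dtype=float).reshape(n, r) / 10.0
S = A.T @ A / n

def profile_ll(Lam, R1, R2, S):
    """Profile log-likelihood exploiting the Kronecker factorization."""
    Li = np.diag(1.0 / np.diag(Lam))
    tr = np.trace(Li @ np.kron(np.linalg.inv(R1), np.linalg.inv(R2)) @ Li @ S)
    return (-n * r / 2 * np.log(2 * np.pi)
            - n * np.log(np.linalg.det(Lam))
            - n * r2 / 2 * np.linalg.slogdet(R1)[1]
            - n * r1 / 2 * np.linalg.slogdet(R2)[1]
            - n / 2 * tr)

def direct_ll(Sigma, S):
    """Same quantity via the unstructured Gaussian log-likelihood."""
    sign, logdet = np.linalg.slogdet(Sigma)
    return (-n * r / 2 * np.log(2 * np.pi) - n / 2 * logdet
            - n / 2 * np.trace(np.linalg.solve(Sigma, S)))

Sigma = Lam @ np.kron(R1, R2) @ Lam
```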
Discrete & Continuous Dynamical Systems - A, January 2008, Volume 21, Issue 1 (ISSN: 1078-0947, eISSN: 1553-5231). A special issue dedicated to Edward Norman Dancer on the occasion of his 60th birthday. Abstract: Professor Edward Norman Dancer, known to his friends and colleagues as Norm or Norman, was born in Bundaberg in north Queensland, Australia in December 1946. He graduated from the Australian National University in 1968 with first class honours, and continued to obtain a PhD from the University of Cambridge in 1972. He was appointed a Lecturer in 1973 at the University of New England, Armidale, where he received a Personal Chair in 1987. He left Armidale in 1993 to become a Professor of Mathematics at the University of Sydney, a position he has held since. He was elected a Fellow of the Australian Academy of Science (FAA) in 1996. He has held distinguished visiting professorships at many institutions in Europe and North America. In 2002 he received the prestigious Alexander von Humboldt Research Award, the highest prize awarded in Germany to foreign scientists. Abstract: To understand the impact of spatial heterogeneity of environment and movement of individuals on the persistence and extinction of a disease, a spatial SIS reaction-diffusion model is studied, with the focus on the existence, uniqueness and particularly the asymptotic profile of the steady-states. First, the basic reproduction number $\R_{0}$ is defined for this SIS PDE model. It is shown that if $\R_{0} < 1$, the unique disease-free equilibrium is globally asymptotically stable and there is no endemic equilibrium. If $\R_{0} > 1$, the disease-free equilibrium is unstable and there is a unique endemic equilibrium. A domain is called high (low) risk if the average of the transmission rates is greater (less) than the average of the recovery rates.
It is shown that the disease-free equilibrium is always unstable $(\R_{0} > 1)$ for high-risk domains. For low-risk domains, the disease-free equilibrium is stable $(\R_{0} < 1)$ if and only if infected individuals have mobility above a threshold value. The endemic equilibrium tends to a spatially inhomogeneous disease-free equilibrium as the mobility of susceptible individuals tends to zero. Surprisingly, the density of susceptibles for this limiting disease-free equilibrium, which is always positive on the subdomain where the transmission rate is less than the recovery rate, must also be positive at some (but not all) places where the transmission rates exceed the recovery rates. Abstract: For $\Omega$ a bounded open set in $\R^N$ we consider the space $H^1_0(\bar{\Omega})=\{u_{|_{\Omega}}: u \in H^1(\R^N),\ u(x)=0 \text{ a.e. outside } \bar{\Omega}\}$. The set $\Omega$ is called stable if $H^1_0(\Omega)=H^1_0(\bar{\Omega})$. Stability of $\Omega$ can be characterised by the convergence of the solutions of the Poisson equation $-\Delta u_n = f$ in $\mathcal{D}'(\Omega_n)$, $u_n \in H^1_0(\Omega_n)$, and also of the Dirichlet Problem with respect to $\Omega_n$, if $\Omega_n$ converges to $\Omega$ in a sense to be made precise. We give diverse results in this direction, all with purely analytical tools not referring to abstract potential theory as in Hedberg's survey article [Expo. Math. 11 (1993), 193--259]. The most complete picture is obtained when $\Omega$ is supposed to be Dirichlet regular. However, stability does not imply Dirichlet regularity as Lebesgue's cusp shows. Abstract: This paper is concerned with time-dependent reaction-diffusion equations of the following type: $\partial_t u=\Delta u+f(x-cte,u),\quad t>0,\ x\in\R^N.$ These kinds of equations have been introduced in [1] in the case $N=1$ for studying the impact of a climate shift on the dynamics of a biological species.
In the present paper, we first extend the results of [1] to arbitrary dimension $N$ and to a greater generality in the assumptions on $f$. We establish a necessary and sufficient condition for the existence of travelling wave solutions, that is, solutions of the type $u(t,x)=U(x-cte)$. This is expressed in terms of the sign of the generalized principal eigenvalue $\lambda$ of an associated linear elliptic operator in $\R^N$. With this criterion, we then completely describe the large time dynamics for this equation. In particular, we characterize situations in which there is either extinction or persistence. Moreover, we consider the problem obtained by adding a term $g(x,u)$ periodic in $x$ in the direction $e$: $\partial_t u=\Delta u+f(x-cte,u)+g(x,u),\quad t>0,\ x\in\R^N.$ Here, $g$ can be viewed as representing geographical characteristics of the territory which are not subject to shift. We derive analogous results as before, with $\lambda$ replaced by the generalized principal eigenvalue of the parabolic operator obtained by linearization about $u\equiv0$ in the whole space. In this framework, travelling waves are replaced by pulsating travelling waves, which are solutions of the form $U(t,x-cte)$, with $U(t,x)$ periodic in $t$. These results still hold if the term $g$ is also subject to the shift, but on a different time scale, that is, if $g(x,u)$ is replaced by $g(x-c'te,u)$, with $c'\in\R$. Abstract: We review some recent existence results for the elliptic problem $\Delta u + u^p =0$, $u>0$ in an exterior domain, $\Omega = \R^N\setminus \D$ under zero Dirichlet and vanishing conditions, where $\D$ is smooth and bounded, and $p>\frac{N+2}{N-2}$. We prove that the associated Dirichlet problem has infinitely many positive solutions. We establish analogous results for the standing-wave supercritical nonlinear Schrödinger equation $\Delta u - V(x)u + u^p = 0$ where $V\ge 0$ and $V(x) = o(|x|^{-2})$ at infinity.
In addition we present existence results for the Dirichlet problem in bounded domains with a sufficiently small spherical hole if $p$ differs from a certain sequence of resonant values which tends to infinity. Abstract: Asymptotics of solutions to Schrödinger equations with singular dipole-type potentials are investigated. We evaluate the exact behavior near the singularity of solutions to elliptic equations with potentials which are purely angular multiples of radial inverse-square functions. Both the linear and the semilinear (critical and subcritical) cases are considered. Abstract: We consider a stationary nonlinear Schrödinger equation with a repulsive delta-function impurity in one space dimension. This equation admits a unique positive solution and this solution is even. We prove that it is a minimizer of the associated energy on the subspace of even functions of $H^1(\R, \C)$, but not on all of $H^1(\R, \C)$, and study its orbital stability. Abstract: For $N\geq3$ and $p>1$, we consider the nonlinear Schrödinger equation $i\partial_{t}w+\Delta_{x}w+V(x) |w|^{p-1}w=0\text{ where }w=w(t,x):\mathbb{R}\times\mathbb{R}^{N}\rightarrow\mathbb{C}$ with a potential $V$ that decays at infinity like $|x|^{-b}$ for some $b\in (0,2)$. A standing wave is a solution of the form $w(t,x)=e^{i\lambda t}u(x)\text{ where }\lambda>0\text{ and }u:\mathbb{R}^{N}\rightarrow\mathbb{R}.$ For $ 1 < p < 1+(4-2b)/(N-2)$, we establish the existence of a $C^1$-branch of standing waves parametrized by frequencies $\lambda $ in a right neighbourhood of $0$. We also prove that these standing waves are orbitally stable if $ 1 < p < 1+(4-2b)/N$ and unstable if $1+(4-2b)/N < p < 1+(4-2b)/(N-2)$.
Abstract: Selfdual variational theory -- developed in [8] and [9] -- allows for the superposition of appropriate "boundary" Lagrangians with "interior" Lagrangians, leading to a variational formulation and resolution of problems with various linear and nonlinear boundary constraints that are not amenable to standard Euler-Lagrange theory. The superposition of several selfdual Lagrangians is also possible in many natural settings, leading to a variational resolution of certain differential systems. These results are applied to nonlinear transport equations with prescribed exit values, Lagrangian intersections of convex-concave Hamiltonian systems, initial-value problems of dissipative systems, as well as evolution equations with periodic and anti-periodic solutions. Abstract: Let us consider the problem $-\Delta u+a(|x|)u=\lambda e^u$ in $B_1$, (0.1) $u=0$ on $\partial B_1$, where $B_1$ is the unit ball in $R^N$, $N\ge2$, $\lambda>0$ and $a(|x|)\ge0$ is a smooth radial function. Under some suitable assumptions on the regular part of the Green function of the operator $-u''- \frac{N-1}{r}u'+a(r)u$, we prove the existence of a radial solution to (0.1) for $\lambda$ small enough. Abstract: We consider a class of $2m$-component competition-diffusion systems which involve $m$ parabolic equations as well as $m$ ordinary differential equations, and prove the strong convergence in $L^p$ of a subsequence of each component as the reaction coefficient tends to infinity. In the special case of $4$ components the solution of this system converges to that of a Stefan problem. Abstract: The precise dynamics of a reaction-diffusion model of an autocatalytic chemical reaction is described. It is shown that exactly one, two, or three steady states exist, and the solution of the dynamical problem always approaches one of the steady states in the long run. Moreover it is shown that a global codimension one manifold separates the basins of attraction of the two stable steady states.
Analytic ingredients include exact multiplicity of semilinear elliptic equation, the theory of monotone dynamical systems and the theory of asymptotically autonomous dynamical systems. Abstract: The global asymptotic stability with phase shift of traveling wave fronts of minimal speed, in short minimal fronts, is established for a large class of monostable lattice equations via the method of upper and lower solutions and a squeezing technique. Abstract: We consider a class of variational equations with exponential nonlinearities on compact surfaces. From considerations involving the Moser-Trudinger inequality, we characterize some sublevels of the Euler-Lagrange functional in terms of the topology of the surface and of the data of the equation. This is used together with a min-max argument to obtain existence results. Abstract: We consider the problem $-\Delta u= |u|^{\frac4{N-2}} u \mbox{ in } \Omega \setminus \{B(\xi_1,\varepsilon)\cup B(\xi_2,\varepsilon)\},$ $ u = 0 \mbox{ on } \partial( \Omega \setminus \{B(\xi_1,\varepsilon)\cup B(\xi_2,\varepsilon)\}),$ where $\Omega$ is a smooth bounded domain in $R^N$, $N\ge 3,$ $\xi_1,$ $\xi_2$ are different points in $\Omega$ and ε is a small positive parameter. We show that, for ε small enough, the equation has at least one pair of sign changing solutions, whose positive and negative parts concentrate at $\xi_1$ and $\xi_2$ as ε goes to zero. Abstract: We are interested in the time decay estimates of global solutions of the semilinear parabolic equation $u_t= \Delta u+|u|^{p-1}u$ in $\R^N\times\R^+$, where $p>1$. We find several new sufficient and/or necessary conditions guaranteeing that the solution for $t$ large behaves like the solution of the linear heat equation or has the self-similar decay. We are particularly interested in the behaviour of threshold solutions lying on the borderline between global existence and blow-up. 
Abstract: Using minimization arguments and a limit process, we construct a family of solutions which undergo an infinite number of transitions for an Allen-Cahn model equation. Abstract: We study positive solutions of the equation $\varepsilon^2\Delta u - u + u^{\frac{n+2}{n-2}} = 0$, where $n=3,4,5$, and $\varepsilon > 0$ is small, with Neumann boundary condition in a smooth bounded domain $\Omega \subset R^n$. We prove that, along some sequence $\{\varepsilon_j \}$ with $ \varepsilon_j \to 0$, there exists a solution with an interior bubble at an innermost part of the domain and a boundary layer on the boundary $\partial\Omega$. Abstract: Let $M^{N\times n}$ be the space of real $N\times n$ matrices. We construct non-negative quasiconvex functions $F:M^{N\times n}\to R_+$ of quadratic growth whose zero sets are the graphs $\Gamma_f$ of certain Lipschitz mappings $f:K\subset E\to E^\perp$, where $E\subset M^{N\times n}$ is a linear subspace without rank-one matrices, $K$ a compact subset of $E$ with $E^\perp$ its orthogonal complement. We show that the gradients $DF:M^{N\times n}\to M^{N\times n}$ are strictly quasimonotone mappings and satisfy certain growth and coercivity conditions so that the variational integrals $u\to \int_{\Omega}F(Du(x))dx$ satisfy the Palais-Smale compactness condition in $W^{1,2}$. If $K$ is a smooth compact manifold of $E$ without boundary and the Lipschitz mapping $f$ is of class $C^2$, then the closed $\epsilon$-neighbourhoods $(\Gamma_f)_\epsilon$ for small $\epsilon>0$ are quasiconvex sets.
Faddeeva Package From AbInitio. Revision as of 21:10, 30 October 2012. Faddeeva / complex error function Steven G. Johnson has written free/open-source C++ code (with wrappers for other languages) to compute the scaled complex error function w(z) = e^(−z²) erfc(−iz), also called the Faddeeva function (and also the plasma dispersion function), for arbitrary complex arguments z to a given accuracy. Given the Faddeeva function, one can easily compute Voigt functions, the Dawson function, and similar related functions. Download the source code from: http://ab-initio.mit.edu/Faddeeva_w.cc (updated 29 October 2012) Usage To use the code, add the following declaration to your C++ source (or header file): #include <complex> extern std::complex<double> Faddeeva_w(std::complex<double> z, double relerr=0); The function Faddeeva_w(z, relerr) computes w(z) to a desired relative error relerr. Omitting the relerr argument, or passing relerr=0 (or any relerr less than machine precision ε ≈ 10^−16), corresponds to requesting machine precision, and in practice a relative error < 10^−13 is usually achieved. Specifying a larger value of relerr may improve performance (at the expense of accuracy). You should also compile Faddeeva_w.cc and link it with your program, of course. In terms of w(z), some other important functions are: :<math>\mathrm{erfcx}(x) = e^{x^2} \mathrm{erfc}(x) = w(ix)</math> (scaled complementary error function) :<math>\mathrm{erfc}(x) = e^{-x^2} w(ix) = \begin{cases} e^{-x^2} w(ix) & \mathrm{Re}\,x \geq 0 \\ 2 - e^{-x^2} w(-ix) & \mathrm{Re}\,x < 0 \end{cases}</math> (complementary error function) :<math>\mathrm{erf}(x) = 1 - \mathrm{erfc}(x) = \begin{cases} 1 - e^{-x^2} w(ix) & \mathrm{Re}\,x \geq 0 \\ e^{-x^2} w(-ix) - 1 & \mathrm{Re}\,x < 0 \end{cases}</math> (error function) :<math>\mathrm{erfi}(x) = -i\,\mathrm{erf}(ix) = -i[e^{x^2} w(x) - 1]</math>; for '''real''' ''x'', <math>\mathrm{erfi}(x) = \frac{\mathrm{Im}[w(x)]}{\mathrm{Re}[w(x)]}</math> (imaginary error function) :<math>F(x) = \frac{i\sqrt{\pi}}{2} \left[ e^{-x^2} - w(x) \right]</math>; for '''real''' ''x'', <math>F(x) = \frac{\sqrt{\pi}}{2}\mathrm{Im}[w(x)]</math> (Dawson function) Note that in the case of erf and erfc, we provide different equations for positive and negative x, in order to avoid numerical problems arising from multiplying exponentially large and small quantities. Wrappers: Matlab, GNU Octave, and Python Wrappers are available for this function in other languages.
Matlab (also available here): A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_mex.cc (along with the help file Faddeeva_w.m). Compile it into a MEX file with: mex -output Faddeeva_w -O Faddeeva_w_mex.cc Faddeeva_w.cc GNU Octave: A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_oct.cc. Compile it into an Octave plugin with: mkoctfile -DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1 -s -o Faddeeva_w.oct Faddeeva_w_oct.cc Faddeeva_w.cc Python: Our code is used to provide scipy.special.wofz in SciPy starting in version 0.12.0 (see here). Algorithm This implementation uses a combination of different algorithms. For sufficiently large |z|, we use a continued-fraction expansion for w(z) similar to those described in Walter Gautschi, "Efficient computation of the complex error function," SIAM J. Numer. Anal. 7(1), pp. 187–198 (1970), and G. P. M. Poppe and C. M. J. Wijers, "More efficient computation of the complex error function," ACM Trans. Math. Soft. 16(1), pp. 38–46 (1990); this is TOMS Algorithm 680. Unlike those papers, however, we switch to a completely different algorithm for smaller |z|: Mofreh R. Zaghloul and Ahmed N. Ali, "Algorithm 916: Computing the Faddeyeva and Voigt Functions," ACM Trans. Math. Soft. 38(2), 15 (2011); preprint available at arXiv:1106.0151. (I initially used this algorithm for all z, but the continued-fraction expansion turned out to be faster for larger |z|. On the other hand, Algorithm 916 is competitive or faster for smaller |z|, and appears to be significantly more accurate than the Poppe & Wijers code in some regions, e.g. in the vicinity of |z|=1 [although comparison with other compilers suggests that this may be a problem specific to gfortran].
Algorithm 916 also has better relative accuracy in Re[w(z)] for some regions near the real-z axis. You can switch back to using Algorithm 916 for all z by changing USE_CONTINUED_FRACTION to 0 in the code.) Note that this is SGJ's independent re-implementation of these algorithms, based on the descriptions in the papers only. In particular, we did not refer to the authors' Fortran or Matlab implementations (respectively), which are under restrictive "semifree" ACM copyright terms and are therefore unusable in free/open-source software. Algorithm 916 requires an external complementary error function erfc(x) for real arguments x to be supplied as a subroutine. More precisely, it requires the scaled function erfcx(x) = e^(x²) erfc(x). Here, we use an erfcx routine written by SGJ that uses a combination of two algorithms: a continued-fraction expansion for large x and a lookup table of Chebyshev polynomials for small x. (I initially used an erfcx function derived from the DERFC routine in SLATEC, modified by SGJ to compute erfcx instead of erfc, but the new erfcx routine is much faster.) Test program To test the code, a small test program is included at the end of Faddeeva_w.cc which tests w(z) against several known results (from Wolfram Alpha) and prints the relative errors obtained. To compile the test program, #define FADDEEVA_W_TEST in the file (or compile with -DFADDEEVA_W_TEST on Unix) and compile Faddeeva_w.cc. The resulting program prints SUCCESS at the end of its output if the errors were acceptable.
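The real-argument identities relating w(z) to the Dawson and imaginary error functions can be sanity-checked from Python via the SciPy wrapper mentioned above (a sketch, assuming SciPy ≥ 0.12, where scipy.special.wofz is provided by this code):

```python
import numpy as np
from scipy.special import wofz, dawsn, erfi

x = 0.7  # any real argument

w = wofz(x)
# Dawson function: F(x) = (sqrt(pi)/2) * Im[w(x)] for real x
F = np.sqrt(np.pi) / 2 * w.imag
# Imaginary error function: erfi(x) = Im[w(x)] / Re[w(x)] for real x
E = w.imag / w.real
```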
License The software is distributed under the "MIT License", a simple permissive free/open-source license: Copyright © 2012 Massachusetts Institute of Technology Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Abbreviation: TarskiA

A Tarski algebra is a structure $\mathbf{A}=\langle A,\to\rangle$ of type $\langle2\rangle$ such that $\to$ satisfies the following identities:

$(x\to y)\to x=x$
$(x\to y)\to y=(y\to x)\to x$
$x\to(y\to z)=y\to(x\to z)$

Let $\mathbf{A}$ and $\mathbf{B}$ be Tarski algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \to y)=h(x) \to h(y)$

Example 1: $\langle\{0,1\},\to\rangle$ where $x\to y=0$ iff $x=1$ and $y=0$.

Tarski algebras are the implication subreducts of Boolean algebras.

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
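The three identities can be verified exhaustively on Example 1 (a minimal sketch; the two-element operation here is just classical implication on {0,1}):

```python
from itertools import product

# Example 1: x -> y = 0 iff x = 1 and y = 0 (classical implication on {0, 1}).
imp = lambda x, y: 0 if (x == 1 and y == 0) else 1

# Check all three Tarski-algebra identities over every triple of elements.
for x, y, z in product((0, 1), repeat=3):
    assert imp(imp(x, y), x) == x                        # (x→y)→x = x
    assert imp(imp(x, y), y) == imp(imp(y, x), x)        # (x→y)→y = (y→x)→x
    assert imp(x, imp(y, z)) == imp(y, imp(x, z))        # x→(y→z) = y→(x→z)
print("all identities hold")
```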
I am trying to parametrize $x^2+y^2+\sin(4x)+\sin(4y)=4$. I need to find a way of taking the intersections between $x^2+y^2+\sin(4x)+\sin(4y)=4$ and $\tan(nx)$. As $n$ increases over $0\le n\le 2\pi$, I can record the following in coordinate form:

$$(n,\text{the x-intersection value})$$ $$(n,\text{the y-intersection value})$$

Finally, I need to take the following to graph its parametric derivative, which is

$$\frac{(\text{the x-intersection value})^2+4\cos(4(\text{the x-intersection value}))}{-(\text{the y-intersection value})^2-4\cos(4(\text{the y-intersection value}))}$$

I have little knowledge of how to use Sage. If someone can help, I'll be thankful.
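Not a full answer, but one intersection can be located numerically along $y=\tan(nx)$ by root-finding on the implicit equation. This is a sketch in plain Python/SciPy (Sage's find_root is analogous); the bracket [1.0, 1.2] is an assumption found by inspection for n = 1, and other values of n need other brackets:

```python
import numpy as np
from scipy.optimize import brentq

def curve_minus_tan(x, n):
    """F(x) = x^2 + y^2 + sin(4x) + sin(4y) - 4 evaluated along y = tan(n*x)."""
    y = np.tan(n * x)
    return x**2 + y**2 + np.sin(4 * x) + np.sin(4 * y) - 4

n = 1.0
# Bracket chosen by inspection for n = 1: F changes sign between 1.0 and 1.2.
x_star = brentq(curve_minus_tan, 1.0, 1.2, args=(n,))
y_star = np.tan(n * x_star)
print((n, x_star), (n, y_star))   # the coordinate pairs described above
```

Sweeping n over a grid and repeating this root-find would produce the $(n, x)$ and $(n, y)$ pairs needed for the parametric derivative.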
I am studying the basics of Computation Theory and I came up with an example I can't understand. Let's have a language $L = \{\langle M\rangle \mid L(M) = \Sigma^{\ast} \}$, so $L$ contains codes of all Turing machines which generate all the words from $\Sigma^{\ast}$. It's been said that we can reduce $H$ (the halting problem) to $L$, so $H \leq_m L$. A mapping $f(\langle M, x\rangle) = \langle N\rangle$ was defined, where $N$ is a Turing machine coded as shown below:

N(y):
    if M(x) accepts:
        accept
    else:
        reject

Now my problem is: I thought that if $H \leq_m L$ then if we could solve $L$, we would be able to solve $H$ as well. By this I supposed that if we have a TM deciding the language $L$, we would be able to build a TM deciding the language $H$. But as for me, the machine above does not help me solve $H$ at all. It actually looks like if we could solve $H$, then we could solve $L$; I can only see that the machine above generates $\Sigma^{\ast}$, but not that it decides on it. If any of my intuition is correct, a Turing machine $M_L$ deciding the language $L$ would work like this: take the code of another Turing machine $M_A$ and accept if $M_A$ generates all words from $\Sigma^{\ast}$, or reject if $M_A$ does not. How would this machine help me build a Turing machine deciding the halting problem?
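For what it's worth, the direction of the reduction can be sketched as ordinary function composition. Everything below is a made-up stand-in: "machines" are Python predicates, and decide_L is a fake one-probe "decider" used only to show how a decider for $L$ would be invoked (no real decider for $L$ exists, of course):

```python
# Toy sketch of the reduction H <=_m L: a decider for L yields a decider for H.

def build_N(M, x):
    """The mapping f(<M, x>) = <N>: N ignores its input y and runs M on x."""
    def N(y):
        return M(x)
    return N

def decide_H(M, x, decide_L):
    """Given a decider for L, decide whether M accepts x."""
    # If M accepts x, N accepts every y, so L(N) = Sigma*.
    # If M does not accept x, N accepts nothing, so L(N) != Sigma*.
    return decide_L(build_N(M, x))

# Fake "decider" for L, for illustration only: since this particular N answers
# the same way on every input, probing one word reveals whether L(N) = Sigma*.
decide_L = lambda N: N("any word")

accepts_everything = lambda w: True
accepts_nothing = lambda w: False
print(decide_H(accepts_everything, "x", decide_L))  # True
print(decide_H(accepts_nothing, "x", decide_L))     # False
```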
In economics, capital refers to things that are used to produce other things. So, capital includes tangible things like a delivery truck, an office building, or a factory. These are things which often require energy to run. Hence, if a developing country has low capital per head, then it probably also has few delivery trucks, office buildings, and ... Material capital is any durable good that is used as a factor of production and, by virtue of being durable, it is gradually consumed in production over a maximum possible duration determined by (i) how much a unit of capital is used and (ii) the depreciation rate of a unit of capital. Capital is formed by labor and savings (which is in ... My guess is that it is not a new factor of production, but simply a type of total factor productivity. This is because: factors of production are a stock, which produce services. What is the stock of AI? The stock of labour is easy to calculate. The stock of capital is more complex, but at least all capital goods (even robots) have a market price. As such, ... This is a subtle issue. First let's present a numerical example to see the head-scratching riddle. Assume $$\alpha =1/2 \implies Y = K^{1/2}L^{1/2}$$ and that the exogenously given input prices are $$r=1/8,\,\, w=4.$$ The f.o.c. are $$\begin{cases} \frac {Y}{2K} = 1/8 \\\\ \frac{Y}{2L} = 4 \end{cases} \implies K/4 = 8L \implies \left(L/K\right)^* = 1/32$... A clarification: The values $w,r$ are not independent of input combinations: you have already solved for $K/L$. 1) Unless you assume that the price of the output is equal to 1, there is a minor mistake, as output prices affect factor prices. 2) If there is a profit maximizing pair $(K,L)$, then for all $\alpha \in \mathbb{R}_+$, $(\alpha K,\alpha L)$ will ...
$k_t$ is your capital after investment, so if you subtract the capital you carried over from last period, $k_{t-1}(1-\delta)$, you obtain the amount that must have been invested in order to have $k_t$ of capital today. Remember that $k$ is a stock variable, while $i$ is a flow variable. This is why the flow $i_t$ is the difference between the current and ... From a Marxist perspective, capital is a social relation. Essentially, it is money that begets more money. As such, it only becomes more or less synonymous with "means of production" once it takes over production, i.e., once society becomes capitalist. In a feudal society, tools and land are not capital, albeit being means of production. The money of money ... In the model with technological progress, the fact that capital per effective worker remains constant implies that capital per worker grows at the exogenous rate of technological progress. See the Barro and Sala-i-Martin book, Chapter 1. The broad definition I use is this: A resource is anything that's used to produce goods. (I treat the terms resource, factor of production, means of production, and input as interchangeable.) I neatly divide all resources into three categories: So, labor refers to human beings (or more correctly, the services thereof). Examples: (the services of) a ... What I think you want to consider to be grouped with land, labour and capital is another factor to be considered with labor: technology (a.k.a. "Knowledge"). The term "technology" here means an augmenting factor which is external to the present amount of labour. This can mean improved machinery to produce goods and services or alternatively, this can be ... This is no different from any other market... just for money. On the demand side of the goods, those who want money (those who sell bonds) say how much they're willing to pay in terms of bond yields.
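The stock/flow point in the first answer can be illustrated with a tiny numeric example (the numbers here are made up for illustration):

```python
# Back out the investment flow i_t from two consecutive capital stocks,
# using i_t = k_t - (1 - delta) * k_{t-1}.
delta = 0.1                      # depreciation rate (assumed)
k_prev, k_now = 100.0, 102.0     # capital stocks in t-1 and t (assumed)

i_now = k_now - (1 - delta) * k_prev
print(i_now)  # ~12.0: replaces the 10.0 that depreciated plus the 2.0 net addition
```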
The suppliers of the goods (bond buyers) come to the bond market and choose the product. Under a developed bond market, it's guaranteed that those who get the ... According to how you've defined the firm's law of motion of capital (LOMOC), investment produces capital simultaneously. That is, $i_t = k_t - (1-\delta)k_{t-1}$ defines how much capital the firm created in period $t$. If you want a delayed investment function, you could simply set the LOMOC to $k_t = (1-\delta)k_{t-1} + i_{t-1}$. In this setting, firms invest output into capital in ... What is the depreciation rate ($\delta$)? Assumptions: $Y_t = K_t^{\alpha} (A_tL_t)^{1-\alpha}$, $\alpha = 0.5$ ($\alpha$ = capital income share), $A_0 =1$ and $g = 0$ ($g$ = growth rate of technology), $s = 0.2$ ($s$ = savings rate), $n = 0.05$ ($n$ = growth rate of the labor force/population). Capital in the next period is equal to the capital from the ... The Grundrisse’s “fragment on the machines” expounds on the tendency to universal automation rather famously, without Marx having stripped his more idealistic hopes from his exposition, and without the deadening jargon of Capital’s chapters on the organic composition of capital. It isn’t perfect on the automation of science itself, but that just needs to be ... A market is a social institution with various attributes and works under complex conditions. Under some conditions a pure monopoly is the only known limit. Markets are creations of human intelligence and will not exist if not used for merchandise. The market mechanisms are used in our society to enable a complex, diverse, work-sharing economy. Not the other ... Per capita availability of capital very strongly correlates with GDP per capita. GDP measures all economic activity and capital is the input to this activity. So your question is pretty much an extension of "How is energy consumption related to GDP?" Well, if not for differences in energy efficiency, GDP would be determined by energy consumption. ...
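The Solow-style assumptions listed above can be plugged into the standard steady-state condition $sy^* = (n+g+\delta)k^*$ for capital per effective worker. A minimal sketch follows; since the excerpt is truncated, the value $\delta = 0.05$ is an assumption chosen purely for illustration:

```python
# Solow steady state per effective worker: k* = (s / (n + g + delta))^(1/(1-alpha)).
alpha, s, n, g = 0.5, 0.2, 0.05, 0.0   # parameters from the question
delta = 0.05                            # assumed value (the excerpt is truncated)

k_star = (s / (n + g + delta)) ** (1 / (1 - alpha))
y_star = k_star ** alpha

# Steady-state condition: saving/investment equals break-even investment.
assert abs(s * y_star - (n + g + delta) * k_star) < 1e-12
print(k_star, y_star)  # with these numbers, k* = 4, y* = 2
```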
It seems that Mankiw has implicitly assumed that all the investment is used to replace the depreciated capital. See the second sentence in the Investment section of Chapter 3-3: "Firms buy investment goods to add to their stock of capital and to replace existing capital as it wears out." Usually, the law of motion of capital is written as \begin{equation}... This is actually pretty tricky, as "capital flight" can be a contested term itself. Here's a distinction provided by Darryl McLeod in the Concise Encyclopedia of Economics, edited by David Henderson and provided online by the Library of Economics and Liberty: There is no widely accepted definition of capital flight. The classic use of the term is to ... The key to the answer is good data on capital. There is a project (KLEMS) which is computing harmonised (i.e., comparable) information on capital, labour, energy, etc. for many countries. At the moment it has information mainly on developed countries, but data for more developing countries are coming up. For example, this is a calculation of the capital-... I read through the resource the OP linked to. The authors argue the case for "AI as a new factor of production" as follows: "But what if AI has the potential to be not just another driver of TFP, but an entirely new factor of production? How can this be? The key is to see AI as a capital-labor hybrid. AI can replicate labor activities at much greater ...
For $s \leq t \leq T$, I want to evaluate $E\left[\exp\left(iz(W_t-W_s)+izb(s-t) + bW_T-b^2T/2 \right)|\mathcal{F_s}\right]$, where $W$ is Brownian motion on $\mathcal{F}$, and $b \in \mathbb{R}$. Clearly, $W_t -W_s $ is independent of $\mathcal{F}_s$, and $E[W_T|\mathcal{F}_s] = W_s$. I would like to be able to write $$\begin{aligned} &E\left[\exp\left(iz(W_t-W_s)+izb(s-t) + bW_T-b^2T/2 \right)|\mathcal{F_s}\right]\\ &=E[\exp\left(iz(W_t-W_s)+izb(s-t) \right) | \mathcal{F}_s]\times E[\exp\left(bW_T-b^2T/2 \right) | \mathcal{F}_s]\\ & = E[\exp\left(iz(W_t-W_s)+izb(s-t) \right)] \times E[\exp(bW_T-b^2T/2)]\\ &? = \exp\left(-\frac{1}{2}z^2(t-s)^2\right) \times E[\exp(bW_T-b^2T/2)] \end{aligned}$$ Which would be as far as I need to go for my purposes. My first question is if I am able to split the conditional expectation into the product of two conditional expectations. Am I allowed to do this because $W_t-W_s$ is independent of $\mathcal{F}_s$ and $W_T$ isn't? Also, are my steps after the splitting up of the expectation correct? Thanks very much.
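Not an answer, but the Gaussian characteristic-function identity underlying the final step — $E[e^{izX}] = e^{-z^2\sigma^2/2}$ for $X \sim N(0,\sigma^2)$, with $X$ playing the role of the increment $W_t - W_s$ so that $\sigma^2 = t-s$ — can be spot-checked by Monte Carlo (the tolerance is loose because it is a sampling estimate):

```python
import numpy as np

# For X ~ N(0, sigma^2): E[exp(i z X)] = exp(-z^2 sigma^2 / 2).
# X stands in for the Brownian increment W_t - W_s, so sigma^2 = t - s.
rng = np.random.default_rng(0)
z, t_minus_s = 1.0, 0.5
samples = rng.normal(0.0, np.sqrt(t_minus_s), size=1_000_000)

estimate = np.mean(np.exp(1j * z * samples))
exact = np.exp(-0.5 * z**2 * t_minus_s)
assert abs(estimate - exact) < 5e-3
print(estimate, exact)
```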
Consider a limit cone on a diagram $D: \mathbf{I} \rightarrow \mathbf{C}$: $$ \left( L \xrightarrow{p_I} D(I) \right)_{I \in \mathbf{I}} $$ Now suppose that $L' \in \mathbf{C}$ is some object isomorphic to $L$, and that there is some cone: $$ \left( L' \xrightarrow{p_I'} D(I) \right)_{I \in \mathbf{I}} $$ Is it necessarily the case that this cone is also a limit cone? It's easy to see that any isomorphism $L \xrightarrow{\phi} L'$ gives rise to a limit cone with vertex $L'$ simply by writing: $$ \left( L' \xrightarrow{p_I \circ \phi^{-1}} D(I) \right)_{I \in \mathbf{I}} $$ But it seems not at all necessary that this cone and the previous one are the same and/or isomorphic as cones. Yet, it seems as if any cone whose vertex is isomorphic to the limit should be a limit cone; that is, that the uniqueness of limit cones should "extend up" to uniqueness of limit objects.
The answer to your literal question, "Does a logical system have semantics?" is "Obviously, yes. The definition you quoted says so!" So I figure that isn't what you're actually asking. I think the root of your misunderstanding is the word "formal". In this context, it doesn't mean "rigorous", the opposite of "hand-wavy"; it means "depending on form". That is, the truth of a sentence of, say, propositional logic depends only on the form (or shape, if you like) of the sentence. $P\wedge Q$ is true if, and only if the propositions $P$ and $Q$ are both true. It doesn't depend on what, if anything, those propositions represent in the real world: if $P$ is any true thing and $Q$ is any true thing, then $P\wedge Q$ is true. Consider the formula $\text{I am a human} \wedge \text{My mother is a giraffe}$. Reasoning informally (i.e., not based on the form of the sentence), one would say that it is impossible for a human to have a giraffe as a mother so the sentence must be false. Reasoning formally, one would determine whether the propositions $\text{I am a human}$ and $\text{My mother is a giraffe}$ are true individually, by looking at their interpretation, which assigns each of them a truth value. Then, one would combine those truth values using the semantics of the $\wedge$ operator. But let's go back to the informal claim that the human/giraffe sentence must be false. One property that "reasonable" logics have is that renaming variables makes no difference, as long as you don't make two different variables have the same name. You're familiar with this from programming languages and mathematics: if you take a program and rename all the variables, the program does exactly the same thing; in $\lambda$-calculus, this is called $\alpha$-equivalence. So, since propositional logic is eminently reasonable, I could rename the second proposition in that sentence to be $\text{My mother is a human}$. 
Formally, the sentence is unchanged but now, reasoning informally, one would say that the sentence must be true! Taking things a little farther, one could even make the proposition $\text{I am a human}$ stand for the real-world fact "It is sunny today". That would be a stupid thing to do but, formally ("shape-wise"), it's perfectly valid. Just like it's perfectly valid but stupid to write a program that says something like colour = "Hello, world!"; message = green; setColour (message); print (colour); or to write "Let $v$ be a graph containing a vertex $G$." Summary / tl;dr. Yes, logics do have semantics. The semantics of a logic tells you how to determine whether a formula is true or false, based only on the syntactic structure of the formula and some interpretation that states the truth values of the most simple formulas (propositions in propositional logic, atomic formulas in first-order logic, and so on). That's all the semantics of the logic does: in particular, it has nothing to do with assigning "real world" meaning to the variables and constructs in the formula. Semantics depends only on the form ("shape") of the formula; in particular, it doesn't depend on the names chosen for the variables. That's why the systems are called formal.
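The summary above — semantics as a function of the formula's shape plus an interpretation — can be made concrete with a tiny sketch (the function and dictionary names here are made up for illustration):

```python
# Minimal sketch of formal semantics for propositional "and": the truth of
# P ∧ Q depends only on the interpretation (a map from proposition names to
# truth values), never on what the names "mean" in the real world.

def eval_and(p_name, q_name, interpretation):
    return interpretation[p_name] and interpretation[q_name]

interp = {"I am a human": True, "My mother is a giraffe": False}
print(eval_and("I am a human", "My mother is a giraffe", interp))  # False

# Renaming the propositions (consistently) changes nothing formally:
interp2 = {"P": True, "Q": False}
print(eval_and("P", "Q", interp2))  # False
```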
Can you explain why the following Jacobian chain rule holds true? $$ J_{f \circ R}(x) = J_{f}(Rx) \circ R $$ where $f\in C^2(\Omega; \mathbb{R})$, $\Omega\subset\mathbb{R}^2$, and $R \in SO(2)$ denotes a $2\times2$ matrix with $\det(R) = 1$ that can be written as $\begin{pmatrix} \cos \alpha & -\sin \alpha \\ \sin \alpha & \cos \alpha \end{pmatrix}$. In general, the Jacobian chain rule is $$ J_{f \circ g}(x) = J_{f}(g(x)) \circ J_{g}(x), $$ so by that logic it should be $$ J_{f \circ R}(x) = J_{f}(Rx) \circ J_{R}(x), $$ but $J_{R}(x)$ does not make sense to me. Can you please explain?
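The identity can at least be checked symbolically for a sample function: since $x \mapsto Rx$ is linear, its Jacobian is the constant matrix $R$, and the general chain rule then reduces to the stated form. A sketch with SymPy (the particular $f$ below is an arbitrary smooth choice, not from the question):

```python
import sympy as sp

x1, x2, a = sp.symbols('x1 x2 alpha', real=True)
X = sp.Matrix([x1, x2])
R = sp.Matrix([[sp.cos(a), -sp.sin(a)],
               [sp.sin(a),  sp.cos(a)]])          # standard SO(2) rotation

# Sample smooth f : R^2 -> R (any C^2 function would do here).
f = lambda u, v: u**2 * sp.sin(v) + u * v

RX = R * X
lhs = sp.Matrix([f(RX[0], RX[1])]).jacobian(X)     # J_{f∘R}(x), computed directly

u, v = sp.symbols('u v', real=True)
Jf = sp.Matrix([f(u, v)]).jacobian(sp.Matrix([u, v]))
rhs = Jf.subs({u: RX[0], v: RX[1]}) * R            # J_f(Rx) · R, since J_R(x) = R

assert sp.simplify(lhs - rhs) == sp.zeros(1, 2)
print("chain rule verified for this f")
```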