Quasirandomness

Examples of quasirandomness definitions

Bipartite graphs

Let X and Y be two finite sets and let [math]f:X\times Y\rightarrow [-1,1].[/math] Then f is defined to be c-quasirandom if [math]\mathbb{E}_{x,x'\in X}\mathbb{E}_{y,y'\in Y}f(x,y)f(x,y')f(x',y)f(x',y')\leq c.[/math] Since the left-hand side is equal to [math]\mathbb{E}_{x,x'\in X}(\mathbb{E}_{y\in Y}f(x,y)f(x',y))^2,[/math] it is always non-negative, and the condition that it should be small implies that [math]\mathbb{E}_{y\in Y}f(x,y)f(x',y)[/math] is small for almost every pair [math]x,x'.[/math] If G is a bipartite graph with vertex sets X and Y and [math]\delta[/math] is the density of G, then we can define [math]f(x,y)[/math] to be [math]1-\delta[/math] if xy is an edge of G and [math]-\delta[/math] otherwise. We call f the balanced function of G, and we say that G is c-quasirandom if its balanced function is c-quasirandom. It can be shown that if H is any fixed graph and G is a large quasirandom graph, then the number of copies of H in G is approximately what it would be in a random graph of the same density as G.

Subsets of finite Abelian groups

If A is a subset of a finite Abelian group G and A has density [math]\delta,[/math] then we define the balanced function f of A by setting [math]f(x)=1-\delta[/math] when [math]x\in A[/math] and [math]f(x)=-\delta[/math] otherwise. Then A is c-quasirandom if and only if f is c-quasirandom, and f is defined to be c-quasirandom if [math]\mathbb{E}_{x,a,b\in G}f(x)f(x+a)f(x+b)f(x+a+b)\leq c.[/math] Again, we can prove positivity by observing that the left-hand side is a sum of squares: in this case it is [math]\mathbb{E}_{a\in G}(\mathbb{E}_{x\in G}f(x)f(x+a))^2.[/math] If G has odd order, then it can be shown that a quasirandom set A contains approximately the same number of triples [math](x,x+d,x+2d)[/math] as a random subset of the same density. However, it is decidedly not the case that A must contain approximately the same number of arithmetic progressions of greater length (regardless of torsion assumptions on G). For that one must use "higher uniformity".

Hypergraphs

Subsets of grids

A function f from [math][n]^2[/math] to [math][-1,1][/math] is c-quasirandom if the "sum over rectangles" is at most c. The sum over rectangles is [math]\mathbb{E}_{x,y,a,b}f(x,y)f(x+a,y)f(x,y+b)f(x+a,y+b)[/math]. Again, it is easy to show that this sum is non-negative by expressing it as a sum of squares. And again, one defines a subset [math]A\subset[n]^2[/math] to be c-quasirandom if its balanced function is c-quasirandom. If A is a c-quasirandom set of density [math]\delta[/math] and c is sufficiently small, then A contains roughly the same number of corners as a random subset of [math][n]^2[/math] of density [math]\delta.[/math]

A possible definition of quasirandom subsets of [math][3]^n[/math]

As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function. Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows.
(Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math]) As ever, it is easy to prove positivity. To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect). Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined).
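To make the first (bipartite graph) definition above concrete, here is a small numerical sketch, not part of the original article: it estimates the quadruple average for the balanced function of a random bipartite graph, using the sum-of-squares form given earlier. All names and parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(1)
nX, nY, delta = 200, 200, 0.3
G = rng.random((nX, nY)) < delta          # random bipartite graph of density ~delta
f = np.where(G, 1 - delta, -delta)        # balanced function of G

# E_{x,x'} ( E_y f(x,y) f(x',y) )^2, the quantity bounded by c in the definition
M = (f @ f.T) / nY                        # M[x,x'] = E_y f(x,y) f(x',y)
c = np.mean(M ** 2)
print(c)                                  # small: a random graph is quasirandom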
This circuit can be solved in many ways but I will use the Extra-Element Theorem or EET described here but also here with the Fast Analytical Techniques. The principle consists of identifying an element that bothers you in the calculation of the transfer function - the extra element - and calculating a reference gain \$G_{ref}\$ when this element is either set to 0 or open-circuited. I choose to remove \$R_f\$ for the first gain calculation. The schematic is below.

The gain in this configuration has been determined in the second link and is equal to:

\$G_{ref}=-\frac{R_c}{R_i}\frac{1}{\frac{\frac{R_c}{R_i}+1}{A_{OL}}+1}\frac{R_L}{R_L+R_o}\approx -\frac{R_c}{R_i}\frac{R_L}{R_L+R_o}\$

Ok, we have our reference gain. Now, let's calculate the resistance \$R_d\$ "seen" from \$R_f\$'s terminals when the excitation \$V_{in}\$ is set to 0 V (replaced by a short circuit). The circuit is below. If you do the maths correctly, you should find:

\$R_d=\frac{R_cR_i(R_L+R_o+A_{OL}R_L)}{(R_L+R_o)(R_c+R_i+A_{OL}R_i)}+R_L||R_o\approx \frac{R_cR_iR_L}{(R_L+R_o)R_i}+R_L||R_o\$

For the final lap, we need to determine the resistance \$R_n\$ "seen" from \$R_f\$'s terminals when the response \$V_{out}\$ is nulled (equal to 0 V) while the excitation is back. Let's see the below schematic. Solving this circuit is easy (I used a Thévenin circuit) and you should find a resistance equal to:

\$R_n=-\frac{(R_L||R_o)(R_L+R_o)}{A_{OL}R_L}\approx 0 \;\Omega\$

We can now apply the EET formula defined as:

\$G=G_{ref}\frac{1+\frac{R_n}{R_f}}{1+\frac{R_d}{R_f}}\$

which gives (considering an infinite open-loop gain):

\$G=-\frac{R_cR_L}{R_i(R_L+R_o)}\frac{1}{1+\frac{\frac{R_cR_iR_L}{(R_L+R_o)R_i}+R_L||R_o}{R_f}}\$

If you bias this circuit input with a 1-V dc source, the output is calculated to be -0.884 V, confirmed by the below simulation. The Mathcad file below confirms this number. If you now replace \$R_L\$ by a capacitor impedance, you can plot the frequency response of the circuit.

New Edit: I realize that the above formula is correct; however, when \$Z_L\$ is a capacitor, it is clearly a high-entropy expression. It is better to select \$Z_L\$, the capacitor, as the extra element, since all the other elements are resistors. As written, the above formula does not tell you where the pole is located when you load the circuit with a capacitor. To derive a low-entropy formula, we will first determine the dc gain when \$Z_L\$ is removed. The circuit is shown below. The gain in this mode for a perfect op amp is:

\$G_0=-\frac{R_cR_f}{R_cR_i+R_fR_i+R_iR_o}\$

Now, install a test current source \$I_T\$ across \$Z_L\$'s terminals and determine the resistance "seen" from these terminals. The sketch is below. The resistance is:

\$R_d=\frac{R_fR_iR_o}{R_cR_i+R_fR_i+R_iR_o}\$

The time constant is thus equal to:

\$\tau_1=C_1\frac{R_fR_iR_o}{R_cR_i+R_fR_i+R_iR_o}\$

The final transfer function is:

\$G(s)=G_0\frac{1}{1+\frac{s}{\omega_p}}\$ with \$\omega_p=\frac{1}{\tau_1}\$.

The below Mathcad files confirm these results, which match the ones previously found. In the latter case, however, you can express the pole position in a clear manner.
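As a quick numerical illustration of the EET formula above, here is a short Python sketch. The component values are assumptions chosen purely for illustration (they are not the values behind the -0.884 V figure); only the formulas quoted above are used.

# Illustrative values only (assumptions, not the original post's components)
Ri, Rc, Rf, Ro, RL = 1e3, 10e3, 100e3, 1e3, 10e3
A_OL = 1e5                                   # open-loop gain

par = lambda a, b: a * b / (a + b)           # parallel combination a||b

G_ref = -(Rc / Ri) / ((Rc / Ri + 1) / A_OL + 1) * RL / (RL + Ro)
Rd = Rc * Ri * (RL + Ro + A_OL * RL) / ((RL + Ro) * (Rc + Ri + A_OL * Ri)) + par(RL, Ro)
Rn = -par(RL, Ro) * (RL + Ro) / (A_OL * RL)  # ~ 0 ohm

G = G_ref * (1 + Rn / Rf) / (1 + Rd / Rf)    # EET: gain with R_f reinstalled
print(G)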
Now showing items 1-10 of 24

Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...

Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...

Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...

Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...

Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...

K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Now showing items 1-1 of 1

Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
This library is a customizable 2D math rendering tool for calculators. It can be used to render 2D formulae, either from an existing structure or from TeX syntax.

\frac{x^7 \left[X,Y\right] + 3\left|\frac{A}{B}\right>} {\left\{\frac{a_k+b_k}{k!}\right\}^5}+ \int_a^b \frac{\left(b-t\right)^{n+1}}{n!} dt+ \left(\begin{matrix} \frac{1}{2} & 5 \\ -1 & a+b \end{matrix}\right)

List of currently supported elements:
- Fractions (\frac)
- Subscripts and superscripts (_ and ^)
- Delimiters (\left and \right)
- Sums, products and integrals (\sum, \prod and \int)
- Vectors (\vec) and limits (\lim)
- Square roots (\sqrt)
- Matrices (\begin{matrix} ... \end{matrix})

Features that are partially implemented (and what is left to finish them) are listed in the TODO.md file, along with more features to come.

First specify the platform you want to use:
- cli is for command-line tests, with no visualization (PC)
- sdl2 is an SDL interface with visualization (PC)
- fx9860g builds the library for fx-9860G targets (calculator)
- fxcg50 builds the library for fx-CG 50 targets (calculator)

For calculator platforms, you can use --toolchain to specify a different toolchain than the default sh3eb and sh4eb. The install directory of the library is guessed by asking the compiler; you can override it with --prefix. Example for an SDL setup:

% ./configure --platform=sdl2

Then you can make the program, and if it's a calculator library, install it. You can later delete Makefile.cfg to reset the configuration, or just reconfigure as needed.

% make
% make install   # fx9860g and fxcg50 only

Before using the library in a program, a configuration step is needed. The library does not have drawing functions and instead requires that you provide some, namely:
- TeX_intf_pixel
- TeX_intf_line
- TeX_intf_size
- TeX_intf_text

The three rendering functions are available in fxlib; for monospaced fonts the fourth can be implemented trivially. In gint, the four can be defined as wrappers for dpixel(), dline(), dsize() and dtext().

The type of formulae is TeX_Env. To parse and compute the size of a formula, use the TeX_parse() function, which returns a new formula object (or NULL if a critical error occurs). The second parameter, display, is set to non-zero to use display mode (similar to \[ .. \] in LaTeX) or zero to use inline mode (similar to $ .. $ in LaTeX).

char *code = "\\frac{x_7}{\\left\\{\\frac{\\frac{2}{3}}{27}\\right\\}^2}";
struct TeX_Env *formula = TeX_parse(code, 1);

The size of the formula can be queried through formula->width and formula->height. To render, specify the location of the top-left corner and the drawing color (which will be passed to all primitives):

TeX_draw(formula, 0, 0, BLACK);

The same formula can be drawn several times. When it is no longer needed, free it with TeX_free():

TeX_free(formula);
Note: Cross-posted on Physics SE. So I have to show that a superoperator $\$$ cannot increase relative entropy, using the monotonicity of relative entropy: $$S(\rho_A || \sigma_A) \leq S(\rho_{AB} || \sigma_{AB}).$$ What I have to prove: $$S(\$\rho|| \$ \sigma) \leq S(\rho || \sigma).$$ Now the hint is that I should use the unitary representation of the superoperator $\$$. I know that we can represent $ \$ \rho = \sum_i M_i \rho M_i^{\dagger} $ with $\sum_i M_i^{\dagger} M_i = I$. Now I am able to write out $S(\$\rho|| \$\sigma)$ in this notation, but that doesn't bring me any further. Does anyone have an idea how to show this in the way that the question hints at? I already read the original paper of Lindblad, but it doesn't help me (he does it another, special way). Any clues on how to do this?
SIMPLIFY:
\[\large \frac{p^{2}+2pq+q^{2}}{p^{3}-pq^{2}+p^{2}q-q^{3}}\]
Step 1: factorize the numerator and the denominator.
\[\large \frac{(p+q)(p+q)}{p(p^{2}-q^{2})+q(p^{2}-q^{2})}\]
\[\large \frac{(p+q)(p+q)}{(p+q)(p+q)(p-q)}\]
Step 2: cancel the common factors.
\[\large \frac{1}{(p-q)}\]

All prime numbers less than ten are arranged in descending order to form a number.
(a) Write down the number formed. (1 mark)
The primes less than ten are 2, 3, 5 and 7, so the number formed is
\[\large 7532\]
(b) State the total value of the second digit in the number formed in (a) above. (1 mark)
The second digit is 5, in the hundreds place, so its total value is
\[\large 500\]

WITHOUT USING MATHEMATICAL TABLES OR A CALCULATOR, EVALUATE:
\[\large \frac{\sqrt[3]{675\times 135}}{\sqrt{2025}}\]
Step 1: find the prime factors of each number. This step will help find the cube root of the numerator and the square root of the denominator.
\[\large \frac{\sqrt[3]{3^{3}\times 5^{2}\times 3^{3}\times 5}}{\sqrt{3^{4}\times 5^{2}}}\]
Step 2: take the cube root and the square root, then compute the answer.
\[\large \frac{3^{2}\times 5}{3^{2}\times 5}\]
\[\large = 1\]

What are natural numbers? Natural numbers, also called counting numbers, are the numbers 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...

Place value: this is the position of a digit in a number.

Importance of place value: place value helps students understand the values of digits within a number, such as computing the total value of a digit, rounding off numbers, changing numbers from figures to words and vice versa, and performing operations on numbers. This is essential in counting as applied in science, real-life situations, mathematics, business, accounting, etc.

Example 1: What is the place value of 6 in the number 346789? Thousands.

Exercises:
What is the place value of 5 in the number 524239? Hundred Thousands
What is the place value of 1 in the number 721? Ones
State the place value of the digit 8 in each of the following numbers:
(a) 1689: Tens
(b) 4008772: Thousands
(c) 2847246: Hundred Thousands
(d) 184392649: Ten Millions
(e) 281199300505: Ten Billions

MATHEMATICS FORM 1 EXAMINATION PAPERS TERM 1: QUESTIONS AND ANSWERS (opener/tune-up exams, mid-term exams and end-term exams).
Measures.tex

\section{Passing between probability measures} \label{sec:measures} The goal of this section is to work out bounds for the error arising when passing back and forth between $\unif_k$ and $\ens{k}$, as described in Section~\ref{sec:outline-dist}. Lemma~\ref{lem:distributions} below gives the bounds we need. The reader will not lose much by just reading its statement; the proof is just technical calculations. Before stating Lemma~\ref{lem:distributions} we need some definitions. \ignore{ \begin{definition} Given a set $A \subseteq [k]^n$ and a restriction $(J,x_\barJ)$, we write $A_{x_\barJ}$ for the subset of $[k]^{J}$ defined by $A_{x_\barJ} = \{y \in [k]^J : (x_{\barJ}, y_J) \in A\}$. \end{definition}} \begin{definition} \label{def:r4r} For $0 \leq \ppn \leq 1$, we say that $J$ is a \emph{$\ppn$-random subset} of $[n]$ if $J$ is formed by including each coordinate $i \in [n]$ independently with probability $\ppn$. Assuming $r \leq n/2$, we say that $J$ is an \emph{$[r,4r]$-random subset} of $[n]$ if $J$ is a $\ppn$-random subset of $[n]$ conditioned on $r \leq \abs{J} \leq 4r$, with $\ppn = 2r/n$. \end{definition} \begin{definition} A \emph{distribution family} $(\distra^m)_{m \in \N}$ (over $[k]$) is a sequence of probability distributions, where $\distra^m$ is a distribution on $[k]^m$. In this paper the families we consider will either be the equal-(nondegenerate-)slices family $\distra^m = \ens{k}^m$ or $\distra^m = \eqs{k}^m$, or will be the product distributions based on a single distribution $\prd$ on $[k]$, $\distra^m = \prd^{\otimes m}$. \end{definition} \begin{lemma} \label{lem:distributions} Let $(\distra^m)$ and $(\distrb^m)$ be distribution families. Assume $2 \ln n \leq r \leq n/2$. Let $J$ be an $[r,4r]$-random subset of $[n]$, let $x$ be drawn from $[k]^{\barJ}$ according to $\distra^{\abs{\barJ}}$, and let $y$ be drawn from $[k]^J$ according to $\distrb^{\abs{J}}$. The resulting distribution on the composite string $(x,y) \in [k]^n$ has total variation distance from $\distra^n$ which can be bounded as follows: \begin{enumerate} \item (Product to equal-slices.) \label{eqn:distrs-prd-eqs} If $\distra^m = \prd^{\otimes m}$ and $\distrb^m = \eqs{\ell}^m$ for $\ell \leq k$, the bound is \noteryan{You know, we only need this result for the uniform distribution, in which case we can bound the below by the simpler $2k \cdot r/\sqrt{n}$.} \[ \left(2{\textstyle \sqrt{\frac{1}{\min(\prd)}-1}}+2\right) \cdot r / \sqrt{n}. \] \item (Equal-slices to product.) \label{eqn:distrs-eqs-prd} If $\distra^m = \eqs{k}^m$ and $\distrb^m = \prd^{\otimes m}$, the bound is $4k \cdot r/\sqrt{n}$, independent of $\prd$. \item (Equal-slices to equal-slices.) \label{eqn:distrs-eqs-eqs} If $\distra^m = \eqs{k}^m$ and $\distrb^m = \eqs{\ell}^m$ for $\ell \leq k$, the bound is $4k \cdot r/\sqrt{n}$. \end{enumerate} \end{lemma} Although Lemma~\ref{lem:distributions} involves the equal-slices distribution, one can convert to equal-nondegenerate-slices if desired using Proposition~\ref{prop:degen}. Since $\eqs{k}^n$ is a mixture of product distributions (Proposition~\ref{prop:eqs-mix}), the main work in proving Lemma~\ref{lem:distributions} involves comparing product distributions. \subsection{Comparing product distributions} \begin{definition} For $\distra$ and $\distrb$ probability distributions on $\Omega^n$, the \emph{$\chi^2$ distance} $\dchi{\distra}{\distrb}$ is defined by \[ \dchi{\distra}{\distrb} = \sqrt{\Varx_{x \sim \distra}\left[\frac{\distrb[x]}{\distra[x]}\right]}.
\] Note that $\dchi{\distra}{\distrb}$ is \emph{not} symmetric in $\distra$ and $\distrb$. \end{definition} The $\chi^2$ distance is introduced to help us prove the following fact: \begin{proposition} \label{prop:mix-distance} Let $\prd$ be a distribution on $\Omega$ with full support; i.e., $\min(\prd) \neq 0$. Suppose $\prd$ is slightly mixed with $\distrb$, forming $\wh{\prd}$; specifically, $\wh{\prd} = (1-\ppn) \prd + \ppn \distrb$. Then the associated product distributions $\prd^{\otimes n}$, $\wh{\prd}^{\otimes n}$ on $\Omega^{n}$ satisfy \[ \dtv{\prd^{\otimes n}}{\wh{\prd}^{\otimes n}} \leq \dchi{\prd}{\distrb} \cdot \ppn \sqrt{n}. \] \end{proposition} \begin{proof} It is a straightforward consequence of Cauchy-Schwarz (see, e.g.~\cite[p.\ 101]{Rei89})\noteryan{This is the part using $\min(\prd) \neq 0$, by the way.} that \[ \dtv{\prd^{\otimes n}}{\wh{\prd}^{\otimes n}} \leq \dchi{\prd}{\wh{\prd}} \cdot \sqrt{n}, \] and the identity $\dchi{\prd}{\wh{\prd}} = \ppn \cdot \dchi{\prd}{\distrb}$ follows easily from the definitions. \end{proof} This can be bounded independently of $\distrb$, as follows: \begin{corollary} \label{cor:mix-distance} In the setting of Proposition~\ref{prop:mix-distance}, \[ \dtv{\prd^{\otimes n}}{\wh{\prd}^{\otimes n}} \leq \sqrt{{\textstyle \frac{1}{\min(\prd)}} - 1} \cdot \ppn \sqrt{n}. \] \end{corollary} \begin{proof} It is easy to check that the distribution $\distrb$ maximizing $\dchi{\prd}{\distrb}$ is the one putting all its mass on the $x$ minimizing $\prd[x]$. In this case one calculates $\dchi{\prd}{\distrb} = \sqrt{\frac{1}{\min(\prd)} - 1}$. \end{proof} \subsection{Proof of Lemma~\ref{lem:distributions}} \begin{definition} \label{def:compos-distr} Let $0 \leq \ppn \leq 1$ and let $(\distra^m)$, $(\distrb^m)$ be distribution families. Drawing from the \emph{$(\ppn, \distra, \distrb)$-composite distribution} on $[k]^n$ entails the following: $J$ is taken to be a $\ppn$-random subset of~$[n]$; $x$ is drawn from $[k]^{\barJ}$ according to $\distra^{\abs{\barJ}}$; and, $y$ is drawn from $[k]^J$ according to $\distrb^{\abs{J}}$. We sometimes think of this distribution as just being a distribution on composite strings $z = (x, y) \in [k]^n$. \end{definition} Note that the distribution described in Lemma~\ref{lem:distributions} is very similar to the $(\ppn, \distra, \distrb)$-composite distribution, except that it uses an $[r, 4r]$-random subset rather than a $\ppn$-random subset. We can account for this difference with a standard Chernoff (large-deviation) bound:\noteryan{Citation needed?} \begin{fact} \label{fact:dev} If $J$ is a $\ppn$-random subset of $[n]$ with $\ppn = 2r/n$ as in Definition~\ref{def:r4r}, then $r \leq \abs{J} \leq 4r$ holds except with probability at most $2\exp(-r/4)$. \end{fact} The utility of using $\ppn$-random subsets in Definition~\ref{def:compos-distr} is the following observation: \begin{fact} If $\prd$ and $\distrb$ are distributions on $[k]$, thought of also as product distribution families, then the $(\ppn, \prd, \distrb)$-composite distribution on $[k]^n$ is precisely the product distribution $\wh{\prd}^{\otimes n}$, where $\wh{\prd}$ is the mixture distribution $(1-\ppn)\prd + \ppn \distrb$ on $[k]$. \end{fact} Because of this, we can use Corollary~\ref{cor:mix-distance} to bound the total variation distance between $\prd^{\otimes n}$ and a composite distribution.
We conclude: \begin{proposition} \label{prop:prod-composite} Let $\prd$ and $\distrb$ be any distributions on $[k]$, thought of also as product distribution families. Writing $\wt{\prd}$ for the $(\ppn,\prd,\distrb)$-composite distribution on strings in $[k]^n$, we have \[ \dtv{\prd^{\otimes n}}{\wt{\prd}} \leq {\textstyle \sqrt{\frac{1}{\min(\prd)}-1}} \cdot \ppn \sqrt{n}. \] \end{proposition} Recall that for any $\ell \leq k$, the equal-slices distribution $\eqs{\ell}^{m}$ on $m$ coordinates is a mixture of product distributions $\spac^{\otimes m}$ on $[k]^m$. We can therefore average Proposition~\ref{prop:prod-composite} over $\distrb$ to obtain: \begin{proposition} \label{prop:prod-eqs} If $\wt{\pi}$ denotes the $(\ppn,\pi,\eqs{\ell})$-composite distribution on strings in $[k]^n$, where $\ell \leq k$, then we have \[ \dtv{\pi^{\otimes n}}{\wt{\pi}} \leq {\textstyle \sqrt{\frac{1}{\min(\pi)}-1}} \cdot \ppn \sqrt{n}. \] \end{proposition} Here we have used the following basic bound, based on the triangle inequality: \begin{fact} \label{fact:tv-mix} Let $(\distrb_\kappa)_{\kappa \in K}$ be a family of distributions on $\Omega^n$, let $\varsigma$ be a distribution on $K$, and let $\overline{\distrb}$ denote the associated mixture distribution, given by drawing $\kappa \sim \varsigma$ and then drawing from $\distrb_\kappa$. Then \[ \dtv{\distra}{\overline{\distrb}} \leq \Ex_{\kappa \sim \varsigma}[\dtv{\distra}{\distrb_\kappa}]. \] \end{fact} If we instead use this fact to average Proposition~\ref{prop:prod-composite} over $\prd$, we can obtain: \begin{proposition} \label{prop:eqs-prod} Let $\distrb$ be any distribution on $[k]$. Writing $\distra$ for the $(\ppn, \eqs{k}, \distrb)$-composite distribution on strings in $[k]^n$, we have \[ \dtv{\eqs{k}^n}{\distra} \leq (2k-1)\ppn \sqrt{n}. \] \end{proposition} \begin{proof} Thinking of $\eqs{k}^m$ as the mixture of product distributions $\spac^{\otimes m}$, where $\spac$ is a random spacing on $[k]$, Fact~\ref{fact:tv-mix} and Proposition~\ref{prop:prod-composite} imply \[ \dtv{\eqs{k}^n}{\distra} \leq \Ex_{\spac}\left[{\textstyle \sqrt{\frac{1}{\min(\spac)}-1}}\right] \cdot \ppn \sqrt{n}. \] We can upper-bound the expectation\noteryan{Undoubtedly someone has worked hard on this $-1/2$th moment of the least spacing before (Devroye '81 or '86 perhaps), but I think it's probably okay to do the following simple thing here} by \begin{multline*} \Ex_{\spac}\left[{\textstyle \sqrt{\frac{1}{\min(\spac)}}}\right] \quad=\quad \int_{0}^\infty \Pr_{\spac}\left[{\textstyle \sqrt{\frac{1}{\min(\spac)}}} \geq t\right]\,dt \quad=\quad \int_{0}^\infty \Pr_{\spac}[\min(\spac) \leq 1/t^2]\,dt \\ \leq\quad k + \int_{k}^\infty \Pr_{\spac}[\min(\spac) \leq 1/t^2]\,dt \quad\leq\quad k + \int_{k}^\infty (k(k-1)/t^2) \,dt \quad=\quad 2k-1, \end{multline*} where in the second-to-last step we used Proposition~\ref{prop:rand-min}. \end{proof} Averaging now once more in the second component, we obtain the following: \begin{proposition} \label{prop:eqs-eqs} Let $2 \leq \ell \leq k$ and let $\distra'$ denote the $(\ppn, \eqs{k}, \eqs{\ell})$-composite distribution on strings in $[k]^n$. Then \[ \dtv{\eqs{k}^n}{\distra'} \leq (2k-1) \ppn \sqrt{n}. \] \end{proposition} We can now obtain the proof of Lemma~\ref{lem:distributions}: \begin{proof} The three statements in Lemma~\ref{lem:distributions} essentially follow from Propositions~\ref{prop:prod-eqs}, \ref{prop:eqs-prod}, and \ref{prop:eqs-eqs}, taking $\ppn = 2r/n$. 
This would give bounds of $2{\textstyle \sqrt{\frac{1}{\min(\prd)}-1}} \cdot r / \sqrt{n}$, $(4k-2) \cdot r/\sqrt{n}$, and $(4k-2) \cdot r/\sqrt{n}$, respectively. However, we need to account for conditioning on $r \leq \abs{J} \leq 4r$. By Fact~\ref{fact:dev}, this conditioning increases the total variation distance by at most $2\exp(-r/4)$. Using the lower bound $r \geq 2 \ln n$ from the lemma's hypothesis, this quantity is at most $2r/\sqrt{n}$, completing the proof. \end{proof}
Assuming we have two sets of $n$ qubits, the first set in state $|a\rangle$ and the second in state $|b\rangle$: is there a fixed procedure that generates the superposed state $|a\rangle + |b\rangle$?

Depending on what precisely your assumptions are about $|a\rangle$ and $|b\rangle$, I think this is essentially impossible, and is something called the "no superposing theorem". Please see this paper.

Your question is not quite well defined. First of all, $|a\rangle + |b\rangle$ is not a state. You need to normalize it by considering $\frac{1}{\||a\rangle + |b\rangle\|}(|a\rangle + |b\rangle)$. Secondly, in fact, you don't have access to the states $|a\rangle$ and $|b\rangle$ but only to these states up to some global phase, i.e. you can think that the first register is in the vector-state $e^{i\phi}|a\rangle$ and the second register is in the vector-state $e^{i\psi}|b\rangle$ with inaccessible $\phi, \psi$. Since you don't have access to $\phi, \psi$, you can't define the sum $|a\rangle + |b\rangle$. But you can ask to construct the normalized state $|a\rangle + e^{it}|b\rangle$ for some $t$. This question is well posed. Though, as DaftWullie pointed out in his answer, it is impossible to construct such a state.
Compatible with DokuWiki: no compatibility info given. This extension has not been updated in over 2 years. It may no longer be maintained or supported and may have compatibility issues.

This very simple plug-in makes a connection to the MathTran web service (no longer available), which provides translation of TeX-encoded mathematics to bitmap images. MathTran produces very good-looking math formulae.

<tex>\displaystyle BF_i = \prod_{j=1}^{N_{loci}} { \hat{\nu}_j^{S_{ij}} . (1-\hat{\nu}_j)^{(1-S_{ij})} \over \hat{\theta}_j^{S_{ij}} . (1-\hat{\theta}_j)^{(1-S_{ij})} }</tex>

gives a rendered image of the formula.

Images are re-built each time the page is accessed. This is no big deal since the MathTran web service is quite responsive.

It would be better if we could use some parameters to control the alignment of the image. For example, <tex mediacenter>1+2</tex>, which would generate code like <a alt="tex:1+2" class="mediacenter">. And if the image could be cached, the plugin could also be used in an offline/personal wiki.
The concept of matrix rigidity was first introduced by Valiant in [Val77]. Roughly speaking, a matrix is rigid if its rank cannot be reduced significantly by changing a small number of entries. There has been extensive interest in rigid matrices as Valiant showed that rigidity can be used to prove arithmetic circuit lower bounds. In a surprising result, Alman and Williams showed that the (real valued) Hadamard matrix, which was conjectured to be rigid, is actually not very rigid. This line of work was extended by [DE17] to a family of matrices related to the Hadamard matrix, but over finite fields. In our work, we take another step in this direction and show that for any abelian group $G$ and function $f:G \rightarrow \mathbb C$, the matrix given by $M_{xy} = f(x - y)$ for $x,y \in G$ is not rigid. In particular, we get that complex valued Fourier matrices, circulant matrices, and Toeplitz matrices are all not rigid and cannot be used to carry out Valiant's approach to proving circuit lower bounds. This complements a recent result of Goldreich and Tal who showed that Toeplitz matrices are nontrivially rigid (but not enough for Valiant's method). Our work differs from previous non-rigidity results in that those works considered matrices whose underlying group of symmetries was of the form ${\mathbb F}_p^n$ with $p$ fixed and $n$ tending to infinity, while in the families of matrices we study, the underlying group of symmetries can be any abelian group and, in particular, the cyclic group ${\mathbb Z}_N$, which has very different structure. Our results also suggest natural new candidates for rigidity in the form of matrices whose symmetry groups are highly non-abelian. We are also able to extend these results to matrices with entries in a finite field, assuming sufficiently large dimension. Our proof has four parts. The first extends the results of [AW16,DE17] to generalized Hadamard matrices over the complex numbers via a new proof technique. The second part handles the $N \times N$ Fourier matrix when $N$ has a particularly nice factorization that allows us to embed smaller copies of (generalized) Hadamard matrices inside of it. The third part uses results from number theory to bootstrap the non-rigidity for these special values of $N$ and extend to all sufficiently large $N$. The fourth and final part involves using the non-rigidity of the Fourier matrix to show that the group algebra matrix, given by $M_{xy} = f(x - y)$ for $x,y \in G$, is not rigid for any function $f$ and abelian group $G$.
It is not exactly equal to $x$ except when $x=0$. What was likely meant is that if $|x|$ is not far from $0$, then $-\log(1-x)$ is approximately equal to $x$. One formal way of putting it is that $$\lim_{x\to 0} \frac{-\log(1-x)}{x}=1.$$ Take for example $x=0.1$. The calculator gives that $-\log(0.9)\approx 0.1053605$, so for $x=0.1$, $-\ln(1-x)$ is indeed fairly close to $x$. The second term of the Taylor series tells you roughly how big the error is when you approximate $-\log(1-x)$ by $x$. Note that if $x=0.1$, then $x^2/2=0.005$. The actual error is approximately $0.0053605$. Sometimes, in modelling physical situations, when we know that $|x|$ is close to $0$, we replace $-\ln(1-x)$ by $x$. That does not mean that the two are strictly equal, only that for our purposes the approximation is good enough. The equation of the tangent line to $y=-\log(1-x)$ at $x=0$ turns out to be $y=x$. Recall that the tangent line to $y=f(x)$ at $x=a$ kisses the curve $y=f(x)$ at $x=a$. So if $x$ is near $0$, then $-\log(1-x)$ is very close to $x$. We call $x$ the linear approximation to $y=-\log(1-x)$ near $x=0$. Similarly, $x+\frac{x^2}{2}$ is the quadratic approximation. Added: Your edit shows some uncertainty about the inequalities $$x\leq \int_{1-x}^{1} \frac{dt}{t} \leq \dfrac{1}{1-x} x.$$ That uncertainty is reasonable, particularly for negative $x$. For geometric clarity we should treat the cases $x>0$ and $x<0$ separately. First assume that $x$ is positive. On our interval from $1-x$ to $1$, the function $\dfrac{1}{t}$ is always $\ge 1$. So the integral (area) is bigger than or equal to $1$ times the length of the interval, which is $x$. That proves the inequality on the left. (It is good to draw a picture.) Similarly, on our interval, $\dfrac{1}{t}$ is always less than or equal to $\dfrac{1}{1-x}$. So the integral is less than or equal to $\dfrac{1}{1-x}$ times the length of the interval. That yields the inequality on the right. Next assume that $x$ is negative. It is all too easy to make mistakes with negative numbers, so we temporarily let $w=-x$. Then $w$ is positive. We are looking at the integral from $1-x$ to $1$, so from $1+w$ to $1$. This is going the wrong way. But $$\int_{1+w}^1 \frac{dt}{t}=-\int_1^{1+w}\frac{dt}{t}.$$ By an argument close to the one given above for the case $x>0$, we find that $$\frac{w}{1+w}\le \int_1^{1+w}\frac{dt}{t}\le w.$$ "Multiply" the inequality by $-1$. That reverses the inequality, and we obtain $$-w\le \int_1^{1+w}-\frac{dt}{t}\le -\frac{w}{1+w}.$$ Finally, replace $w$ by $-x$, and interchange the bounds on the integral. We get $$x\leq \int_{1-x}^{1} \frac{dt}{t} \leq \frac{x}{1-x}.$$
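For what it's worth, the $x=0.1$ numbers quoted above are easy to reproduce; here is a quick check in Python:

import math

x = 0.1
exact = -math.log(1 - x)     # 0.10536051565782628, the 0.1053605 quoted above
print(exact - x)             # actual error, ~0.0053605
print(x**2 / 2)              # Taylor estimate of the error: 0.005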
It all depends on your level of risk aversion and degree of intertemporal substitution. Let's assume you are risk neutral. Game is played once: you are willing to pay $6.5 = \sum^{N=12}_{i=1} \frac{1}{12} i$. Game is played 10,000 times: still willing to pay \$6.5 for each game. Card replaced: well, you replace every time your first draw is lower than 6.5. ...

In general, the variance of a portfolio is just $$\sigma_p^2 = \sum_i \sum_j w_i w_j \sigma_i \sigma_j \rho_{ij},$$ which intuitively makes sense since we are summing over all weighted standard deviations and their correlations. Since $w_i = \frac{1}{3}$ and $\sigma_i = \sigma$ for all $i = \{1,2,3\}$, and $\rho_{ij} = 0$ for all $i \neq j$, it simplifies to ...

The optimal investment strategy depends on the investment goals, or equivalently your utility function (which the investment strategy is supposed to maximize). The forward will trade at $\mathbf{E}^*_0(F_T)$ in the market when you invest at $t=0$. If you buy your maximum volume $M$, then the gain/loss at $T$ is given by $M(F_T-\mathbf{E}^*_0(F_T))$ (which is ...

Note that \begin{equation}E\big[e^{\sigma \alpha \sqrt{T} N(0,1)}\big] = e^{\frac{\sigma^2 \alpha^2}{2}T}\end{equation} Hence $F_T(T)^\alpha$ will be a lognormal variable with expected value $F_T(0)^\alpha e^{-\frac{1}{2}\sigma^2T \alpha + \frac{1}{2}\sigma^2 \alpha^2T}$ and log-variance $\sigma^2 \alpha^2 T$. Compare this to the Black formula for ...

The idea is pretty much the same as the one used in the Breeden-Litzenberger result. You'll find many questions related to this here already; see e.g.: Prove that the butterfly condition is always greater than zero. The current value of the derivative is the discounted expected value. \begin{equation}D_0 = e^{-r T} \int_K^\infty (x - K)^2 f(x) \mathrm{d}x,...

a.) The market capitalization is $m_{cap} = 100*\$1.50 + 150*\$2.0 = \$150 + \$300 = \$450$, so the weights of the assets are $1/3$ and $2/3$ respectively in the market portfolio. You don't need to find the minimum variance portfolio. If you plug in these values you get exactly $E[r_m] = 1/3*0.15 + 2/3*0.12 = 0.13.$ b.) The formula is wrong, as you multiply ...

If the game is played in exactly the way you stated it, why would you ever bet more than 1 dollar? Assuming you bet \$1, then you get \$1 x the value on the card. And if you bet \$12, you get \$1 x the value on the card. What's the point of betting more than 1 dollar?

To elaborate on my comment: with respect to questions 1 and 2, the distribution of the payoff for one game is discrete uniform with a mean of 6.5 and an SD of about 3.5 according to my calculations. Now, if you are guaranteed to play this 10,000 times, then you are entitled to consider the distribution of the sum of the outcomes, which by the Central Limit ...

Because instantaneous variance can be written as follows: $V \left[ dS_t\right]=E\left[ \left( dS_t -E\left[dS_t\right] \right)^2\right]$, $V \left[ dS_t\right]=E\left[ \left( dS_t -f \, dt \right)^2\right]$, $V \left[ dS_t\right]=E\left[ \left( g \, dW_t \right)^2\right]=g^2dt$. Which is the same thing as: $V \left[ dS_t\right]=E\left[ dS_t dS_t\right]=g^...

This is pretty standard fare for a Stats 101 course, so as to rationale, etc. you might benefit from picking up a textbook or otherwise doing some reading on this. In brief though, hypothesis testing allows us to assess the likelihood that sample estimates are different than theorized values in the absence of actual population values. In the cases above, with a ...
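As a side note, the mean and standard deviation quoted above for a single draw of a card numbered 1 to 12 can be checked directly (a minimal sketch in Python):

import statistics

values = range(1, 13)
print(statistics.mean(values))    # 6.5, the fair price for one game
print(statistics.pstdev(values))  # ~3.452, the "about 3.5" quoted above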
The standard formula for the Capital Asset Pricing Model is: \begin{equation}\bar{r} = r_f + \beta \cdot ( \bar{r_m} - r_f) \quad (1)\end{equation} in which: \begin{equation}\bar{r} \textit{ - expected return of an asset}\end{equation} \begin{equation}\ r_f \textit{ - risk-free rate}\end{equation} \begin{equation}\beta \textit{ - beta of an asset}\end{...

Find the conditions under which: $E_{0}^{*}[\max (P_{T} - HR\times G_T, 0)] = \max (P_{0} - HR\times G_0, 0)$. We have a no-brainer solution - the condition that the drift and volatility of both $P$ and $G$ are zero, which means $P$ and $G$ are constants in time. Second valid condition - the option is deep in the money or deep out of the money, such that ...
We show that Gallager's ensemble of Low-Density Parity Check (LDPC) codes achieves list-decoding capacity. These are the first graph-based codes shown to have this property. Previously, the only codes known to achieve list-decoding capacity were completely random codes, random linear codes, and codes constructed by algebraic (rather than combinatorial) techniques. ...

We continue the study of list recovery properties of high-rate tensor codes, initiated by Hemenway, Ron-Zewi, and Wootters (FOCS'17). In that work it was shown that the tensor product of an efficient (poly-time) high-rate globally list recoverable code is {\em approximately} locally list recoverable, as well as globally list recoverable ...

For a vector space $\mathbb{F}^n$ over a field $\mathbb{F}$, an $(\eta,\beta)$-dimension expander of degree $d$ is a collection of $d$ linear maps $\Gamma_j : \mathbb{F}^n \to \mathbb{F}^n$ such that for every subspace $U$ of $\mathbb{F}^n$ of dimension at most $\eta n$, the image of $U$ under all the maps, $\sum_{j=1}^d \dots$ ...
I'm trying to determine the stationary states and the corresponding energies. My system has an angular momentum given by the quantum number $l=1$, and the eigenvectors of $L_z$ are given as $|+1\rangle,|0\rangle,|-1\rangle$ and form a basis. (Thank you for the input @ZeroTheHero.) $$|+1\rangle=\begin{pmatrix} 1 \\ 0 \\ 0\end{pmatrix},|0\rangle=\begin{pmatrix} 0 \\ 1 \\ 0\end{pmatrix},|-1\rangle=\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} $$ The Hamiltonian is $$H = \frac {\hbar w}{\sqrt2} \begin{pmatrix} 0 & 1 &0\\ 1 & 0 & 1\\ 0 & 1 & 0\\ \end{pmatrix}$$ When calculating the eigenvalues (energies) of the stationary states, I first conclude that the Hamiltonian eigenvalues should be identical to those of $L_z$. This would mean that I can write $$ |n\rangle=\psi _{n} $$ $$H \psi_n (x) = E_n\psi_n (x)$$ So for $\psi_{+1}$, I would get $$ H \psi_{+1}=\frac{\hbar w}{\sqrt 2} \begin{pmatrix} 0 & 1 &0\\ 1 & 0 & 1\\ 0 & 1 & 0\\ \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 0\end{pmatrix} = \frac{\hbar w}{\sqrt 2} \begin{pmatrix} 0 \\ 1 \\ 0\end{pmatrix} = E_n \psi_{+1}$$ But this doesn't appear to give me the correct solution.
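A quick numerical check, not part of the original question, shows where the assumption breaks down: $|+1\rangle$ is not an eigenvector of this $H$ ($H$ and $L_z$ do not commute), and the actual energies come out of diagonalizing $H$ directly. Units with $\hbar w = 1$ are assumed.

import numpy as np

H = (1 / np.sqrt(2)) * np.array([[0., 1., 0.],
                                 [1., 0., 1.],
                                 [0., 1., 0.]])   # in units of hbar*w
ket_p1 = np.array([1., 0., 0.])
print(H @ ket_p1)              # (0, 1/sqrt(2), 0): not proportional to |+1>

energies, states = np.linalg.eigh(H)
print(energies)                # -1, 0, +1, i.e. -hbar*w, 0, +hbar*w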
Whenever you talk about "validity" of a theory/equation in physics, you must also specify the length scales being probed. This is important because physics at different scales is different, with different degrees of freedom and different characteristics. This is why we don't need to worry about Heisenberg's uncertainty principle and the insanely complicated quantum dynamics of air molecules when designing an aircraft; Newton's laws and fluid dynamics work pretty well. This means that a theory $(T_A)$ that is used to explain phenomena at one particular length scale $(L_A)$ should not be used to explain phenomena at a different length scale $(L_B)$. We say that the theory $T_A$ is effective for length scales of the order $L_A$ while another theory $T_B$ is effective for length scales of the order $L_B$. In this sense, every physical theory is an effective field theory. General relativity (GR) is an effective field theory too. The typical way to solve problems in physics is to do perturbation theory. If we take the Einstein-Hilbert action $$ S_{EH} = \int d^4 x \ M_P^2 \sqrt{-g} R \qquad \qquad M_P=\text{Planck mass}$$ and expand it around a background, say flat space ($\bar{g}_{\mu \nu} = \eta_{\mu \nu}$), using the perturbative expansion $g_{\mu \nu} = \bar{g}_{\mu \nu} + h_{\mu \nu}$, we get (schematically) $$ S_{EH} = \int d^4 x \ M_P^2 \left(h \partial^2 h +h (h \partial^2 h) + h^2(h \partial^2 h) + \cdots \right) $$ where we have ignored total derivative terms. Indices are suppressed for clarity. The kinetic term should not have coupling, so canonically normalizing the graviton field $h_{\mu \nu} \to \frac{h_{\mu \nu}}{M_P}$ gives an infinite series of graviton-graviton vertices suppressed by appropriate powers of $M_P$: $$ S_{EH} = \int d^4 x \ \left(h \partial^2 h + \frac{h}{M_P} (h \partial^2 h) + \frac{h^2}{M_P^2} (h \partial^2 h) + \cdots \right) $$ Note the crucial negative power of coupling $M_P$ in interactions. Fourier transforming $\partial^2 \to -k^2$, we see that the perturbative expansion works well for energy scales $k \ll M_P$. As we probe extremely short length scales around Planck $\sim 10^{-35} m$, momenta scale like $k \sim M_P$, and all interaction terms in the perturbative expansion become equally important. The theory becomes strongly coupled and leads to a breakdown of perturbation theory. GR becomes non-perturbative for $k \gtrsim M_P$. If we use just perturbation theory, we don't know what happens to GR for Planck scale energies: GR is non-renormalizable. Actually, pure GR is renormalizable at one loop (in four dimensions, because of Gauss-Bonnet identity) as was shown by 't Hooft and Veltman. They showed, however, that gravity coupled to scalar matter is non-renormalizable at one loop with the counterterm Lagrangian $$ \Delta \mathcal{L} \sim \frac{1}{\epsilon} \sqrt{-g} \ R^2$$ Goroff and Sagnotti finally showed that pure GR is non-renormalizable at two loops, where the counterterm Lagrangian goes like $$ \Delta \mathcal{L} \sim \frac{1}{\epsilon} \sqrt{-g} \ R_{ab}^{\ \ c d} R_{cd}^{\ \ ef} R_{ef}^{\ \ ab}$$ The original Eisntein-Hilbert Lagrangian doesn't contain these terms: GR is non-renormalizable. But these higher derivative curvatures force us to include them in the original Lagrangian if we are to make perturbative (also ambitiously unitary) sense of quantum field theory for gravity at Planck length scales. 
This is also step $1$ of constructing an effective field theory (EFT): given a Lagrangian, add all possible terms consistent with symmetries of the theory and arrange them in an energy expansion, that is, a sum of higher and higher dimensional operators, suppressed by higher powers of some cutoff scale $\Lambda$. $\Lambda$ is where the EFT breaks down and you need a UV completion (string theory?) to go beyond. Taking these ideas into consideration, we construct, by hand, an effective field theory of gravity with the action $$ S = \int d^4 x \sqrt{-g} \ M_P^2 R + c_1 R^2 + c_2 R_{\mu \nu} R^{\mu \nu} + c_3 R_{\mu \nu \rho \sigma} R^{\mu \nu \rho \sigma} + \text{higher order terms} $$ Note that $c_1, c_2, c_3$ are dimensionless and go like $\sim \frac{M_P^2}{M_s^2}$. $M_s$ is the energy scale where these higher derivative corrections become important. Doing again a perturbative expansion and canonically normalizing, the Einstein-Hilbert term (in Fourier space) goes like $k^2$ suppressed by higher and higher powers of $M_P$. The higher curvature terms go like $k^4$ suppressed by higher and higher powers of $M_P$ and $M_s$. For low enough momenta $k \ll M_s$, these higher curvature terms are severely suppressed and not important compared to the Einstein-Hilbert term. So although we expect these higher curvature terms to be there (also predicted by string theory), they don't affect our daily life on earth. For length scales on earth, the Newtonian potential is enough to explain physics. But for black holes and the singularities at the beginning of the universe, it's been a puzzle for decades now. And perhaps many more to come. Addendum: Perhaps the one higher curvature correction that makes an interesting appearance, useful not just for theoretical studies but also for cosmological observations, is the $R^2$ term in the form of Starobinsky inflation.
The way I calculate it is by summing up the weighted betas of the longs and shorts, but I saw a table where this wasn't the case, so I am wondering if this is not the correct way. You can actually show by construction that the beta of the portfolio is the weighted sum of all the underlyings' betas. Assume the return of the benchmark and some asset $a$ at time $t$ are respectively denoted $r_{b,t}$ and $r_{a,t}$; then the beta of a given asset is defined by: $$r_{a,t} = \alpha_a + \beta_a r_{b,t} + \epsilon_{a,t}$$ Let's assume you have a portfolio of $n$ assets $(a_1, ..., a_i, ..., a_n)$ each with a weight $w_i$; then the return of the portfolio at time $t$ is defined as: $$r_{p,t} = \sum_{i=1}^n w_i r_{a_i,t}$$ Now, by expressing each asset's return in terms of its own beta, you get: $$ \begin{align} r_{p,t} &= \sum_{i=1}^n w_i r_{a_i,t}\\ &= \sum_{i=1}^n w_i \left( \alpha_{a_i} + \beta_{a_i} r_{b,t} + \epsilon_{a_i,t} \right)\\ &= \underbrace{\sum_{i=1}^n w_i \alpha_{a_i}}_{\alpha_p} + \underbrace{\left(\sum_{i=1}^n w_i \beta_{a_i} \right)}_{\beta_p} r_{b,t} +\sum_{i=1}^n w_i \epsilon_{a_i,t} \end{align} $$ There might be something in the table that you missed (likely the weights, as Alex C pointed out) or maybe it was wrong.
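A small simulation (an illustrative sketch, with made-up weights and betas) confirms the algebra: the regression beta of the portfolio's returns equals the weighted sum of the individual betas, up to noise from the $\epsilon$ terms.

import numpy as np

rng = np.random.default_rng(0)
T = 100_000
r_b = rng.normal(0.0, 0.01, T)                 # benchmark returns
betas = np.array([0.8, 1.3, -0.5])             # assumed asset betas (illustrative)
w = np.array([0.6, 0.6, -0.2])                 # long/short weights
eps = rng.normal(0.0, 0.002, (3, T))           # idiosyncratic terms (alphas = 0)

r_p = w @ (betas[:, None] * r_b + eps)         # portfolio returns
beta_p = np.polyfit(r_b, r_p, 1)[0]            # regression slope of r_p on r_b
print(beta_p, w @ betas)                       # the two agree up to noise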
Note: Cross-posted on Physics SE. The quantum Singleton bound states that for an error-correcting code with $n$ physical qubits and $k$ encoded qubits, and some subsystem $R$ of $m$ qubits that can 'access the entire quantum code', it is necessary that $m \ge \frac{n+k}{2}$. As I understand (from section 4.3 of Harlow's TASI notes), one way to state the condition for 'accessing the entire code' is the Knill-Laflamme condition, which is the following. Let $\bar{R}$ denote the complement of $R$, $\mathscr{H}_\bar{R}$ be the space of operators supported on $\bar{R}$, and $P$ denote the projection matrix onto the code subspace $\mathscr{H}_{code}$. Then for any operator $O_{\bar{R}} \in \mathscr{H}_\bar{R}$, $P O_{\bar{R}} P = \lambda P$, where $\lambda$ is some constant that depends on the operator $O_{\bar{R}}$. This means that an operator supported on the complement region $\bar{R}$ has no effect on measurements on $\mathscr{H}_{code}$. I'm confused because this does not seem compatible with the toric code: it can be shown that in the toric code (where the number of encoded qubits is $k=2$), the Knill-Laflamme condition is satisfied for $\bar{R}$ being any contractible region of qubits, i.e. for $R$ containing the union of two distinct nontrivial cycles on the torus. In this case, on a torus of size $L \times L$, we will have the number of physical qubits being $n = L^2$ and the number of qubits needed to access the code being $m = 2L$. So it seems that the Singleton bound is explicitly violated. Where does the logic I'm presenting fail, and why should the toric code be compatible with this?
Now showing items 1-3 of 3

D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...

ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV (Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y| < 0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...

Net-baryon fluctuations measured with ALICE at the CERN LHC (Elsevier, 2017-11)
First experimental results are presented on event-by-event net-proton fluctuation measurements in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, recorded by the ALICE detector at the CERN LHC. The ALICE detector is well ...
'perturbative' and 'real' particles; the perturbative weight

'perturbative' and 'real' particles

(The following text is taken - slightly modified - from: O. Buss, PhD thesis, pdf, Appendix B.1.)

Reactions which are so violent that they disassemble the whole target nucleus can be treated only by explicitly propagating all particles, the ones in the target and the ones produced in the collision, on the same footing. For reactions which are not violent enough to disrupt the whole target nucleus, e.g. low-energy πA, γA or neutrino-A collisions at not too high energies, the target nucleus stays very close to its ground state. In this case, one keeps as an approximation the phase-space density of the target nucleons constant in time ('frozen approximation'). In GiBUU this is controlled by the switch freezeRealParticles. The test-particles which represent this constant target nucleus are called real test-particles.

However, one also wants to consider the final-state particles. Thus one defines another type of test-particles, which are called perturbative. The perturbative test-particles are produced in a reaction on one of the target nucleons. They are then propagated and may collide with other real ones in the target. The products of such collisions are perturbative particles again. These perturbative particles can thus react with real target nucleons, but may not scatter among themselves. Furthermore, their feedback on the actual densities is neglected. In this way one can simulate the effects of the almost constant target on the outgoing particles without actually modifying the target. E.g. in πA collisions we initialize all initial-state pions as perturbative test-particles. Thus the target automatically remains frozen and all products of the collisions of pions and target nucleons are assigned to the perturbative regime.

Furthermore, since the perturbative particles do not react among themselves or modify the real particles in a reaction, one can also split a perturbative particle into \(N_{test}\) pieces (several perturbative particles) during a run. Each piece is given a corresponding weight \(1/N_{test}\). In this way one simulates \(N_{test}\) possible final-state scenarios of the same perturbative particle during one run.

The perturbative weight 'perWeight'

Usually, in the cases mentioned above, where one uses the separation into real and perturbative particles, one wants to calculate some final quantity like \(d\sigma^A_{tot}=\int_{nucleus}d^3r\int \frac{d^3p}{(2\pi)^3} d\sigma^N_{tot}\,\times\,\dots \). Here we are hiding all medium modifications, as e.g. Pauli blocking, flux corrections or medium modifications of the cross section, in the part "\(\,\times\,\dots \)". Now, solving this via the test-particle ansatz (with \(N_{test}\) being the number of test particles), this quantity is calculated as \(d\sigma^A_{tot}=\frac{1}{N_{test}}\sum_{j=1}^{N_{test}\cdot A}d\sigma^j_{tot}\,\times\,\dots \), with \(d\sigma^j_{tot}\) standing for the cross section of the \(j\)-th test-particle. The internal implementation of calculations like this in GiBUU is that a loop runs over all \(N_{test}\cdot A\) target nucleons and creates some event. Thus all these events have the same probability. But since they should be weighted according to \(d\sigma^j_{tot}\), this is corrected by giving all (final-state) particles coming out of event \(j\) the weight \(d\sigma^j_{tot}\). This information is stored in the variable perWeight in the definition of the particle type.
Thus, in order to get the correct final cross section, one has to sum the perWeights, not count the particles. As an example: if you want to calculate the inclusive pion production cross section, you have to loop over all particles and sum the perWeights of all pions. Simply taking the number of all pions would give false results. The weights can also be negative. This happens, e.g., in the case of pion production on nucleons. In this case the cross section is determined by the square of a coherent sum of resonance and background amplitudes and as such is positive. In the code the resonance contribution is separated out as the square of the resonance amplitude and as such is positive as well. The remainder, i.e. the sum of the square of the background amplitude and the interference term of resonance and background amplitudes, can be negative, however. This latter contribution corresponds to the event types labeled 32 and 33 in the code, which describe the 1π background plus interference.

How to compute cross sections from the perturbative weights for neutrino-induced reactions

The output file FinalEvents.dat contains all the events generated. For each event all the four-momenta of final-state particles are listed, together with the incoming neutrino energy, the perWeight and various other useful properties (see the documentation for FinalEvents.dat). In each event there is one nucleon with perWeight=0 which represents the hit nucleon; for 2p2h processes the second initial nucleon is not written out. The final-state nucleons may have masses which are spread around the physical mass in a very narrow distribution. There are two reasons for that: 1. nucleons may still be inside the potential well and thus have lower masses. These nucleons can be eliminated from the final events file by imposing the condition that they are outside the nuclear potential (the spatial coordinates of all particles are also given in the FinalEvents.dat file). 2. For practical numerical reasons the nucleons are given a Breit-Wigner mass distribution with a width of typically 1 MeV around the physical mass when calculating the QE cross section. As an example we consider here the calculation of the CC inclusive differential cross section dsigma/dE_mu for a neutrino-induced reaction on a nucleus; E_mu is the energy of the outgoing muon. In FinalEvents.dat the lines with the particle number 902 contain all the muon kinematics as well as the perWeight. In order to produce a spectrum one first has to bin the muon energies into energy bins. This binning must preserve the connection between energy and perWeight. Then all the perWeights in a given energy bin are summed and divided by the bin width to obtain the differential cross section. If the GiBUU run used - for better statistics - a number of runs >1 at the same energies, then this number of runs has to be divided out to obtain the final differential cross section. All cross sections in GiBUU, both the precomputed ones and the reconstructed ones, are given per nucleon. The units are \(10^{-38}\,\text{cm}^2\) for neutrinos and \(10^{-33}\,\text{cm}^2\) for electrons.
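To make the recipe concrete, here is a minimal Python sketch of the binning step (an illustration, not part of GiBUU). The particle number 902 for muons is taken from the text above, but the column layout, bin range and number of runs are placeholder assumptions; consult the FinalEvents.dat documentation for the actual format.

import numpy as np

# Assumed (hypothetical) column layout: particle ID in column 2,
# perWeight in column 4, outgoing-lepton energy in column 5.
data = np.loadtxt("FinalEvents.dat")
muons = data[data[:, 2] == 902]          # muon lines (particle number 902)

n_runs = 10                              # number of runs at each energy (assumed)
bins = np.linspace(0.0, 2.0, 41)         # E_mu bins in GeV (assumed range)

# Sum the perWeights in each bin -- never the raw particle counts
sums, _ = np.histogram(muons[:, 5], bins=bins, weights=muons[:, 4])

# dsigma/dE_mu per nucleon, in units of 10^-38 cm^2/GeV for neutrinos
dsigma_dE = sums / np.diff(bins) / n_runs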
In analogy to the electromagnetic tensor, whose components are defined in terms of the electric field $E$ and magnetic field $B$ as follows: $F^{ab} = \begin{bmatrix} 0 & -E_x/c & -E_y/c & -E_z/c \\ E_x/c & 0 & -B_z & B_y \\ E_y/c & B_z & 0 & -B_x \\ E_z/c & -B_y & B_x & 0 \end{bmatrix}$ Can one form a relativistic tensor from the electric displacement field $D$ and magnetizing field $H$ like so: $\bar{F}^{ab} = \begin{bmatrix} 0 & -D_x/c & -D_y/c & -D_z/c \\ D_x/c & 0 & -H_z & H_y \\ D_y/c & H_z & 0 & -H_x \\ D_z/c & -H_y & H_x & 0 \end{bmatrix}$ and thus obtain an "electromagnetic tensor" that can easily be used to handle the effects of fields in materials? Since $D = \epsilon_0 E + P$ and $H = \frac{1}{\mu_0}B - M$, this is equivalent to asking whether $P$ and $M$ can be used to form a relativistic tensor.
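For reference, here is a hedged sketch of the standard construction. Conventions differ between sources, and note the scaling: the conventional entries are $\mp cD_i$ rather than the $\mp D_i/c$ proposed above, so the matrix below is not literally the same $\bar{F}^{ab}$. The polarization and magnetization are packaged into a magnetization-polarization tensor $\mathcal{M}^{ab}$, and the $D$/$H$ object is then the displacement tensor $\mathcal{D}^{ab}$:

$$\mathcal{M}^{ab} = \begin{bmatrix} 0 & cP_x & cP_y & cP_z \\ -cP_x & 0 & -M_z & M_y \\ -cP_y & M_z & 0 & -M_x \\ -cP_z & -M_y & M_x & 0 \end{bmatrix}, \qquad \mathcal{D}^{ab} = \frac{1}{\mu_0}F^{ab} - \mathcal{M}^{ab} = \begin{bmatrix} 0 & -cD_x & -cD_y & -cD_z \\ cD_x & 0 & -H_z & H_y \\ cD_y & H_z & 0 & -H_x \\ cD_z & -H_y & H_x & 0 \end{bmatrix},$$

which one can check entrywise from $D = \epsilon_0 E + P$, $H = B/\mu_0 - M$ and $1/(\mu_0 c) = \epsilon_0 c$.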
Alain Chenciner, Astronomie et Systèmes Dynamiques, IMCCE, Observatoire de Paris & Département de Mathématiques (Jussieu), 75251 Paris Cedex 05, France. Publications:

Chenciner A. Are Nonsymmetric Balanced Configurations of Four Equal Masses Virtual or Real? 2017, vol. 22, no. 6, pp. 677-687. Abstract: Balanced configurations of $N$ point masses are the configurations which, in a Euclidean space of high enough dimension, i.e., up to $2(N - 1)$, admit a relative equilibrium motion under the Newtonian (or similar) attraction. Central configurations are balanced, and it has been proved by Alain Albouy that central configurations of four equal masses necessarily possess a symmetry axis, from which followed a proof that the number of such configurations up to similarity is finite and explicitly describable. It is known that balanced configurations of three equal masses are exactly the isosceles triangles, but it is not known whether balanced configurations of four equal masses must have some symmetry. As balanced configurations come in families, it makes sense to look for possible branches of nonsymmetric balanced configurations bifurcating from the subset of symmetric ones. In the simpler case of a logarithmic potential, the subset of symmetric balanced configurations of four equal masses is easy to describe, as is the bifurcation locus, but there is a grain of salt: expressed in terms of the squared mutual distances, this locus lies almost completely outside the set of true configurations (i.e., generalizations of the triangle inequalities are not satisfied) and hence could lead most of the time only to the bifurcation of a branch of virtual nonsymmetric balanced configurations. Nevertheless, a tiny piece of the bifurcation locus lies within the subset of real balanced configurations symmetric with respect to a line and hence has a chance to lead to the bifurcation of real nonsymmetric balanced configurations. This raises the question of the title, a question which, thanks to the explicit description given here, should be solvable by computer experts even in the Newtonian case. Another interesting question is about the possibility for a bifurcating branch of virtual nonsymmetric balanced configurations to come back to the domain of true configurations.

Chenciner A., Leclerc B. Between Two Moments. 2014, vol. 19, no. 3, pp. 289-295. Abstract: In this short note, we draw attention to a relation between two Horn polytopes which is proved in [3] as a result, on the one hand, of a deep combinatorial result in [5] and, on the other hand, of a simple computation involving complex structures. This suggests an inequality between Littlewood-Richardson coefficients, which we prove using the symmetric characterization of these coefficients given in [1].

Chenciner A., Féjoz J. Unchained Polygons and the $N$-body Problem. 2009, vol. 14, no. 1, pp. 64-115. Abstract: We study both theoretically and numerically the Lyapunov families which bifurcate in the vertical direction from a horizontal relative equilibrium in $\mathbb{R}^3$. As explained in [1], very symmetric relative equilibria thus give rise to some recently studied classes of periodic solutions. We discuss the possibility of continuing these families globally as action minimizers in a rotating frame where they become periodic solutions with particular symmetries.
A first step is to give estimates on intervals of the frame rotation frequency over which the relative equilibrium is the sole absolute action minimizer: this is done by generalizing to an arbitrary relative equilibrium the method used in [2] by V. Barutello and S. Terracini. In the second part, we focus on the relative equilibrium of the equal-mass regular $N$-gon. The proof of the local existence of the vertical Lyapunov families relies on the fact that the restriction to the corresponding directions of the quadratic part of the energy is positive definite. We compute the symmetry groups $G_{\frac{r}{s}}(N, k, \eta)$ of the vertical Lyapunov families observed in appropriate rotating frames, and use them for continuing the families globally. The paradigmatic examples are the "Eight" families for an odd number of bodies and the "Hip-Hop" families for an even number. The first ones generalize Marchal's $P_{12}$ family for 3 bodies, which starts with the equilateral triangle and ends with the Eight [1, 3-6]; the second ones generalize the Hip-Hop family for 4 bodies, which starts from the square and ends with the Hip-Hop [1, 7, 8]. We argue that it is precisely for these two families that global minimization may be used. In the other cases, obstructions to the method come from isomorphisms between the symmetries of different families; this is the case for the so-called "chain" choreographies (see [6]), where only a local minimization property is true (except for $N = 3$). Another interesting feature of these chains is the deciding role played by the parity, in particular through the value of the angular momentum. For the Lyapunov families bifurcating from the regular $N$-gon with $N \leqslant 6$ we check in an appendix that locally the torsion is not zero, which justifies taking the rotation of the frame as a parameter.

Chenciner A. A note by Poincaré. 2005, vol. 10, no. 2, pp. 119-128. Abstract: On November 30th 1896, Poincaré published a note entitled "On the periodic solutions and the least action principle" in the "Comptes rendus de l'Académie des Sciences". He proposed to find periodic solutions of the planar Three-Body Problem by minimizing the Lagrangian action among loops in the configuration space which satisfy given constraints (the constraints amount to fixing their homology class). For the Newtonian potential, proportional to the inverse of the distance, the "collision problem" prevented him from realizing his program; hence he replaced it by a "strong force potential" proportional to the inverse of the squared distance. In the lecture, the nature of the difficulties met by Poincaré is explained and it is shown how, one century later, these have been partially resolved for the Newtonian potential, leading to the discovery of new remarkable families of periodic solutions of the planar or spatial $n$-body problem.

Chenciner A. Collisions totales, Mouvements Complètement Paraboliques et Réduction des Homothéties Dans le Problème des $n$ corps. 1998, vol. 3, no. 3, pp. 93-106. Abstract: We study the properties of the $n$-body problem that stem from the homogeneity of the potential, and we recover, within a common conceptual framework, various results of Sundman, McGehee and Saari. The results are not new, but it seemed to us that this presentation illuminates them nicely. We consider potentials of Newtonian type, homogeneous of degree $2\kappa$ in the configuration. So as not to have to distinguish several cases in the inequalities, we shall assume that $-1<\kappa<0$, which includes the Newtonian case.
Polynomial approximations to Boolean functions have led to many positive results in computer science. In particular, polynomial approximations to the sign function underlie algorithms for agnostically learning halfspaces, as well as pseudorandom generators for halfspaces. In this work, we investigate the limits of these techniques by proving inapproximability results for ...

We present an explicit pseudorandom generator for oblivious, read-once, width-$3$ branching programs, which can read their input bits in any order. The generator has seed length $\tilde{O}( \log^3 n )$. The previously best known seed length for this model is $n^{1/2+o(1)}$ due to Impagliazzo, Meka, and Zuckerman (FOCS '12). Our ...

We present an explicit pseudorandom generator for oblivious, read-once, permutation branching programs of constant width that can read their input bits in any order. The seed length is $O(\log^2 n)$, where $n$ is the length of the branching program. The previous best seed length known for this model was $n^{1/2+o(1)}$, ...

We exhibit an explicit pseudorandom generator that stretches an $O \left( \left( w^4 \log w + \log (1/\varepsilon) \right) \cdot \log n \right)$-bit random seed to $n$ pseudorandom bits that cannot be distinguished from truly random bits by a permutation branching program of width $w$ with probability more than $\varepsilon$. ...

We study the online decision problem where the set of available actions varies over time, also called the sleeping experts problem. We consider the setting where the performance comparison is made with respect to the best ordering of actions in hindsight. In this paper, both the payoff function and the ...
Here is my attempt to prove the Fundamental Theorem of Algebra from the Brouwer fixed point theorem.

Lemma (Brouwer fixed point theorem). If $f:D_r\rightarrow D_r$ is a continuous function, then there is a point $z_0 \in D_r$ such that $f(z_0)=z_0$.

Theorem (Fundamental theorem of algebra). Every non-constant complex polynomial $$p(z)=a_n z^n+a_{n-1}z^{n-1}+...+a_1 z+a_0,$$ where $a_0 \neq 0$, vanishes somewhere in $\mathbb{C}$.

Proof: If the polynomial $p$ doesn't vanish in $\mathbb{C}$, then for every $z\in \mathbb{C}$ we have $$g(z)=\frac{-a_0}{a_n z^{n-1}+a_{n-1}z^{n-2}+...+a_1}\neq z.$$ If $g$ is not continuous, then the polynomial $a_n z^{n-1}+a_{n-1}z^{n-2}+...+a_1$ must vanish somewhere in $\mathbb{C}$, and we are done since $n \in \mathbb{N}$ is arbitrary. Suppose that $g$ is continuous. Now $g$ has the growth condition $|g(z)|\rightarrow 0$ as $|z|\rightarrow \infty$, so there is a constant $R>0$ such that $g(z)\in D_R$ when $z\in D_R$. By the Brouwer fixed point theorem $g$ can't be continuous, since $g(z)\neq z$ in $D_R$. Again we are done.
DISCUSSION

I don't know if this will help you at all, but if you are reading The Art of Electronics, then you will also almost certainly read (about 20 pages further on) about the long-tailed pair differential amplifier made out of two BJTs. Your circuit can be redrawn (without changing it) to look more like that traditional form: (Schematic created using CircuitLab.) The AofE book discusses the Schmitt trigger circuit on the page where you got your schematic. But it's clear that those words didn't help out enough. So I'm putting this into a different form and suggesting that you skip forward a little bit in the book and read about the differential amplifier form. That may also help you, as that form (topology) begins to firm up better in your own mind. (You also need to start learning how to re-draw, mentally, schematics you see into forms that are familiar and commonly found. My redrawing of this schematic shows you the power of such flexible thinking: re-arranging things over and over until something "clicks" and you can put it into a form you've spent time on and studied.) I want you to learn to recognize these patterns. Because, over time, it's important. Circuits aren't just "stuff slapped together" to get a job done, where each result is unique. Instead, it's more like watch-making, where there are certain recurring ideas: weighted wheels, springs, cogs, gears, and so on. Ideas are used over and over again in electronics, and you need to get better at "finding" them. The skill of the person drawing the schematic, the time they have available for helping the schematic communicate, etc., all vary quite widely. So you have to get good at "seeing the faces" in a painting, which can be anything from barely more than a Rorschach test to who knows what else. In this case, the circuit is operated differently (in saturated modes) than a typical diff-amp. So in this particular case it can be analyzed reasonably well without dragging in the usual diff-amp perspective (which you have yet to read about in the book). But I still wanted to point out what's ahead of you, and that you will recognize this topology quite soon as you read forward about 20 pages from now. If you look at the above circuit and read forward in your book for a moment, you will notice that there are actually two inputs and two outputs of the differential amplifier. As used here, one of those outputs is already marked as "OUT." The other output is dragged over via a wire to one of its inputs. The other remaining input goes to the switch through a resistor. As I already pointed out, a differential amplifier is "mostly" used differently than here, with an expectation that the inputs (the bases of the two BJTs) will be driven with very nearby voltage values. When operated that way, neither input draws much current. (And this concept forms the input ideas used in BJT opamps, too.) So when you get to that section about 20 pages ahead of where you are, the authors will be talking more about that usage -- leading into ideas about opamps, yet to arrive in the book. Here, the input voltages are "over-driven" in the sense that there is no attempt made to keep them "similar." And just like what would happen with a real opamp based on BJTs, the inputs will draw current when over-driven like this so that the base voltages aren't similar. With that background in mind, plus a little forecasting about where you will soon be headed in the book, let's look at the above circuit.
ONE APPROACH

Probably the easiest way to see the above circuit working is to just imagine that the switch, via \$R_4\$, either turns \$Q_1\$ ON or OFF. With \$Q_1\$ OFF, \$R_2\$ pulls \$Q_2\$ ON. With \$Q_2\$ effectively ON, you have a resistor divider formed by \$R_1\$ and \$R_3\$, which will present a voltage at OUT of \$V_\text{OUT}=5\:\text{V}\cdot\frac{20\:\Omega}{20\:\Omega+1\:\text{k}\Omega}\approx 100\:\text{mV}\$ and a source impedance of \$\approx 19.6\:\Omega\$. (Thevenin here.) With \$Q_1\$ ON instead, \$R_2\$ doesn't matter, because now \$Q_1\$ pulls down on the base of \$Q_2\$ through the very low resistance of \$R_1\$, turning \$Q_2\$ OFF. With \$Q_2\$ effectively OFF, \$R_3\$ now pulls the OUT terminal upwards towards \$5\:\text{V}\$. So the voltage at OUT is \$V_\text{OUT}=5\:\text{V}\$ with a source impedance of \$1\:\text{k}\Omega\$.

ANOTHER APPROACH

Let's go at this entirely differently. Think of \$Q_1\$ and \$Q_2\$ as arranged like a kind of comparator. If the base of \$Q_1\$ is at exactly the same voltage as the base of \$Q_2\$, then both BJTs will have the exact same base-emitter voltage (because their emitters are obviously at the same voltage, too). So their collector currents have to be equal, as well. (Assuming identical BJTs for now.) So the voltages at the collectors, in theory anyway, would be equal, too. Now, if you just change the base voltage at \$Q_1\$ slightly upward, so that it is higher than the voltage at the base of \$Q_2\$, then \$Q_1\$ will try to increase its collector current. But this will probably be at the expense of collector current for \$Q_2\$ -- because there is only a certain supply of it at their shared emitters. Also, since an increasing collector current for \$Q_1\$ implies a declining collector voltage for \$Q_1\$, and since \$Q_1\$'s collector is tied to \$Q_2\$'s base, it follows that the base of \$Q_2\$ is declining. And that will also have the effect of reducing \$Q_2\$'s collector current, too. So these things work together. If you pull upward just slightly on the base of \$Q_1\$, it tips the balance. And once that tip starts, it continues even more, because the changes at the collector of \$Q_1\$ (started by the base voltage change for it) are exactly what will help drive even less current into the collector of \$Q_2\$ and still more into the collector of \$Q_1\$. Etc. Quite rapidly, you find that even the slightest upward imbalance at the base of \$Q_1\$ causes the system to move rapidly in the direction where \$Q_2\$ turns OFF completely and has no collector current at all. Since we can assume that \$Q_1\$ is probably saturated, there will be a small, fixed voltage drop across its collector and emitter. So this means that the current through the collector will be about \$\frac{5\:\text{V}-100\:\text{mV}}{1\:\text{k}\Omega+20\:\Omega}\approx 4.8\:\text{mA}\$. The current in the emitter will be slightly more, as saturation base current arrives. We can work it out very approximately as about \$\frac{5\:\text{V}-1\:\text{V}}{25\:\text{k}\Omega}\approx 160\:\mu\text{A}\$. So the emitter current will be somewhere between \$4.9\:\text{mA}\$ and \$5.0\:\text{mA}\$. From that, we can work out that the emitter voltage is about \$100\:\text{mV}\$ and that the collector voltage is indeed quite low -- perhaps \$200\:\text{mV}\$ or so. So \$Q_2\$ is definitely OFF, as assumed. That's one state. Let's gradually lower the input voltage in our minds.
As the base voltage for \$Q_1\$ declines, there will be a point at which \$Q_1\$ is no longer saturated but is starting to be "active." This will take place when the base current declines to the point where the \$\beta=\frac{I_C}{I_B}\approx 200\$ relationship holds again. We know the collector current is \$4.8\:\text{mA}\$, so this means a base current of \$I_B=24\:\mu\text{A}\$. Applying that to \$R_4\$ we find that the input voltage (at the switch) needs to be about \$700\:\text{mV}+25\:\text{k}\Omega\cdot 24\:\mu\text{A}=1.3\:\text{V}\$. So as the voltage goes below about \$1.3\:\text{V}\$, we expect that the collector of \$Q_1\$ should also be coming out of saturation and will start to rise upward from the \$200\:\text{mV}\$ we'd earlier estimated. It will only take a change of about \$500\:\text{mV}\$ at the collector of \$Q_1\$ to turn \$Q_2\$ ON. And this means only a change of \$\frac{1}{2}\:\text{mA}\$. Working that backward to the base using \$\beta=200\$, we find that this is a change of only \$2.5\:\mu\text{A}\$ at the base. So by the time the voltage at the switch reaches \$700\:\text{mV}+25\:\text{k}\Omega\cdot 21.5\:\mu\text{A}=1.24\:\text{V}\$, we should expect that something very new should be taking place. So we now understand one thing: when the input is going down from \$5\:\text{V}\$ to \$0\:\text{V}\$, there is an important transition period right around \$1.2\$ to \$1.3\:\text{V}\$. What about in the other direction? Well, assume that the input of \$Q_1\$ is at ground. Then \$Q_1\$ is certainly OFF. But now \$R_2\$ is definitely pulling \$Q_2\$ into full saturation. This means that \$Q_2\$'s collector current will now be about that \$4.8\:\text{mA}\$ we computed earlier for \$Q_1\$'s collector when the input was oppositely arranged. But we also must add in very substantial base current coming via the much lower-valued \$R_2\$. This added current will be close to \$4\:\text{mA}\$ (slightly less, because we have to account for a \$V_\text{BE}\$ junction voltage). So now the emitter current is actually higher, at around \$8.8\:\text{mA}\$. This emitter current is almost twice as much as before. So the emitter is now closer to \$200\:\text{mV}\$. Now let's raise the voltage at the switch, moving slowly towards \$5\:\text{V}\$. At what point will \$Q_2\$ come out of saturation so that things can change back again? Well, this happens when the collector of \$Q_1\$ can draw off almost all of that \$4\:\text{mA}\$ that is currently present through \$R_2\$. That means that the base current of \$Q_1\$ must reach about \$20\:\mu\text{A}\$. Given the higher emitter voltage now (which is \$200\:\text{mV}\$ instead of the earlier \$100\:\text{mV}\$), we will need about \$200\:\text{mV}+700\:\text{mV}+20\:\mu\text{A}\cdot 25\:\text{k}\Omega =1.4\:\text{V}\$. So this is higher. In fact, it is higher by just about the difference in the emitter voltage caused by the difference in currents in the two situations. We know that in one case this is about \$4.8\:\text{mA}\$ and in the other case about \$8.8\:\text{mA}\$ through \$R_1\$. So we expect to see a hysteresis of about \$20\:\Omega\cdot \left(8.8\:\text{mA}-4.8\:\text{mA}\right)=80\:\text{mV}\$ at the input side. Roughly speaking, that might be rounded to about \$100\:\text{mV}\$, as there are some "details" we didn't completely work out about the transitions.

HYSTERESIS NOTE

The hysteresis voltage is usually specified as a \$\pm\$ value, as \$V_h\$.
Here, I will be talking about the total width and will call it \$V_w\$. So: $$V_w=R_1\cdot \left(\frac{V_\text{CC}}{R_3} + \frac{V_\text{CC}-V_\text{BE}}{R_2}-\frac{V_\text{CC}}{R_2}\right)=V_\text{CC}\frac{R_1}{R_3}-V_\text{BE}\frac{R_1}{R_2}$$ Since \$R_2=R_3\$ in this circuit, the equation is much simpler: $$V_w=\left(V_\text{CC}-V_\text{BE}\right)\frac{R_1}{R_2}$$ So, here I might get: \$V_w=\left(5\:\text{V}-700\:\text{mV}\right)\frac{20\:\Omega}{1\:\text{k}\Omega}\approx 86\:\text{mV}\$. There isn't much here regarding the values of \$\beta\$ for each transistor. That's because I've kept the ideas simple and, besides, \$\beta\$ doesn't impact the width of the hysteresis band nearly as much as it affects the exact voltage where the transitions take place. In this circuit the exact voltage at which the transition takes place simply does not matter. So there's little need to worry about threshold voltages moving around, so long as the hysteresis width is managed well enough. So \$\beta\$ differences matter less here.
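To make the arithmetic above easy to replay, here is a small Python sketch that reproduces the estimates. The component values, the 700 mV junction drop, the 100/200 mV saturation and emitter levels, and beta = 200 are all taken from the discussion; this is the same back-of-envelope model, not a circuit simulation.

# Back-of-envelope Schmitt trigger estimates, following the discussion above.
VCC, VBE, BETA = 5.0, 0.7, 200.0
R1, R2, R3, R4 = 20.0, 1000.0, 1000.0, 25000.0
VCE_SAT = 0.1                         # assumed saturation drop, per the text

Ic1 = (VCC - VCE_SAT) / (R3 + R1)     # ~4.8 mA with Q1 saturated
V_fall = VBE + R4 * Ic1 / BETA        # ~1.3 V falling-input threshold

I_R2 = (VCC - VBE - 0.2) / R2         # ~4.1 mA of base drive into Q2
V_rise = 0.2 + VBE + R4 * I_R2 / BETA # ~1.4 V rising-input threshold

Ie2 = Ic1 + I_R2                      # ~8.9 mA emitter current, Q2 ON
V_w = R1 * (Ie2 - Ic1)                # ~80 mV hysteresis width (current view)
V_w_closed = (VCC - VBE) * R1 / R2    # ~86 mV closed-form estimate

print(V_fall, V_rise, V_w, V_w_closed)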
Is there a meaningful physical concept of $distance \times velocity$? I came across something analogous in computer science and was wondering if there was any physical analogue. In diffusion equations, the diffusion coefficient typically has a dimensionality of $\mathrm{L^2T^{-1}}$. For instance, the heat equation is typically written in this form: $$ \frac{\partial u}{\partial t}-\alpha\nabla^2 u=0, $$ where $u$ is temperature and $\alpha$ is the thermal diffusivity. The thermal diffusivity is defined as: $$ \alpha=\frac{k}{\rho c_p}, $$ where $k$ is the thermal conductivity of the medium, $\rho$ is the mass density and $c_p$ is the specific heat capacity. The SI unit of thermal diffusivity is $\mathrm{m^2/s}$. Another example is angular momentum per unit mass. Angular momentum is: $$ \vec L = \vec r \times (m\vec v). $$ This could also be thought of as an "area speed" (for lack of a better word). For instance, if you are painting a wall and $d$ is the width of your brush, and $v$ is the speed you are moving the brush with, the area you would cover per unit time would be $d\cdot v$. I wouldn't say that this is a commonly used physical quantity though.
The product $\sigma$-algebra $\mathcal{F} \otimes \mathcal{G}$ by definition is generated by the family $\mathcal{P}:= \{A \times B: A \in \mathcal{F}, B \in \mathcal{G}\}$, and so $h:(E, \mathcal{E}) \to (F \times G, \mathcal{F} \otimes \mathcal{G})$ is measurable iff $$ \forall A \times B \in \mathcal{P}: h^{-1}[A \times B] \in \mathcal{E}$$ This is a general fact about generating sets of $\sigma$-algebras (analogous to such properties for bases and subbases in topology): If $f: (E,\mathcal{E}) \to (F, \mathcal{F})$ is a function between measurable spaces and $\mathcal{F} = \sigma(\mathcal{P})$, i.e. the $\sigma$-algebra on the codomain is generated by some subfamily $\mathcal{P}$, then $f$ is measurable iff $\forall P \in \mathcal{P}: f^{-1}[P] \in \mathcal{E}$. The proof is not hard: the left-to-right implication of the iff is immediate from the definition of measurability, and for the right-to-left implication, define $\mathcal{F}' = \{A \subseteq F: f^{-1}[A] \in \mathcal{E}\}$, and do the easy check that $\mathcal{F}'$ is a $\sigma$-algebra on $F$, using properties like $f^{-1}[F\setminus A] = E\setminus f^{-1}[A]$ and $f^{-1}[\bigcup_n A_n] = \bigcup_n f^{-1}[A_n]$. By the right-hand-side assumption, $\mathcal{P} \subseteq \mathcal{F}'$ and so $$\mathcal{F} = \sigma(\mathcal{P}) \subseteq \mathcal{F}'$$ by minimality. But this says exactly that $f$ is measurable. Finally, note that $h^{-1}[A \times B] = f^{-1}[A] \cap g^{-1}[B]$, and as $f^{-1}[A], g^{-1}[B] \in \mathcal{E}$ (because $f,g$ are measurable), so is their intersection. QED.
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and it was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation with the class of spacetime under consideration, which might help heavily reduce the parameters needed to simulate them I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things in a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) for 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I feel this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder, is the space of all coordinate choices larger than that of all possible moves in Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame drag?
(one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge) Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet. So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment, for gravitational waves? @JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, will we see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, spacetime would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying. My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. Though back in high school, regardless of code, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-spacebar indentation convention @JohnRennie I wish I could just tab, because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write MATLAB code online (for free)?
Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the server of the university - which means remotely running another environment - I found an older version of MATLAB). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it, probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
As I began working on this modeling project in Mathematica, it quickly became apparent that even just the magnetic field of a single magnetic dipole is pretty complicated to model. Transforming between spherical and Cartesian coordinates via Mathematica's built-in TransformedField function is more complicated than expected, and may not work at all. The expressions for $B_x$, $B_y$, and $B_z$ get pretty complicated, as I found out when I began transforming them by hand to check against Mathematica's TransformedField results. The Mathematica files containing some of my current work can be viewed here. The file named "transforming coordinates.nb" shows my attempts to use the TransformedField function to get the three Cartesian components of the general magnetic field equation, originally expressed in spherical coordinates. The fact that a 3D vector plot of the resulting expressions shows nothing leads me to believe something more involved is necessary to convert these expressions between spherical and Cartesian coordinates. I may try calculating the separate expressions for $B_x$, $B_y$, and $B_z$ by hand, typing them all into Mathematica, and using these expressions to try to generate a 3D vector plot, but there may be easier and more informative ways of presenting the magnetic field information. Professor Magnes suggested that I try looking at only a sample of representative points near the magnetic dipole, evaluating the magnetic field expression, $\vec{B} = \frac{\mu_0 m}{4 \pi r^3} (2 \cos \theta\, \hat{r} + \sin \theta\, \hat{\theta})$, at these points using the spherical coordinates at first, and converting these points into Cartesian form before plotting a 3D vector plot. Because of the nature of plotting vector fields in 3D in Mathematica, this may or may not prove to be less complicated. Another option is presenting the field information in a different way, utilizing contour plots (ListContourPlot3D in Mathematica) instead of vector arrow plots. A contour plot will show equipotential surfaces within the magnetic field instead of the arrows representing the magnitude and direction of the magnetic field at certain points. However, the equipotential surfaces and vector arrows are intimately related: vector arrows are always perpendicular to equipotential surfaces. Therefore, this may be an easier way to compute and show the magnetic field of a magnetic dipole. Vector arrows can always be added on top of the contour plot to show the magnetic field in another way as well. There are a lot of factors to play with to see what presentation style will be the most effective. The other file in the link to my Mathematica work so far contains the start of generating a list of representative points around the origin in spherical coordinates and the start of converting them into Cartesian coordinates using the CoordinateTransform function in Mathematica. This file is currently quite preliminary. I still have a long way to go before I get a graph presenting some information about the magnetic field of a magnetic dipole.
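For comparison, here is a short numpy sketch of the sampling strategy described above: evaluate the dipole field at spherical sample points, then convert both the points and the field vectors to Cartesian form. This is a stand-alone illustration, not the Mathematica notebook itself; the sample grid and the normalization $\mu_0 m/4\pi = 1$ are assumptions.

import numpy as np

k = 1.0  # mu_0 * m / (4*pi), set to 1 for illustration

# Spherical sample points (r, theta, phi) around the dipole
r, th, ph = np.meshgrid([1.0, 1.5, 2.0],
                        np.linspace(0.1, np.pi - 0.1, 7),
                        np.linspace(0.0, 2*np.pi, 8, endpoint=False),
                        indexing="ij")

# Dipole field in spherical components: B_r and B_theta (B_phi = 0)
Br = k * 2*np.cos(th) / r**3
Bth = k * np.sin(th) / r**3

# Sample points in Cartesian coordinates
x = r*np.sin(th)*np.cos(ph); y = r*np.sin(th)*np.sin(ph); z = r*np.cos(th)

# Field vectors in Cartesian form, via the unit vectors r-hat and theta-hat
Bx = Br*np.sin(th)*np.cos(ph) + Bth*np.cos(th)*np.cos(ph)
By = Br*np.sin(th)*np.sin(ph) + Bth*np.cos(th)*np.sin(ph)
Bz = Br*np.cos(th) - Bth*np.sin(th)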
We investigate the computational complexity of the Boolean Isomorphism problem (BI): on input of two Boolean formulas F and G, decide whether there exists a permutation of the variables of G such that F and G become equivalent. Our main result is a one-round interactive proof ...

We investigate the computational complexity of the isomorphism problem for one-time-only branching programs (BP1-Iso): on input of two one-time-only branching programs B and B', decide whether there exists a permutation of the variables of B' such that it becomes equivalent to B. Our main result is a two-round interactive ...

For a permutation group $G$ acting on the set $\Omega$ we say that two strings $x,y:\Omega\to\{0,1\}$ are $G$-isomorphic if they are equivalent under the action of $G$, i.e., if for some $\pi\in G$ we have $x(i^{\pi})=y(i)$ for all $i\in\Omega$. Cyclic Shift, Graph Isomorphism ...

Given two $n$-variable Boolean functions $f$ and $g$, we study the problem of computing an $\varepsilon$-approximate isomorphism between them, i.e. a permutation $\pi$ of the $n$ variables such that $f(x_1,x_2,\ldots,x_n)$ and $g(x_{\pi(1)},x_{\pi(2)},\ldots,x_{\pi(n)})$ differ on at most an $\varepsilon$ fraction of all Boolean inputs $\{0,1\}^n$. We give a randomized $2^{O(\sqrt{n}\log(n)^{O(1)})}$ algorithm ...

The solution graph of a Boolean formula on $n$ variables is the subgraph of the hypercube $H_n$ induced by the satisfying assignments of the formula. The structure of solution graphs has been the object of much research in recent years since it is important for the performance of SAT-solving procedures ...

The function $f\colon \{-1,1\}^n \to \{-1,1\}$ is a $k$-junta if it depends on at most $k$ of its variables. We consider the problem of tolerant testing of $k$-juntas, where the testing algorithm must accept any function that is $\epsilon$-close to some $k$-junta and reject any function that is $\epsilon'$-far from ...
An arithmetic sequence is a sequence where there is a common difference between any two successive terms. $$\require{color} \color{red} u_{n} = u_{1}+(n-1)d$$ where $\require{color} \color{red} u_{1}$ is the first term and $\require{color} \color{red}d$ is the common difference of the arithmetic sequence. Example 1 A city is studied and found to have a population of $5000$ in the first year of the study. The population increases by $200$ each year after that. (a) Write down a rule for the population in year $n$ of the study. \( \begin{align} \displaystyle u_{n} &= 5000+(n-1)\times 200 \\ &= 5000 + 200n - 200 \\ u_{n} &= 200n + 4800 \\ \end{align} \) (b) When will the population double in size? \( \begin{align} \displaystyle 200n + 4800 &= 10000 \\ 200n &= 5200 \\ n &= 5200 \div 200 \\ &= 26 \\ \end{align} \) Thus the population will double in the $26$th year. Example 2 For the arithmetic sequence $\{21,x,y,36\}$, find the values of $x$ and $y$. \( \begin{align} \displaystyle u_{1} &= 21 \\ u_{2} &= 21 + d = x \cdots (1) \\ u_{3} &= 21 + 2d = y \cdots (2) \\ u_{4} &= 21 + 3d = 36 \cdots (3) \\ 3d &= 36-21 &\text{from } (3) \\ 3d &= 15 \\ d &= 5 \\ x &= 21 + 5 &\text{substitute } d = 5 \text{ into } (1) \\ &= 26 \\ y &= 21 + 2 \times 5 &\text{substitute } d = 5 \text{ into } (2) \\ &= 21+10 \\ &= 31 \\ \therefore x &= 26 \text{ and } y=31 \\ \end{align} \) Example 3 Find the value of $x$ such that $\{\cdots,x,3x+4,10x-7,\cdots\}$ forms an arithmetic sequence. \( \begin{align} \displaystyle (3x+4)-(x) &= (10x-7)-(3x+4) \\ 3x+4-x &= 10x-7-3x-4 \\ 2x+4 &= 7x-11 \\ 2x-7x &= -11-4 \\ -5x &= -15 \\ \therefore x &= 3 \\ \end{align} \)
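As a quick sanity check, here is a small Python sketch (an illustration, not part of the lesson) that reproduces the three worked examples numerically.

def u(n, u1, d):
    # n-th term of an arithmetic sequence
    return u1 + (n - 1) * d

# Example 1: population rule and doubling time
n = 1
while u(n, 5000, 200) < 10000:
    n += 1
print(n)  # 26

# Example 2: common difference d = 5 gives x = 26, y = 31
print(u(2, 21, 5), u(3, 21, 5))  # 26 31

# Example 3: x = 3 makes the successive differences equal
x = 3
print((3*x + 4) - x == (10*x - 7) - (3*x + 4))  # True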
Let $q$ be an integer. Consider an integer $N$ such that $\gcd(q-1,N) = 1$. Question: How to show that if $q^d \equiv 1 \pmod{N}$ for some positive integer $d$, then we get $$ 1 + q + q^2 + \cdots + q^{d-1} \equiv 0 \pmod N \tag{1} $$ Try: We have $$1 + q + q^2 + \cdots + q^{d-1}=\frac{q^d-1}{q-1}$$ Now the assumption $\gcd(q-1,N) = 1$ implies that $q-1\not\equiv 0 \pmod{N}$. Therefore, we get $$ \frac{q^d-1}{q-1}\equiv\frac{1-1}{q-1}\equiv 0 \pmod{N} $$ Is the given proof correct? Thanks for any suggestions.
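Whatever the status of the division step, the claim itself is easy to test numerically. Here is a tiny Python sketch (illustrative ranges only) that checks the congruence for random admissible triples $(q, N, d)$, using the multiplicative order of $q$ modulo $N$ as $d$:

from math import gcd
import random

for _ in range(1000):
    q = random.randint(2, 50)
    N = random.randint(2, 200)
    # need gcd(q-1, N) = 1 as assumed, and gcd(q, N) = 1 so an order exists
    if gcd(q - 1, N) != 1 or gcd(q, N) != 1:
        continue
    d = 1
    while pow(q, d, N) != 1:   # smallest d with q^d = 1 (mod N)
        d += 1
    s = sum(pow(q, k, N) for k in range(d)) % N
    assert s == 0, (q, N, d, s)
print("all checks passed")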
Astrophysics > Astrophysics of Galaxies

Title: Launching of hot gas outflow by disc-wide supernova explosions (Submitted on 3 Jan 2019)

Abstract: Galactic gas outflows are driven by stellar feedback, with the dominant contribution from supernova (SN) explosions. The question of whether the energy deposited by SNe initiates a large-scale outflow or gas circulation on smaller scales -- between discs and intermediate haloes -- depends on the SN rate and their distribution in space and time. We consider here gas circulation driven by disc-wide unclustered SNe with galactic star formation rates in the range from $\simeq 6\times 10^{-4}$ to $\simeq 6\times 10^{-2}~M_\odot$~yr$^{-1}$~kpc$^{-2}$, corresponding to the mid-to-high star formation observed in galaxies. We show that such a disc-wide SN explosion regime can form circulation of warm ($T\sim 10^4$ K) and cold ($T<10^3$ K) phases within a few gas scale heights, and elevation of hot ($T>10^5$ K) gas to greater ($z>1$ kpc) heights. We find that the threshold energy input rate for hot gas outflows with disc-wide supernova explosions is of the order of $\sim 4\times 10^{-4}$~erg~s$^{-1}$~cm$^{-2}$. We discuss the observational manifestations of such phenomena in the optical and X-ray bands. In particular, we find that for face-on galaxies with SF ($\Sigma_{_{\rm SF}}>0.02~M_\odot$~yr$^{-1}$~kpc$^{-2}$), the line profiles of ions typical of warm gas show a double-peak shape, corresponding to out-of-plane outflows. In the X-ray bands, galaxies with high SF rates ($\Sigma_{_{\rm SF}}>0.006~M_\odot$~yr$^{-1}$~kpc$^{-2}$) can be bright, with a smooth surface brightness in low-energy bands ($0.1\hbox{--}0.3$~keV) and patchy at higher energies ($1.6\hbox{--}8.3$~keV).

Submission history: From: Eugene Vasiliev [view email] [v1] Thu, 3 Jan 2019 17:10:21 GMT (4421kb)
I'm reading Steele's The Cauchy-Schwarz Master Class: The first problem asks me to prove Cauchy's inequality: $$a_1b_1+a_2b_2+\cdots+a_nb_n\leq\sqrt{a_1^2+a_2^2+\cdots+a_n^2}\sqrt{b_1^2+b_2^2+\cdots+b_n^2}$$ I've tried to do it by creating a trivial instance of the inequality: $$ab\leq \sqrt{a^2}\sqrt{b^2}$$ And then thinking about each side as a function: $$f(a,b)=ab\quad\quad g(a,b)=\sqrt{a^2}\sqrt{b^2}$$ I can see that the image of $f$ is $\mathbb{R}$ while the image of $g$ is $\mathbb{R}_{\geq0}$. Then, considering whether $a$ or $b$ is positive or negative, I can see that: $$\begin{matrix} {}&{}&{f}&{g}\\ {-a}&{-b}&{ab}&{ab}\\ {-a}&{+b}&{-ab}&{ab}\\ {+a}&{-b}&{-ab}&{ab}\\ {+a}&{+b}&{ab}&{ab} \end{matrix}$$ And the case when $a\vee b=0$, for which $f=g=0$. Is it enough to prove this inequality? Am I missing something? I've added more stuff: I have proved the case $$a_1b_1+a_2b_2\leq\sqrt{a_1^2+a_2^2}\sqrt{b_1^2+b_2^2}$$ with the same reasoning, by making a function from each side: $$f(a_1,a_2,b_1,b_2)=a_1 b_1 + a_2b_2 \quad \quad g(a_1, a_2, b_1, b_2)=\sqrt{a_1^2+a_2^2}\sqrt{b_1^2+b_2^2}$$ And pointing out that the image of $f$ is $\mathbb{R}$ and the image of $g$ is $\mathbb{R}_{\geq 0}$; from here I can deduce that the behavior will be the same for the functions: $$f(a_1,a_2,b_1,b_2,\cdots , \cdots) \quad \quad g(a_1,a_2,b_1,b_2,\cdots, \cdots)$$ Thus, giving values to the functions, we can only have: $$f=g \quad\text{or}\quad f < g$$
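A quick numerical spot-check of the $n$-term inequality in Python (illustrative only; random testing is of course not a proof, which is precisely what the question is asking about):

import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    a = rng.normal(size=5)
    b = rng.normal(size=5)
    # Cauchy's inequality: a.b <= |a| |b|
    assert np.dot(a, b) <= np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
print("no counterexamples found")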
Magnetic fields never do work, for the simple reason that the magnetic force is perpendicular to the velocity. However, when you pull a wire in a direction mutually orthogonal to the wire and to the magnetic field, then there is a magnetic force in the direction of the wire. And an electromagnetic force in the direction of the wire is exactly what you need to get an EMF. First, there is a magnetic force on the wire in the magnetic field even if there is no resistor over there in free space. The magnetic force is equal and opposite on the protons and the electrons; the net effect on the protons (and bound electrons) is to stress the wire, and it might get a bit longer because of that. On the electrons, it would actually build up a charge imbalance by pushing the conduction electrons to the ends of the wire (and if you connect that resistor, the wire will actually just act like a battery and a current will flow around the circuit). Since it acts like a battery, computing $\int (\vec v\times\vec{B})\cdot d \vec{l}$ along the wire in the magnetic field does give you the EMF, because that's the definition of the EMF (force per charge from the battery, integrated between the two ends of the battery). So that answers your question: How is it that the force produced by the magnetic field can produce an EMF on line ab if magnetic fields do no work? An EMF is the force per charge of the battery/source integrated along the circuit element between its two terminals; it isn't defined as the work done, so that's not a problem. If it helps, keep in mind that at a fixed moment the circuit element is pointing one way (and that's the direction you consider for the EMF) and the velocity of the charges is in another direction (and that direction is relevant for the work done, and that velocity direction is orthogonal to the magnetic force direction). So the next question: Why is the work done by pulling the wire equal to the work done by the magnetic force? It might be easiest to imagine that the forces happen a bit at a time, impulsively. So in a brief period of time, the magnetic force gives an impulse upwards; but the wire is moving, so if the electrons on the right part of the wire just went straight up, they would now be further inside the wire. The electrons in the middle of the wire would end up close to the left side of the wire, and the electrons on the left side of the wire would leave the wire. When you pull the wire you pull the whole thing. The whole lattice of protons is being shifted, and they shift the electrons with them, because when the two are misaligned they pull them back. You can think of the magnetic force as, on its own, polarizing the wire electrically, which normally would draw the two sides (left and right) of the wire together. But the force pulling the right side to the left side is opposed by the person pulling the wire. How much opposed? Equal and opposite can keep it moving at a steady velocity. But really, doing all the calculations is the correct way to see what is going on; I'm just showing that it is reasonable.
I was playing with a simulation of Euler's equations of rotation in MATLAB, $$ I_1\dot{\omega}_1 + (I_3 - I_2)\omega_2\omega_3 = M_1, $$ $$ I_2\dot{\omega}_2 + (I_1 - I_3)\omega_3\omega_1 = M_2, $$ $$ I_3\dot{\omega}_3 + (I_2 - I_1)\omega_1\omega_2 = M_3, $$ where $I_1$, $I_2$ and $I_3$ are the principal moments of inertia of the rigid body, $\omega_1$, $\omega_2$ and $\omega_3$ are the angular velocities around the axes of these moments of inertia, $\dot{\omega}_i$ denotes the time derivative of the angular velocity $\omega_i$ and $M_i$ denotes the external torque applied along the axis of $\omega_i$. I would also like to find the rotation of the body, but the equations above are written in a rotating reference frame, so finding the rotation does not seem to have a simple answer. In order to express the rotation of the body I would think that a rotation matrix as a function of time, $R(t)$, would be a good option, such that a point $\vec{x}_0$ on the rigid body can be relocated in an inertial frame of reference with, $$ \vec{x}(t) = R(t) \vec{x}_0. $$ This rotation matrix should change over time as the body rotates, but any two rotations can be combined into one effective rotation by multiplying the two rotation matrices. Thus the rotation matrix after a discrete time step should be, $$ R_{n+1} = D R_n, $$ where $D$ is the rotation during that time step. Any rotation matrix (excluding reflections, which have determinant $-1$) can be written as, $$ R = \begin{bmatrix} \cos\theta\!+\!n_x^2(1\!-\!\cos\theta) & n_xn_y(1\!-\!\cos\theta)\!-\!n_z\sin\theta & n_xn_z(1\!-\!\cos\theta)\!+\!n_y\sin\theta \\ n_xn_y(1\!-\!\cos\theta)\!+\!n_z\sin\theta & \cos\theta\!+\!n_y^2(1\!-\!\cos\theta) & n_yn_z(1\!-\!\cos\theta)\!-\!n_x\sin\theta \\ n_xn_z(1\!-\!\cos\theta)\!-\!n_y\sin\theta & n_yn_z(1\!-\!\cos\theta)\!+\!n_x\sin\theta & \cos\theta\!+\!n_z^2(1\!-\!\cos\theta) \end{bmatrix}, $$ where $\vec{n}=\begin{bmatrix}n_x & n_y & n_z\end{bmatrix}^T$ is the axis of rotation and $\theta$ the angle of rotation. For an infinitesimal time step, the rotation $D$ has its axis equal to the normalized angular velocity vector and its angle equal to the magnitude of the angular velocity vector times the time step. Do I need to use the angular velocity vector in the rotating or the inertial reference frame for this? I was not completely sure whether I understood the notation of the answer of David Hammen, so I performed a simple test. The test involves applying two rotations. Initially the two reference frames are lined up, where the x, y and z axes in the figures for the inertial reference frame always point to the left, right and up respectively, while for the rotating reference frame the x, y and z axes are represented by $\vec{e}_1$, $\vec{e}_2$ and $\vec{e}_3$ respectively. The first rotation is 90° around the x axis: Because in both reference frames the rotation is around the x axis, the rotation matrices of this rotation are $$ R_{I:0\to 1} = R_{R:0\to 1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & -1\\ 0 & 1 & 0 \end{bmatrix}, $$ where the subscripts $I$ and $R$ stand for the inertial and rotating reference frames respectively. The next rotation is 90° around the z axis or $\vec{e}_2$: The two reference frames now differ, thus the rotation matrices of this rotation are $$ \begin{array}{rrrrrr} R_{I:1\to 2} = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} & & & & & R_{R:1\to 2} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix} \end{array}. 
$$ The final orientation should look like this: The corresponding total rotation matrix is $$ R_{0\to 2} = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}. $$ When combining the two rotations in both reference frames, the total rotation matrix can be obtained with the following order of multiplications, $$ R_{0\to 2} = R_{R:0\to 1} R_{R:1\to 2} = R_{I:1\to 2} R_{I:0\to 1}, $$ thus it appears that the following is true, $$ R_{n+1} = R_n D_R = D_I R_n. $$ Since Euler's equations are written in the rotating reference frame, $D_R$ will be used. Using the general equation for a rotation matrix, the rotation $D_R$ can be simplified, by linear approximation for small $dt$, to $$ R(t+dt) = R(t) \begin{bmatrix} 1 & -\omega_3dt & \omega_2dt \\ \omega_3dt & 1 & -\omega_1dt \\ -\omega_2dt & \omega_1dt & 1 \end{bmatrix} = R(t) \left( \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix}dt \right) $$ Or in terms of the time derivative, $$ \dot{R}(t) = \frac{R(t+dt)-R(t)}{dt} = R(t) \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix}. $$ Question: When numerically integrating this, together with Euler's equations of rotation, is there a way to ensure that the determinant of $R$ remains equal to one (otherwise $\vec{x}(t)$ will also be scaled)?
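For concreteness, here is a minimal Python sketch of the scheme described above (simple forward-Euler stepping, torque-free case, illustrative inertia and angular-velocity values). It is meant only to show the update $R_{n+1} = R_n D_R$, and it exhibits exactly the determinant drift the question asks about:

import numpy as np

I = np.array([1.0, 2.0, 3.0])   # principal moments of inertia (illustrative)
w = np.array([0.1, 2.0, 0.1])   # body-frame angular velocity
R = np.eye(3)                   # body-to-inertial rotation matrix
dt = 1e-3

def skew(w):
    # matrix form of the cross product, w x (.)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

for _ in range(10_000):
    # Euler's equations, torque-free: I dw/dt = -w x (I w), componentwise
    w = w + dt * (-np.cross(w, I * w)) / I
    # rotation update in the rotating (body) frame: R_{n+1} = R_n D_R
    R = R @ (np.eye(3) + dt * skew(w))

print(np.linalg.det(R))  # slowly drifts away from 1

One common remedy (among others) is to re-orthonormalize $R$ periodically, e.g. via Gram-Schmidt or the orthogonal factor of an SVD, or to integrate a unit quaternion and renormalize it each step; these are standard tricks rather than anything specific to this problem.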
You use the quantile regression estimator $$\hat \beta(\tau) := \arg \min_{\theta \in \mathbb R^K} \sum_{i=1}^N \rho_\tau(y_i - \mathbf x_i^\top \theta),$$ where $\tau \in (0,1)$ is a constant chosen according to which quantile needs to be estimated, and the function $\rho_\tau(\cdot)$ is defined as $$\rho_\tau(r) = r(\tau - I(r<0)).$$ In order to see the purpose of $\rho_\tau(\cdot)$, consider first that it takes the residuals as arguments, where these are defined as $\epsilon_i =y_i - \mathbf x_i^\top \theta$. The sum in the minimization problem can therefore be rewritten as $$\sum_{i=1}^N \rho_\tau(\epsilon_i) =\sum_{i=1}^N \tau \lvert \epsilon_i \rvert I[\epsilon_i \geq 0] + (1-\tau) \lvert \epsilon_i \rvert I[\epsilon_i < 0]$$ such that positive residuals, associated with observations $y_i$ above the suggested quantile regression line $\mathbf x_i^\top \theta$, are given the weight $\tau$, while negative residuals, associated with observations $y_i$ below the suggested quantile regression line $\mathbf x_i^\top \theta$, are weighted with $(1-\tau)$. Intuitively: with $\tau=0.5$, positive and negative residuals are "punished" with the same weight, and an equal number of observations lie above and below the "line" at the optimum, so the line $\mathbf x_i^\top \hat \beta$ is the median regression "line". With $\tau=0.9$, each positive residual is weighted 9 times that of a negative residual, which has weight $1-\tau= 0.1$, and so at the optimum, for every observation above the "line" $\mathbf x_i^\top \hat \beta$, approximately 9 will be placed below the line. Hence the "line" represents the 0.9-quantile. (For an exact statement of this see Thm. 2.2 and Corollary 2.1 in Koenker (2005) "Quantile Regression".) The two cases are illustrated in these plots: left panel $\tau=0.5$, right panel $\tau=0.9$. Linear programs are predominantly analyzed and solved using the standard form $$(1) \ \ \min_z \ \ c^\top z \ \ \mbox{subject to } A z = b , z \geq 0$$ To arrive at a linear program in standard form, the first problem is that in such a program (1) all variables $z$ over which minimization is performed must be non-negative. To achieve this, the residuals are decomposed into positive and negative parts using slack variables: $$\epsilon_i = u_i - v_i$$ where $u_i = \max(0,\epsilon_i) = \lvert \epsilon_i \rvert I[\epsilon_i \geq 0]$ is the positive part and $v_i = \max(0,-\epsilon_i) =\lvert \epsilon_i \rvert I[\epsilon_i < 0]$ is the negative part. The sum of residuals assigned weights by the check function is then seen to be $$\sum_{i=1}^N \rho_\tau(\epsilon_i) = \sum_{i=1}^N \tau u_i + (1-\tau) v_i = \tau \mathbf 1_N^\top u + (1-\tau)\mathbf 1_N^\top v,$$ where $u = (u_1,...,u_N)^\top$ and $v=(v_1,...,v_N)^\top$, and $\mathbf 1_N$ is the $N \times 1$ vector with all coordinates equal to $1$. The residuals must satisfy the $N$ constraints that $$y_i - \mathbf x_i^\top\theta = \epsilon_i = u_i - v_i$$ This results in the formulation as a linear program $$\min_{\theta \in \mathbb R^K,u\in \mathbb R_+^N,v\in \mathbb R_+^N}\{ \tau \mathbf 1_N^\top u + (1-\tau)\mathbf 1_N^\top v\lvert y_i= \mathbf x_i\theta + u_i - v_i, i=1,...,N\},$$ as stated in Koenker (2005) "Quantile Regression", page 10, equation (1.20). However, it is noticeable that $\theta\in \mathbb R^K$ is still not restricted to be non-negative, as required in the linear program in standard form (1). Hence, again, a decomposition into positive and negative parts is used, $$\theta = \theta^+ - \theta^- ,$$ where again $\theta^+=\max(0,\theta)$ is the positive part and $\theta^- = \max(0,-\theta)$ is the negative part (componentwise).
The $N$ constraints can then be written as $$\mathbf y:= \begin{bmatrix} y_1 \\ \vdots \\ y_N\end{bmatrix} = \begin{bmatrix} \mathbf x_1^\top \\ \vdots \\ \mathbf x_N^\top \end{bmatrix}(\theta^+ - \theta^-) + \mathbf I_Nu - \mathbf I_Nv ,$$ where $\mathbf I_N = \operatorname{diag}\{\mathbf 1_N\}$. Next define $b:=\mathbf y$ and the design matrix $\mathbf X$ storing the data on independent variables as $$ \mathbf X := \begin{bmatrix} \mathbf x_1^\top \\ \vdots \\ \mathbf x_N^\top \end{bmatrix}. $$ The constraint can then be rewritten as $$b= \mathbf X(\theta^+ - \theta^-) + \mathbf I_N u- \mathbf I_N v= [\mathbf X , -\mathbf X , \mathbf I_N , - \mathbf I_N] \begin{bmatrix} \theta^+ \\ \theta^- \\ u \\ v\end{bmatrix}.$$ Define the $N \times (2K + 2N)$ matrix $$A := [\mathbf X , -\mathbf X , \mathbf I_N , - \mathbf I_N]$$ and introduce $\theta^+$ and $\theta^-$ as variables over which to minimize, so that they are part of $z$, to get $$b = A \begin{bmatrix} \theta^+ \\ \theta^- \\ u \\ v\end{bmatrix} = Az.$$ Because $\theta^+$ and $\theta^-$ only affect the minimization problem through the constraint, a $\mathbf 0$ of dimension $2K\times 1$ must be introduced as part of the coefficient vector $c$, which can then appropriately be defined as $$ c = \begin{bmatrix}\mathbf 0 \\ \tau \mathbf 1_N \\ (1-\tau) \mathbf 1_N \end{bmatrix},$$ thus ensuring that $c^\top z = \underbrace{\mathbf 0^\top(\theta^+ - \theta^-)}_{=0}+\tau \mathbf 1_N^\top u + (1-\tau)\mathbf 1_N^\top v = \sum_{i=1}^N \rho_\tau(\epsilon_i).$ Hence $c$, $A$ and $b$ are defined and the program given in $(1)$ is completely specified. This is probably best digested using an example. To solve this in R, use the package quantreg by Roger Koenker. Here is also an illustration of how to set up the linear program and solve it with a linear-programming solver:

base=read.table("http://freakonometrics.free.fr/rent98_00.txt",header=TRUE)
attach(base)
library(quantreg)
library(lpSolve)
tau <- 0.3

# Problem (1) only one covariate
X <- cbind(1,base$area)
K <- ncol(X)
N <- nrow(X)
A <- cbind(X,-X,diag(N),-diag(N))
c <- c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
b <- base$rent_euro
const_type <- rep("=",N)
linprog <- lp("min",c,A,const_type,b)
beta <- linprog$sol[1:K] - linprog$sol[(1:K+K)]
beta
rq(rent_euro~area, tau=tau, data=base)

# Problem (2) with 2 covariates
X <- cbind(1,base$area,base$yearc)
K <- ncol(X)
N <- nrow(X)
A <- cbind(X,-X,diag(N),-diag(N))
c <- c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
b <- base$rent_euro
const_type <- rep("=",N)
linprog <- lp("min",c,A,const_type,b)
beta <- linprog$sol[1:K] - linprog$sol[(1:K+K)]
beta
rq(rent_euro~ area + yearc, tau=tau, data=base)
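For readers working outside R, here is a sketch of mine (not from Koenker or the post above) of the same standard-form LP solved with SciPy's linprog on simulated data; the variable ordering $z=(\theta^+,\theta^-,u,v)$ follows the construction above.

import numpy as np
from scipy.optimize import linprog

# Sketch: quantile regression as a standard-form LP,
# with z = (theta_plus, theta_minus, u, v) ordered as in the derivation.
def quantile_lp(X, y, tau):
    N, K = X.shape
    c = np.concatenate([np.zeros(2 * K), tau * np.ones(N), (1 - tau) * np.ones(N)])
    A_eq = np.hstack([X, -X, np.eye(N), -np.eye(N)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[:K] - z[K:2 * K]      # theta = theta_plus - theta_minus

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 1.0 + 2.0 * x + rng.normal(0, 1 + 0.3 * x)   # heteroskedastic noise
X = np.column_stack([np.ones_like(x), x])
print(quantile_lp(X, y, 0.3))                    # rough check against rq() in R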
Evaluate the flux integral $\displaystyle \int \int_S {\bf F \cdot n} \ dS$, where ${\bf F}(x,y,z) = z^2 {\bf k}$ and S is the part of the cone $z^2 = x^2 + y^2$ that lies between the planes $z = 1$ and $z = 2$. My course teaches the specific-case formula $\displaystyle \int \int_S {\bf F \cdot n} \ dS = \int \int_R (-F_1 f_x - F_2 f_y + F_3) \ dx \ dy.$ I get $\displaystyle \int \int_S {\bf F \cdot n} \ dS = -\int \int_R -z^2 \ dx \ dy = \int \int_R x^2+y^2 \ dx \ dy = \int \int_R r^3 \ dr \ d\theta$, with bounds $0\leq r\leq 2$ and $0\leq\theta\leq 2\pi$. The negative sign in front of the integral with $z^2$ is because the unit normal is pointing downwards (or so I think...). Where have I made the mistake?
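Not part of the original question: the quoted projection formula is easy to check symbolically, keeping the radial bounds of $R$ as parameters so that either choice of region can be tried (the result below uses the upward-normal convention; flip the sign for a downward normal).

import sympy as sp

# Sketch: evaluate the integrand (-F1*f_x - F2*f_y + F3) over R in polar
# coordinates, with symbolic radial bounds r1, r2.
x, y, r, t = sp.symbols('x y r theta')
r1, r2 = sp.symbols('r1 r2', positive=True)
f = sp.sqrt(x**2 + y**2)                  # z = f(x, y) on the cone
F1, F2, F3 = 0, 0, f**2                   # F = z^2 k
integrand = -F1*sp.diff(f, x) - F2*sp.diff(f, y) + F3
polar = sp.simplify(integrand.subs({x: r*sp.cos(t), y: r*sp.sin(t)})) * r
flux_up = sp.integrate(polar, (r, r1, r2), (t, 0, 2*sp.pi))
print(flux_up)   # pi*(r2**4 - r1**4)/2 with the upward-normal convention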
On the existence of solutions of the Cauchy problem for porous medium equations with Radon measure as initial data
1. Graduate School of Information Sciences, Tohoku University, Aoba-ku, Sendai 980-77, Japan
$ u_t =\Delta\phi(u)\qquad\text{in}\quad R^N\times(0,T);\qquad u(\cdot,0) =\mu(\cdot)\ge 0\quad \text{in}\quad R^N, $ where $\phi'(s) \sim \log^m s$, $m<-1$, as $s\to\infty$. On the other hand, for the case $m\ge -1$, we give a sufficient condition for the solvability of the Cauchy problem.
Mathematics Subject Classification: 76S05, 35K1.
Citation: Kazuhiro Ishige. On the existence of solutions of the Cauchy problem for porous medium equations with Radon measure as initial data. Discrete & Continuous Dynamical Systems - A, 1995, 1 (4) : 521-546. doi: 10.3934/dcds.1995.1.521
Patil, PM and Kulkarni, PS (2008) Effects of chemical reaction on free convective flow of a polar fluid through a porous medium in the presence of internal heat generation. In: International Journal of Thermal Sciences, 47 (8). pp. 1043-1054.
Abstract
This paper is focused on the study of the combined effects of free convective heat and mass transfer on the steady two-dimensional, laminar, polar fluid flow through a porous medium in the presence of internal heat generation and a chemical reaction of the first order. The highly nonlinear coupled differential equations governing the boundary layer flow, heat and mass transfer are solved by using a two-term perturbation method with Eckert number E as the perturbation parameter. The parameters that arise in the perturbation analysis are the Eckert number E (viscous dissipation), Prandtl number Pr (thermal diffusivity), Schmidt number Sc (mass diffusivity), Grashof number Gr (free convection), solutal Grashof number Gm, chemical reaction parameter $\Delta$ (rate constant), internal heat generation parameter Q, material parameters $\alpha$ and $\beta$ (characterizing the polarity of the fluid), $C_f$ (skin friction coefficient), Nusselt number Nu (wall heat transfer coefficient) and Sherwood number Sh (wall mass transfer coefficient). Analytical expressions are computed numerically. Numerical results for the velocity, angular velocity, temperature and concentration profiles, as well as for the skin friction coefficient, wall heat transfer and mass transfer rate, are obtained and reported graphically for various conditions to show interesting aspects of the solution. Further, the velocity distribution of polar fluids is compared with the corresponding flow problems for a viscous (Newtonian) fluid, and the polar fluid velocity is found to be lower.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to Elsevier.
Keywords: Chemical reaction; Free convection; Polar fluid; Porous medium; Internal heat generation; Couple stress.
Department/Centre: Division of Mechanical Sciences > Aerospace Engineering (Formerly Aeronautical Engineering)
URI: http://eprints.iisc.ac.in/id/eprint/15419
Nice question! I claim that this property does not necessarily imply CH. As Todd guessed in his comment, the answer is related to certain cardinal characteristics of the continuum. Specifically, let us define the closed-partition number to be the size $\kappa$ of the smallest nontrivial partition of the unit interval $[0,1]$ into closed sets. That is, $\kappa$ is the smallest size of a set $I$ such that the unit interval admits a partition $$[0,1]=\bigsqcup_{i\in I}C_i$$ into at least two pairwise disjoint nonempty closed sets $C_i$. It is a standard exercise to show that $\kappa$ is uncountable (see this nice explanation of Timothy Gowers); in other words, the unit interval is not a nontrivial union of countably many disjoint nonempty closed sets. A somewhat more refined observation is that $\text{cov}(\mathcal{M})\leq\kappa$, as explained in this MO answer of Andreas Blass. A further refined observation is that $\frak{d}\leq\kappa$, made by Taras Banakh in a comment on Andreas's answer. I'm not sure if this $\kappa$ already has a name or if it is provably equal to one of the well-known cardinal invariants. Meanwhile, Arnie Miller proved that it is consistent with ZFC that $\kappa=\omega_1$, even while CH fails. So the unit interval can be the disjoint union of $\omega_1$ many nonempty closed sets, even when CH fails. This situation will be the key to answering your question. Let's begin with the following observation.

Observation. If $X$ is a $T_1$ path-connected space with at least two points, then $X$ has size at least the closed-partition number $\kappa$.

Proof. If $f:[0,1]\to X$ is a path between two distinct points, then the nonempty sets among the fibers $C_x=\{t\mid f(t)=x\}$ are pairwise disjoint closed sets whose union is $[0,1]$, and there are at least two of them. So $X$ must have size at least $\kappa$. $\Box$

One can now characterize exactly which cofinite spaces are contractible.

Theorem. Suppose that $X$ is a cofinite space with at least two points. Then the following are equivalent.

$X$ is contractible.
$X$ is path connected.
$X$ has size at least $\kappa$, the closed-partition number.

Proof. Clearly every contractible space is path connected. And we proved in the observation that every path-connected $T_1$ space (and the cofinite topology is $T_1$) has size at least $\kappa$. What remains is to prove that every cofinite space $X$ of size at least $\kappa$ is contractible. Fix a closed partition $[0,1]=\sqcup_{i\in I} C_i$, where $I$ has size $\kappa$. Also fix distinct points $x_i\in X$ for $i\in I$, and another distinct point $a\in X$. Define a map $H:X\times[0,1]\to X$ as follows. This is an analogue of the contraction defined in the OP under CH. Namely, let $H(x,0)=x$ and $H(x,1)=a$; and for other values of $t$, let $H(x,t)=x_i$, where $t\in C_i$. I claim that this map is continuous and hence is a contraction of the space. To see that it is continuous, consider any open set in $X$, which is the complement of a finite set in $X$. The preimage of any point $x\in X$ not of the form $x_i$ or $a$ is just the point $(x,0)$, which is closed. The pre-image of $a$ is $X\times\{1\}\cup\{(a,0)\}$, which also is closed. And the pre-image of $x_i$ is $X\times C_i$, plus the point $(x_i,0)$, and this also is a closed set. So the preimage of any closed set in $X$ is a finite union of closed sets and hence is closed. And so the map $H$ is continuous, and therefore $X$ is contractible, as desired. $\Box$

Corollary. If ZFC is consistent, then it is consistent with ZFC that every uncountable cofinite space is contractible, yet CH fails.

Proof.
Miller provides a model where $\kappa=\omega_1$, yet CH fails. In this model, every uncountable cofinite space has size at least $\kappa$, and so all of them are contractible, yet CH fails. $\Box$
Discrete & Continuous Dynamical Systems - B, September 2007, Volume 8, Issue 2

Abstract: We study the variational inequality for a 1-dimensional linear-quadratic control problem with discretionary stopping. We establish the existence of a unique strong solution via stochastic analysis and the viscosity solution technique. Finally, the optimal policy is shown to exist from the optimality conditions.

Abstract: Consider a 2-D, linearized Navier-Stokes channel flow with periodic boundary conditions in the streamwise direction and subject to a wall-normal control on the top wall. There exists an infinite-dimensional subspace $E^0$, where the normal component $v$ of the velocity vector, as well as the vorticity $\omega$, are not influenced by the control. The corresponding control-free dynamics for $v$ and $\omega$ on $E^0$ are inherently exponentially stable, though with limited decay rate. In the case of the linear 2-D channel, the stability margin of the component $v$ on the complementary space $Z$ can be enhanced by a prescribed decay rate, by means of an explicit, 2-D wall-normal controller acting on the top wall, whose space component is subject to algebraic rank conditions. Moreover, its support may be arbitrarily small. Corresponding optimal decays, by the same 2-D wall-normal controller, of the tangential component $u$ of the velocity vector, of the pressure $p$, and of the vorticity $\omega$ over $Z$ are also obtained, to complete the optimal analysis.

Abstract: This paper analyzes the investment-consumption problem of a risk-averse investor in a discrete-time model. We assume that the return of a risky asset depends on the economic environments and that the economic environments are ranked and described using a Markov chain with an absorbing state which represents the bankruptcy state. We formulate the investor's decision as an optimal stochastic control problem. We show that the optimal investment strategy is the same as that in Cheung and Yang [5], and a closed-form expression of the optimal consumption strategy has been obtained. In addition, we investigate the impact of the economic environment regime on the optimal strategy. We employ some tools from stochastic orders to obtain the properties of the optimal strategy.

Abstract: In this paper we study the global stability of two epidemic models by ruling out the presence of periodic orbits, homoclinic orbits and heteroclinic cycles. One model incorporates exponential growth, horizontal transmission, vertical transmission and standard incidence. The other one incorporates constant recruitment, disease-induced death, stage progression and bilinear incidence. For the first model, it is shown that the global dynamics is completely determined by the basic reproduction number $R_0$. If $R_0\leq1$, the disease-free equilibrium is globally asymptotically stable, whereas the unique endemic equilibrium is globally asymptotically stable if $R_0>1$. For the second model, it is shown that the disease-free equilibrium is globally stable if $R_0\leq1$, and the disease is persistent if $R_0>1$. Sufficient conditions for the global stability of an endemic equilibrium of the model are also presented.

Abstract: Recently, Srzednicki and Wójcik developed a method based on the Wazewski Retract Theorem which allows, via the construction of so-called isolating segments, a proof of topological chaos (positivity of topological entropy) for periodically forced ordinary differential equations.
In this paper we show how to arrange isolating segments to prove that a given system exhibits distributional chaos. As an example, we consider the planar differential equation $\dot z=(1+e^{i \kappa t}|z|^2)\bar{z}$ for parameter values $0<\kappa \leq 0.5044$.

Abstract: In this paper, we first give an important interpolation inequality. Secondly, we use this inequality to prove the existence of local and global solutions of an inhomogeneous Schrödinger equation. Thirdly, we construct several invariant sets and prove the existence of blowing-up solutions. Finally, we prove that for any $\omega>0$ the standing wave $e^{i \omega t} \phi (x)$ related to the ground state solution $\phi$ is strongly unstable.

Abstract: In this article we compare the post-processing Galerkin (PPG) method with the reformed PPG method of integrating the two-dimensional Navier-Stokes equations in the case of non-smooth initial data $u_0 \in H^1_0(\Omega)^2$ with div$\,u_0=0$ and $f,~f_t\in L^\infty(R^+;L^2(\Omega)^2)$. We give the global error estimates with $H^1$ and $L^2$-norm for these methods. Moreover, if the data $\nu$ and $\lim_{t \rightarrow \infty}f(t)$ satisfy the uniqueness condition, the global error estimates with $H^1$ and $L^2$-norm are uniform in time $t$. The difference between the PPG method and the reformed PPG method is that their error bounds are of the same forms on the interval $[1,\infty)$ and the reformed PPG method has a better error bound than the PPG method on the interval $[0,1]$.

Abstract: The paper extends investigations of identification problems by shape optimization methods for perfectly conducting inclusions to the case of perfectly insulating material. The Kohn and Vogelius criteria as well as a tracking-type objective are considered for a variational formulation. In the case of problems in dimension two, the necessary condition implies immediately a perfectly matching situation for both formulations. Similar to the perfectly conducting case, the compactness of the shape Hessian is shown and the ill-posedness of the identification problem follows. That is, the second-order quadratic form is no longer coercive. We illustrate the general results by some explicit examples and we present some numerical results.

Abstract: The bifurcation analysis of a generalized predator-prey model depending on all parameters is carried out in this paper. The model, which was first proposed by Hanski et al. [6], has a degenerate saddle of codimension 2 for some parameter values, and a Bogdanov-Takens singularity (focus case) of codimension 3 for some other parameter values. By using normal form theory, we also show that a saddle bifurcation of codimension 2 and a Bogdanov-Takens bifurcation of codimension 3 (focus case) occur as the parameter values change in a small neighborhood of the appropriate parameter values, respectively. Moreover, we provide some numerical simulations using XPPAUT to show that the model has two limit cycles for some parameter values, has one limit cycle which contains three positive equilibria inside for some other parameter values, and has three positive equilibria but no limit cycles for other parameter values.

Abstract: In this paper, a model composed of two Lotka-Volterra patches is considered. The system consists of two competing species $X, Y$ and only species $Y$ can diffuse between patches. It is proved that the system has at most two positive equilibria and then that permanence implies global stability.
Furthermore, to answer the question of whether the refuge is effective in protecting $Y$, the properties of positive equilibria and the dynamics of the system are studied when $X$ is a much stronger competitor.

Abstract: Pulse propagation in randomly perturbed single-mode waveguides is considered. By an asymptotic analysis the pulse front propagation is reduced to an effective equation with diffusion and dispersion. Apart from a random time shift due to a random total travel time, two main phenomena can be distinguished. First, coupling and energy conversion between forward- and backward-propagating modes is responsible for an effective diffusion of the pulse front. This attenuation and spreading is somewhat similar to the one-dimensional case addressed by the O'Doherty-Anstey theory. Second, coupling between the forward-propagating mode and the evanescent modes results in an effective dispersion. In the case of small-scale random fluctuations we show that the second mechanism is dominant.

Abstract: We consider the homogenization of the wave equation with high-frequency initial conditions propagating in a medium with highly oscillatory random coefficients. By appropriate mixing assumptions on the random medium, we obtain an error estimate between the exact wave solution and the homogenized wave solution in the energy norm. This allows us to consider the limiting behavior of the energy density of high-frequency waves propagating in highly heterogeneous media when the wavelength is much larger than the correlation length in the medium.

Abstract: We consider sequences $U^\epsilon$ in $W^{1,m}(\Omega;\mathbb R^n)$, where $\Omega$ is a bounded connected open subset of $\mathbb R^n$, $2\leq m\leq n$. The classical result on convergence in distribution of any null Lagrangian states, in particular, that if $U^\epsilon$ converges weakly in $W^{1,m}(\Omega)$ to $U$, then $\det(DU^\epsilon)$ converges to $\det(DU)$ in $\mathcal D'(\Omega)$. We prove convergence in distribution under weaker assumptions. We assume that the gradient of one of the coordinates of $U^\epsilon$ is bounded in the weighted space $L^2(\Omega,A_\epsilon(x)dx;\mathbb R^n)$, where $A_\epsilon$ is a non-equicoercive sequence of symmetric positive definite matrix-valued functions, while the other coordinates are bounded in $W^{1,m}(\Omega)$. Then, any $m$-homogeneous minor of the Jacobian matrix of $U^\epsilon$ converges in distribution to a generalized minor provided that $|A_\epsilon^{-1}|^{n/2}$ converges to a Radon measure which does not load any point of $\Omega$. A counter-example shows that this latter condition cannot be removed. As a by-product we derive improved div-curl results in any dimension $n\geq 2$.

Abstract: We consider the realization of the operator $L_{\theta, a}u(x) := x^{2 a}u''(x) + (a x^{2 a - 1} + \theta x^a)u'(x)$, acting on $C[0,+\infty]$, for $\theta\in\mathbb R$, $a\in\mathbb R$. We show that $L_{\theta, a}$, with the so-called Wentzell boundary conditions, generates a Feller semigroup for any $\theta\in\mathbb R$, $a\in\mathbb R$. The problem of finding optimal estimators for the corresponding diffusion processes is also discussed, in connection with some models in financial mathematics. Here $C[0,+\infty]$ is the space of all real-valued continuous functions on $[0,+\infty)$ which admit a finite limit at $+\infty$.
Some broadly applicable background might be in order, since I remember this aspect of quantum mechanics not being stressed enough in most courses. [What follows is very good to know, and very broadly applicable, but may be considered overkill for this particular problem. Caveat lector.] What the OP lays out is exactly the motivation for finding how an initial wavefunction can be written as a sum of eigenfunctions of the Hamiltonian - if only we could have that representation, the Schrödinger equation plus linearity would get us the wavefunction for all time. As Siva alludes to, this amounts to finding how a vector (our wavefunction) looks in a particular basis (the set of eigenfunctions of any Hermitian operator is guaranteed to be a basis). In general, one does this by taking inner products with the basis vectors, and the reasoning is as follows. We know the set of vectors $\{\lvert \psi_E \rangle\}$ (yes, I'm using Dirac notation here - it's a good thing to get used to), where $E$ is an index ranging over (possibly discrete, possibly continuous) energies, forms a basis for the space of all wavefunctions. Therefore, there must be complex numbers $c_E$ such that $$ \lvert \psi \rangle = \sum_E c_E \lvert \psi_E \rangle, $$ where $\lvert \psi \rangle$ is our initial wavefunction. If there are infinitely many energies, the sum has infinitely many terms. If there is a continuum of energies, it is an integral. 1 Now the problem is clearly one of finding the coefficients $c_E$. To do that, we take inner products with the basis vectors, one by one, where presumably our energy basis is orthonormal. Pick a generic, unspecified basis element $\lvert \psi_{E'} \rangle$. Then we have $$ \langle \psi_{E'} \vert \psi \rangle = \sum_E c_E \langle \psi_{E'} \vert \psi_E \rangle = \sum_E c_E \delta_{E'E} = c_{E'}. $$ Whether the delta function is of the Kronecker or Dirac variety depends on whether the "sum" is a sum or an integral. Here then we have our formula for the coefficients, which reads (after removing the primes) $$ c_E = \langle \psi_E \vert \psi \rangle. $$ How does one go about computing this? At this point, it is okay to switch out of abstract vector notation and go into the position basis. We can do this with the somewhat cryptic yet awesome-sounding spectral resolution of the identity in, say, the position basis: $$ c_E = \langle \psi_E \vert I \vert \psi \rangle = \int_{-\infty}^\infty \langle \psi_E \vert x \rangle \langle x \vert \psi \rangle \ \mathrm{d}x. $$ Here $\langle x \vert \psi \rangle \equiv \psi(x)$ is just your wavefunction, expressed in more familiar terms. 2 Furthermore, as you have hopefully been told, the correct inner product at play here introduces a complex conjugation if you switch the ordering, so $$ \langle \psi_E \vert x \rangle = \langle x \vert \psi_E \rangle^* \equiv \psi_E^*(x). $$ You now have enough to evaluate the coefficients $c_E$ for any initial problem given any orthonormal basis arising from a Hamiltonian. Given the free-particle form of $\psi_E(x)$, you can see that this process will essentially be a Fourier transform, so if you keep your wits about you, you don't even need to do any messy integrals at all. Furthermore, depending on what is ultimately desired, the position basis may not be the most suitable basis for this problem, but doing a few problems the hard way builds character if nothing else.
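As a concrete numerical companion to the formula $c_E = \langle \psi_E \vert \psi \rangle$, here is a small sketch I'm adding; it assumes an infinite-square-well eigenbasis rather than the free particle, purely because the discrete sums are easy to check, and all names in it are mine.

import numpy as np

# Sketch: expand an initial wavefunction psi(x) in the energy eigenbasis of
# an infinite square well on [0, L], where psi_n(x) = sqrt(2/L) sin(n pi x / L).
L = 1.0
x = np.linspace(0.0, L, 2001)

def eigenstate(n, x):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# A normalized initial state, e.g. a narrow Gaussian centered at 0.3 L.
psi = np.exp(-((x - 0.3 * L) ** 2) / (2 * 0.05 ** 2))
psi /= np.sqrt(np.trapz(np.abs(psi) ** 2, x))

# c_n = <psi_n | psi> = integral of psi_n*(x) psi(x) dx  (the basis is real here).
c = np.array([np.trapz(eigenstate(n, x) * psi, x) for n in range(1, 51)])

print(np.sum(np.abs(c) ** 2))  # approaches 1 as more terms are kept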
1 Math aside: Countable infinities are not a big deal, since one of the assumptions of quantum mechanics is that our vector space isn't just a fancy inner product space, but also a really fancy Hilbert space. Then well-behaved linear combinations of wavefunctions, even countably infinitely many, will converge to perfectly well-defined wavefunctions. Justifying the integral is trickier, but it can be done. 2 Yes, this is the connection between Dirac notation and the traditional "probability density as a function of space" notation students often learn first. Abstract kets become functions of position only when "bra-ed" with a generic position basis element.
I’ve added a new library to Incanter called incanter.latex that adds the ability to include LaTeX formatted equations as annotations and subtitles in charts. The library is based on the fantastically useful JLaTeXMath library. The following examples require Incanter version 1.2.2-SNAPSHOT or greater. Add the following dependency to your project.clj file:

[incanter "1.2.2-SNAPSHOT"]

Load the necessary libraries.

(use '(incanter core stats charts latex))

Define the latex-formatted equation; I’ll use the str function so I can break the equation across multiple lines. Notice that I have to use two backslashes where I would only need one if I were working directly in LaTeX; this is because the backslash is an escape character in Clojure/Java strings.

(def eq (str "f(x)=\\frac{1}{\\sqrt{2\\pi \\sigma^2}}" "e^{\\frac{-(x - \\mu)^2}{2 \\sigma^2}}"))

The equation can be rendered as an image with the latex function. The rendered equation can then be viewed in a window or saved as a png file with the view and save functions respectively.

(view (latex eq))
(save (latex eq) filename)

Use the add-latex function to add an annotation to a chart. The following example adds the above equation to a function-plot of the Normal PDF.

(doto (function-plot pdf-normal -3 3) (add-latex 0 0.1 eq) view)

Use the add-latex-subtitle function to add a rendered LaTeX equation as a subtitle to the chart (this particular chart does not have a main title).

(doto (function-plot pdf-normal -3 3) (add-latex-subtitle eq) view)

The complete code for the above examples can be found here.
According to the documentation, Matlab estimates the parameters of the Beta distribution by maximum likelihood. In this case, I am afraid that the sampling distributions of the estimators are not available. Let $\{X_i\}_{i=1}^n$ be a set of i.i.d. observations from a standard Beta distribution $B(\alpha,\beta)$; obtaining the MLEs $\hat{\alpha}$ and $\hat{\beta}$ requires one to solve the simultaneous equations \begin{align}\psi(\hat{\alpha})-\psi(\hat{\alpha}+\hat{\beta})&=\frac{1}{n}\sum_{i=1}^n\ln (X_i),\\\psi(\hat{\beta})-\psi(\hat{\alpha}+\hat{\beta})&=\frac{1}{n}\sum_{i=1}^n\ln (1-X_i),\end{align} where $\psi$ denotes the digamma function (see e.g. Gnanadesikan, Pinkham, and Hughes' Maximum Likelihood Estimation of the Parameters of the Beta Distribution from Smallest Order Statistics for more details). As far as I know, no general analytical solutions are available, let alone the exact distributions of $\hat{\alpha}$ and $\hat{\beta}$. It is possible to solve them in special cases, however, e.g. when $\beta$ is known to be exactly $1$; by properties of the digamma function the first equation can then be reduced to $$\hat{\alpha}=-\frac{n}{\sum_{i=1}^n\ln (X_i)},$$ where one can derive that $-\sum_{i=1}^n\ln (X_i)\sim\varGamma(n,1/\alpha)$, which implies that $\hat{\alpha}$ has a scaled inverse-gamma distribution. Other than that, one has to resort to the asymptotic normality of MLEs to make inferences on $\hat{\alpha}$ and $\hat{\beta}$; see also e.g. Lau and Lau's Effective procedures for estimating beta distribution's parameters and their confidence intervals for a procedure using the Pearson distribution. In addition, one may utilize the normality to derive confidence intervals for the mean, since $$\hat{\mu}=\frac{\hat{\alpha}}{\hat{\alpha}+\hat{\beta}}$$ will have a correlated normal ratio distribution if both $\hat{\alpha}$ and $\hat{\beta}$ are normal. For your second question, I take it that logs of the estimates are used most likely to cope with the problem of the parameter space. Note that both $\alpha$ and $\beta$ must be in $(0,\infty)$. But if your $\hat{\alpha}=1$ and you use the normal approximation directly, you may end up with a confidence interval such as $(-0.5,2.5)$. You can get around this by mapping to the log-scale and then back, though I am not sure how accurate the result will be then. As an interesting side note, the book (Hahn and Shapiro's Statistical Models in Engineering) cited in the documentation actually talked about the method of moments estimate instead of the MLE; it is not clear to me why it is cited here.
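To make the digamma system concrete, here is a numerical sketch (my addition, not from the Matlab documentation), solving the two score equations with SciPy and comparing against scipy.stats.beta.fit:

import numpy as np
from scipy.special import digamma
from scipy.optimize import fsolve
from scipy.stats import beta

# Sketch: solve the two digamma equations numerically for the Beta MLEs.
rng = np.random.default_rng(0)
x = beta.rvs(2.0, 5.0, size=500, random_state=rng)
m1, m2 = np.mean(np.log(x)), np.mean(np.log1p(-x))   # mean log X, mean log(1-X)

def score(params):
    a, b = params
    return [digamma(a) - digamma(a + b) - m1,
            digamma(b) - digamma(a + b) - m2]

a_hat, b_hat = fsolve(score, x0=[1.0, 1.0])
print(a_hat, b_hat)
print(beta.fit(x, floc=0, fscale=1)[:2])   # scipy's MLE for comparison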
This question already has an answer here: Flaw in my NP = CoNP Proof? I am working through the book "Introduction to the Theory of Computation", 3rd edition, by M. Sipser. On page 294, the book states: A problem is in NP iff it is decided by some non-deterministic, polynomial-time Turing machine. I get that it is decidable by some NDPTTM if the problem is in NP; the other way around I do not quite get. This is because I feel like all problems in $CoNP-NP$ can also be decided by some non-deterministic polynomial-time Turing machine. I have the following solution in mind: Let's assume $P \neq NP \land NP \neq CoNP$. Take a problem $A \in NP - CoNP$. Now there is a problem $\overline{A}$, which is in $CoNP$. Let's say NDTM $N$ decides $A$ in polynomial time. Now we modify $N$ by replacing every reject with accept and every accept with reject. We now have a polynomial NDTM $N'$ which decides $\overline{A}$ in poly time. According to the theorem proposed by the book, this should imply that $\overline{A}$ is in $NP$, but it is not. Am I missing something here? Edit: I have seen the possible duplicates and I learned a lot: I see now that $N'$ does not necessarily decide $\overline{A}$. I do still believe my question is subtly different, so here I go: Take a problem $B \in CoNP-NP$. There is an NDTM which decides $B$ in poly time, right? So, according to the theorem, $B$ should be in NP. I firmly believe the book is correct, but does that mean that $B$ cannot be decided by any NDTM in poly time?
Basically 2 strings, $a>b$, which go into the first box and do division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise inputs $b,r$ into the division box.

There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal citing the names of various lecturers at the university.

Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite-dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite-dimensional unitary representation?

Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.

Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P

Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?

Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?

Well, assuming that the paper is all correct (or at least correct to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real-world application' affect whether people would be interested in the contents of the paper?"

@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect?

It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.

It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.

You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of the coordinate system.

@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions.
I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
Traveling Salesman Problem and Approximation Algorithms

One of my research interests is a graphical model structure learning problem in multivariate statistics. I have recently been studying and trying to borrow ideas from approximation algorithms, a research field that tackles difficult combinatorial optimization problems. This post gives a brief introduction to two approximation algorithms for the (metric) traveling salesman problem: the double-tree algorithm and Christofides’ algorithm. The materials are mainly based on §2.4 of Williamson and Shmoys (2011).

1. Approximation algorithms

In combinatorial optimization, most interesting problems are NP-hard and do not have polynomial-time algorithms to find optimal solutions (yet?). Approximation algorithms are efficient algorithms that find approximate solutions to such problems. Moreover, they give provable guarantees on the distance of the approximate solution to the optimal ones. We assume that there is an objective function associated with an optimization problem. An optimal solution to the problem is one that minimizes the value of this objective function. The value of the optimal solution is often denoted by \(\mathrm{OPT}\). An \(\alpha\)-approximation algorithm for an optimization problem is a polynomial-time algorithm that for all instances of the problem produces a solution whose value is within a factor of \(\alpha\) of \(\mathrm{OPT}\), the value of an optimal solution. The factor \(\alpha\) is called the approximation ratio.

2. Traveling salesman problem

The traveling salesman problem (TSP) is NP-hard and one of the most well-studied combinatorial optimization problems. It has broad applications in logistics, planning, and DNA sequencing. In plain words, the TSP asks the following question: Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city? Formally, for a set of cities \([n] = \{1, 2, \ldots, n \}\), we are given an \(n\)-by-\(n\) matrix \(C = (c_{ij})\), where \(c_{ij} \geq 0 \) specifies the cost of traveling from city \(i\) to city \(j\). By convention, we assume \(c_{ii} = 0\) and \(c_{ij} = c_{ji}\), meaning that the cost of traveling from city \(i\) to city \(j\) is equal to the cost of traveling from city \(j\) to city \(i\). Furthermore, we only consider the metric TSP in this article; that is, the triangle inequality holds for any \(i,j,k\): $$c_{ik} \leq c_{ij} + c_{jk}, \quad \forall i, j, k \in [n].$$ Given a permutation \(\sigma\) of \([n]\), a tour traverses the cities in the order \(\sigma(1), \sigma(2), \ldots, \sigma(n)\). The goal is to find a tour with the lowest cost, which is equal to $$ c_{\sigma(n) \sigma(1)} + \sum_{i=1}^{n-1} c_{\sigma(i) \sigma(i+1)}. $$

3. Double-tree algorithm

We first describe a simple algorithm called the double-tree algorithm and prove that it is a 2-approximation algorithm.

Double-tree algorithm
1. Find a minimum spanning tree \(T\).
2. Duplicate the edges of \(T\).
3. Find an Eulerian tour.
4. Shortcut the Eulerian tour.

Figure 1 shows the algorithm on a simple five-city instance. We give a step-by-step explanation of the algorithm. A spanning tree of an undirected graph is a subgraph that is a tree and includes all of the nodes. A minimum spanning tree of a weighted graph is a spanning tree for which the total edge cost is minimized. There are several polynomial-time algorithms for finding a minimum spanning tree, e.g., Prim’s algorithm, Kruskal’s algorithm, and the reverse-delete algorithm.
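Before walking through the figure, here is a compact sketch of the whole pipeline (my illustration, not from Williamson and Shmoys), using networkx and assuming `dist` is a symmetric matrix satisfying the triangle inequality; the shortcutting step is discussed in detail below.

import networkx as nx

# Sketch of the double-tree algorithm on a metric distance matrix.
def double_tree_tour(dist):
    n = len(dist)
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=dist[i][j])
    T = nx.minimum_spanning_tree(G)            # step 1
    D = nx.MultiGraph()
    D.add_edges_from(T.edges())                # step 2: duplicate every edge,
    D.add_edges_from(T.edges())                # so all degrees become even
    euler = nx.eulerian_circuit(D, source=0)   # step 3
    tour, seen = [], set()
    for u, _ in euler:                         # step 4: skip repeated nodes
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour + [tour[0]]

dist = [[0, 2, 3, 4], [2, 0, 4, 5], [3, 4, 0, 2], [4, 5, 2, 0]]
print(double_tree_tour(dist))

Recent versions of networkx also ship a ready-made Christofides implementation (nx.approximation.christofides), the algorithm discussed in Section 4.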
Figure 1a shows a minimum spanning tree \(T\). There is an important relationship between the minimum spanning tree problem and the traveling salesman problem.

Lemma 1. For any input to the traveling salesman problem, the cost of the optimal tour is at least the cost of the minimum spanning tree on the same input.

The proof is simple. Deleting any edge from the optimal tour results in a spanning tree, the cost of which is at least the cost of the minimum spanning tree. Therefore, the cost of the minimum spanning tree \(T\) in Figure 1a is at most \( \mathrm{OPT}\). Next, each edge in the minimum spanning tree is replaced by two copies of itself, as shown in Figure 1b. The resulting (multi)graph is Eulerian. A graph is said to be Eulerian if there exists a tour that visits every edge exactly once. A graph is Eulerian if and only if it is connected and each node has an even degree. Given an Eulerian graph, it is easy to construct a traversal of the edges. For example, a possible Eulerian tour in Figure 1b is 1–3–2–3–4–5–4–3–1. Moreover, since the edges are duplicated from the minimum spanning tree, the Eulerian tour has cost at most \(2 \, \mathrm{OPT}\). Finally, given the Eulerian tour, we remove all but the first occurrence of each node in the sequence; this step is called shortcutting. By the triangle inequality, the cost of the shortcut tour is at most the cost of the Eulerian tour, which is not greater than \(2 \, \mathrm{OPT}\). In Figure 1c, the shortcut tour is 1–3–2–4–5–1. When going from node 2 to node 4 by omitting node 3, we have \(c_{24} \leq c_{23} + c_{34}\). Similarly, when skipping nodes 4 and 3, \(c_{51} \leq c_{54} + c_{43} + c_{31}\). Therefore, we have analyzed the approximation ratio of the double-tree algorithm.

Theorem 1. The double-tree algorithm for the metric traveling salesman problem is a 2-approximation algorithm.

4. Christofides’ algorithm

The basic strategy of the double-tree algorithm is to construct an Eulerian tour whose total cost is at most \(\alpha \, \mathrm{OPT}\), then shortcut it to get an \(\alpha\)-approximation solution. The same strategy can be carried out to yield a \(3/2\)-approximation algorithm.

Christofides’ algorithm
1. Find a minimum spanning tree \(T\).
2. Let \(O\) be the set of nodes with odd degree in \(T\).
3. Find a minimum-cost perfect matching \(M\) on \(O\).
4. Add the set of edges of \(M\) to \(T\).
5. Find an Eulerian tour.
6. Shortcut the Eulerian tour.

Figure 2 illustrates the algorithm on a simple five-city instance of TSP. The algorithm starts again with the minimum spanning tree \(T\). The reason we cannot directly find an Eulerian tour is that its leaf nodes all have degree one. However, by the handshaking lemma, there is an even number of odd-degree nodes. If these nodes can be paired up, then it becomes an Eulerian graph and we can proceed as before. Let \(O\) be the set of odd-degree nodes in \(T\). To pair them up, we want to find a collection of edges that contain each node in \(O\) exactly once. This is called a perfect matching in graph theory. Given a complete graph (on an even number of nodes) with edge costs, there is a polynomial-time algorithm to find the perfect matching of minimum total cost, known as the blossom algorithm. For the minimum spanning tree \(T\) in Figure 2a, \( O = \{1, 2, 3, 5\}\). The minimum-cost perfect matching \(M\) on the complete graph induced by \(O\) is shown in Figure 2b.
Adding the edges of \(M\) to \(T\), the result is an Eulerian graph, since we have added a new edge incident to each odd-degree node in \(T\). The remaining steps are the same as in the double-tree algorithm. We want to show that the Eulerian graph has total cost at most \(\tfrac{3}{2}\,\mathrm{OPT}\). Since the total cost of the minimum spanning tree \(T\) is at most \(\mathrm{OPT}\), we only need to show that the perfect matching \(M\) has cost at most \(\tfrac{1}{2}\,\mathrm{OPT}\). We start with the optimal tour on the entire set of cities, the cost of which is \(\mathrm{OPT}\) by definition. Figure 3a presents a simplified illustration of the optimal tour; the solid circles represent nodes in \(O\). By omitting the nodes that are not in \(O\) from the optimal tour, we get a tour on \(O\), as shown in Figure 3b. By the shortcutting argument again, the total cost of the tour on \(O\) is at most \(\mathrm{OPT}\). Next, color the edges yellow and green, alternating colors as the tour is traversed, as illustrated in Figure 3c. This partitions the edges into two sets: the yellow set and the green set; each is a perfect matching on \(O\). Since the total cost of the two matchings is at most \(\mathrm{OPT}\), the cheaper one has cost at most \(\tfrac{1}{2}\,\mathrm{OPT}\). In other words, there exists a perfect matching on \(O\) of cost at most \(\tfrac{1}{2}\,\mathrm{OPT}\). Therefore, the minimum-cost perfect matching must have cost not greater than \(\tfrac{1}{2}\,\mathrm{OPT}\). This completes the proof of the following theorem.

Theorem 2. Christofides’ algorithm for the metric traveling salesman problem is a \(3/2\)-approximation algorithm.

References

Williamson, D. P., & Shmoys, D. B. (2011). The Design of Approximation Algorithms. Cambridge University Press.
Defining parameters

Level: \( N \) = \( 8 = 2^{3} \)
Weight: \( k \) = \( 5 \)
Nonzero newspaces: \( 1 \)
Newforms: \( 2 \)
Sturm bound: \( 20 \)
Trace bound: \( 0 \)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{5}(\Gamma_1(8))\).

                     Total   New   Old
Modular forms          11     5     6
Cusp forms              5     3     2
Eisenstein series       6     2     4

Decomposition of \(S_{5}^{\mathrm{new}}(\Gamma_1(8))\)

We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.

Label    \(\chi\)                 Newforms   Dimension   \(\chi\) degree
8.5.c    \(\chi_{8}(7, \cdot)\)   None       0           1
8.5.d    \(\chi_{8}(3, \cdot)\)   8.5.d.a    1           1
                                  8.5.d.b    2
With only pen and paper, is it possible to calculate the ideal Q-factor for e.g. a dipole antenna, loop etc., given a frequency?

There are at least two definitions of Q factor. One is the reciprocal of the fractional bandwidth: $$ Q = {f_r \over \Delta f} $$ where $ f_r $ is the resonant frequency (or center of the antenna's bandwidth) and $ \Delta f $ is the difference between the upper and lower frequencies of the bandwidth. By this definition, it's easy to calculate with pen and paper if you are allowed some empirical data. You might define "bandwidth" as the range of frequencies where the VSWR is better than 2:1 and empirically measure this bandwidth. Then calculating the Q factor is some simple arithmetic. The other definition of Q is the ratio of energy stored to energy dissipated per cycle: $$ Q = 2 \pi {\text{energy stored} \over \text{energy dissipated per cycle}} $$ This is in theory calculable, but it's complex enough that I don't think any normal person would attempt the calculation with pen and paper.

This is all about definition. First, define what is acceptable as the maximum VSWR for your particular situation, at a given impedance for a given antenna. Once you have that, you can measure the lowest frequency (call this f1) at which the maximum VSWR is reached on that antenna, and then measure the highest frequency (call this f2) at which the maximum VSWR is reached. Now your Q can be calculated as $$ f_c = \frac{f_1+f_2}{2} \\ Q = \frac{f_c}{f_2-f_1} $$
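To make the arithmetic concrete, a tiny worked example of my own (the numbers are hypothetical, not from either answer):

# A 2:1-VSWR bandwidth measured from f1 = 14.00 MHz to f2 = 14.35 MHz.
f1, f2 = 14.00e6, 14.35e6
fc = (f1 + f2) / 2          # center frequency, 14.175 MHz
Q = fc / (f2 - f1)          # Q ~ 40.5
print(fc, Q)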
TIFR 2013 Problem 24 Solution is a part of the TIFR entrance preparation series. The Tata Institute of Fundamental Research is India’s premier institution for advanced research in Mathematics. The Institute runs a graduate programme leading to the award of Ph.D., Integrated M.Sc.-Ph.D. as well as M.Sc. degrees in certain subjects. In general, the TIFR entrance exam takes place during the month of December.

Problem: True/False: There exists a continuous surjective function from \(S^1 \) onto \(\mathbb{R}\).

Hint: Search for topological invariants.

Discussion: We know that the continuous image of a compact set is compact. \(S^1\) is a subset of \(\mathbb{R}^2\), and in \(\mathbb{R}^2\) a set is compact if and only if it is closed and bounded. By definition, every element of \(S^1\) has unit modulus, so it is bounded. Let’s say \(z_n\to z\) as \(n\to \infty \), where \((z_n)\) is a sequence in \(S^1\). Since the modulus is a continuous function, \(|z_n| \to |z| \); the sequence \((|z_n|)\) is simply the constant sequence \(1,1,1,\ldots \), hence \(|z|=1\). What does the above discussion mean? Well, it means that if \(z\) is a limit point (or even a point of closure) of \(S^1\), then \(z\in S^1\). Therefore, \(S^1\) is closed. The immediate consequence is that the given statement is False, because \(\mathbb{R}\) is not compact, \(S^1\) is compact, and the continuous image of a compact set has to be compact.

Helpdesk
What is this topic: Real Analysis
What are some of the associated concepts: compact set, continuous image of a compact set, continuous function, limit point, closed and bounded, sequence
Book suggestion: Introduction to Real Analysis by R.G. Bartle and D.R. Sherbert
We study space complexity in the framework of propositional proofs. We consider a natural model analogous to Turing machines with a read-only input tape, and such popular propositional proof systems as Resolution, Polynomial Calculus and Frege systems. We propose two different space measures, corresponding to the maximal number of bits, ...

We call a pseudorandom generator $G_n:\{0,1\}^n\to \{0,1\}^m$ {\em hard} for a propositional proof system $P$ if $P$ cannot efficiently prove the (properly encoded) statement $G_n(x_1,\ldots,x_n)\neq b$ for {\em any} string $b\in\{0,1\}^m$. We consider a variety of ``combinatorial'' pseudorandom generators inspired by the Nisan-Wigderson generator on the one hand, and ...

Recently, Raz established exponential lower bounds on the size of resolution proofs of the weak pigeonhole principle. We give another proof of this result which leads to better numerical bounds. Specifically, we show that every resolution proof of $PHP^m_n$ must have size $\exp(\Omega(n/\log m)^{1/2})$, which implies an $\exp(\Omega(n^{1/3}))$ bound when ...

This paper gives two distinct proofs of an exponential separation between regular resolution and unrestricted resolution. The previous best known separation between these systems was quasi-polynomial.

We show that every resolution proof of the {\em functional} version $FPHP^m_n$ of the pigeonhole principle (in which one pigeon may not split between several holes) must have size $\exp\left(\Omega\left(\frac n{(\log m)^2}\right)\right)$. This implies an $\exp(\Omega(n^{1/3}))$ bound when the number of pigeons $m$ is arbitrary.

Having good algorithms to verify tautologies as efficiently as possible is of prime interest in different fields of computer science. In this paper we present an algorithm for finding Resolution refutations based on finding tree-like Res(k) refutations. The algorithm is based on the one of Beame and Pitassi \cite{BP96} ...

We study the Chv\'atal rank of polytopes as a complexity measure of unsatisfiable sets of clauses. Our first result establishes a connection between the Chv\'atal rank and the minimum refutation length in the cutting planes proof system. The result implies that length lower bounds for cutting planes, or even for ...

We continue a study initiated by Krajicek of a Resolution-like proof system working with clauses of linear inequalities, R(CP). For all proof systems of this kind Krajicek proved an exponential lower bound that depends on the maximal absolute value of coefficients in the given proof and the maximal clause width.

The matrix cuts of Lov{\'{a}}sz and Schrijver are methods for tightening linear relaxations of zero-one programs by the addition of new linear inequalities. We address the question of how many new inequalities are necessary to approximate certain combinatorial problems with strong guarantees, and to solve certain instances of Boolean satisfiability. ...

We show that tree-like OBDD proofs of unsatisfiability require an exponential increase ($s \mapsto 2^{s^{\Omega(1)}}$) in proof size to simulate unrestricted resolution, and that unrestricted OBDD proofs of unsatisfiability require an almost-exponential increase ($s \mapsto 2^{ 2^{\left( \log s \right)^{\Omega(1)}}}$) in proof size to simulate Res($O(\log n)$). The ``OBDD proof ...

A number of works have looked at the relationship between length and space of resolution proofs.
A notorious question has been whether the existence of a short proof implies the existence of a proof that can be verified using limited space. In this paper we resolve the question by answering ...

We show that the asymptotic complexity of uniformly generated (expressible in First-Order (FO) logic) propositional tautologies for the Nullstellensatz proof system (NS) as well as for Polynomial Calculus (PC) has four distinct types of asymptotic behavior over fields of finite characteristic. More precisely, based on some highly non-trivial work by ...

We associate a CNF formula to every instance of the mean-payoff game problem in such a way that if the value of the game is non-negative the formula is satisfiable, and if the value of the game is negative the formula has a polynomial-size refutation in $\Sigma_2$-Frege (i.e. DNF-resolution). This reduces mean-payoff ...

A propositional proof system based on ordered binary decision diagrams (OBDDs) was introduced by Atserias et al. Krajicek proved exponential lower bounds for a strong variant of this system using feasible interpolation, and Tveretina et al. proved exponential lower bounds for restricted versions of this system for refuting formulas derived ...

Propositional proof complexity is an area of complexity theory that addresses the question of whether the class NP is closed under complement, and also provides a theoretical framework for studying practical applications such as SAT solving. Some of the most well-studied contradictions are random $k$-CNF formulas where each clause of ...
The basic trick is to note that if $\sigma\in S_n$, the complete permutation group on $n$ elements, and $\sigma^2=1$, then there is a fixed point of $\sigma$. That's only true if $n$ is odd. So this means that in the definition $$\det A = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) a_{1\sigma(1)}\cdots a_{n\sigma(n)}$$ the terms with $\sigma^2=1$ contribute zero, since $a_{ii}=0$, and the term from any other $\sigma$ can be paired with the term from $\sigma^{-1}$. Those terms are equal by the symmetry of the matrix and the sign of a permutation being equal to the sign of its inverse. (Technically, you don't need to know the signs are the same, since if they are different, they still contribute an even number together, namely $0$.)
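A quick numerical sanity check of the parity argument (my addition; the matrix entries are arbitrary):

import numpy as np

# For odd n, the determinant of a symmetric integer matrix with zero
# diagonal should be even, matching the pairing argument above.
rng = np.random.default_rng(0)
n = 5
for _ in range(5):
    A = rng.integers(-5, 6, (n, n))
    A = A + A.T                     # make it symmetric
    np.fill_diagonal(A, 0)          # zero diagonal
    d = round(np.linalg.det(A))
    print(d, d % 2 == 0)            # always prints True in the second column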
I would like to know if my proof of the theorem below is correct. Theorem. (a) If $A$ is a set with $m$ elements and $B$ is a set with $n$ elements, and if ${A}\cap{B}=\emptyset$, then ${A}\cup{B}$ has $m+n$ elements. (b) If $A$ is a set with $m\in\mathbb{N}$ elements and ${C}\subseteq{A}$ is a set with $1$ element, then ${A}\setminus{C}$ is a set with $m-1$ elements. (c) If $C$ is an infinite set and $B$ is a finite set, then ${C}\setminus{B}$ is an infinite set. Proof. (a) Let $f$ be a bijection from $N_{m}:=\{1,2,\ldots,m\}$ onto $A$ and $g$ be a bijection from $N_{n}$ onto $B$. We define a new function $h$ from $N_{m+n}$ onto ${A}\cup{B}$ such that $$h(i)=\begin{cases} f(i) & i=1,2,\ldots,m\\ g(i-m) & i=m+1,\ldots,m+n \end{cases}$$ Further, let us prove that $h$ is a bijection. (i) $h$ is injective. Firstly, $f$ and $g$ are injections. Let $x_{1},x_{2}$ be two distinct elements in the set $N_{m+n}$. We always have ${f(x_{1})\neq{f(x_{2})}}$ if $x_{1}\neq{x_{2}}$. Similarly, ${g(x_{1})\neq{g(x_{2})}}$ if $x_{1}\neq{x_{2}}$. Since ${A}\cap{B}=\emptyset$, we always have $f(x_{1})\neq{g(x_{2})}$. Thus, $h(x_{1})\neq{h(x_{2})}$ if $x_{1}\neq{x_{2}}$. (ii) $h$ is a surjection. Clearly, for any $y\in({A}\cup{B})$, there exists at least one $x\in{N_{m+n}}$ such that $h(x)=y$. Thus, $h$ is a bijection. (b) Suppose $y_{k}\in{C}$ and $f(k)=y_{k}$. We define a new function $h$ from $N_{m-1}$ onto $A\setminus{C}$ as follows: $$h(i)=\begin{cases} f(i) & i=1,2,\ldots,k-1\\ f(i+1) & i=k,\ldots,m-1 \end{cases}$$ We can easily prove that $h$ is a bijection, and consequently $A\setminus{C}$ contains $m-1$ elements. (c) Intuitively, I know that the set $\mathbb{N}$ of natural numbers is countably infinite, and if I define $N_{m}:=\{1,2,3,\ldots,m\}$ and remove a finite number $m$ of elements from an infinite set, as in $\mathbb{N}\setminus N_{m}$, we still have an infinite set. I am not sure how to write a proof for this in mathematical language.
Hint $\ {\rm mod}\ 7\!:\ \color{#c00}{5\equiv -2}\,\Rightarrow\, \dfrac{3\cdot 2}{\color{#c00}5}\equiv \dfrac{3\cdot 2}{\color{#c00}{\,-2}}\equiv -3\equiv 4 $ Beware $ $ While, as usual, fractions may serve to greatly simplify arithmetic, their modular analog increases the probability of division exceptions, since the modular analog of "you can't divide by $0$" is "you can't divide by a zero-divisor", i.e. divisors and denominators $d$ must be coprime to the modulus $m$ ($\!\iff\! d$ is invertible mod $m$) for the quotient to be well-defined. Then the fraction $\,c/d \equiv cd^{-1}$ and the usual grade-school arithmetic holds for such fraction expressions, e.g. the addition law $\,a/b+c/d = (ad+bc)/(bd)\,$ holds because $$\,(ad+bc)\color{#0a0}{(bd)^{-1}}\equiv ad\,\color{#0a0}{d^{-1}b^{-1}}+ \color{#0a0}{d^{-1}b^{-1}}bc\equiv ab^{-1}+cd^{-1} $$ Recall that by Euclid: $\,b,d\,$ coprime to $m$ $\iff$ $bd$ coprime to $m$; indeed $\,\color{#0a0}{(bd)^{-1}\! \equiv d^{-1}b^{-1}}$, so the fractions in the addition rule are all well-defined, i.e. have denominators coprime to $m.\,$ It is essential to restrict to such fractions, else one can deduce contradictions such as the following: ${\rm mod}\ 10\!:\,$ $\,0\equiv 0/2\equiv 10/2\equiv 5.\,$ Let's examine more closely what goes wrong here. Generally a fraction $\,x \equiv a/b\,$ with noninvertible denominator (not coprime to the modulus) is not well-defined because the equation $\,b x \equiv a\,$ does not have a unique solution: there may be no solutions, or there may be more than one. For example, mod $10$, $\,4x\equiv 2\,$ has solutions $\,x\equiv 3,8,\,$ so the "fraction" $\,x \equiv 2/4\pmod{10}\,$ cannot designate a unique solution of $\,4x\equiv 2.\,$ Indeed, the solution is $\,x\equiv 1/2\equiv 3\pmod 5,\,$ which requires canceling $\,2\,$ from the modulus too, because $\,10\mid 4x-2\iff 5\mid 2x-1.\,$ An unsolvable example is the fraction $\,x \equiv 1/4,\,$ since $\,10\mid 4x-1\,\Rightarrow\, 10n = 4x-1\,$ for some integer $n$, $\Rightarrow$ $\,4x-10n = 1,\,$ where the left side is even but the right side is odd, a contradiction. See here for further discussion, including the use of multi-valued modular fractions in the extended Euclidean algorithm. Ring-theoretically, this may be viewed as a generalization of the fact that division by zero is not well-defined, i.e. division by a $\rm\color{#c00}{zero\!-\!divisor}$ is not well-defined (in a nontrivial ring), since if $\,\color{#c00}{bc=0,\ b,c\ne 0}\,$ then $\,bx = a\,\Rightarrow\,\color{#c00}b(\color{#c00}c\!+\!x) = a,\,$ so if a solution $\,x\,$ exists then it is not unique. Generally the grade-school rules of fraction arithmetic apply universally (i.e. in all rings) where the denominators are all invertible, e.g. $\, 1/2 - 1/3 = 1/6\,$ can be interpreted in any ring where $6$ is invertible, e.g. in the integers mod $n$ for all $n$ coprime to $6$: mod $5$ it is $3-2\equiv 1,\,$ and mod $11$ it is $6 - 4 \equiv 2$. This fundamental universal property of fractions is clarified conceptually when one learns in university algebra about the universal properties of fraction rings (localizations).
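A quick way to experiment with well- and ill-defined modular fractions (my sketch, not part of the answer): Python's built-in pow computes modular inverses when they exist, and a gcd check catches the zero-divisor case.

from math import gcd

def mod_frac(c, d, m):
    """Return c/d mod m, i.e. c * d^(-1) mod m, when d is invertible."""
    if gcd(d, m) != 1:
        raise ValueError(f"{d} is a zero-divisor mod {m}: fraction undefined")
    return (c * pow(d, -1, m)) % m

print(mod_frac(6, 5, 7))   # 3*2/5 mod 7 -> 4, as in the hint
try:
    mod_frac(2, 4, 10)     # denominator shares a factor with the modulus
except ValueError as e:
    print(e)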
Why do we model it as the square root of $v(t)$? Is that because we don't want the volatility to go negative? If this is the case, can we model it as the square of $v(t)$?

$V(t)$ is the variance process of the stock price, not the volatility process. Cox, Ingersoll and Ross demonstrated that that specific process stays non-negative under certain conditions, which is what you want for a variance. In this paper http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2626552 the authors compare the Heston model, with variance given by $ dV_t = \kappa_V(\bar{V}-V_t)dt+\sigma_V\sqrt{V_t}dW_t $, with a model where the variance is given by $ dV_t = \kappa_V(\bar{V}-V_t)dt+\sigma_V V_t dW_t $. They show that the latter is inverse-gamma distributed and leads to a more stable volatility distribution with higher kurtosis and fatter tails. However, a quick read shows that the inverse-gamma model seems to be relatively unexplored. The reason is that Heston managed to solve the case with the square root; the log-normal vol process leads to nasty properties. The 3/2 model is another case that has been solved.

Assume $X_t$ follows the stochastic differential equation $$dX(t)=\mu(t,X_t)\,dt+\sigma(t,X_t)\,dW(t).$$ If $$\lim_{X_t\to 0}\left(\mu(t,X_t)-\frac{1}{2}\frac{\partial}{\partial x}\sigma^2(t,X_t)\right)\geq 0$$ then $$P(\{\,t\in [0,\infty)\mid X(t,x)\leq 0\,\})=0.$$ In the CIR model we have $$\lim_{v_t\to 0}\left(\kappa(\theta-v_t)-\frac{1}{2}\frac{\partial}{\partial v}\big(\sigma\sqrt{v_t}\big)^2\right)=\lim_{v_t\to 0}\kappa(\theta-v_t)-\frac{1}{2}\sigma^2=\kappa\theta-\frac{1}{2}\sigma^2.$$ Another property of the square-root process for the instantaneous variance is that in many cases of interest it leads to closed-form or semi-closed-form solutions for the characteristic function. We are also able to derive a closed-form solution based on hypergeometric functions when the underlying follows a mean-reverting process.
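For intuition about how the square-root (CIR) variance process behaves in simulation, here is a minimal Euler sketch (my illustration, with made-up parameter values; the "full truncation" fix of using $\max(V,0)$ inside the drift and diffusion is one common discretization choice, not the only one):

import numpy as np

# Hypothetical CIR/Heston variance parameters (illustrative only)
kappa, vbar, sigma, v0 = 2.0, 0.04, 0.3, 0.04
T, n_steps, n_paths = 1.0, 252, 10_000
dt = T / n_steps

rng = np.random.default_rng(0)
V = np.full(n_paths, v0)
for _ in range(n_steps):
    Vp = np.maximum(V, 0.0)              # full truncation: use max(V,0) in drift and diffusion
    dW = rng.standard_normal(n_paths) * np.sqrt(dt)
    V = V + kappa * (vbar - Vp) * dt + sigma * np.sqrt(Vp) * dW

print("Feller condition 2*kappa*vbar >= sigma^2:", 2 * kappa * vbar >= sigma**2)
print("mean terminal variance:", V.mean())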
I'm reading the paper by Zhao et al. (2008) and have a problem with the definitions used in the text on page 1535. First, we generate a sample, $R$, of a given size from the distribution (21). Let $\hat{\mu}$ and $\hat{\sigma}^2$ be the sample mean vector and the sample variance-covariance matrix. Then, we modify the generated sample to $$\hat{R}=\mu + (R-\hat{\mu})\hat{\sigma}^{-1}\sigma.$$ Then, the modified sample $\hat{R}$ has the same first and second moments as the original distribution. To further test whether the modified sample is arbitrage free, we use the Matlab backslash function to examine whether the solution to $(1 + R)^{\top} \backslash 1$ is componentwise positive. Question. What does the $1$ in the last line mean? Is it an identity matrix or a column vector of ones? And what are the dimensions of this $1$? I have tried to examine the solution from Table 1.

library(pracma)
n <- 5
R <- matrix(c(0.0025, 0.0377, 0.0110, 0.0769, 0.0047,
              0.0025, 0.0431, 0.0001, 0.0045, 0.0562,
              0.0025, 0.0469, 0.0643, 0.0400, 0.0370,
              0.0025, 0.0504, 0.0422, 0.0169, 0.0333,
              0.0025, 0.0596, 0.0038, 0.1896, 0.0663), ncol = n)
# I <- ones(n)
I <- diag(n)
mldivide(t(I + R), I)

Added after JejeBelfort's comment. If the $1$ is a column vector of ones, then what is the $+$? A union operation or a Kronecker product?

I <- rep(1, n)
cbind(I, R)

Reference. Yonggan Zhao, William T. Ziemba (2008) Calculating risk neutral probabilities and optimal portfolio policies in a dynamic investment model with downside risk control. European Journal of Operational Research 185 (2008) 1525-1540.
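To see what the moment-matching step accomplishes, here is a small Python sketch (my reconstruction, using Cholesky factors for the matrix "square roots"; the paper's own $\hat\sigma^{-1}\sigma$ notation may differ in detail, and the target moments below are illustrative, not from the paper):

import numpy as np

rng = np.random.default_rng(1)

# Target moments (illustrative values only)
mu = np.array([0.01, 0.02, 0.015])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

R = rng.multivariate_normal(mu, Sigma, size=500)   # raw sample
mu_hat = R.mean(axis=0)
Sigma_hat = np.cov(R, rowvar=False)

# Transform so the sample has exactly the target mean and covariance
L_hat = np.linalg.cholesky(Sigma_hat)
L = np.linalg.cholesky(Sigma)
R_mod = mu + (R - mu_hat) @ np.linalg.inv(L_hat).T @ L.T

print(np.allclose(R_mod.mean(axis=0), mu))               # True
print(np.allclose(np.cov(R_mod, rowvar=False), Sigma))   # True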
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful? closed as primarily opinion-based by Daniel W. Farlow, Najib Idrissi, user91500, LutzL, Jonas Meyer Apr 7 '15 at 3:40 Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question. Here's a cute and lovely theorem. There exist two irrational numbers $x,y$ such that $x^y$ is rational. Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$ (Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.) How about the proof that $$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$ I remember being impressed by this identity and the proof can be given in a picture: Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments. Cantor's Diagonalization Argument, proof that there are infinite sets that can't be put one to one with the set of natural numbers, is frequently cited as a beautifully simple but powerful proof. Essentially, with a list of infinite sequences, a sequence formed from taking the diagonal numbers will not be in the list. I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction! Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$. Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then, there exists some $x,y\in\mathbb Z$ such that $$x+iy = (a+ib)(c+id)$$ Taking the magnitudes of both sides are squaring gives $$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$ I would go for the proof by contradiction of an infinite number of primes, which is fairly simple: Assume that there is a finite number of primes. Let $G$ be the set of allprimes $P_1,P_2,...,P_n$. Compute $K = P_1 \times P_2 \times ... \times P_n + 1$. If $K$ is prime, then it is obviously notin $G$. Otherwise, noneof its prime factors are in $G$. Conclusion: $G$ is notthe set of allprimes. I think I learned that both in high-school and at 1st year, so it might be a little too simple... By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$ The first player in Hex has a winning strategy. There are no draws in hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) then using player two's strategy. You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$. 
For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$." Proof: Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$, and let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$. Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{x^{11}-1}{11(x-1)}, $$ which is clearly nonzero for real $x\neq 1$. Therefore $r$ has no real zeros. But $p$ and $q$ are polynomials of degree exactly $5$ (since $r$ has degree $10$), and a real polynomial of odd degree must have a real zero. Therefore $r(x)=p(x)q(x)$ has a real zero. A contradiction.

Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine you remove two tiles from two opposite corners of the original square. Prove that it is now no longer possible to cover the remaining area with domino bricks. Proof: Imagine that the square is a checkerboard. Each domino brick covers two tiles of different colors. When you remove the tiles from two opposite corners, you remove two tiles of the same color. Thus it is no longer possible to cover the remaining area. (Well, it may be too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...)

One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $\,(x,y,z)\,$ with $\,x^2 + y^2 = z^2.\,$ Dividing by $\,z^2$ yields $\,(x/z)^2+(y/z)^2 = 1,\,$ so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $\,P = (1,1).\,$ It intersects the circle in the rational point $\,A = (4/5,3/5),\,$ yielding the triple $\,(3,4,5).\,$ Next reflect the point $\,A\,$ into the other quadrants by taking all possible signs of each component, i.e. $\,(\pm4/5,\pm3/5),\,$ yielding an inscribed rectangle. As before, the line through $\,A_B = (-4/5,-3/5)\,$ and $P$ intersects the circle in $\,B = (12/13, 5/13),\,$ yielding the triple $\,(12,5,13).\,$ Similarly the points $\,A_C,\, A_D\,$ yield the triples $\,(20,21,29)\,$ and $\,(8,15,17).\,$ We can iterate this process with the new points $\,B,C,D,\,$ doing the same as we did for $\,A,\,$ obtaining further triples. Iterating this process generates the primitive triples as a ternary tree. Descent in the tree is given by the formula $$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$ e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (-3,4,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant.
Ascent in the tree is done by inverting this map, combined with trivial sign-changing reflections: $\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$ $\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$ $\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$ See my MathOverflow post for further discussion, including generalizations and references.

I like the proof that there are infinitely many Pythagorean triples. Theorem: There are infinitely many integers $x, y, z$ such that $$ x^2+y^2=z^2 $$ Proof: $$ (2ab)^2 + ( a^2-b^2)^2= ( a^2+b^2)^2 $$

One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1. Proof: project the disk and the strips onto a hemisphere sitting on top of the disk. The projection of each strip has area at most 1/100th of the area of the hemisphere, because by Archimedes' hat-box theorem the area of the spherical zone over a strip depends only on the strip's width.

If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other. Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$. There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can't all have different largest odd divisors. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$'s)$\cdot d$: if $d$ is the largest odd divisor of a number, the rest of its factorization can't include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$'s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.) In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements of $S$, the subset must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it's easier to follow if you see it for a specific $n$ first.

The proof that an isosceles triangle ABC (with AC and AB having equal length) has equal angles ABC and BCA is quite nice: Triangles ABC and ACB are (mirrored) congruent (since AB = AC, BC = CB, and CA = BA), so the corresponding angles ABC and (mirrored) ACB are equal. This congruency argument is nicer than cutting the triangle up into two right-angled triangles.

Parity of the sine and cosine functions using Euler's formula: $e^{-i\theta} = \cos(-\theta) + i\sin(-\theta)$ and $e^{-i\theta} = \frac 1 {e^{i\theta}} = \frac 1 {\cos\theta + i\sin\theta} = \frac {\cos\theta - i\sin\theta} {\cos^2\theta + \sin^2\theta} = \cos\theta - i\sin\theta$. Comparing, $\cos(-\theta) + i\sin(-\theta) = \cos\theta + i(-\sin\theta)$, thus $\cos(-\theta) = \cos\theta$ and $\sin(-\theta) = -\sin\theta$. $\blacksquare$ The proof is actually just the first two lines.

I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his very early schooling years.
He tackled it quicker than his peers or his teacher could: $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100 \text{ times}}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}=5050.$$ Hence he showed that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$

If $H$ is a subgroup of $(\mathbb{R},+)$ and $H\cap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic.

Fermat's little theorem, from noting that modulo a prime $p$ we have, for $a\neq 0$: $$1\times2\times3\times\cdots\times (p-1) = (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right)$$

Proposition (No universal set): There does not exist a set which contains all sets (including itself). Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then one can construct, by the axiom schema of specification, the set $$C=\{A\in X: A \notin A\}$$ of all sets in the universe which do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction. Edit: Assuming that one is working in ZF (as almost everywhere :P). (This proof really impressed me the first time, and it is also very simple.)

Most proofs concerning the Cantor set are simple but amazing. Its total length (measure) is zero. It is uncountable. Every number in the set can be represented in ternary using just 0s and 2s; no number with a 1 in its ternary expansion appears in the set. The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers, which are dense in every interval. The Menger sponge, a 3D extension of the Cantor set, simultaneously exhibits an infinite surface area and encloses zero volume.

The derivation of the derivative from first principles is amazing, easy, useful and simply outstanding in all aspects. I put it here: Suppose we have a quantity $y$ whose value depends upon a single variable $x$, and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as $y=f(x)$. This relationship can be visualized by drawing a graph of the function $y = f(x)$, regarding $y$ and $x$ as Cartesian coordinates. Consider the point $P$ on the curve $y = f(x)$ whose coordinates are $(x, y)$ and another point $Q$ whose coordinates are $(x + \Delta x, y + \Delta y)$. The slope of the line joining $P$ and $Q$ is given by $$\tan\theta = \frac{\Delta y}{\Delta x} = \frac{(y + \Delta y) - y}{\Delta x}$$ Suppose now that the point $Q$ moves along the curve towards $P$. In this process, $\Delta y$ and $\Delta x$ decrease and approach zero, though their ratio $\frac{\Delta y}{\Delta x}$ will not necessarily vanish. What happens to the line $PQ$ as $\Delta y\to0$, $\Delta x\to0$? The line becomes a tangent to the curve at the point $P$. This means that $\tan \theta$ approaches the slope of the tangent at $P$, denoted by $m$: $$m=\lim_{\Delta x\to0} \frac{\Delta y}{\Delta x} = \lim_{\Delta x\to0} \frac{(y+\Delta y)-y}{\Delta x}$$ The limit of the ratio $\Delta y/\Delta x$ as $\Delta x$ approaches zero is called the derivative of $y$ with respect to $x$ and is written $dy/dx$. It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$.
Since $y = f(x)$ and $y + \Delta y = f(x + \Delta x)$, we can write the definition of the derivative as $$\frac{dy}{dx}=\frac{d{f(x)}}{dx} = \lim_{\Delta x\to0} \frac{f(x+\Delta x)-f(x)}{\Delta x},$$ which is the required formula.

This proof that $n^{1/n} \to 1$ as integral $n \to \infty$: By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2}$. Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2}$.

Can a chess knight starting at any corner move to touch every space on the board exactly once, ending in the opposite corner? The solution turns out to be childishly simple. Every time the knight moves (up two, over one), it hops from a black space to a white space, or vice versa. Assuming the knight starts on a black corner of the board, it will need to touch 63 other squares, 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making it impossible.

The eigenvalues of a skew-Hermitian matrix are purely imaginary. The eigenvalue equation is $A\vec x = \lambda\vec x$, and forming the inner product gives $$\lambda \|\vec x\|^2 = \lambda\left<\vec x, \vec x\right> = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2$$ and since $\|\vec x\|^2 > 0$, we can cancel it from both sides, leaving $\lambda = -\lambda^*$. The second-to-last step uses the definition of skew-Hermitian. Using the definition of Hermitian or unitary matrices instead yields the corresponding statements about the eigenvalues of those matrices.

I like the proof that not every real number can be written in the form $a e + b \pi$ for some integers $a$ and $b$. I know it's almost trivial in one way; but in another way it is kind of deep.
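The ternary tree of primitive Pythagorean triples described a few answers above (Aubry's reflections, also known as Berggren's tree) is easy to generate programmatically. A minimal sketch of mine, using the three standard ascent maps that invert the stated descent formula combined with the sign reflections:

from collections import deque

def primitive_triples(limit):
    """Generate primitive Pythagorean triples with hypotenuse <= limit,
    walking the ternary tree from the root (3, 4, 5)."""
    queue = deque([(3, 4, 5)])
    while queue:
        x, y, z = queue.popleft()
        yield (x, y, z)
        for child in ((x - 2*y + 2*z,  2*x - y + 2*z,  2*x - 2*y + 3*z),
                      (x + 2*y + 2*z,  2*x + y + 2*z,  2*x + 2*y + 3*z),
                      (-x + 2*y + 2*z, -2*x + y + 2*z, -2*x + 2*y + 3*z)):
            if child[2] <= limit:
                queue.append(child)

for t in primitive_triples(30):
    print(t)   # (3,4,5), (5,12,13), (21,20,29), (15,8,17) -- the triples from the answer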
We know that a lightning rod or lightning conductor is a metal rod or metallic object mounted on top of an elevated structure, and, if we look closely, most of them have a sharp point at the top. What is the reason for this sharp point?

Suppose there is a charged cloud floating over your conductor. Making your lightning conductor pointy at the tip facilitates better discharge by setting up a high electric field. Taking a spherical approximation of the pointed end, ${\sigma}=\frac{q}{4\pi r^2}$ is the surface charge density of the tip, which is very high due to its small radius $r$. Hence the electric field over that small part, $E=\frac{\sigma}{\epsilon_0}$, is also very high. So for a pointy metal rod, the electric field set up at the pointed end is high. Now if a discharge of the cloud occurs, the charge will easily pass through the lightning conductor and be conducted to the ground. The artifact you are trying to save is thereby protected from damage.

The point of the point is to increase the electric field near the point. Small-radius curves have a higher local electric field, eventually creating a localized area where the field is greater than the dielectric strength of the air. This results in what I refer to as "micro-lightning." This micro-lightning discharges the air (or cloud) before the charge difference between the cloud and ground builds to the point where a very long path of breakdown is formed. The main idea is to prevent big lightning by having near-continual (during storms) micro-lightning. You can demonstrate this with a small Tesla coil or a classroom Van de Graaff generator. Set up a situation with the coil or generator causing long (>10 cm) sparks. Then get a pointy object like a key or a nail, ground it, and bring it near the discharge. The spark will stop, but if you listen carefully, you can hear a crackle near the pointy object. You won't get a large spark around the pointy object until you get close to the coil tip or generator sphere. Then remove the pointy object and the long sparks will start again.

That said, research conducted with actual lightning demonstrates that moderately rounded or blunt-tipped lightning rods act as marginally better strike receptors, and that blunt rods are as effective as pointed (tapered) rods. This has been codified in the National Fire Protection Association standard NFPA 780 - Installation of Lightning Protection Systems. Blunt tips also pose less risk to someone who might fall while working on a roof. If there is an excess charge in the atmosphere, as happens during thunderstorms, a substantial charge of the opposite sign can build up on this blunt end. As a result, when the atmospheric charge is discharged through a lightning bolt, it tends to be attracted to the charged lightning rod rather than to other nearby structures that could be damaged. (A conducting wire connecting the lightning rod to the ground then allows the acquired charge to dissipate harmlessly.) A lightning rod with a sharp end would allow less charge buildup and hence would be less effective.
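As a quick numerical illustration of the curvature effect described above (my own sketch, using the idealization of an isolated spherical tip, for which the surface field at potential $V$ is $E = V/r$; the 10 kV potential is a hypothetical value):

# Surface field of an idealized spherical tip of radius r (metres) at potential V (volts)
def tip_field(V, r):
    """Electric field magnitude E = V / r at the sphere's surface."""
    return V / r

V = 1e4  # 10 kV, illustrative only
for r in (0.1, 0.01, 0.001):  # blunt -> sharp tip radii
    print(f"r = {r:5.3f} m -> E = {tip_field(V, r):.1e} V/m")
# Dry air breaks down around 3e6 V/m, so at this potential only the
# sharpest tip exceeds the breakdown field and starts ionizing the air.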
Logistic regression is a generalized linear model most commonly used for classifying binary data. Its output is a continuous range of values between 0 and 1 (commonly representing the probability of some event occurring), and its input can be a multitude of real-valued and discrete predictors.

Motivating Problem

Suppose you want to predict the probability someone is a homeowner based solely on their age. You might have a dataset like

Age  HomeOwner
13   FALSE
13   FALSE
15   FALSE
…    …
74   TRUE
75   TRUE
79   TRUE

As with any binary variable, it makes sense to code TRUE values as 1s and FALSE values as 0s. Then you can plot the data. There are definitely more positive samples as age increases, which makes sense. If we group the data into equal-size bins, we can calculate the proportion of positive samples for each group.

Bin       Samples  MedianAge  PctHomeOwner
[10, 24)  8        17.0       0.125
[24, 38)  8        34.0       0.250
[38, 52)  8        47.0       0.500
[52, 66)  8        54.0       0.750
[66, 80]  8        70.5       0.875

Notice the data starting to take an S shape. This is a common and natural occurrence for a variety of random processes, particularly where the explanatory variable and the response variable have a monotonic relationship. One such S-shaped function is the logistic function. \[ \sigma (t) = \frac{1}{1+e^{-t}} \] Writing \(t\) as \(\beta_0 + \beta_1 x\) lets us change the horizontal position of the curve by varying \(\beta_0\) and the steepness of the curve by varying \(\beta_1\). \[ F(x) = \frac {1}{1+e^{-(\beta_0 + \beta_1 x)}} \]

Fitting a model to the data

At this point we'd like to fit a logistic curve to our data. There are two distinct ways to do this, depending on the type of data to be fitted.

Method 1

First we'll look at fitting a logistic curve to binned, or grouped, data. For example, suppose we didn't have individual Yes/No responses of whether someone was a homeowner, but instead had an aggregated data set like

MedianAge  PctHomeOwner
17.0       0.125
34.0       0.250
47.0       0.500
54.0       0.750
70.5       0.875

Recall our last form of the logistic function \(F(x) = \frac {1}{1+e^{-(\beta_0 + \beta_1 x)}}\), where we interpret \(F(x)\) to be the probability that someone is a homeowner. We can rearrange this equation as \(\ln (\frac{F(x)}{1 - F(x)}) = \beta_0 + \beta_1 x\). Notice that the modified function is linear in terms of \(x\). So, if we take our sample data and create a transformed column \(Y'\) equal to \(\ln(\frac{PctHomeOwner}{1-PctHomeOwner})\), then we can fit \(Y' = \beta_0 + \beta_1 x\) using ordinary least squares.

MedianAge  PctHomeOwner  Y'
17.0       0.125         -1.945910
34.0       0.250         -1.098612
47.0       0.500          0.000000
54.0       0.750          1.098612
70.5       0.875          1.945910

From here we can transform the fitted linear model back to a logistic model. We have \(Y' = \ln (\frac{F(x)}{1 - F(x)}) = \beta_0 + \beta_1 x \Rightarrow F(x) = \frac{1}{1+e^{-(\beta_0 + \beta_1 x)}}\) (Nothing special here - just the logistic function.) Finally, we can plot the model against our data.

Before we wrap up Method 1, let's take another look at our linear model \(\ln (\frac{F(x)}{1 - F(x)}) = \beta_0 + \beta_1 x\). First of all, this is the inverse of the logistic function. Secondly, notice the \(\frac{F}{(1-F)}\) part. Remember, \(F\) is the probability of success. So \(\frac{F}{(1-F)}\) is the odds of success. The odds of something happening is the probability of it happening divided by the probability of it not happening.
If the probability of a horse winning the Kentucky Derby is 0.2, then the odds of that horse winning are 0.2/0.8 = 0.25 (or, more commonly stated, the odds of the horse losing are 4, or "4 to 1"). Recapping: for some probability of success \(p\), the odds of success are \(\frac{p}{1-p}\) and the log-odds are \(\ln(\frac{p}{1-p})\). The function \(\ln(\frac{p}{1-p})\) is special enough to warrant its own name - the logit function. Notice it's equivalent to the linear model we fit, \(\ln(\frac{F}{1-F}) = \beta_0 + \beta_1 x\). In other words, we fit a logistic regression to our data by fitting a linear model to the log-odds of our sample data.

Method 2

In Method 1 we were able to use linear regression to fit our data because our dataset had probabilities of homeownership. On the other hand, if our data just has {0, 1} response values, we'll have to use a more sophisticated technique - maximum likelihood estimation.

First, recall the Bernoulli distribution. A Bernoulli random variable X is just a binary random variable (0 or 1) with probability of success p (i.e. P(X=1) = p). Thus the Bernoulli distribution is defined by a single parameter, \(p\), and its expected value equals \(p\).

Next, let's take another look at our plotted data. Pick an x value, say 50, and imagine slicing the data in a small neighborhood around 50.

Age  HomeOwner
49   0
49   1
51   1
51   0

Looking at the data, we find 2 positive and 2 negative samples. In this case we can think of each response variable near \(Age = 50\) as a random variable from some Bernoulli distribution whose \(p\) value is somewhere in the neighborhood of 0.5. Now let's slice the data in the neighborhood of 70.

Age  HomeOwner
68   1
68   1
68   1
70   0
71   1

We can think of these samples as random variables sampled from some other Bernoulli distribution whose \(p\) value is close to 0.80. This coincides with our intuition that the probability of someone being a homeowner generally increases with their age. Generalizing this idea, we can assume that at each point \(x\) the samples close to \(x\) follow a Bernoulli distribution whose expected value is some function of \(x\). We'll call that function \(p(x)\). Since we want to model our data with the logistic function \(F(x) = \frac {1}{1+e^{-(\beta_0 + \beta_1 x)}}\), we can treat \(F(x)\) and \(p(x)\) as the same. In other words, we can think of our logistic function as defining an infinite set of Bernoulli distributions.

Now suppose we guess some parameters \(\beta_0\) and \(\beta_1\) which appear to fit the data well, say \(\beta_0 = -3.5\) and \(\beta_1 = 0.06\). Assuming our guessed model is the true model for whether or not someone is a homeowner based on their age, what is the probability of our sampled data occurring? In other words, what is the probability that a random 13-year-old isn't a homeowner AND another random 13-year-old isn't a homeowner … AND a random 75-year-old is a homeowner AND a random 79-year-old is a homeowner? According to our model, the probability that a random 13-year-old isn't a homeowner is \(p(Y = 0 \mid x = 13) = 1 - F(13) = 0.93\). The probability that a random 75-year-old is a homeowner is \(F(75) = 0.73\), etc. If we assume each of these instances is independent, then the probability of all of them occurring is \((1-F(13)) \cdot (1-F(13)) \cdots F(75) \cdot F(79)\). What we just described is calculating the probability or "likelihood" of our samples having their response values according to our model \(F(x, \beta_0 = -3.5, \beta_1 = 0.06)\).
For a set of samples assumed to come from some logistic regression model, we can define the likelihood of the specific model with parameters \(\beta_0\) and \(\beta_1\) as \[ \begin{aligned} \mathcal{L}(\beta_0, \beta_1; \boldsymbol{samples}) &= \prod_{i=1}^n P[Y_i=y_i \mid p_i=F(x_i, \beta_0, \beta_1)]\\ &= \prod_{i=1}^n F(x_i, \beta_0, \beta_1)^{y_i}(1-F(x_i, \beta_0, \beta_1))^{(1-y_i)} \end{aligned} \] where \((x_i, y_i)\) is the ith observation in our sample data. Plugging in \(F\) gives \[ \mathcal{L}(\beta_0, \beta_1; \boldsymbol{samples}) = \prod_{i=1}^n \left(\frac {1}{1+e^{-(\beta_0 + \beta_1 x_i)}}\right)^{y_i}\left(1-\frac {1}{1+e^{-(\beta_0 + \beta_1 x_i)}}\right)^{(1-y_i)} \] This is called the likelihood function. The parameters (\(\beta_0\), \(\beta_1\)) that yield the largest value of \(\mathcal{L}\) are exactly the parameters we want to use for our logistic regression model. The process of finding those optimum parameters is called maximum likelihood estimation. With that said, maximum likelihood estimation is a deep topic and probably warrants its own separate article, so I'll leave out the gritty details. Fortunately for the practitioners out there, a number of maximum likelihood estimation methods have been implemented in open-source statistical libraries. Using scikit-learn with our sample data yields \(\beta_0 = -3.23\) and \(\beta_1 = 0.0723\).
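A compact sketch of both fitting methods (my own illustration; the binned values are the ones from the article's table, while the raw 0/1 data is synthetic, generated from the "guessed" model \(\beta_0=-3.5\), \(\beta_1=0.06\)):

import numpy as np
from scipy.optimize import minimize

# --- Method 1: OLS on the log-odds of the binned data -----------------
median_age = np.array([17.0, 34.0, 47.0, 54.0, 70.5])
pct_owner  = np.array([0.125, 0.250, 0.500, 0.750, 0.875])
log_odds = np.log(pct_owner / (1 - pct_owner))       # logit transform
b1, b0 = np.polyfit(median_age, log_odds, 1)         # slope, intercept
print(f"Method 1: beta0 = {b0:.3f}, beta1 = {b1:.4f}")

# --- Method 2: maximum likelihood on raw 0/1 responses ----------------
rng = np.random.default_rng(0)
age = rng.uniform(10, 80, size=200)                  # synthetic ages
p_true = 1 / (1 + np.exp(-(-3.5 + 0.06 * age)))      # "true" model
y = rng.binomial(1, p_true)                          # 0/1 responses

def neg_log_likelihood(beta):
    z = beta[0] + beta[1] * age
    # -log L = sum[ log(1 + e^z) - y*z ], from the Bernoulli likelihood
    return np.sum(np.log1p(np.exp(z)) - y * z)

res = minimize(neg_log_likelihood, x0=[0.0, 0.0])
print("Method 2: beta0, beta1 =", res.x)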
K Dutta

Articles written in Pramana – Journal of Physics

Volume 61 Issue 4 October 2003 pp 759-772 Brief Reports

Simultaneous calculation of the dipole moment $\mu_j$ and the relaxation time $\tau_j$ of a certain number of non-spherical rigid aliphatic polar liquid molecules ($j$) in non-polar solvents ($i$) under a 9.8 GHz electric field is possible from the real $\varepsilon'_{ij}$ and imaginary $\varepsilon''_{ij}$ parts of the complex relative permittivity $\varepsilon^*_{ij}$. The low-frequency and infinite-frequency permittivities $\varepsilon_{0ij}$ and $\varepsilon_{\infty ij}$ measured by Purohit … The ratio of the individual slopes of the imaginary $\sigma''_{ij}$ and real $\sigma'_{ij}$ parts of the high-frequency (hf) complex conductivity $\sigma^*_{ij}$ with weight fraction $w_j$ at $w_j \to 0$, and the slopes of the $\sigma''_{ij}$–$\sigma'_{ij}$ curves for different $j$s [4], are employed to obtain the $\tau_j$s. The former method is better in comparison to the existing one as it eliminates polar–polar interaction. The hf $\mu_j$s in Coulomb metres (C m), when compared with static and reported $\mu$s, indicate that the $\mu_s$s favour the monomer formations which combine to form dimers in the hf electric field. The comparison among $\mu$s shows that a part of the molecule is rotating under the X-band electric field [5]. The theoretical $\mu_{\rm theo}$s from available bond angles and bond moments of the substituent polar groups attached to the parent molecules differ from the measured $\mu_j$s and $\mu$s, establishing the possible existence of mesomeric, inductive and electromeric effects in polar liquid molecules.

Volume 70 Issue 3 March 2008 pp 543-552 Research Articles

The dielectric relaxation times $\tau_{jk}$'s and dipole moments $\mu_{jk}$'s of the binary ($jk$) polar liquid mixture of N,N-dimethyl acetamide (DMA) and acetone (Ac) dissolved in benzene ($i$) are estimated from the measured real $\sigma_{ijk}^{'}$ and imaginary $\sigma_{ijk}^{''}$ parts of the complex high-frequency conductivity $\sigma_{ijk}^{*}$ of the solution for different weight fractions $w_{jk}$'s at 0.0, 0.3, 0.5, 0.7 and 1.0 mole fractions $x_{j}$ of Ac and temperatures (25, 30, 35 and 40°C) respectively, under a 9.88 GHz electric field. $\tau_{jk}$'s are obtained from the ratio of slopes of the $\sigma_{ijk}^{''} - w_{jk}$ and $\sigma_{ijk}^{'} - w_{jk}$ curves at $w_{jk} \rightarrow 0$, as well as from the linear slope of the $\sigma_{ijk}^{''} - \sigma_{ijk}^{'}$ curves of the existing method (Murthy …

Volume 76 Issue 5 May 2011 pp 693-698

Raghavan Rangarajan, Ajit Srivastava, A Bandyopadhyay, A Basak, M Bastero-Gil, A Berera, J Bhatt, K Bhattacharya, S Chakraborty, M Das, S Das, K Dutta, D Ghosh, S Goswami, U Gupta, P Jain, Y-Y Keum, E Masso, D Majumdar, A P Mishra, S Mohanty, R Mohapatra, A Nautiyal, T Prokopec, S Rao, D P Roy, N Sahu, A Sarkar, P Saumia, A Sen, A Shivaji

This is the report of the cosmology and astroparticle physics working group at WHEPP-XI. We present the discussions carried out during the workshop on selected topics in the above fields. The problems discussed concerned axions, infrared divergences in inflationary theories, supersonic bubbles in a first-order electroweak phase transition, dark matter, MOND, interacting dark energy, composite Higgs models and statistical anisotropy of the Universe.
Hello everyone! I proposed a new Stack Exchange Q&A site here: http://area51.stackexchange.com/proposa ... gevQBdmIA2 Remember to support this proposal by following it and asking more example questions on it! Thank you, Testitem Qlstudio (I'm glad that this is in The Sandbox and not in other forums.)

Supported the proposal. Sadly, it still needs 57 more followers, which is about the total number of active members on these forums. So we need to get everyone to create an account and press "follow". From my experience, it will be very difficult, if not impossible. Moreover, I think some people will refuse to follow on the basis that the "Thread for basic questions" already does the same thing. This is quite unfortunate, since I would like to see the proposal work out. On the other hand, there seems to be no time limit for proposals on this site. Maybe it will collect the necessary number of followers over the years.

Tell me if I should promote this topic to the Patterns forum lol (isn't this advertising? Well, at least I found a good place to post this, since my past CA proposals kept getting deleted from inactivity). I need one follower a month or the proposal will be deleted.

Alexey_Nigin wrote: Supported the proposal. Sadly, it still needs 57 more followers...

Just went and looked this morning. Another ten followers have showed up, so there are 47 to go.

testitemqlstudop wrote: Tell me if i should promote this topic to the Patterns forum...

I'll try linking to this thread from a moderately relevant question-gathering thread over on the Website Discussion forum.
The Patterns forum seems like the wrong place to try any promotions. With any luck the link will add a little question-asking energy to the LifeWiki Did-You-Know thread as well as this one. A lot of the questions that have showed up so far on the Stack Exchange board aren't exactly what you'd call "basic questions", really. On the other hand, a few have been showing up that do have known answers, so they would be appropriate for Did-You-Knows... most of them are there already, though.

This proposal's activity seems to be dying out. I expended my last example question to keep it meeting the minimum activity requirements, so I can't do that in the future now. If some more people here could contribute, maybe the proposal could stay alive for a while longer. (Also see my Discussion post on the proposal page.)

A for awesome wrote: This proposal's activity seems to be dying out...

The suggestion to move the proposal to the Science category was implemented back in March, it looks like. It's a little confusing: there's a link there to a proposal that was deleted due to inactivity. But the original proposal is still alive and well... maybe just barely, but new followers have continued to trickle in very slowly. Like A for awesome, I've now asked my fifth question, so I can't help keep the proposal alive after July. And yes, once again I failed to escape from my obsessive focus on B3/S23. So -- please, could people with more wide-ranging interests sign up, if they haven't already, and ask interesting questions about isotropic or anisotropic rules? I know cellular automata enthusiasts are kind of a rare breed, but I'm sure it's possible to make it from 30 followers to 60 on that proposal, somehow...! Actually it appears that questions about cellular automata are being asked most often not in the Science category, but in the Computer Science category and in Programming Puzzles and Code Golf. Maybe someone who can figure out how to move the proposal should try one more move...?

The proposal was closed a few days ago after a year in the Definition phase, and I took it upon myself to restart it here (or here if you want to give me free reputation), since @testitemqlstudio seems to be inactive recently. If people would be willing to repost some or all of their example questions from last time, hopefully we'll have better luck this time around.
A for awesome wrote: The proposal was closed a few days ago after a year in the Definition phase, and I took it upon myself to restart it... If people would be willing to repost some or all of their example questions from last time, hopefully we'll have better luck this time around.

Doesn't look good so far, I'm afraid -- I'm getting this at both links:

This proposal has been deleted. Inactive proposals that do not receive any activity for one month are subject to deletion. Occasionally, proposals may be removed from Area 51 for reasons of moderation: spam, off topic, abuse, etc. For more information, see the FAQ.
For two independent Frechet distributed variables that have the same shape parameter but different scale parameters: $$ F_i(x) = e^{-\psi_i x^{-\epsilon}}, i=1,2$$ the probability of one variable being larger than the other has a simple closed-form solution: $$ Pr(x_1 > x_2 ) = \frac{\psi_1}{\psi_1 + \psi_2}$$ I am trying to obtain a similar result for Frechet variables that have different minimum location parameters, with distribution functions $$ F_i(x) = e^{-\psi_i (x-m_i)^{-\epsilon}}, i=1,2$$ The probability, in this case, can be obtained by calculating $$ \begin{align} Pr(x_1 > x_2 ) &= F_2(m_1) + \int_{\max(m_1,m_2)}^{\infty} F_2(x) f_1(x) dx \\ &= F_2(m_1) + \int_{\max(m_1,m_2)}^{\infty} e^{-\psi_2 (x-m_2)^{-\epsilon}} e^{-\psi_1 (x-m_1)^{-\epsilon}} \epsilon \psi_1 (x-m_1)^{-\epsilon-1} dx \end{align} $$ But I haven't found a closed-form solution for this integral. This paper suggests that to have closed-form choice probabilities the distribution functions need to satisfy $F_1^a(x) = F_2^b(x)$ for some $a,b$, which the latter distributions don't satisfy, but I'm not sure if this is a necessary condition.
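Lacking a closed form, one can at least evaluate $Pr(x_1 > x_2)$ numerically. A small sketch of mine (using scipy's invweibull, which is the Fréchet distribution with $F(x)=e^{-x^{-c}}$; the scale $\psi$ maps to scipy's scale parameter via $s=\psi^{1/\epsilon}$, and the parameter values are illustrative):

import numpy as np
from scipy.stats import invweibull
from scipy.integrate import quad

eps = 2.0                      # common shape parameter (epsilon)
psi1, m1 = 1.5, 0.3            # scale and location of x1
psi2, m2 = 1.0, 0.0            # scale and location of x2

X1 = invweibull(eps, loc=m1, scale=psi1 ** (1 / eps))
X2 = invweibull(eps, loc=m2, scale=psi2 ** (1 / eps))

# Monte Carlo estimate of Pr(x1 > x2)
rng = np.random.default_rng(0)
n = 1_000_000
p_mc = np.mean(X1.rvs(n, random_state=rng) > X2.rvs(n, random_state=rng))

# Numerical integration: Pr(x1 > x2) = E[F_2(x1)]
p_int, _ = quad(lambda x: X2.cdf(x) * X1.pdf(x), m1, np.inf)

print(p_mc, p_int)  # with m1 = m2 = 0 this reduces to psi1/(psi1+psi2)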
I have learned the Peano existence theorem and uniqueness of solutions to IVPs, but I don't understand what it means that $y'=f(x,y)$, $y(x_0)=y_0$ has a solution on $(x_0-d,x_0+d)$ for some $d>0$. What is this theorem trying to show? How should I understand its uniqueness? I have a homework question: Prove that if $f$ is continuous on $\Bbb R\times \Bbb C$ and locally Lipschitz in the second argument, and if $x_0 \in \Bbb R$, $y_0 \in \Bbb C$, then there exists an interval $(a, b)$ containing $x_0$ such that the following holds. A solution to the IVP $y' = f(x, y)$, $y(x_0) = y_0$ exists on $(a, b)$, and if a solution $\tilde y$ to the IVP exists on some open interval $I$ containing $x_0$, then $I\subset(a, b)$ and $\tilde y = y$ on $I$. That is, $(a, b)$ is the largest interval of existence and the solution is unique on it. (Hint: Show that if $F$ is the family of all couples (open interval containing $x_0$, solution on it), then any two of these solutions coincide on the intersection of their intervals of definition. Conclude that a solution can then be defined on the union of these intervals (show it is an open interval) such that it coincides with each solution in $F$ where the latter is defined.) I just don't understand how the hint is related to the theorem. I know I have learned this part poorly; I would really appreciate it if you could help me understand the theorem.
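One way to get a feel for "a solution exists on $(x_0-d,x_0+d)$ for some $d>0$" is an IVP whose solution genuinely cannot be extended: for $y'=y^2$, $y(0)=1$, the unique solution is $y = 1/(1-x)$, which blows up at $x=1$, so its maximal interval of existence is $(-\infty, 1)$ rather than all of $\Bbb R$. A quick numerical sketch of mine:

import numpy as np
from scipy.integrate import solve_ivp

# y' = y^2, y(0) = 1 has exact solution y = 1/(1 - x),
# which exists only on (-inf, 1): the maximal interval of existence.
sol = solve_ivp(lambda x, y: y**2, t_span=(0, 0.999), y0=[1.0],
                rtol=1e-10, dense_output=True)

for x in (0.5, 0.9, 0.99):
    approx = sol.sol(x)[0]
    exact = 1 / (1 - x)
    print(f"x = {x}: numeric {approx:.4f}  vs exact {exact:.4f}")
# As x -> 1 the solution diverges; no solution extends past x = 1.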
A range is an array of numbers in increasing or decreasing order, each separated by a regular interval. Ranges are useful in a surprisingly large number of situations, so it's worthwhile to learn about them. Ranges are defined using the np.arange function (this assumes numpy has been imported as np, via import numpy as np), which takes either one, two, or three arguments: a start, an end, and a step. If you pass one argument to np.arange, this becomes the end value, with start=0, step=1 assumed. Two arguments give the start and end, with step=1 assumed. Three arguments give the start, end and step explicitly. A range always includes its start value, but does not include its end value. It counts up by step, and it stops before it gets to the end.

np.arange(end): An array starting with 0 of increasing consecutive integers, stopping before end.

np.arange(5)
array([0, 1, 2, 3, 4])

Notice how the array starts at 0 and goes only up to 4, not to the end value of 5.

np.arange(start, end): An array of consecutive increasing integers from start, stopping before end.

np.arange(3, 9)
array([3, 4, 5, 6, 7, 8])

np.arange(start, end, step): A range with a difference of step between each pair of consecutive values, starting from start and stopping before end.

np.arange(3, 30, 5)
array([ 3,  8, 13, 18, 23, 28])

This array starts at 3, then takes a step of 5 to get to 8, then another step of 5 to get to 13, and so on. When you specify a step, the start, end, and step can all be either positive or negative and may be whole numbers or fractions.

np.arange(1.5, -2, -0.5)
array([ 1.5,  1. ,  0.5,  0. , -0.5, -1. , -1.5])

The great German mathematician and philosopher Gottfried Wilhelm Leibniz (1646 - 1716) discovered a wonderful formula for $\pi$ as an infinite sum of simple fractions. The formula is $$\pi = 4 \cdot \left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \frac{1}{11} + \dots\right)$$ Though some math is needed to establish this, we can use arrays to convince ourselves that the formula works. Let's calculate the first 5000 terms of Leibniz's infinite sum and see if it is close to $\pi$. $$4 \cdot \left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \frac{1}{11} + \dots - \frac{1}{9999} \right)$$ We will calculate this finite sum by adding all the positive terms first and then subtracting the sum of all the negative terms: $$4 \cdot \left( \left(1 + \frac{1}{5} + \frac{1}{9} + \dots + \frac{1}{9997} \right) - \left(\frac{1}{3} + \frac{1}{7} + \frac{1}{11} + \dots + \frac{1}{9999} \right) \right)$$ The positive terms in the sum have 1, 5, 9, and so on in the denominators. The array by_four_to_20 contains these numbers up to 17:

by_four_to_20 = np.arange(1, 20, 4)
by_four_to_20
array([ 1,  5,  9, 13, 17])

To get an accurate approximation to $\pi$, we'll use the much longer array positive_term_denominators.

positive_term_denominators = np.arange(1, 10000, 4)
positive_term_denominators
array([   1,    5,    9, ..., 9989, 9993, 9997])

The positive terms we actually want to add together are just 1 over these denominators:

positive_terms = 1 / positive_term_denominators

The negative terms have 3, 7, 11, and so on in their denominators. This array is just 2 added to positive_term_denominators.

negative_terms = 1 / (positive_term_denominators + 2)

The overall sum is

4 * ( sum(positive_terms) - sum(negative_terms) )
3.1413926535917955

This is very close to $\pi = 3.14159\dots$. Leibniz's formula is looking good!
How do I find the average kinetic energy and average potential energy of a hydrogen electron in the ground state? In my modern physics class, we are wrapping up the 3D Schrödinger equation, and I am more than a little lost. A few chapters ago we learned about operators, and I have an equation for both of these things in 1D. It looks like $$ \left<K\right> = \int \psi^* K \psi \, \mathrm{d}x,$$ where $$K =-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}.$$ So, how do I make that work in 3D, and is that even what I want to do here?
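For a concrete check, here is a small numerical sketch (mine, not from the question) evaluating $\langle K\rangle$ and $\langle V\rangle$ for the hydrogen ground state in atomic units, where $\psi_{100}(r)=e^{-r}/\sqrt{\pi}$; the radial integrals give $\langle V\rangle=-1$ and $\langle K\rangle=+\tfrac12$ hartree, consistent with the virial theorem $\langle V\rangle=-2\langle K\rangle$:

import numpy as np
from scipy.integrate import quad

# Hydrogen ground state in atomic units: psi(r) = exp(-r)/sqrt(pi)
psi = lambda r: np.exp(-r) / np.sqrt(np.pi)

# <V> = integral of |psi|^2 * (-1/r) * 4 pi r^2 dr
V, _ = quad(lambda r: psi(r)**2 * (-1.0 / r) * 4 * np.pi * r**2, 0, np.inf)

# <K> = (1/2) integral of |grad psi|^2 * 4 pi r^2 dr (integration-by-parts form,
# valid for a real wavefunction vanishing at infinity)
dpsi = lambda r: -np.exp(-r) / np.sqrt(np.pi)      # d(psi)/dr
K, _ = quad(lambda r: 0.5 * dpsi(r)**2 * 4 * np.pi * r**2, 0, np.inf)

print(f"<V> = {V:.6f} hartree")            # -> -1.0
print(f"<K> = {K:.6f} hartree")            # -> +0.5
print(f"E = <K> + <V> = {K + V:.6f}")      # -> -0.5, the ground-state energy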
The way you do it in the first place is a discretization of the geometric Brownian motion (GBM) process. This method is most useful when you want to compute the path between $S_0$ and $S_t$, i.e. you want to know all the intermediary points $S_i$ for $0 \leq i \leq t$. The second equation is a closed-form solution for the GBM given $S_0$. A simple …

From what I remember, there is no real relation between Markov and martingale, and my intuition was confirmed by this post. Basically, it says that you can say neither of the following: if A is Markov, then A is a martingale; if A is a martingale, then A is Markov. Further down the post, you can find two counterexamples: $dX_t = a\, dt + \sigma\, dW_t$ is …

Here is a short list (to be edited and improved - community wiki): Standard Brownian motion (also called the Wiener process), for which $d W_t \sim \mathcal N(0, \sqrt{dt})$. Geometric Brownian motion, used in the Black-Scholes model (1973): $d X_t = \mu X_t\,dt + \sigma X_t\,dW_t$. Constant elasticity of variance ("CEV") model (1975): $d X_t=\mu X_t\, dt + $ …

Using the Itô formula: the general approach that often works for these kinds of questions is to search for functions such that their Itô differential contains the terms that we are interested in. In your case, we are looking for a function $f(t, x)$ such that $f_t(t, x) = t x$. Let $$f(t, x) = \frac{1}{2} t^2 x$$ with …

I will defer to others answering the parts of your question concerning the relationship between Markov processes and martingales (@SRKX has already given a good explanation of the relationship) and concerning statistical testing. Broadly, however, it is not possible to "prove" either assumption, but only to fail to reject them. A Non-Random Walk Down Wall …

Basically, prices usually have a unit root, while returns can be assumed to be stationary. This is also called the order of integration: a unit root means integrated of order 1, I(1), while stationary is order 0, I(0). Time series that are stationary have a lot of convenient properties for analysis. When a time series is non-stationary, that means the …

To complement @SRKX's comment, I'll try to explain the "simple mathematical proof" relating both formulas. I assume you know the geometric and arithmetic Brownian motions. Geometric: $$dS = \mu S\, dt + \sigma S\, dz$$ Arithmetic: $$dS = \mu\, dt + \sigma\, dz$$ Then another important stochastic tool you …

I will try to answer this a bit differently. The rigorous answer: because Itô calculus tells us that we need the second-order term. Look at $$S_t = S_0\exp(\mu t + \sigma B_t).$$ Assume that $S_0$ is known and fixed; then by Itô's formula $$d(S_t/S_0) = \mu\, dt + \sigma\, dB_t + \frac{\sigma^2}{2}\, dt.$$ Then with some abuse of notation: $E[d(S_t/$ …

Another approach consists in using the Fubini theorem to write \begin{align}\int_0^T u W_u du &= \int_0^T \int_0^u u\, dW_v\, du \tag{$W_u = \int_0^u dW_v$} \\&= \int_0^T \int_v^T u\, du\, dW_v \tag{Fubini}\\&= \frac{1}{2}\int_0^T (T^2 - v^2) dW_v\end{align} This is an Itô integral. Since the integrand …

The convexity of the exponential function of the stochastic variable $W$ makes its expectation greater than the exponentiation of the expectation of $W$. This is an example of Jensen's inequality: $E[e^{\sigma W}]> e^{\sigma E[W]}=1$. Here $\sigma$ can be interpreted as the magnitude of the convexity of the exponential function. This can be seen by a Taylor …
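To illustrate the first excerpt's point — an Euler discretization of GBM when you need the whole path, versus the exact closed form when you only need $S_T$ — here is a minimal sketch of mine (parameter values are illustrative):

import numpy as np

mu, sigma, S0, T, n = 0.05, 0.2, 100.0, 1.0, 252
dt = T / n
rng = np.random.default_rng(42)
dW = rng.standard_normal(n) * np.sqrt(dt)   # Brownian increments

# Euler discretization: gives the whole path S_0, S_1, ..., S_n
S = np.empty(n + 1)
S[0] = S0
for i in range(n):
    S[i + 1] = S[i] * (1 + mu * dt + sigma * dW[i])

# Exact closed form at T, driven by the same Brownian increments
W_T = dW.sum()
S_exact = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)

print(S[-1], S_exact)   # close, and they converge as n grows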
I think to understand the martingale/local martingale distinction, it helps to bring in a third class of processes, the uniformly integrable martingale. I would argue that the local martingale and the non-uniformly-integrable (true) martingale are actually fairly similar. The key property that a uniformly integrable martingale has is the so-called closure …

Because you can hedge. Once you have delta hedged, the pay-off is symmetric about up and down moves, so drift doesn't matter. Also, the delta-hedged call and the delta-hedged put have to have the same value, since they have the same pay-off (put-call parity). Yet any argument that the call should be worth more because of drift says that the put should be …

If you want to address problems that are interesting for financial mathematics, I do not believe you have the right list. Pricing: for instance, most of the explicit pricing formulas that are not available yet never will be. In this direction, you should have a look at simulation techniques. See for instance Nonlinear Option Pricing. Interesting …

Quadratic variation and variance are two different concepts. Let $X$ be an Itô process and $t\geq 0$. The variance of $X_t$ is a deterministic quantity, whereas the quadratic variation at time $t$, which you denoted by $[X,X]_t$, is a random variable. What is confusing you is the fact that when $X$ is a martingale then $X^2_t-[X,X]_t$ is a martingale, thus you …

This is an interesting question that I have asked myself. Below is my take. Let us consider an economy $(\Omega,\mathcal{F},P)$ equipped with a filtration $(\mathcal{F})_{t \geq 0}$, consisting of a traded asset $S_t$ and a numéraire $N_t$ specified by the following stochastic differential equations: $$\begin{align}\text{d}S_t&=\alpha(t,S_t)\text{d}t+ \dots \end{align}$$ …

In general, if you have a process that you can write in the form $F(B_t,t)$ where $F$ is $\mathcal{C}^{2,1}$, then Itô's lemma gives you the drift term and diffusion term of $dF$. Then if the resulting SDE has a null drift (that's where the Black-Scholes PDE comes from), you get only a local martingale. For it to be a proper martingale you can look at …

"Threshold GARCH" or T-GARCH models are designed to capture this asymmetry. See this exposition by U. Chicago's Ruey Tsay, who has a terrific text on time-series models, "Analysis of Financial Time Series". You can use the structure of the T-GARCH models to simulate data with this property. There is a package called fGarch that creates APARCH models. A T- …

Okay, so I'll take Jase's answer and format it properly so that it answers your question and will be useful for users in the future. For clarity, let me restate the dynamics of the modified Ornstein-Uhlenbeck model using the more common notation: $$dS_t = \theta (\mu-S_t)dt + \sigma S_t dW_t$$ This blog post provides a closed-form solution: $$ S_t = S_0 \, $ …

I will assume a white noise is a process $(\varepsilon_t)$ with zero mean, no autocorrelation and constant variance $\sigma^2 > 0$, while a random walk is a process $(x_t)$ defined by $$x_{t+1} = x_t + \varepsilon_{t+1}$$ where $\varepsilon$ is a white noise. 1) No, since $Var(x_{t+1}) = Var(x_t) + Var(\varepsilon_{t+1})$ is strictly increasing while …

Stochastics are usually applied in the field of derivatives pricing. In this setting the task is to price a derivative such that it fits into the landscape of tradable instruments (no-arbitrage). We work using the risk-neutral measure - usually denoted by $Q$. The measure is derived from other traded instruments. In risk analysis (e.g.
calculating the VaR, ES ...

$X_t$ being a stochastic process, one cannot use ordinary calculus to express the differential of a (sufficiently well-behaved) function $f$ of $t$ and $X_t$. Instead one should turn to Itô's lemma, one of the key results of stochastic calculus, which stipulates (assuming $X_t$ is here a continuous, square-integrable stochastic process) $$ df(t,X_t) = ...$$

Note that the Ito integral of a deterministic integrand $f: \mathbb{R}_+ \rightarrow \mathbb{R}$ is normally distributed: \begin{equation}\int_0^t f(u) \,\mathrm{d}W_u \sim \mathcal{N} \left( 0, \int_0^t f^2(u)\, \mathrm{d}u \right).\end{equation} In your case, we have $f(t) = e^{-\lambda t}$ and thus $\int_0^t f^2(u) \,\mathrm{d}u = $ ...

These patterns are of course well-known enough to have been "priced in" to the financial markets. Jump diffusions are a classic way to capture the phenomenon, and often have closed-form option pricing formulas associated with them. The implied option skew, for example, gets a lot flatter when you use a JD model. Jump diffusions are often combined with ...

There are a lot of ways to understand why stationarity allows one to apply the usual time series analysis. Here is one more. Very often, the theoretical justification of what you do in time series requires being able to identify the sample mean with the expectation: $$\frac{1}{N}\sum_{n=1}^N X_n \underset{N\rightarrow +\infty}{\longrightarrow} \mathbb{E} X, $$ where the ...
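A quick numerical sanity check of the normality claim above (a sketch; $\lambda$, $t$, and the discretization are assumed choices of mine): the Riemann-Itô sums should have mean about $0$ and variance about $\int_0^t e^{-2\lambda u}\,du = (1-e^{-2\lambda t})/(2\lambda)$.

    import numpy as np

    rng = np.random.default_rng(1)
    lam, t = 0.8, 2.0
    steps, paths = 1000, 20000
    dt = t / steps
    u = np.arange(steps) * dt                  # left endpoints of each subinterval

    dW = rng.normal(0.0, np.sqrt(dt), (paths, steps))
    I = (np.exp(-lam * u) * dW).sum(axis=1)    # sum of f(u_k) dW_k with f(u) = exp(-lam u)

    print(I.mean(), I.var())                   # close to 0 and to the integral below
    print((1 - np.exp(-2 * lam * t)) / (2 * lam))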
TIFR 2014 Problem 30 Solution is a part of the TIFR entrance preparation series. The Tata Institute of Fundamental Research is India's premier institution for advanced research in mathematics. The Institute runs a graduate programme leading to the award of Ph.D., Integrated M.Sc.-Ph.D., as well as M.Sc. degrees in certain subjects. The image is the front cover of Topics in Algebra by I. N. Herstein; this book is very useful for the preparation of the TIFR entrance.

PROBLEM: How many maps \(\phi: \mathbb{N} \cup \{0\} \to \mathbb{N} \cup \{0\}\) are there satisfying \(\phi(ab)=\phi(a)+\phi(b)\) for all \(a,b\in \mathbb{N} \cup \{0\}\)?

Discussion: Take \(n\in \mathbb{N} \cup \{0\}\). By the given equation, \(\phi(n\times 0)=\phi(n)+\phi(0)\). This means \(\phi(0)=\phi(n)+\phi(0)\). Oh! This means \(\phi(n)=0\). Since \(n\in \mathbb{N} \cup \{0\}\) was taken arbitrarily, \(\phi(n)=0\) for all \(n\in \mathbb{N} \cup \{0\}\). There is only one such map.

HELPDESK
Topic: Algebra
Associated concept: counting the number of functions
Book suggestion: Topics in Algebra by I. N. Herstein
Degree $n$: $34$
Transitive number $t$: $16$
Parity: $-1$
Primitive: No
Nilpotency class: $-1$ (not nilpotent)
Generators: (1,31,9,24)(2,28,8,27)(3,25,7,30)(4,22,6,33)(5,19)(10,21,17,34)(11,18,16,20)(12,32,15,23)(13,29,14,26), (1,32,12,28,6,24,17,20,11,33,5,29,16,25,10,21,4,34,15,30,9,26,3,22,14,18,8,31,2,27,13,23,7,19)
$|\Aut(F/K)|$: $1$

Low degree resolvents (|G/N|: Galois groups for stem field(s)):
2: $C_2$ x 3
4: $C_4$ x 2, $C_2^2$
8: $C_4\times C_2$
68: $C_{17}:C_{4}$ x 2
136: 34T5 x 2
Resolvents shown for degrees $\leq 47$.

Subfields: Degree 2: $C_2$; Degree 17: None
Low degree siblings: 34T16 x 7 (siblings are shown with degree $\leq 47$)

A number field with this Galois group has no arithmetically equivalent fields. There are 56 conjugacy classes of elements. Data not shown.

Order: $2312=2^{3} \cdot 17^{2}$
Cyclic: No
Abelian: No
Solvable: Yes
GAP id: Data not available
Character table: Data not available.
Suppose you have a fair die with 10 sides, numbered from 1 to 10. You roll the die and take the sum until the sum is greater than 100. What is the expected value of this sum? Well, you can reach the numbers from 1 to 110, so there are 110 possible states you can be in. That's a large transition matrix, but not intractable. Define the transition matrix $M$ as: $$\begin{align} M_{i,j} = \begin{cases} \frac{1}{10} & \text{ if } 1 \le j - i \le 10 \text{ and } 1 \le i \le 100 \\ 1 & \text{ if } i = j\text{ and } 100 < i \le 110 \\ 0 & \text{ otherwise} \end{cases} \end{align}$$ Define the initial state vector: $$\begin{align} V_{j} = \begin{cases} \frac{1}{10} & \text{ if } 1 \le j \le 10 \\ 0 & \text{ otherwise} \end{cases} \end{align}$$ Together these compute the probability that you end up in state $j$; I won't write out the whole thing here. The steady state $S$ is given by: $$S = VM^{\infty}$$ I don't recommend doing this by hand. Anyway, you get the following probabilities: $$\begin{array} {c|c} \text{End State} & \text{Probability} \\ \hline 101 & \frac{{ 14545454540916238222253308031039403263876427137099 \atop 728738149747953197899302063661139633020606426446001 }}{10^{101}} \\ \hline 102 & \frac{{ 18181818143103207886299794518678653455112915813572 \atop 471568730433172695421560362344285718841548526446001 }}{10^{101}} \\ \hline 103 & \frac{{ 16363636337149401877562367260240328253764897737948 \atop 745622241485421850822156839220501392260147526446001 }}{10^{101}} \\ \hline 104 & \frac{{ 12727272741576429962125352735631301525861758140731 \atop 984148880861015973102098820410322797857111216446001 }}{10^{101}} \\ \hline 105 & \frac{{ 10909090931874858870242133363925460156752233410373 \atop 601637391699278967219484863685353489177266485446001 }}{10^{101}} \\ \hline 106 & \frac{{ 90909091116377120481295620461022602720063194921283 \atop 52571281564919296780386873223909380629437281346001 }}{10^{101}} \\ \hline 107 & \frac{{ 72727272852713068490158133017227152668140086810036 \atop 91839439446721479480174650845945205326825156836001 }}{10^{101}} \\ \hline 108 & \frac{{ 54545454582842347527531906998645390126303113111522 \atop 24370063982379262253640845972771391003951819875001 }}{10^{101}} \\ \hline 109 & \frac{{ 36363636346255163502142715869656509893927302281916 \atop 05910755643757924951410231819125651609791149217901 }}{10^{101}} \\ \hline 110 & \frac{{ 18181818155610931814042064558296878037883980477975 \atop 93593065135379352069784448816645340273314411495091 }}{10^{101}} \\ \hline \end{array}$$ The expected final sum is calculated as always, and the final answer is: $$\sum_{k=101}^{110} k\cdot S_{k} = \frac{{ 2080000000053214123545556126144601328836935442611255 \atop 847240374822309959920561393884518831211143790906011 }}{2\cdot10^{100}}$$ which is almost exactly $104$ (to eight decimal places). EDIT: corrected post; I originally answered the question "at least 100" rather than "more than 100".

An argument like the one in this answer shows that the distribution of the final position is very closely approximated by $[10/55,9/55,8/55,\dots ,1/55]$ on the states $[101,102,103,\dots, 110]$. Therefore the average position at the end of the game is very close to $$\sum_{i=1}^{10} (100+i)(11-i)/55=104. $$ Added: Here is some further information on the approximate hitting distribution.
Let's express the hitting distribution of $100,101,102,\dots$ as $\sum_{i=0}^9 \pi_i \,\delta_{100+i}$, and the hitting distribution of $101,102, 103,\dots$ as $\sum_{i=0}^9 \pi^\prime_i\, \delta_{101+i}$, where $\pi=(\pi_0,\dots,\pi_9)$ and $\pi^\prime=(\pi^\prime_0,\dots,\pi^\prime_9)$ are distributions on $\{0,1,2,\dots,9\}$. By the strong Markov property, we have $$\pi^\prime=\pi_0 U+\sum_{i=1}^9 \pi_i\delta_{i-1}, $$ where $U$ is the uniform distribution on $\{0,1,2,\dots,9\}$. On the other hand, since the process has been running for a long time, we have $\pi\approx \pi^\prime$. If you take this as equality, write $\pi=\pi_0 U+\sum_{i=1}^9 \pi_i\delta_{i-1}$, and solve for $\pi$, you get the required pattern.

Introductory remark. As pointed out by @DanielV, what follows does not answer the exact question, which asks for a value more than 100 rather than at least 100. Adapting the solution below is left as an exercise to the reader. We can actually say a bit more about this problem using generating functions. Suppose we have an $n$-sided fair die and we ask about the expected value of the sum until it is at least $n^2$. We classify according to the number $k$ of rolls until a value $n^2-q$ is obtained (this is the value before we exceed or hit $n^2$), where $1\le q\le n.$ This scenario is represented by the following generating function: $$g(z) = \sum_{q=1}^n \frac{1}{n} \left(\sum_{p=1}^{n-q+1} z^{n-p+1}\right) z^{n^2-q} [z^{n^2-q}] \left(\frac{1}{n}\right)^k (z+z^2+\cdots+z^n)^k.$$ Summing over $k$ we obtain for the inner term $$\sum_{k\ge 0} \left(\frac{1}{n}\right)^k (z+z^2+\cdots+z^n)^k = \sum_{k\ge 0} \left(\frac{1}{n}\right)^k z^k \left(\frac{1-z^n}{1-z}\right)^k \\ = \frac{1}{1- z/n \times (1-z^n)/(1-z)} = \frac{1-z}{1-z - z/n \times (1-z^n)}.$$ Call this $f(z).$ This yields for the entire generating function $g(z)$ that $$\sum_{q=1}^n \frac{1}{n} \left(\sum_{p=1}^{n-q+1} z^{n-p+1}\right) z^{n^2-q} [z^{n^2-q}] \frac{1-z}{1-z - z/n \times (1-z^n)}.$$ As this is a PGF, we may differentiate and set $z=1$ to obtain the expectation. This operation produces $$\sum_{p=1}^{n-q+1} (n-p+1) + (n-q+1)(n^2-q) = \frac{1}{2} (n-q+1) (2n^2+n-q).$$ We obtain the formula $$\frac{1}{2n} \sum_{q=1}^n (n-q+1) (2n^2+n-q) [z^{n^2-q}] \frac{1-z}{1-z - z/n \times (1-z^n)}.$$ This gives for an ordinary die with six faces the value $$37.666667491012523359$$ and for the ten-sided die the value $$103.00000000410210493.$$

A surprising conjecture. The sequence of values of the expected terminal sum with an $n$-sided die rolled until a sum $\ge n^2$ is reached, times three, for $n=5$ to $n=15$, is extremely close to $$79, 113, 153, 199, 251, 309, 373, 443, 519, 601, 689,$$ which is OEIS A144391, i.e. $3n^2+n-1$, giving the closed form $$\frac{1}{3} (3n^2+n-1).$$ There are probably upvotes to be had for a proof of this conjecture.

Here is the proof. The dominant pole of the generating function is at $z=1$ with residue (apply L'Hopital several times) $$-\lim_{z\to 1} \frac{(1-z)^2}{1-z-z/n\times (1-z^n)}= -\lim_{z\to 1} \frac{-2+2z}{-1-1/n+(n+1)/n\times z^n}\\ = -\lim_{z\to 1} \frac{-2+2z}{-(n+1)/n+(n+1)/n\times z^n}= -\lim_{z\to 1} \frac{2}{(n+1)z^{n-1}} = -\frac{2}{n+1}.$$ Therefore $$[z^{n^2-q}] f(z) \sim \frac{2}{n+1} 1^{n^2-q}$$ and the dominant contribution to the sum is $$\frac{1}{2n} \sum_{q=1}^n (n-q+1) (2n^2+n-q) \times \frac{2}{n+1} = n^{2}+\frac{1}{3}n-\frac{1}{3}.$$ Since computer simulations are apparently relevant to this problem, I am posting some code that can be used to verify the generating function formula.
gf := proc(n)
    (1-z)/(1-z-z/n*(1-z^n));
end;

v := proc(n)
option remember;
    1/2/n*add((n-q+1)*(2*n^2+n-q)*
        coeftayl(gf(n), z=0, n^2-q), q=1..n);
end;

ex := proc(n)
option remember;
local pb, w, res, dist, dist2, term, pot, delta;
    pb := 1/n; res := 0; dist := 1;
    do
        dist := expand(dist*add(z^k, k=1..n));
        dist2 := 0; delta := 0;
        for term in dist do
            pot := degree(term);
            if pot < n^2 then
                dist2 := dist2 + term
            fi;
            if pot >= n^2 then
                delta := delta + pot*coeff(term, z, pot)*pb;
            fi;
        od;
        res := res + delta;
        if delta > 0 and delta < 10^(-Digits) then break fi;
        if dist2 = 0 then break fi;
        pb := pb*1/n;
        dist := dist2;
    od;
    res;
end;

Some bugs fixed. This needs higher precision (value of Digits) for values like $n=15.$ The version in v that extracts coefficients is not usable for $n>11.$

Using generating functions, there is one way to make each of the numbers $1$ through $10$ on each roll: $$G(x)=1x^1+1x^2+\cdots +1x^{10}=x\frac{x^{10}-1}{x-1}$$ To find the number of ways to get to any number $a$ after $n$ rolls, we need to find the coefficient of $x^a$ in the expansion of $G(x)^n$. So to answer your question, we must find the number of ways to get a total of $101$-$110$, and the number of ways to get each of the outcomes specifically. Thus, we examine the coefficients of $x^{101}$ through $x^{110}$ in the sum of all powers of $G$ (since we don't care about how many rolls it takes). $$G+G^2+\cdots=\frac{G}{1-G}=\frac{x(x^{10}-1)}{(x-1)-x(x^{10}-1)}$$ From here you would have to use a CAS to find the exact result. I got the numbers on Wolfram Alpha, but I have no way to copy them (and they're quite large), so I can't provide the exact answer; but as I mentioned in the comments, it will be around $104$. Alternatively, you could try looking at special (or general) cases to find a pattern or formula which would possibly apply to this case. Depending on where you got this question, there is a good chance that it has a much more elegant solution.
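To complement the Maple and CAS approaches, here is a small Monte Carlo sketch (Python; the sample size is an arbitrary choice of mine) that estimates both the expected terminal sum and the hitting distribution, which should land near the $(11-i)/55$ pattern derived above.

    import random
    from collections import Counter

    random.seed(0)
    N = 10**6
    totals = Counter()
    for _ in range(N):
        s = 0
        while s <= 100:                 # roll "until the sum is greater than 100"
            s += random.randint(1, 10)
        totals[s] += 1

    print(sum(k * c for k, c in totals.items()) / N)   # should be close to 104
    for k in sorted(totals):
        print(k, totals[k] / N, (111 - k) / 55)        # empirical vs (11-i)/55 pattern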
In playing with gamma matrices of the $\mathscr{C}l_{1,3}(\mathbb{R})$ variety, it's not uncommon to hear allusions to $\gamma^5$ being related to the volume 4-form. To illustrate the similarities: $$\gamma^{5}=\frac{i}{4!}\sqrt{-\eta}\epsilon_{0123}\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}$$ $$dV=\frac{1}{4!}\sqrt{-\eta}\epsilon_{0123}dx^{0}\wedge dx^{1}\wedge dx^{2}\wedge dx^{3}$$ where $dV$ is the Minkowskian four-volume form. How exactly can we relate these to one another? I know we can represent an object transforming like a vector as $$V^{\mu}=\bar{\psi}\gamma^{\mu}\psi,$$ where $\psi$ has the form of a Dirac spinor. It seems reasonable to suppose we can then define some spinor that acts as a map from a gamma matrix to its equivalent vector basis: $$e^{\mu}=\bar{\psi}\gamma^{\mu}\psi,$$ considering how gamma matrices transform under Lorentz transformations, namely $$\gamma^{\mu}\to S^{-1}\gamma^{\mu}S=\Lambda^{\mu}_{\;\bar{\mu}}\gamma^{\bar{\mu}}.$$ Now I know this isn't right, but I'm tempted nonetheless to write the volume form in terms of the $\gamma^5$ matrix as $$dV=\psi^{\dagger}\gamma^{5}\psi\, d^{4}x.$$ I guess what's really bothering me is that when I see the axial charge before second quantization, $$Q_{axial}=\intop_{M}\psi^{\dagger}\gamma^{5}\psi\, d^{4}x,$$ I feel like I'm looking at a volume integral (which may be locally deformed by the map $\psi$). In this case I don't feel very surprised that the axial current isn't conserved after quantization. We live in an expanding universe; any spacelike volume is going to be nonconserved in time by some quantity related to the time derivative of our metric's scale parameter. Now, I get that quantization breaks conservation of the axial current (a choice made between that and the gauge fields); can we go the other way and say that breaking axial current conservation (by letting space expand) quantizes the gauge fields? I honestly don't know. I guess what I'm asking is: what IS the correct way to relate the volume form of a spacetime to the $\gamma^5$ matrix?
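Not an answer to the geometric question, but the algebraic facts being used are easy to check numerically. A minimal sketch in the Dirac representation (a standard choice, assumed here): it verifies that $\gamma^5 = i\gamma^0\gamma^1\gamma^2\gamma^3$ squares to the identity and anticommutes with every $\gamma^\mu$.

    import numpy as np

    I2 = np.eye(2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def block(a, b, c, d):
        return np.block([[a, b], [c, d]])

    # Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, s_i], [-s_i, 0]]
    g = [block(I2, 0*I2, 0*I2, -I2)] + [block(0*I2, s, -s, 0*I2) for s in (sx, sy, sz)]

    g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
    print(np.allclose(g5 @ g5, np.eye(4)))                     # (gamma^5)^2 = 1
    print(all(np.allclose(g5 @ gm + gm @ g5, 0) for gm in g))  # {gamma^5, gamma^mu} = 0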
In Riemannian geometry the non-negativity of the Ricci curvature $R$ of a manifold $X$ has strong implications for the size of the fundamental group $\pi_1(X)$: If $R>0$, then $\pi_1(X)$ is finite. If $R=0$, it is known that $\pi_1(X)$ is almost abelian, i.e., it contains an abelian subgroup of finite index; also, $\pi_1(X)$ has polynomial growth. In the case where $X$ is a smooth complex projective variety, the positivity of the Ricci curvature is related to ampleness properties of $-K_X$, so it would be interesting to see whether analogous results hold in algebraic geometry, with the Ricci curvature replaced by the Kodaira dimension $\kappa(X)=\sup_n\dim\phi_{nK}(X)$. So my question is: What implications does non-positive Kodaira dimension have for the fundamental group of $X$? In particular, do some versions of the above results hold with Ricci curvature replaced by Kodaira dimension? For example, if $X$ is a smooth projective variety with $\kappa(X)=0$, is $\pi_1(X)$ almost abelian? One could also ask for refined versions of the above statements. For example, when $X$ is Fano it is well known that $\pi_1(X)=0$. Does the same conclusion hold for all $X$ with big $-K_X$?
The formula for calculating the speed of sound in dry air is $$V(t)=V(0)+0.61t.$$ The temperature here is always taken in Celsius. Why don't we use kelvin?

The exact relation is $$ v = \sqrt{\gamma RT}. $$ Suppose we want to calculate the speed near some reference temperature $T_0$, i.e. the temperature is $T = T_0 + \delta T$ where $\delta T$ is small. We rewrite the equation for the velocity as $$ v = \sqrt{\gamma R(T_0 + \delta T)} $$ and then rearrange this to $$ v = \sqrt{\gamma R T_0} \left(1 + \frac{\delta T}{T_0}\right)^{1/2}.$$ Then, because $\delta T \ll T_0$, we can expand the square root using a binomial expansion to get $$ v \approx \sqrt{\gamma R T_0} \left(1 + \frac{1}{2}\frac{\delta T}{T_0}\right). $$ But the term $\sqrt{\gamma R T_0}$ is just the velocity at the temperature $T_0$, so our equation becomes $$ v \approx v(T_0) + \frac{1}{2}\sqrt{\frac{\gamma R}{T_0}}\,\delta T, $$ and that's how we get the approximate equation you cite. The particular case when $t$ is given in Celsius just comes from taking $T_0 = 273.15\ \mathrm{K}$, so there is nothing special about it; it's just a convenient choice of $T_0$.
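A quick numerical check of this linearization (a sketch; $\gamma = 1.4$ and $R = 287\ \mathrm{J/(kg\,K)}$ are standard textbook values for dry air, assumed here): the slope comes out near the quoted $0.61$.

    import math

    gamma, R = 1.4, 287.05     # dry air: heat capacity ratio and specific gas constant
    T0 = 273.15                # 0 degrees Celsius in kelvin

    v0 = math.sqrt(gamma * R * T0)             # ~331.3 m/s
    slope = 0.5 * math.sqrt(gamma * R / T0)    # ~0.61 m/s per degree
    print(v0, slope)

    for tC in (0, 10, 20, 30):                 # exact vs linearized speed
        print(tC, math.sqrt(gamma * R * (T0 + tC)), v0 + slope * tC)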
Let $A$ be an abelian variety of dimension $g$ defined over a number field $K$. Suppose $A$ has a principal polarization and $\ell$ is a prime number. We have a Weil pairing: $$ e_\ell: A[\ell]\times A[\ell] \rightarrow \mu_\ell $$ I don't know whether the Weil pairing is unique, so my question is as follows: Given a subgroup $G\subset A[\ell]$ of order $\ell^2$, is there a Weil pairing $e_\ell$ (possibly depending on $G$) such that there exist $P,Q\in G$ for which $e_\ell(P,Q)$ is a primitive $\ell$-th root of unity? I know from here that for any point $P\in G$, there exists a point $Q\in A[\ell]$ such that $e_\ell(P,Q)$ is a primitive $\ell$-th root of unity. Thanks in advance for comments and answers.
Does anyone know how many solutions there are for a XOR-SAT formula? And how are the variables distributed over the solutions? For example, if (x0=1, x1=0, x2=1) is a solution of a XOR-SAT formula, how is x0 distributed across the different solutions? Thanks!

The answer depends on the instance of the problem. For example, $$(x_0 \oplus x_1) \wedge (x_0 \oplus \neg x_1)$$ has no solution at all, while $$(x_0 \oplus x_1) \wedge (x_0 )$$ has solutions. Finding the solutions, or detecting the inconsistency, is not that hard. You convert the problem into a linear system of equations, $$(x_0 + x_1) \equiv 1 \mod 2, \text{ and }$$ $$ (x_0 + (1 + x_1)) \equiv 1 \mod 2, $$ and then the problem can be solved by Gaussian elimination.
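A small sketch of the Gaussian-elimination approach over GF(2) (Python; the encoding of the two example formulas is mine). Once the system is in row-echelon form, the number of solutions is $2^{n-\mathrm{rank}}$ when the system is consistent; and since the solution set is an affine subspace, each individual variable is either fixed across all solutions or takes the values 0 and 1 equally often, which answers the distribution question.

    import numpy as np

    def xor_sat(A, b):
        """Solve A x = b over GF(2). A: m x n 0/1 matrix, b: 0/1 vector.
        Returns (consistent?, number of solutions)."""
        A = np.array(A) % 2
        b = np.array(b) % 2
        m, n = A.shape
        row = 0
        for col in range(n):
            piv = next((r for r in range(row, m) if A[r, col]), None)
            if piv is None:
                continue
            A[[row, piv]] = A[[piv, row]]          # swap pivot row into place
            b[row], b[piv] = b[piv], b[row]
            for r in range(m):                     # eliminate the column elsewhere
                if r != row and A[r, col]:
                    A[r] ^= A[row]
                    b[r] ^= b[row]
            row += 1
        if any(b[r] and not A[r].any() for r in range(row, m)):
            return False, 0                        # a zero row with b = 1: inconsistent
        return True, 2 ** (n - row)                # each free variable doubles the count

    # (x0 xor x1) and (x0 xor not x1); the second clause rewrites to x0 + x1 = 0 mod 2
    print(xor_sat([[1, 1], [1, 1]], [1, 0]))       # (False, 0)
    # (x0 xor x1) and (x0)
    print(xor_sat([[1, 1], [1, 0]], [1, 1]))       # (True, 1)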
You can usually view the cost function as the average squared error over some dataset with $N$ pairs of data, thus being defined as: \begin{align}J &= \frac{1}{N} \sum_{i=1}^{N} \left(f(x_i,\beta) - y_i \right)^2\end{align} We want the average error of our model (over all the data we have) to decrease as we fine-tune values for $\beta$, the vector parametrically defining how our model $f(\cdot,\cdot)$ works. So we like to look at all the data at once, since it can be used to make changes to $\beta$ that should actually make the model improve as a whole. If we instead based the cost function on a subset of the dataset, we would end up with a cost function that may sub-optimally modify the value of $\beta$. This sub-optimal behavior may make us take longer to reach a local minimum of the cost function, or even diverge from a good solution if you aren't careful with your hyperparameters. Stochastic gradient descent and mini-batch gradient descent are methods that use a single piece of data or a subset of the dataset, respectively, to make adjustments. These methods have found use for really large datasets, where the cost of a full pass over the data to compute the exact gradients and costs outweighs the slower per-step convergence.
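A minimal sketch contrasting the two ideas on a toy least-squares problem (Python; the data, learning rate, and batch size are arbitrary choices of mine): each step uses only a mini-batch to estimate the full-data gradient of the cost above.

    import numpy as np

    rng = np.random.default_rng(0)
    N, d = 1000, 3
    X = rng.normal(size=(N, d))
    beta_true = np.array([2.0, -1.0, 0.5])
    y = X @ beta_true + 0.1 * rng.normal(size=N)   # linear model f(x, beta) = x . beta

    beta = np.zeros(d)
    lr, batch = 0.05, 32
    for epoch in range(200):
        idx = rng.permutation(N)                   # reshuffle each epoch
        for start in range(0, N, batch):
            b = idx[start:start + batch]
            resid = X[b] @ beta - y[b]             # f(x_i, beta) - y_i on the mini-batch
            grad = 2 * X[b].T @ resid / len(b)     # mini-batch gradient of J
            beta -= lr * grad
    print(beta)                                    # close to beta_true

Setting batch = 1 gives stochastic gradient descent; batch = N recovers full-batch gradient descent on $J$.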
S S TALWAR
Articles written in Pramana - Journal of Physics

Volume 67 Issue 1 July 2006 pp 121-134
The Langmuir-Blodgett (LB) process is an important route to the development of organized molecular layered structures of a variety of organic molecules with suitably designed architecture and functionality. LB multilayers have also been used as templates and precursors to develop nano-structured thin films. In this article, studies on the molecular packing and three-dimensional structure of prototypic cadmium arachidate (CdA), zinc arachidate (ZnA) and mixed CdA-ZnA LB multilayers are presented. The formation of semiconducting nano-clusters of CdS, ZnS and Cd$_x$Zn$_{1-x}$S alloys within the organic multilayer matrix, using arachidate LB multilayers as precursors, is also discussed.

Volume 87 Issue 4 October 2016 Article ID 0056 Regular
We have synthesized, characterized and studied the third-order nonlinear optical properties of two different nanostructures of polydiacetylene (PDA), PDA nanocrystals and PDA nanovesicles, along with silver nanoparticle-decorated PDA nanovesicles. The second molecular hyperpolarizability $\gamma (-\omega; \omega,-\omega,\omega)$ of the samples has been investigated by the antiresonant ring interferometric nonlinear spectroscopic (ARINS) technique using a femtosecond mode-locked Ti:sapphire laser in the spectral range of 720-820 nm. The observed spectral dispersion of $\gamma$ has been explained in the framework of the three-essential-states model, and a correlation between the electronic structure and the optical nonlinearity of the samples has been established. The energy of the two-photon state, the transition dipole moments and the linewidths of the transitions have been estimated. We have observed that the nonlinear optical properties of PDA nanocrystals and nanovesicles are different because of the influence of chain coupling effects facilitated by the chain packing geometry of the monomers. On the other hand, our investigation reveals that the spectral dispersion characteristic of $\gamma$ for silver nanoparticle-coated PDA nanovesicles is qualitatively similar to that observed for the uncoated PDA nanovesicles but bears no resemblance to that observed in silver nanoparticles. The presence of silver nanoparticles increases the $\gamma$ values of the coated nanovesicles slightly as compared to those of the uncoated nanovesicles, suggesting a definite but weak coupling between the free electrons of the metal nanoparticles and the $\pi$ electrons of the polymer in the composite system. Our comparative studies show that the arrangement of polymer chains in polydiacetylene nanocrystals is more favourable for higher nonlinearity.
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced.

Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.

@Srivatsan it is unclear what is being asked... Is the inner or the outer measure of $E$ meant by $m*(E)$? (Then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$, assuming completeness; or the question doesn't make sense.) If the ordinary measure is meant by $m*(E)$, then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.

A few questions where this tag would (in my opinion) make sense:
http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf
http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences
http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product
http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning
http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup
http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts

I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site (to which, I hope, my university has a subscription) that has this book? ebooks.cambridge.org doesn't seem to have it.

Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" in real-analysis.

Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741-747)?

No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post).

I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet.

@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.

@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.

@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or all) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits...

@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.

OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...

So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?

> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus course does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. - Pete L. Clark 4 mins ago

In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...

@MartinSleziak Yes, I almost expected the subnets debate. I was always happy with the order-preserving + cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really. When I look at the comments on Norbert's question, it seems that the comments together already give a sufficient answer to his first question, and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think, t.b.?

@tb About Alexei's question, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)

@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces, but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
I had two questions on my automated test which I don't understand the answer for. $\log(n!) = \log(n\cdot (n-1)\cdot \cdots \cdot 2\cdot 1) = \log(n)+\log(n-1)+....+\log(1)$. So it is in $O(n\log(n))$. But is it also in $\Omega(n \log(n))$? I don't think so, but my automated interview test thought so! $\log(n)+\log(n^2) = \log(n)+2\log(n) = 3\log(n)$. So, it is in $O(\log(n))$, $\Omega(\log(n))$ and $\Theta(\log(n))$. But for some reason my automated interview test thought otherwise. Is my understanding correct or is the automated test correct?
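For what it's worth on the first point: since the top half of the factors in $n!$ are each at least $n/2$, we get $\log(n!) \ge (n/2)\log(n/2)$, which is $\Omega(n\log(n))$; so $\log(n!) = \Theta(n\log(n))$ and the automated test appears to be right there, while your reasoning on the second question looks correct. A quick numeric check (Python; lgamma(n+1) equals $\log(n!)$):

    from math import lgamma, log

    for n in (10, 100, 1000, 10**6):
        print(n, lgamma(n + 1) / (n * log(n)))   # the ratio log(n!) / (n log n) -> 1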
Louis Moresi’s article in the December issue of SIAM News prompts readers to take a deeper look at models of Earth – deeper into Earth, that is, and more specifically into Earth’s mantle: the region between the solid crust and the metallic core. The mantle makes up about 84% of Earth by volume and extends to a depth of 2,890 km. At its bottom, it is believed to reach temperatures of about 3,300–4,400 K. Yet because of pressure from overlying material, the mantle consists of solid rock (with few localized exceptions of partially molten material), as evidenced by its ability to transmit seismic shear waves. However, on sufficiently large time scales it behaves like a vigorously convecting fluid, cooled by the cold surface above and heated by both the hot core below and radioactive decay within (see Figure 1). Thus, one could view plate tectonics simply as the surface expression of convection cells. Within the mantle, average velocities are on the order of cm/year.

Figure 1. Schematic of convection in Earth’s mantle, including cold downwellings (blue), hot upwellings (red), and strong chemical heterogeneities (orange, green), all of which influence the material properties of the rocks.

At first glance there seem to be only a few visible expressions of mantle convection. However, most volcanism on Earth’s surface is closely related to the distribution and advection of thermal and chemical heterogeneities in the mantle, at mid-ocean ridges, subduction zones, and oceanic islands, for example. Furthermore, the importance of the mantle lies in its interactions with other parts of Earth:

• When hot material rising in the mantle approaches the surface, it causes massive melting that leads to large volcanic eruptions. The volume of erupted material can exceed $10^6$ km$^3$ in just a few million years, leading to the formation of large volcanic provinces with areal extents greater than $10^6$ km$^2$. Due to the released gases, these events influence the global climate and may have led to mass extinction events.

• Cooling by the mantle drives convection in Earth’s core, which generates Earth’s magnetic field. A more viscous mantle would effectively insulate the core, and a less viscous mantle would have led to the core’s crystallization long ago; both scenarios would leave us without the magnetic field that offers protection from the harsh radiation of space and keeps the solar wind from driving away the atmosphere.

• By ingesting water- and sediment-laden subducting oceanic plates (and later releasing material again through volcanic outgassing), the mantle acts as a giant reservoir for water and carbon, thus creating a carbon cycle on time scales of tens of millions of years.

For these reasons, understanding the dynamics of convection and the conditions under which it happens has long been important in the geosciences. More recently, a fair number of computational and applied mathematicians have also entered the field, as mantle convection presents an attractive, accessible set of problems that require large-scale computations, are tough on linear and nonlinear solvers, yet still fall within a solvable range. Many aspects of mantle convection can already be understood qualitatively by considering a set of equations in which fluid flow is slow and driven by density differences that result from temperature variation.
The Stokes equations allow for adequate modeling of velocity and pressure: \[-\nabla \cdot (2\eta\, \varepsilon (u)) + \nabla p = \rho g,\] \[ \nabla \cdot u = 0,\] with viscosity $\eta$ (ranging from $10^{18}$ to $10^{23}$ Pa·s; for comparison, the viscosity of water is about $10^{-3}$ Pa·s, that of air $10^{-5}$ Pa·s, and that of honey ~10 Pa·s), density \(\rho\) and gravity \(g\). An advection-diffusion equation models the temperature: \[\frac{\partial T}{\partial t} + u \cdot \nabla T - \nabla \cdot \kappa \nabla T=Q.\] This must be augmented with appropriate boundary conditions for the velocity and temperature, such as tangential or zero velocity around the boundary, and “hot” at the bottom and “cold” at the top of the domain. In the simplest description, the density depends linearly on the temperature, \(\rho (T) = \rho_{ref} - \alpha (T - T_{ref})\), where \(\alpha\) is the thermal expansion coefficient. With realistically small values for the thermal diffusivity \(\kappa\), such a model already (correctly) predicts that heat is transported from the core to the surface primarily through “blobs” or sheets of hot material that rise up from the core-mantle boundary, and sheets of cold material that well down from the surface (see Figure 2). We can identify these hot blobs with the mantle plumes believed to give rise to Earth’s hot spots, and the cold sheets with subducting oceanic plates.

Figure 2. Two hemispherical views of a global mantle convection model. Cold material (blue to grey colors) sinks towards the core-mantle boundary, while hot low-viscosity material (rainbow colors) rises towards the surface in focused upwellings.

In this simple model, the dynamics are easy to understand when considering the Rayleigh number \(Ra = \alpha \,\Delta T\, g\, D^3/(\eta \, \kappa)\), where \(D\) is a length scale of the domain and \(\Delta T\) is the temperature difference between the surface and the core-mantle boundary; the larger the Rayleigh number, the smaller the features of the flow field. This already gives rise to a formidable computational challenge: with realistic values for the material parameters in the above equations, Earth’s flow features can be as small as a few kilometers across. With a volume of around $10^{12}$ km$^3$, a finite element discretization using uniform meshes requires about a billion cells and a few $10^{10}$ unknowns to achieve appreciable accuracy. The current generation of codes can reach into this range, either through highly-tuned numerics or the use of adaptive mesh refinement. Some of the largest implicit finite element computations have indeed been performed on this problem [1, 2, 3, 5, 6, 7]. The most recent Gordon Bell prize was also awarded for the solution of this system on up to 500,000 cores [4].

A separate, equally difficult challenge arises from the fact that realistic materials do not behave as outlined above. Rather than expanding linearly with temperature, rocks undergo phase changes where density varies both continuously and discontinuously as a function of temperature and pressure. The same is true for viscosity, which may vary by many orders of magnitude even over small distances where hot and cold materials come together, such as when cold oceanic slabs subduct into the hot mantle. Viscosity also depends strongly and nonlinearly on stress, grain size, water content, and a number of other quantities.
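For a rough sense of scale, the Rayleigh number above is easy to evaluate with representative values (a back-of-the-envelope sketch; all numbers are assumed, and the common form with an explicit density factor $\rho$ is used, which corresponds to absorbing $\rho$ into $\eta$ in the formula as written in the text):

    # order-of-magnitude Rayleigh number for whole-mantle convection
    alpha = 3e-5       # thermal expansivity, 1/K        (assumed)
    dT    = 3000.0     # temperature contrast, K         (assumed)
    g     = 9.81       # gravity, m/s^2
    D     = 2.89e6     # mantle depth, m
    rho   = 4000.0     # mean mantle density, kg/m^3     (assumed)
    eta   = 1e21       # dynamic viscosity, Pa s         (middle of the quoted range)
    kappa = 1e-6       # thermal diffusivity, m^2/s      (assumed)

    Ra = rho * g * alpha * dT * D**3 / (eta * kappa)
    print(f"Ra = {Ra:.1e}")    # roughly 1e8 here: vigorous convection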
Finally, when considering the entire mantle, a mass conservation equation, \(\nabla \cdot (\rho u) = 0\), must replace the above incompressibility equation. Cumulatively, compressibility and strong nonlinearities complicate the design of efficient and accurate solver strategies. The required nonlinear iterations also make the solution very expensive computationally. While the nonlinearity can be treated efficiently in a time-stepping scheme, solving the first time step self-consistently can become a significant challenge. Equally difficult are the large jumps in viscosity that result from strong temperature gradients that often cannot be resolved adequately by the mesh. Such “essentially discontinuous” viscosity fields cause large discretization errors and pose enormous challenges to the design of linear solvers and preconditioners. Recent experiments show that appropriately averaging material parameters on each cell, without reducing the overall convergence order, can significantly reduce these discretization errors. This also vastly improves the efficiency of linear solvers, sometimes reducing the time to solution by a factor of ten on complex models. Much more progress in all areas of computational mathematics (discretizations, nonlinear and linear solvers, preconditioners, and parallel algorithms) is necessary to make the solution of complex models in mantle convection fast and routine. However, many of the most widely-used codes are open source and well-documented, such as Citcom, or our own contribution, ASPECT. Specifically, ASPECT is built as a modular platform that allows mathematical and computational scientists to test new discretizations and solvers on realistic problems, and geoscientists to develop and test new model descriptions. The Computational Infrastructure for Geodynamics has been collecting and curating these and other codes to facilitate both geodynamical research and experimentation with new numerical methods.

References
[1] Burstedde, C., Stadler, G., Alisic, L., Wilcox, L.C., Tan, E., Gurnis, M., & Ghattas, O. (2013). Large-scale adaptive mantle convection simulation. Geophysical Journal International, 192(3), 889-906.
[2] Gmeiner, B., Huber, M., John, L., Rüde, U., & Wohlmuth, B. (2015). A quantitative performance analysis for Stokes solvers at the extreme scale. Preprint, arXiv:1511.02134.
[3] Gmeiner, B., Rüde, U., Stengel, H., Waluga, C., & Wohlmuth, B. (2015). Performance and scalability of hierarchical hybrid multigrid solvers for Stokes systems. SIAM Journal on Scientific Computing, 37(2), C143-C168.
[4] Rudi, J., Malossi, A.C.I., Isaac, T., Stadler, G., Gurnis, M., Staar, P.W.J., …, Ghattas, O. (2015). An extreme-scale implicit solver for complex PDEs: highly heterogeneous flow in earth’s mantle. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. Austin, TX: ACM.
[5] Stadler, G., Gurnis, M., Burstedde, C., Wilcox, L.C., Alisic, L., & Ghattas, O. (2010). The dynamics of plate tectonics and mantle flow: From local to global scales. Science, 329(5995), 1033-1038.
[6] Sundar, H., Biros, G., Burstedde, C., Rudi, J., Ghattas, O., & Stadler, G. (2012, November). Parallel geometric-algebraic multigrid on unstructured forests of octrees. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (p. 43). Salt Lake City, UT: IEEE Computer Society Press.
[7] Weismüller, J., Gmeiner, B., Ghelichkhan, S., Huber, M., John, L., Wohlmuth, B.,…Bunge, H.P. (2015). Fast asthenosphere motion in high-resolution global mantle flow models. Geophysical Research Letters, 42(18), 7429-7435.
It seems when people say Cohen's d they mostly mean: $$d = \frac{\bar{x}_1 - \bar{x}_2}{s}$$ where $s$ is the pooled standard deviation, $$s = \sqrt{\frac{\sum(x_1 - \bar{x}_1)^2 + \sum(x_2 - \bar{x}_2)^2}{n_1 + n_2 - 2}}$$ There are other estimators for the pooled standard deviation; probably the most common apart from the above is: $$s^* = \sqrt{\frac{\sum(x_1 - \bar{x}_1)^2 + \sum(x_2 - \bar{x}_2)^2}{n_1 + n_2}}$$ Notation here is remarkably inconsistent, but sometimes people say that the $s^*$ (i.e., the $n_1 + n_2$) version is called Cohen's $d$, and reserve the name Hedges' $g$ for the version that uses $s$ (i.e., with Bessel's correction, the $n_1 + n_2 - 2$ version). This is a bit weird, as Cohen outlined both estimators for the pooled standard deviation (e.g., the $s$ version on p. 67 of Cohen, 1977) before Hedges wrote about them (Hedges, 1981). Other times Hedges' g is reserved to refer to either of the bias-corrected versions of the standardised mean difference that Hedges developed. Hedges (1981) showed that Cohen's d is upwardly biased (i.e., its expected value is higher than the true population parameter value), especially in small samples, and proposed a correction factor for Cohen's d's bias:

Hedges' g (the unbiased estimator): $$g = d \cdot \frac{\Gamma(df/2)}{\sqrt{df/2 \,}\,\Gamma((df-1)/2)}$$ where $df = n_1 + n_2 -2$ for an independent groups design, and $\Gamma$ is the gamma function. (Originally Hedges, 1981; this version developed from Hedges and Olkin, 1985, p. 104.)

However, this correction factor is fairly computationally complex, so Hedges also provided a computationally trivial approximation that, while still slightly biased, is fine for almost all conceivable purposes:

Hedges' $g^*$ (the computationally trivial approximation): $$ g^* = d\cdot\left(1 - \frac{3}{4\,df - 1}\right)$$ where $df = n_1 + n_2 -2$ for an independent groups design. (Originally from Hedges, 1981; this version from Borenstein, Hedges, Higgins, & Rothstein, 2011, p. 27.)

But as for what people mean when they say Cohen's d vs. Hedges' g vs. $g^*$: people seem to refer to any of these three estimators as Hedges' g or Cohen's d interchangeably, although I've never seen someone write "$g^*$" in a non-methodology/stats research paper. If someone says "unbiased Cohen's d", you're just going to have to take your best guess at either of the last two (and I think there might even be another approximation that has been used for Hedges' $g^*$ too!). They are all virtually identical if $n > 20$ or so, and all can be interpreted in the same way. For all practical purposes, unless you're dealing with really small sample sizes, it probably doesn't matter which you use (although if you can pick, you may as well use the one that I've called Hedges' g, as it is unbiased).

References:
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2011). Introduction to meta-analysis. West Sussex, United Kingdom: John Wiley & Sons.
Cohen, J. (1977). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Hedges, L. V. (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6(2), 107-128. doi:10.3102/10769986006002107
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. San Diego, CA: Academic Press.
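A small sketch computing the three quantities above (Python; the two samples are made-up numbers, and math.gamma supplies $\Gamma$):

    import math

    def cohens_d(x1, x2):
        n1, n2 = len(x1), len(x2)
        m1, m2 = sum(x1)/n1, sum(x2)/n2
        ss = sum((v - m1)**2 for v in x1) + sum((v - m2)**2 for v in x2)
        s = math.sqrt(ss / (n1 + n2 - 2))      # pooled SD with Bessel's correction
        return (m1 - m2) / s

    def hedges_g(d, df):                       # exact small-sample correction
        J = math.gamma(df/2) / (math.sqrt(df/2) * math.gamma((df - 1)/2))
        return d * J

    def hedges_g_approx(d, df):                # the computationally trivial version
        return d * (1 - 3 / (4*df - 1))

    x1 = [5.1, 4.8, 6.0, 5.5, 5.9]             # made-up example data
    x2 = [4.2, 4.9, 4.4, 5.0, 4.1]
    d = cohens_d(x1, x2)
    df = len(x1) + len(x2) - 2
    print(d, hedges_g(d, df), hedges_g_approx(d, df))

For $df = 8$ the exact and approximate correction factors already agree to three decimal places, illustrating the "fine for almost all conceivable purposes" remark.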
Let $$$h_n(r) = \displaystyle \sum_{i=1}^{n} i^r$$$. Then $$$h_n(r)$$$ and $$$h_m(r)$$$ can be computed for all $$$1 \leq r \leq 2k$$$, for both $$$n$$$ and $$$m$$$, in $$$O(k^2)$$$ using a recursion.

Let $$$P_{ij}$$$ be the probability that $$$(i, j)$$$ has a $$$1$$$ in it after all the operations. By linearity of expectation, we have to find $$$\displaystyle \sum_{i, j} P_{i,j}$$$. Let $$$p_{i,j}$$$ be the probability that cell $$$(i, j)$$$ is flipped in one such operation. The total number of submatrices is $$$\dfrac{n(n+1)}{2} \dfrac{m(m+1)}{2}$$$, and the number of submatrices containing $$$(i, j)$$$ is $$$i(n-i+1)j(m-j+1)$$$. Now, $$$P_{i, j}$$$ is the probability that this cell is flipped in an odd number of operations, so that is the sum we have to find. Let $$$t = - \dfrac{8}{n(n+1)m(m+1)}$$$. Also, let $$$f_n(i) = i (n + 1 - i)$$$ and expand using the binomial theorem. Hence $$$\displaystyle \sum_{i=1}^{n} f_n(i)^r$$$ can be computed in $$$O(r) = O(k)$$$ once the power sums are known, and thus the original formula can be computed in $$$O(k^2)$$$.

We convert the problem into: given a value $$$r$$$, find the number of lines with distance $$$\leq r$$$ from the point $$$q$$$. For this, consider a circle $$$C$$$ with radius $$$r$$$ centered at $$$q$$$, and two points $$$A$$$ and $$$B$$$, both outside the circle $$$C$$$. The line passing through $$$A$$$ and $$$B$$$ has distance $$$\leq r$$$ from $$$q$$$ iff it intersects the circle $$$C$$$. Let $$$F_A$$$ and $$$G_A$$$ be the points of contact of the tangents drawn from $$$A$$$ to the circle; similarly define $$$F_B$$$ and $$$G_B$$$. We can prove that the line passing through $$$A$$$ and $$$B$$$ intersects the circle $$$C$$$ if and only if the line segments $$$F_{A} G_{A}$$$ and $$$F_{B}G_{B}$$$ do NOT intersect. So, we need to draw $$$2 n$$$ tangents, then draw $$$n$$$ chords passing through the respective points of contact, and count the number of intersections of these chords. For this, sort the points by polar angle in $$$O(n \log{n})$$$; then we can count the number of intersections of the line segments in $$$O(n \log{n})$$$ by iterating over the points. Therefore, this question can be answered in $$$O(n \log{n})$$$, and we can binary search to find the answer in overall complexity $$$O(n \log{n} \log{\frac{R}{\epsilon}})$$$.
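A small Python sketch of the binomial-expansion step described above, using plain integers rather than the modular arithmetic a contest solution would use (the $O(k^2)$ recursion for the power sums is not reproduced in the editorial, so a naive loop stands in for it here):

    from math import comb

    def power_sums(n, K):
        # h[r] = sum_{i=1}^{n} i^r for r = 0..K (naive O(nK) stand-in)
        h = [0] * (K + 1)
        for i in range(1, n + 1):
            p = 1
            for r in range(K + 1):
                h[r] += p
                p *= i
        return h

    def sum_fnr(n, r):
        # sum_{i=1}^{n} (i (n+1-i))^r = sum_j C(r,j) (n+1)^{r-j} (-1)^j h(r+j)
        h = power_sums(n, 2 * r)
        return sum(comb(r, j) * (n + 1)**(r - j) * (-1)**j * h[r + j]
                   for j in range(r + 1))

    n, r = 10, 3   # brute-force check of the expansion
    assert sum_fnr(n, r) == sum((i * (n + 1 - i))**r for i in range(1, n + 1))
    print(sum_fnr(n, r))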
If $H$ is a Hilbert space with norm $\| \cdot \|$ and $A$ is an operator, we call $A$ a Hilbert-Schmidt operator if $$\sum_{n=1}^\infty \|Ax_n\|^2<\infty$$ for some orthonormal basis $\{x_n\}$. Consider $L^2(X,\mu)$. How could one prove that every Hilbert-Schmidt operator on this space is given by $$(Af)(x)=\int_X k(x,y)f(y)\,dy$$ for some $$k(x,y)\in L^2(X\times X, \mu \times \mu)?$$ I am not really sure where to start, but I imagine this would be in many textbooks/online notes. Does this fact have a particular name? Ideally I would love it if someone could show why it is true, but a reference is useful as well. Thanks for any help!
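Not a proof, but the basis-independence that makes the definition above sensible is easy to see in a finite-dimensional analogue, where the matrix entries play the role of the kernel $k(x,y)$ and the Frobenius norm plays the role of $\|k\|_{L^2}$ (a sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(6, 6))                    # a finite-dimensional "operator"

    # sum ||A x_n||^2 over any orthonormal basis equals ||A||_F^2
    Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))   # random orthonormal basis (columns)
    s1 = sum(np.linalg.norm(A @ Q[:, n])**2 for n in range(6))
    s2 = np.linalg.norm(A, 'fro')**2               # = the sum over the standard basis
    print(np.isclose(s1, s2))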
Suppose a bullet were to be fired straight up with a velocity $v$ from the surface at the equator. Considering the spin of the Earth, will it land at the same spot? I tried to solve this assuming a constant acceleration due to gravity. Since the horizontal velocity of the bullet remains constant, at height $h$ its angular velocity will be (with the angular velocity at the ground $\omega$ and the radius of the Earth $R$) $$ \omega' = \frac{\omega R}{R+h}. $$ Now $\omega'$ can be written as $d\theta/dt$ and $h$ as $vt-\frac{1}{2}gt^2$, so after rearranging we get $$ d\theta = \frac{\omega R}{R+vt-\frac{1}{2}g t^2}\,dt. $$ Integrating this with respect to $t$ from $0$ to $2v/g$ gives me the angular displacement of the bullet. The angular displacement of the ground is $2v\omega/g$, and if we subtract the two and then multiply by $R$ we should get the distance by which the ground travels further. However, after calculating everything I'm getting a very large distance. For example, a bullet fired at $1700\ \mathrm{m/s}$ would land $2340\ \mathrm{m}$ away, which seems absurdly wrong. Can anyone point out where my mistake is?
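For what it's worth, the integral in the question is easy to evaluate numerically, which at least separates an arithmetic slip from a modelling one (a sketch; $\omega$ is taken as Earth's sidereal rotation rate, an assumed value, and the flat-gravity model is kept exactly as stated):

    from scipy.integrate import quad

    g, R, v = 9.81, 6.371e6, 1700.0     # gravity, Earth radius, muzzle speed (assumed)
    omega = 7.292e-5                    # Earth's angular velocity, rad/s (assumed)
    T = 2 * v / g                       # time of flight in the flat-gravity model

    # the question's d(theta) = omega R / (R + v t - g t^2 / 2) dt
    theta_bullet, _ = quad(lambda t: omega * R / (R + v*t - 0.5*g*t*t), 0, T)
    theta_ground = omega * T
    print((theta_ground - theta_bullet) * R)   # metres by which the ground outruns the bullet

If this reproduces a kilometre-scale figure, the arithmetic is fine and any error must lie in the modelling assumptions themselves.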
I guess the price of a zero-coupon bond with infinite maturity should go to zero; what about its yield? I am asking this because I was dealing with the yield curve and its asymptotic properties as $t\to\infty$.

This is something that banks don't do very well (in my opinion), but we can look to the insurance industry for help. Insurance liabilities often span decades, and the regulation has come up with something called the Ultimate Forward Rate (or UFR). It's currently a hotly debated topic with the advent of Solvency II (insurance regulation) coming into effect on 01/01/2016. This is because the UFR is not always set with an eye to long-term interest rates, but more by looking at an appropriate liability discount rate. The insurance industry's preferred curve-fitting approach is the Smith-Wilson model, which has the UFR as an input. Hopefully this is a useful starting point for your research.

In the end, the actual value of the yield of an infinitely lived bond is irrelevant. As long as your infinite-year forward rate is reasonable (i.e. not $\infty$), then $\lim_{t \rightarrow \infty} e^{-rt} = 0$ anyway.

While it is true that $$\lim_{T\to\infty} Z(t, T) = \lim_{T\to\infty} e^{-r(T-t)} = 0,$$ this is when $r$ is independent of time to maturity, i.e. a flat and constant yield curve. In practice, we use yield curves which vary depending on what day they are estimated and what maturity the ZCB has. If in fact $r(t, T)$ depends on today and the maturity, then the properties of that function are going to determine what the limit is. Of course, any model that allows a non-zero price for an infinite-maturity ZCB is admitting arbitrage. Commonly, the Nelson-Siegel and Nelson-Siegel-Svensson (original paper) models are used; in that case $$ r(t, T) = \beta_0 + \beta_1{1-\exp(-(T-t)/\tau)\over(T-t)/\tau} + \beta_2\left({1-\exp(-(T-t)/\tau)\over(T-t)/\tau}-\exp(-(T-t)/\tau)\right).$$ In this model $\lim_{T\to\infty}r(t, T) = \beta_0$ whenever $\tau > 0$, and so $$\lim_{T\to\infty}Z(t, T) = \lim_{T\to\infty} e^{-r(t, T)(T-t)}=0$$ whenever $\beta_0 > 0$.
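A small sketch of the Nelson-Siegel point above (Python; the parameter values are arbitrary choices of mine): the yield flattens out at $\beta_0$ while the discount factor, and hence the ZCB price, goes to zero.

    import numpy as np

    def ns_yield(ttm, b0=0.04, b1=-0.02, b2=0.01, tau=2.0):
        # Nelson-Siegel spot rate for time to maturity ttm = T - t
        x = ttm / tau
        return b0 + b1 * (1 - np.exp(-x)) / x + b2 * ((1 - np.exp(-x)) / x - np.exp(-x))

    for T in (1, 10, 100, 1000):
        r = ns_yield(T)
        print(T, r, np.exp(-r * T))   # yield -> b0 = 0.04, price -> 0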
When you check CP2K's features and the outline of the lecture, you will notice that there are many levels of theory, methods, and possibilities to combine them. This results in a large number of possible options and coefficients to set up, control and tune a specific simulation. Together with the parameters for the numerical solvers, this means that an average CP2K configuration file will contain quite a number of options (even though for many others the default value will be applied), and not all of them will be discussed in the lecture or the exercises. The CP2K Manual is the complete reference for all configuration options. Where appropriate you will find a reference to the respective paper when looking up a specific keyword/option.

To get you started, we will do a simple exercise using Molecular Mechanics (that is: a classical approach). The point is to get familiar with the options, organizing and editing the input file, and analyzing the output. In this exercise you will compute the Lennard-Jones energy curve for a system of two krypton (Kr) atoms using a Molecular Mechanics simulation rather than the analytical form of the potential. In Part I you find the instructions for computing the energy of two Kr atoms at a distance r = 4.00 Å. In Part II you find the instructions for getting the energy profile as a function of r. Additional parameters for neon (Ne) and combination rules to obtain new parameters are provided in Parts III and IV. You are expected to hand in the respective plots by email, either as one PDF or as one file per plot (PNG, JPEG, or PDF format).

In this section a commented CP2K input example for a single point calculation is provided. Comments are added; they start with an exclamation mark '!'. Load the CP2K module as shown before, create a directory ex1 and change to it:

mkdir ex1
cd ex1

Save the following input to a file named energy.inp (for example using $ vim energy.inp):

&GLOBAL                        ! section to select the kind of calculation
  RUN_TYPE ENERGY              ! type of calculation: ENERGY (= single point calculation)
&END GLOBAL
&FORCE_EVAL                    ! section with parameters and system description
  METHOD FIST                  ! Molecular Mechanics method
  &MM                          ! specification of MM parameters
    &FORCEFIELD                ! parameters needed to describe the potential
      &SPLINE
        EMAX_SPLINE 10000      ! numeric parameter to ensure calculation stability. Should not be changed
      &END SPLINE
      &NONBONDED               ! parameters for the non-bonded interactions
        &LENNARD-JONES         ! Lennard-Jones parameters
          ATOMS Kr Kr
          EPSILON [K_e] 164.56
          SIGMA [angstrom] 3.601
          RCUT [angstrom] 25.0
        &END LENNARD-JONES
      &END NONBONDED
      &CHARGE
        ATOM Kr
        CHARGE 0.0
      &END CHARGE
    &END FORCEFIELD
    &POISSON                   ! solver for non-periodic calculations
      PERIODIC NONE
      &EWALD
        EWALD_TYPE none
      &END EWALD
    &END POISSON
  &END MM
  &SUBSYS                      ! system description
    &CELL
      ABC [angstrom] 10 10 10
      PERIODIC NONE
    &END CELL
    &COORD
      UNIT angstrom
      Kr 0 0 0
      Kr 4 0 0
    &END COORD
  &END SUBSYS
&END FORCE_EVAL

Sections start with &SECTION-NAME and are closed by a matching &END SECTION-NAME. The other instructions are called keywords.

Run a CP2K calculation with the following command:

$ cp2k.sopt -i energy.inp -o energy.out

or

$ cp2k.sopt -i energy.inp | tee energy.out

which will write the output simultaneously to a file energy.out and show it in the terminal. The output ($ less energy.out) should look like this:

[...]
 **** **** ******  **   PROGRAM STARTED AT    2016-09-22 15:15:15.977
 ***** ** ***  *** **   PROGRAM STARTED ON                     tcopt3
 **    ****   ******    PROGRAM STARTED BY                  studentXX
 ***** **    ** ** **   PROGRAM PROCESS ID                     112277
  **** **  *******  **  PROGRAM STARTED IN /data/students/studentXX/ex1
[...]
ENERGY| Total FORCE_EVAL ( FIST ) energy (a.u.): -0.000518941408898
[...]
The number of warnings for this run is : 0
-------------------------------------------------------------------------------
 **** **** ******  **   PROGRAM ENDED AT      2016-09-22 15:15:16.027
 ***** ** ***  *** **   PROGRAM RAN ON                         tcopt3
 **    ****   ******    PROGRAM RAN BY                      studentXX
 ***** **    ** ** **   PROGRAM PROCESS ID                     112277
  **** **  *******  **  PROGRAM STOPPED IN /data/students/studentXX/ex1

If you get the closing banner, you know that CP2K finished. The following line tells you the result:

ENERGY| Total FORCE_EVAL ( FIST ) energy (a.u.): -0.000518941408898

This is the energy (in hartree) for a system of 2 Kr atoms at distance r = 4.00 Å. Note that in the input file EPSILON is given in units of kelvin, whereas in the output the energy is printed in hartree, the unit of energy in the system of atomic units (a.u.). To convert from kelvin to hartree you have to multiply by the Boltzmann constant $ k_\text{B} = 3.1668154 \cdot 10^{-6} \frac{E_\text{H}}{\text{K}} $.

The number of warnings for this run is : ...

If that number is not zero, you must check the rest of the output for warnings and act accordingly; otherwise you may work with wrong results.

In this section a few scripts to get the LJ energy profiles are presented. In order to get a good profile, a set of energy values as a function of the interatomic distance is needed. You can reuse the energy.inp input file and change the Kr coordinates to get different distances. First save the previous output:

$ mv energy.out energy_dist4A.out

Then make a copy of the input for each new distance:

$ cp energy.inp energy_dist2A.inp

edit the copy to update the coordinates (e.g. 2 Å), and rerun CP2K to produce a new output file:

$ cp2k.sopt -i energy_dist2A.inp -o energy_dist2A.out

When you have tested a few distances, you can produce a table looking like:

Input file          Distance (Å)   Energy (Eh)
energy_dist1A.inp   1              ...
energy_dist2A.inp   2              ...
energy_dist3A.inp   3              ...
...                 ...            ...

This is the Lennard-Jones energy curve for two Kr atoms. By using any plotting program you can now get a representation of the energy profile. Choose an appropriate minimum distance and step size.

Now we do the same for Ne atoms: use the previous input file as a template and update the parameters according to the following:

&NONBONDED
  &LENNARD-JONES         ! Lennard-Jones Ne parameters
    ATOMS Ne Ne
    EPSILON [K_e] 36.831
    SIGMA [angstrom] 2.775
    RCUT [angstrom] 25.0
  &END LENNARD-JONES
&END NONBONDED
&CHARGE
  ATOM Ne
  CHARGE 0.0
&END CHARGE

Plot the energy curve again.

Finally we look at the curve for Kr-Ne. The epsilon and sigma for the Lennard-Jones potential between Kr and Ne can be calculated using the parameters from the Kr-Kr and Ne-Ne interactions: $$ \sigma_{ij}= \sqrt{\sigma_i\sigma_j}$$ $$ \epsilon_{ij}= \sqrt{\epsilon_i\epsilon_j}$$ Please note: a LENNARD-JONES section must now be present for all three possible pairs, Kr-Kr, Ne-Ne and Kr-Ne:

&LENNARD-JONES           ! Lennard-Jones parameters for the Kr-Kr interaction
  ATOMS Kr Kr
  EPSILON [K_e] 164.56
  SIGMA [angstrom] 3.601
  RCUT [angstrom] 25.0
&END LENNARD-JONES
&LENNARD-JONES           ! Lennard-Jones Ne-Ne parameters
  ATOMS Ne Ne
  EPSILON [K_e] 36.831
  SIGMA [angstrom] 2.775
  RCUT [angstrom] 25.0
&END LENNARD-JONES
&LENNARD-JONES   ! Lennard-Jones parameters for the Kr-Ne interaction
  ATOMS Kr Ne
  EPSILON [K_e] YOUR EPSILON
  SIGMA [angstrom] YOUR SIGMA
  RCUT [angstrom] 25.0
&END LENNARD-JONES

The CHARGE section must also be duplicated:

&CHARGE
  ATOM Ne
  CHARGE 0.0
&END CHARGE
&CHARGE
  ATOM Kr
  CHARGE 0.0
&END CHARGE

In the &COORD section, one atom must be set to Kr, the other to Ne. Plot the energy curve again.

Often you will have to extract some value from a simulation output, in this case the energy. This can be achieved in a number of ways:

grep command:

$ grep "Total FORCE_EVAL" energy.out

which gives you:

ENERGY| Total FORCE_EVAL ( FIST ) energy (a.u.):            -0.000250281091139

awk tool:

$ awk '/Total FORCE_EVAL/ { print $9; }' energy.out

which returns:

-0.000250281091139

awk reads a given file line by line and splits each line into multiple fields (using whitespace as delimiter). The command above tells awk to look for the string Total FORCE_EVAL in a line and, if found, print field number 9 of it, effectively returning the energy.

Often you will also have to run the same simulation with different parameters (here, the distance). A simple way to generate the different input files is shell scripting in combination with sed (the stream editor):

for d in $(seq 2 0.1 4); do
  sed -e "s|4 0 0|${d} 0 0|" energy.inp > energy_${d}A.inp
  cp2k.sopt -i energy_${d}A.inp -o energy_${d}A.out
  awk '/Total FORCE_EVAL/ { print $9; }' energy_${d}A.out
done

seq 2 0.1 4 generates the numbers 2.0, 2.1, 2.2, …, 4.0 (try it out!)
for d in $(seq 2 0.1 4); do runs all the commands which follow once for every number (stored in $d)
sed -e "s|4 0 0|${d} 0 0|" energy.inp looks for 4 0 0 in the file energy.inp (the original file from above) and replaces it by ${d} 0 0 (that is: 2.0 0 0, 2.1 0 0, …)
> energy_${d}A.inp redirects the output of the sed command to new input files energy_2.0A.inp, energy_2.1A.inp, etc.
cp2k.sopt runs CP2K as shown before on those new input files and writes the output to new output files as well
awk extracts the energy from each output file
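To check the results, the energies collected this way can be compared against the analytical Lennard-Jones potential $V(r) = 4\epsilon\left[(\sigma/r)^{12} - (\sigma/r)^6\right]$ that the simulation is supposed to reproduce. The following Python sketch is one possible way to do the post-processing; it assumes the file naming produced by the loop above, and the parsing details (file pattern, regular expressions) are illustrative rather than part of the exercise:

import glob
import re

# Compare the FIST energies written by the loop above with the analytical
# Lennard-Jones potential, using the Kr-Kr parameters from the input file.
K_TO_HARTREE = 3.1668154e-6              # Boltzmann constant in E_H / K
eps = 164.56 * K_TO_HARTREE              # EPSILON, converted from Kelvin
sigma = 3.601                            # SIGMA in angstrom

for path in sorted(glob.glob("energy_*A.out")):
    r = float(re.search(r"energy_(.+)A\.out", path).group(1))
    with open(path) as fh:
        energy = re.search(r"Total FORCE_EVAL.*?(-?\d+\.\d+)", fh.read()).group(1)
    v_lj = 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    print(f"r = {r:4.1f} A   CP2K: {energy}   analytical: {v_lj: .12f}")

At $r = 4.0$ Å this gives $V \approx -5.189 \cdot 10^{-4}\,E_\text{H}$, in agreement with the CP2K result from Part I (any small residual difference would come from the cutoff and the internal spline representation of the potential).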
How do I calculate the standard error of the mean using MATLAB?

Say, for instance, that you have a sample of annual household incomes drawn from the general population of the United States, containing five observations. First, create an array called "data" containing these observations in MATLAB. The standard error of the mean (SEM) is the standard deviation of the sampling distribution of the sample mean; it is a way of illustrating the certainty with which we know the mean of the underlying population based upon our sample of it. Starting from the definition of the sample standard deviation,

$$\sigma = \sqrt{\dfrac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2},$$

the SEM is $\sigma/\sqrt{N}$, so in MATLAB you compute it as std(data)/sqrt(length(data)). Note that MATLAB's std normalizes by N-1 by default, not by N; the normalization is discussed right in the documentation. std also accepts a dimension argument dim, in which case it operates along that dimension and the lengths of all other dimensions remain the same (nanstd does the same while omitting NaN values). To turn the SEM into a confidence interval, multiply it by the appropriate quantile from the normal distribution.

Subject: How to get the standard error of regression coefficients?

Instead of stats = regstats(Y,X,'linear'), use something like stats = regstats(Y,X,'linear',{'beta','covb'}), then take the sqrt of the diagonal of stats.covb.
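For illustration, here is the same computation written in Python with numpy (a stand-in for the MATLAB session above; the five income values are invented):

import numpy as np

# Hypothetical sample of five annual household incomes (in thousands of $).
data = np.array([38.0, 45.5, 52.1, 61.3, 70.7])

sd = data.std(ddof=1)           # ddof=1 gives the N-1 normalization, as in MATLAB's std
sem = sd / np.sqrt(data.size)   # standard error of the mean
ci95 = 1.96 * sem               # half-width of an approximate 95% interval
print(sem, ci95)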
A 2m Quad antenna made from 1/2" tubing will have a larger bandwidth than the same Quad built with 14 AWG wire. Would the respective bandwidths be maintained if both Quad loops were made into Quagis by adding, say, a reflector and five directors (on a boom) that were all 14 AWG wire? Or would the 1/2" tubing Quagi need 1/2" tube elements, or even just the reflector, to keep its larger bandwidth?

So, yes, the Yagi-Uda part of a Quagi has a limited bandwidth, too.

 R       A       D
 |       |       |
 |       |       |
 |=======¦=======|
 |       |       |
 |       |       |

Let's start with the reflector. Let's assume the reflector has exactly length $\frac12 \lambda$. The idea of the reflector (labeled $R$ above) in the Yagi-Uda design is that it gets excited by the actual driven element (labeled $A$). Due to the half-period delay in emission of the energy stored that way, plus the $2\cdot \frac14\lambda$ distance, the wave hits $A$ with a total delay of one full period – that way, they interfere constructively at $A$. So from this you can already see that the spacing between the driven element $A$ and the reflector $R$ is critical to operation.

Things get a little more complicated, though. That's nice and all, and it already gives us some directional gain, because left of $R$, $A$'s emissions and $R$'s emissions are $\frac34\lambda$ "out of phase", whereas at $A$, they're in phase. However, making $R$ exactly half a wavelength long isn't commonly done, for various reasons (you can build a good Yagi-Uda with that, but you need to be very careful in alignment and such, and your driven element's efficiency will be sub-optimal), especially considering that we commonly see Yagi antennas where $R$ is a bit longer than $A$. Essentially, what you'd do is make the driven element a half-wavelength dipole, and make the reflector a bit longer. Thus, the reflector becomes an element that is essentially an inductive parasitic to the driven element – and inductive reactance means that the current in the reflector lags behind the voltage that the field from the driven element causes, leading to additional delay in the emitted field. Thus, you can arrange the distances and element sizings in such a way that there's constructive interference in the far field in front of the antenna, and essentially destructive interference behind it. The sizing of the directors gives you enough design freedom to theoretically repeat that as often as you like to increase your gain – for a single wavelength. Now, the more broadband your antenna needs to be, the less exactly the Yagi-Uda calculations for the nominal wavelength apply. Thus, the further you move away from your nominal frequency, the less gain your antenna has; you'll also notice that the mismatch increases.

Now, I'm not experienced with Quagis at all. But to me, the whole point of using a frame=loop=quad antenna in place of the driven dipole and the shorted reflector dipole is exactly that: to increase antenna selectivity and thus gain, in exchange for reduced bandwidth. So I'm relatively certain that the ideas of increasing a Yagi-Uda design's bandwidth and of using a quad in place of the driven element and a rectangle/ring in place of a reflector don't work well together. All in all, the Quagi, to me, looks a bit like a Frankentenna. It probably works pretty well for a single use case, but isn't quite as versatile as a simple Yagi.
The design history indicates it was found more by accident than by analysis or simulation – which is fine, many antenna designs are happy accidents – for a very specific frequency and purpose. Thus I'd say: in the year 2016, unless someone comes up with a well-done model and simulation of the Quagi, you can probably find easier ways to build a broadband directional antenna for the same frequency range than improving the bandwidth of the driven element, then the reflector, then the directors, then the spacing of the directors, then reiterating, and matching the antenna impedance, ... of a Quagi. In other words, the moment your Quagi has the same bandwidth as a simple Yagi with the same directors, it has also lost its advantages over that Yagi – you could have built a classical Yagi-Uda design straight away.

If you're after bandwidth while maintaining more than 10 dBi of gain: look at the well-known and well-understood logarithmic-periodic ("logper") antenna type. Instead of driving one dipole, you drive a set of dipoles, each "optimal" for a different frequency¹. That means that, unlike in the Yagi, every element in a logper antenna is driven. However, you of course arrange the elements such that you get the same directivity effects as in the Yagi.

I'd really love to see something like a Quagi (or any other combination of well-known, proven-to-work-well-for-their-purpose antenna types) as some antenna model that I could e.g. use in OpenEMS or another electromagnetic simulation tool –

¹ It actually gets a little more complicated than that – you can't just go and drive a set of dipoles in parallel and hope for great bandwidth. The logper is actually more of a design derived from antennas that have the property of self-complementarity, which is the reason for it being broadband. (Self-complementary means that if you draw the antenna, then take the negative, then rotate/shift it, and overlay the original and the negative, you get the whole plane without overlap or "holes".) However, if you look at those, they typically can't have impressive one-sided gain – so someone took a square pattern and "flattened" it to achieve Yagi-style directivity. If you want to get the rough idea, have a look at the Wikipedia article.
Let $K$ be a number field, $\Delta = Gal (K/\mathbb{Q})$, and let $\chi: \Delta \rightarrow \mathbb{Z}_p^*$ be a non-trivial Dirichlet character, with $e_{\chi} = (1/\mid \Delta \mid) \sum_{\sigma \in \Delta} \chi (\sigma) \sigma$ the corresponding idempotent. I am trying to understand why the following statement is true: Let $p$ be a prime such that $p \nmid [K:\mathbb{Q}]$. Since $K \subseteq \mathbb{Q}(\zeta_p) \cap \mathbb{R}$, we have that $\sum_{\chi} e_{\chi} = 1$, where $\chi$ runs over all $p$-adic Dirichlet characters of $\Delta$. P.S. It is on page 15 of Thaine's paper on ideal class groups of real abelian number fields, Annals of Math. 128.
Here is a closely related pair of examples from operator theory, von Neumann's inequality and the theory of unitary dilations of contractions on Hilbert space, where things work for 1 or 2 variables but not for 3 or more. In one variable, von Neumann's inequality says that if $T$ is an operator on a (complex) Hilbert space $H$ with $\|T\|\leq1$ and $p$ is in $\mathbb{C}[z]$, then $\|p(T)\|\leq\sup\{|p(z)|:|z|=1\}$. Szőkefalvi-Nagy's dilation theorem says that (with the same assumptions on $T$) there is a unitary operator $U$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T^n=PU^n|_H$ for each positive integer $n$. These results extend to two commuting variables, as Ando proved in 1963. If $T_1$ and $T_2$ are commuting contractions on $H$, Ando's theorem says that there are commuting unitary operators $U_1$ and $U_2$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T_1^{n_1}T_2^{n_2}=PU_1^{n_1}U_2^{n_2}|_H$ for each pair of nonnegative integers $n_1$ and $n_2$. This extension of Sz.-Nagy's theorem has the extension of von Neumann's inequality as a corollary: If $T_1$ and $T_2$ are commuting contractions on a Hilbert space and $p$ is in $\mathbb{C}[z_1,z_2]$, then $\|p(T_1,T_2)\|\leq\sup\{|p(z_1,z_2)|:|z_1|=|z_2|=1\}$. Things aren't so nice in 3 (or more) variables. Parrott showed in 1970 that 3 or more commuting contractions need not have commuting unitary dilations. Even worse, the analogues of von Neumann's inequality don't hold for $n$-tuples of commuting contractions when $n\geq3$. Some have considered the problem of quantifying how badly the inequalities can fail. Let $K_n$ denote the infimum of the set of those positive constants $K$ such that if $T_1,\ldots,T_n$ are commuting contractions and $p$ is in $\mathbb{C}[z_1,\ldots,z_n]$, then $\|p(T_1,\ldots,T_n)\|\leq K\cdot\sup\{|p(z_1,\ldots,z_n)|:|z_1|=\cdots=|z_n|=1\}$. So von Neumann's inequality says that $K_1=1$, and Ando's Theorem yields $K_2=1$. It is known in general that $K_n\geq\frac{\sqrt{n}}{11}$. When $n>2$, it is not known whether $K_n\lt\infty$. See Paulsen's book (2002) for more. On page 69 he writes: The fact that von Neumann’s inequality holds for two commuting contractions but not three or more is still the source of many surprising results and intriguing questions. Many deep results about analytic functions come from this dichotomy. For example, Agler [used] Ando’s theorem to deduce an analogue of the classical Nevanlinna–Pick interpolation formula for analytic functions on the bidisk. Because of the failure of a von Neumann inequality for three or more commuting contractions, the analogous formula for the tridisk is known to be false, and the problem of finding the correct analogue of the Nevanlinna–Pick formula for polydisks in three or more variables remains open.
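As an aside, von Neumann's inequality in one variable is easy to test numerically. The following Python sketch (my own illustration, not from the references above) builds a random contraction, evaluates a polynomial at it, and compares the result with the supremum of $|p|$ sampled on the unit circle; note that sampling only approximates the true supremum:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = A / np.linalg.norm(A, 2)            # divide by the spectral norm: ||T|| <= 1

coeffs = [1.0, -2.0, 0.5]               # p(z) = 1 - 2z + 0.5 z^2
pT = sum(c * np.linalg.matrix_power(T, k) for k, c in enumerate(coeffs))

z = np.exp(2j * np.pi * np.linspace(0, 1, 2000, endpoint=False))
sup_circle = max(abs(sum(c * zz**k for k, c in enumerate(coeffs))) for zz in z)

# von Neumann's inequality: ||p(T)|| <= sup over the unit circle of |p(z)|
print(np.linalg.norm(pT, 2), "<=", sup_circle)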
Consider a block sliding down an incline plane at an angle $\theta$ with the horizontal. For the acceleration as a function of $\theta$ I find $$\ddot{x}=g \ \sin\theta $$ My text then claims we can find the block's velocity after it moves a distance $x_0$ from rest by multiplying both sides by $2\dot{x}$ and doing the following: $$2\dot{x}\ddot{x}=2\dot{x}g \ \sin \theta$$ $$\frac{d}{dt}(\dot{x}^2)=2g \sin\theta\frac{dx}{dt}$$ $$\int_0^{v_0^2}d(\dot{x}^2)=2g\sin\theta\int_0^{x_0}dx$$ $${v_0}^2=2g\sin\theta \ x_0$$ $$v_0=\sqrt{2g\sin\theta \ x_0}$$ I think I understand up until the 3rd line. The $dt$'s disappear because both sides are exact differentials, yes? Then in the next step, why does $\dot{x}$ (the velocity) vary from 0 to ${v_0}^2$? Thanks in advance. Also, is there another way to do this?
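One way to see the step in question: the integration variable on the left-hand side is $\dot{x}^2$ rather than $\dot{x}$, so the limits are the values taken by $\dot{x}^2$, which runs from $0$ (starting from rest) to $v_0^2$:

$$\int_0^{t_0}\frac{d}{dt}\left(\dot{x}^2\right)dt=\dot{x}(t_0)^2-\dot{x}(0)^2=v_0^2-0 .$$

And as for another way to do this: energy conservation, $\tfrac{1}{2}mv_0^2=mgx_0\sin\theta$ (the block descends a height $x_0\sin\theta$), gives the same $v_0=\sqrt{2g\sin\theta \ x_0}$.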
The probability that outcome $m$ associated with POVM measurement $M$ comes out after measuring state $|\psi\rangle$ can be calculated by $p(m)=\langle\psi|M|\psi\rangle$. The box in the Nielsen and Chuang book says that $P_U$ is the probability of such an outcome if the operation $U$ is applied, and $P_V$ if $V$ is applied. Consequently, we want to calculate such probabilities for the states: $|\psi_U\rangle=U|\psi\rangle$, $|\psi_V\rangle=V|\psi\rangle$. Applying the definition for calculating such probabilities that I presented at the beginning, you obtain what you need: $P_U=\langle\psi_U|M|\psi_U\rangle=(U|\psi\rangle)^\dagger MU|\psi\rangle=\langle\psi|U^\dagger MU|\psi\rangle$ and $P_V=\langle\psi_V|M|\psi_V\rangle=(V|\psi\rangle)^\dagger MV|\psi\rangle=\langle\psi|V^\dagger MV|\psi\rangle$. EDIT: To follow up on the question you asked in the comment to this answer. Postulate 3 of quantum mechanics states that measurements are described by a collection of measurement operators $\{M_m\}$, one for each of the outcomes $m$ that the quantum state $|\psi\rangle$ can have. The postulate also states that the probability of getting outcome $m$ is given by $p(m)=\langle\psi|M_m^\dagger M_m|\psi\rangle$. POVM measurements are given by a collection of positive operators $E_m$ that fulfil $\sum_m E_m=I$. Such operators can be related to the measurement operators by $E_m\equiv M_m^\dagger M_m$. All of this is stated in the Nielsen and Chuang book on quantum computation and information that you seem to be using, so refer there for more complete details.
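A toy numerical check of these identities (my own illustration; the state, the unitary and the measurement operator are arbitrary choices, not from the book):

import numpy as np

psi = np.array([1.0, 0.0], dtype=complex)                    # |psi> = |0>
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # U = Hadamard gate
M = np.array([[1, 0], [0, 0]], dtype=complex)                # E_m = |0><0|

psi_U = U @ psi
p_direct = np.vdot(psi_U, M @ psi_U).real                    # <psi_U| M |psi_U>
p_pulled = np.vdot(psi, (U.conj().T @ M @ U) @ psi).real     # <psi| U^dag M U |psi>
print(p_direct, p_pulled)                                    # both print 0.5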
Context

According to Arnol'd, a contact structure on a smooth manifold $M$ is given by a corank 1 tangent distribution $C$ which is maximally non-integrable; this means that, for any local $1$-form $\alpha$ on $M$, $$\ker\alpha=C\Longrightarrow d\alpha\textrm{ is non-degenerate on }C.$$ A contact structure is said to be co-orientable if there exists a global $1$-form $\alpha$ on $M$ such that $\ker\alpha=C$. Then a contact manifold $(M,C)$ is constituted by a smooth manifold $M$ equipped with a contact structure $C$. Necessarily we have that $\dim M=2n+1$, and if $N\subset M$ is an integral manifold of $C$, i.e. $TN\subset C$, then $\dim N\le n$. In particular, $n$-dimensional integral manifolds are called Legendrian submanifolds of $(M,C)$. A diffeomorphism $\phi:M_1\to M_2$ is said to be a contactomorphism of $(M_1,C_1)$ onto $(M_2,C_2)$ if it satisfies $$(T\phi)C_1=C_2.$$

Questions

Let us remark that the current context is slightly different from the one adopted in the previous question, where only co-orientable contact manifolds are examined. So I wonder whether, in our case, it is possible to get an analogous Legendrian Tubular Neighbourhood Theorem. Let $N$ be a Legendrian submanifold of $(M,C)$. Is it again possible to find open neighborhoods $U$ and $V$ of $N$, respectively in $M$ and $J^1(N,\mathbb R)$, such that there exists a contactomorphism $$\phi:(U,C|_U)\to(V,\mathscr C|_V),\textrm{ with }\phi|_N=\operatorname{id}_N?$$ Above, $\mathscr C$ is the Cartan distribution on $J^1(N,\mathbb R)$, and $N$ is canonically identified with $j^1 0$. Actually, this question should admit some equivalent reformulations, such as: is the line bundle $(TM)/C$ trivial over $N$? is there a local contact form for $(M,C)$ which is defined on a whole neighborhood of $N$ in $M$? Probably, in the current context, there does not exist something like a Legendrian Tubular Neighborhood Theorem, but until now I have not been able to point out a counter-example. In such a negative case, I would also ask whether there exists a classification of Legendrian embeddings? As usual, any feedback is welcome.
The following question arises from the proof of the "bend-and-break" lemma: Let $X$ be a projective variety over $\mathbb{C}$ and $C$ an irrational smooth curve. Let $c \in C$ be a fixed closed point. Let $f: C \to X$ be a nonconstant morphism such that $f(c)=x$. Suppose there exists an irreducible one-dimensional variety $T \subset Mor(C,X;f|_{c})$ passing through $[f]$ (we use $[f]$ to denote the point corresponding to $f$ in the moduli space), where by $Mor(C,X;f|_{c})$ we mean the moduli space of morphisms from $C$ to $X$ such that any morphism maps the point $c$ to $x$. Let $e$ be the evaluation map restricted to $C \times T$, that is, $$e: C \times T \to C \times Mor(C,X;f|_{c}) \to X.$$ My question is: why is $\dim(e(C \times T)) >1$? I understand that when $g(C)>0$, with one point fixed, $C$ has only finitely many automorphisms. But I don't know how to use this fact to show the claim.
Let $M$ be a Riemannian manifold, $p \in M$, and let $C_m(p)$ denote the cut locus of $p$. In his "Riemannian Geometry", do Carmo says that $\exp_p$ is injective on an open ball $B_r(p)$ of radius $r$ if and only if $r\leq d(p,C_m(p))$. I however have trouble proving the "if" direction. He says that this is a consequence of the fact that if $q \notin C_m(p)$, then there is a unique minimizing geodesic from $p$ to $q$. He also proved the fact that if $\gamma(t_0)$ is the cut point of $p=\gamma(0)$ along $\gamma$, then either $\gamma(t_0)$ is the first conjugate point of $p$ along $\gamma$ or there exists a different geodesic of equal length connecting $p$ and $\gamma(t_0)$. As I said, I have trouble proving that if $\exp_p$ is injective on $B_r(p)$, then $r\leq d(p,C_m(p))$. My attempt: Assume $r>d(p,C_m(p))$. Choose a point $q \in C_m(p)$ with $d(p,q)<r$ and let $\gamma$ denote a minimizing geodesic joining $p$ and $q$. If there is a different geodesic of the same length joining these two points, then $\exp_p$ is not injective on $B_r(p)$. But I can't come up with a contradiction when $q$ is the first conjugate point of $p$ along $\gamma$. I would appreciate a hint on how to proceed, or a proof. I was able to prove \begin{equation} d(p,C_m(p))=\sup\{r>0: \exp_p \textrm{ is a diffeomorphism on } B_r(p)\}. \end{equation} Does this help?
Consider the following boundary and initial value problem: $$\frac{1}{\kappa}u_t-u_{xx}=0,\quad t>0,\ x>0\tag{1}$$ $$u(t,0)=A,\quad t>0\tag{2}$$ $$u(0,x)=f(x),\quad x\ge 0\tag{3}$$ where $A$ is a constant. Show that the solution to this problem is given by $$u(t,x)=\Phi*\widetilde{f}(x)+A\left[1-\gamma\left(\frac{x}{\sqrt{4\kappa t}}\right)\right]$$ where $$\widetilde{f}(x)=-f(-x),\quad x<0\tag{4}$$ $$\gamma(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-s^2}ds\tag{5}$$ $$\Phi(x)=\frac{1}{\sqrt{4\pi \kappa t}}\exp\left(\frac{-x^2}{4\kappa t}\right)\tag{6}$$ Here (4) is the odd extension of $f$, (5) is an error function, and (6) satisfies the relation $$\sqrt{2\pi}\cdot \Phi(x)=\phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\widetilde{\phi}(\xi)e^{i\xi x}d\xi\tag{7}$$ where (7) is the inverse transform of $\widetilde{\phi}(\xi)=e^{-\kappa \xi^2 t}$. My attempt was: Let $F_s(\xi)$ be the sine transform of $f(x)$ given by $$F_s(\xi)=\int_{0}^{\infty}f(x)\sin(x\xi)dx\tag{8}$$ Using (4) we get $$F_s(\xi)=\int_{-\infty}^{\infty}\widetilde{f}(x)\sin(x\xi)dx\tag{9}$$ $$\Rightarrow F_s(\xi)=\int_{-\infty}^{0}\widetilde{f}(x)\sin(x\xi)dx +\int_{0}^{\infty}f(x)\sin(\xi x)dx$$ I don't know how to use the Fourier transform on (9) in order to get $f(x)$ and proceed to finding $u(t,x)$. Any tips?
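One sanity check on the second term (a side computation I am adding, using sympy as an assumed tool): the piece $A[1-\gamma(x/\sqrt{4\kappa t})]$ alone satisfies the PDE, equals $A$ at $x=0$ (since $\gamma(0)=0$), and tends to $0$ as $t\to 0^+$ for fixed $x>0$ (since $\gamma(\infty)=1$), which is exactly what is needed so that $\Phi*\widetilde{f}$ can carry the initial data $f$.

import sympy as sp

x, t, kappa, A = sp.symbols("x t kappa A", positive=True)
u = A * (1 - sp.erf(x / sp.sqrt(4 * kappa * t)))   # the boundary-layer term

pde = u.diff(t) / kappa - u.diff(x, 2)
print(sp.simplify(pde))   # prints 0: the term solves (1/kappa) u_t - u_xx = 0
print(u.subs(x, 0))       # prints A: the boundary value at x = 0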
We want to see the total error in approximating $$ f'(x) \approx \frac{ f(x+h)-f(x) }{h} $$ where $f: \mathbb{R} \to \mathbb{R}$ is differentiable. We can find $\theta \in [x,x+h]$ by Taylor's theorem so that $$ f(x+h) = f(x) + f'(x) h + f''( \theta ) h^2 /2 $$ If the error in function values is bounded by $\epsilon$, prove that the rounding error is bounded by $2 \epsilon /h$ and the truncation error is bounded by $Mh/2$, where $M$ is a bound for $|f''(t)|$ for $t$ near $x$. Try: We know the truncation error is the difference between the true result and the result that would be produced by the algorithm in exact arithmetic. By the result given above, we see that $$ \frac{ f(x+h) - f(x) }{h} = f'(x) + f''(\theta)h/2 $$ so that $$ \underbrace{ \frac{ f(x+h) - f(x) }{h} }_{approx} - \underbrace{f'(x)}_{true \; result} = f''(\theta)h/2 $$ The truncation error $E_T$ is the absolute value of the above: $$ E_T = |f''(\theta) | h/2 \leq Mh/2 $$ So we have our first result. However, for the rounding error I don't see how it is $2 \epsilon / h$. Can someone explain what they really mean? Perhaps I am misunderstanding this part.
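For what it is worth, the intended reading is usually this: if each computed function value is only accurate to within $\epsilon$, then the numerator $f(x+h)-f(x)$ can be off by up to $2\epsilon$ (one $\epsilon$ from each evaluation), so after dividing by $h$ the quotient can be off by up to $2\epsilon/h$. The total error $2\epsilon/h + Mh/2$ is then minimized near $h \approx 2\sqrt{\epsilon/M}$. A minimal Python sketch showing the trade-off, with $\epsilon$ being machine precision (about $10^{-16}$):

import numpy as np

# Forward-difference approximation of f' for f = sin at x = 1 (f'(1) = cos(1)).
# Truncation error ~ M*h/2 dominates for large h; rounding error ~ 2*eps/h
# dominates for tiny h, so the total error is smallest near h ~ 1e-8.
x = 1.0
for h in [1e-1, 1e-4, 1e-8, 1e-12]:
    approx = (np.sin(x + h) - np.sin(x)) / h
    print(f"h = {h:.0e}   error = {abs(approx - np.cos(x)):.2e}")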
Let $\mathcal {P}(\mathbf{N})$ be the power set of $\mathbf{N}$. We say that a function $\mu ^\ast : \mathcal {P}(\mathbf{N}) \to \mathbf{R}$ is an upper density if, for all $X, Y \subseteq \mathbf{N}$ and $h, k \in \mathbf{N}^+$, the following hold: (f1) $\mu ^\ast (\mathbf{N}) = 1$; (f2) $\mu ^\ast (X) \le \mu ^\ast (Y)$ if $X \subseteq Y$; (f3) $\mu ^\ast (X \cup Y) \le \mu ^\ast (X) + \mu ^\ast (Y)$; (f4) $\mu ^\ast (k\cdot X) = ({1}/{k}) \mu ^\ast (X)$, where $k \cdot X := \{kx: x \in X\}$; and (f5) $\mu ^\ast (X + h) = \mu ^\ast (X)$. We show that the upper asymptotic, upper logarithmic, upper Banach, upper Buck, upper Pólya and upper analytic densities, together with all upper $\alpha$-densities (with $\alpha$ a real parameter $\ge -1$), are upper densities in the sense of our definition. Moreover, we establish the mutual independence of axioms (f1)-(f5), and we investigate various properties of upper densities (and related functions) under the assumption that (f2) is replaced by the weaker condition that $\mu ^\ast (X)\le 1$ for every $X \subseteq \mathbf{N}$. Overall, this allows us to extend and generalize results so far independently derived for some of the classical upper densities mentioned above, thus introducing a certain amount of unification into the theory.

We investigate the growth rate of the Birkhoff sums $S_{n,\alpha}f(x)=\sum _{k=0}^{n-1}f(x+k\alpha)$, where $f$ is a continuous function with zero mean defined on the unit circle $\mathbb{T}$ and $(\alpha,x)$ is a 'typical' element of $\mathbb{T}^{2}$. The answer depends on the meaning given to the word 'typical'. Part of the work will be done in a more general context.

Relying on results due to Shmerkin and Solomyak, we show that outside a zero-dimensional set of parameters, for every planar homogeneous self-similar measure $\nu$, with strong separation, dense rotations and dimension greater than $1$, there exists $q>1$ such that $\{P_{z}\nu\}_{z\in S}\subset L^{q}(\mathbb{R})$. Here $S$ is the unit circle and $P_{z}w=\langle z,w\rangle$ for $w\in \mathbb{R}^{2}$. We then study such measures. For instance, we show that $\nu$ is dimension conserving in each direction and that the map $z\rightarrow P_{z}\nu$ is continuous with respect to the weak topology of $L^{q}(\mathbb{R})$.

By using methods of subordinacy theory, we study packing continuity properties of spectral measures of discrete one-dimensional Schrödinger operators acting on the whole line. Then we apply these methods to Sturmian operators with rotation numbers of quasibounded density to show that they have purely $\alpha$-packing continuous spectrum.
A dimensional stability result is also mentioned.

Suppose that $0<|\rho|<1$ and $m\geqslant 2$ is an integer. Let $\mu_{\rho,m}$ be the self-similar measure defined by $\mu_{\rho,m}(\cdot)=\frac{1}{m}\sum _{j=0}^{m-1}\mu_{\rho,m}(\rho^{-1}(\cdot)-j)$. Assume that $\rho=\pm (q/p)^{1/r}$ for some $p,q,r\in \mathbb{N}^{+}$ with $(p,q)=1$ and $(p,m)=1$. We prove that if $(q,m)=1$, then there are at most $m$ mutually orthogonal exponential functions in $L^{2}(\mu_{\rho,m})$ and $m$ is the best possible. If $(q,m)>1$, then there are any number of orthogonal exponential functions in $L^{2}(\mu_{\rho,m})$.

We construct a family of self-affine tiles in $\mathbb{R}^{d}$ ($d\geqslant 2$) with noncollinear digit sets, which naturally generalizes a class studied originally by Q.-R. Deng and K.-S. Lau in $\mathbb{R}^{2}$, and its extension to $\mathbb{R}^{3}$ by the authors. We obtain necessary and sufficient conditions for the tiles to be connected and for their interiors to be contractible.

We describe how to approximate fractal transformations generated by a one-parameter family of dynamical systems $W:[0,1]\rightarrow [0,1]$ constructed from a pair of monotone increasing diffeomorphisms $W_{i}$ such that $W_{i}^{-1}:[0,1]\rightarrow [0,1]$ for $i=0,1$. An algorithm is provided for determining the unique parameter value such that the closure of the symbolic attractor $\overline{\Omega}$ is symmetrical. Several examples are given, one in which the $W_{i}$ are affine and two in which the $W_{i}$ are nonlinear. Applications to digital imaging are also discussed.

This paper provides a functional analogue of the recently initiated dual Orlicz–Brunn–Minkowski theory for star bodies. We first propose the Orlicz addition of measures, and establish the dual functional Orlicz–Brunn–Minkowski inequality. Based on a family of linear Orlicz additions of two measures, we provide an interpretation for the famous $f$-divergence. Jensen's inequality for integrals is also proved to be equivalent to the newly established dual functional Orlicz–Brunn–Minkowski inequality. An optimization problem for the $f$-divergence is proposed, and related functional affine isoperimetric inequalities are established.

We consider a one-parameter family of dynamical systems $W:[0,1]\rightarrow [0,1]$ constructed from a pair of monotone increasing diffeomorphisms $W_{i}$ such that $W_{i}^{-1}:[0,1]\rightarrow [0,1]$ ($i=0,1$). We characterise the set of symbolic itineraries of $W$ using an attractor $\overline{\Omega}$ of an iterated closed relation, in the terminology of McGehee, and prove that there is a member of the family for which $\overline{\Omega}$ is symmetrical.

Let $\mathbf{M}=(M_{1},\ldots ,M_{k})$ be a tuple of real $d\times d$ matrices. Under certain irreducibility assumptions, we give checkable criteria for deciding whether $\mathbf{M}$ possesses the following property: there exist two constants $\lambda\in \mathbb{R}$ and $C>0$ such that for any $n\in \mathbb{N}$ and any $i_{1},\ldots ,i_{n}\in \{1,\ldots ,k\}$, either $M_{i_{1}}\cdots M_{i_{n}}=\mathbf{0}$ or $C^{-1}e^{\lambda n}\leq \Vert M_{i_{1}}\cdots M_{i_{n}}\Vert \leq Ce^{\lambda n}$, where $\Vert \cdot \Vert$ is a matrix norm.
The proof is based on symbolic dynamics and the thermodynamic formalism for matrix products. As applications, we are able to check the absolute continuity of a class of overlapping self-similar measures on $\mathbb{R}$, the absolute continuity of certain self-affine measures in $\mathbb{R}^{d}$ and the dimensional regularity of a class of sofic affine-invariant sets in the plane.

For $x\in (0,1]$ and a positive integer $n$, let $S_{\!n}(x)$ denote the summation of the first $n$ digits in the dyadic expansion of $x$ and let $r_{n}(x)$ denote the run-length function. In this paper, we obtain the Hausdorff dimensions of the following sets: [...]

[...] where $\gamma$ ranges over all closed geodesics $\gamma:\mathbb{S}^{1}\rightarrow \mathbb{T}^{2}$ and $|\gamma|$ denotes its length. We prove that this supremum is always attained. Moreover, we can bound the length of the geodesic $\gamma$ attaining the supremum in terms of the smoothness of the function: for all $s\geq 2$, [...]

We exhibit the first explicit examples of Salem sets in $\mathbb{Q}_p$ of every dimension $0 < \alpha < 1$ by showing that certain sets of well-approximable $p$-adic numbers are Salem sets. We construct measures supported on these sets that satisfy essentially optimal Fourier decay and upper regularity conditions, and we observe that these conditions imply that the measures satisfy strong Fourier restriction inequalities. We also partially generalize our results to higher dimensions. Our results extend theorems of Kaufman, Papadimitropoulos, and Hambrook from the real to the $p$-adic setting.

The class of stochastically self-similar sets contains many famous examples of random sets, for example, Mandelbrot percolation and general fractal percolation. Under the assumption of the uniform open set condition and some mild assumptions on the iterated function systems used, we show that the quasi-Assouad dimension of self-similar random recursive sets is almost surely equal to the almost sure Hausdorff dimension of the set. We further comment on random homogeneous and V-variable sets and the removal of overlap conditions.

[...] where $\Omega\subset \mathbb{R}^{n}$, $u\in C^{2}(\Omega)\cap C(\overline{\Omega})$ and $s>n/2$. The inequality fails for $s=n/2$. A Sobolev embedding result of Milman and Pustylnik, originally phrased in a slightly different context, implies an endpoint inequality: if $n\geqslant 3$ and $\Omega\subset \mathbb{R}^{n}$ is bounded, then [...] where $L^{p,q}$ is the Lorentz space refinement of $L^{p}$. This inequality fails for $n=2$, and we prove a sharp substitute result: there exists $c>0$ such that for all $\Omega\subset \mathbb{R}^{2}$ with finite measure, [...] This is somewhat dual to the classical Trudinger–Moser inequality; we also note that it is sharper than the usual estimates given in Orlicz spaces; the proof is rearrangement-free. The Laplacian can be replaced by any uniformly elliptic operator in divergence form.

In this paper we study digit frequencies in the setting of expansions in non-integer bases, and self-affine sets with non-empty interior. Within expansions in non-integer bases we show that if $\beta\in (1,1.787\ldots )$ then every $x\in (0,1/(\beta-1))$ has a simply normal $\beta$-expansion.
We also prove that if $\beta\in (1,(1+\sqrt{5})/2)$ then every $x\in (0,1/(\beta-1))$ has a $\beta$-expansion for which the digit frequency does not exist, and a $\beta$-expansion with limiting frequency of zeros $p$, where $p$ is any real number sufficiently close to $1/2$. For a class of planar self-affine sets we show that if the horizontal contraction lies in a certain parameter space and the vertical contractions are sufficiently close to $1$, then every non-trivial vertical fibre contains an interval. Our approach lends itself to explicit calculation and gives rise to new examples of self-affine sets with non-empty interior. One particular strength of our approach is that it allows for different rates of contraction in the vertical direction.

In this paper we discuss some dimension results for triangle sets of compact sets in $\mathbb{R}^{2}$. In particular we prove that for any compact set $F$ in $\mathbb{R}^{2}$, the triangle set $\Delta(F)$ satisfies [...]

We establish several new metrical results on the distribution properties of the sequence $(\{x^{n}\})_{n\ge 1}$, where $\{\cdot\}$ denotes the fractional part. Many of them are presented in a more general framework, in which the sequence of functions $(x \mapsto x^{n})_{n\ge 1}$ is replaced by a sequence $(f_{n})_{n\ge 1}$, under some growth and regularity conditions on the functions $f_{n}$.

We study a class of optimal transport planning problems where the reference cost involves a non-linear function $G(x,p)$ representing the transport cost between the Dirac measure $\delta_x$ and a target probability $p$. This allows us to consider interesting models which favour multi-valued transport maps, in contrast with the classical linear case ($G(x,p)=\int c(x,y)dp$) where finding single-valued optimal transport is a key issue. We present an existence result and a general duality principle which apply to many examples. Moreover, under a suitable subadditivity condition, we derive a Kantorovich–Rubinstein version of the dual problem allowing us to show existence in some regular cases. We also consider the well-studied case of martingale transport and present some new perspectives for the existence of dual solutions in connection with Γ-convergence theory.
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate? I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol. It just seems like this argument is all about the sets of n-simplices. Which is the trivial part. lol no i mean, i'm following it by context actually so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side @user1732 haha thanks! we had no idea if that'd actually find its way to the internet... @JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels @JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC @IlaRossi i would imagine that this is in goerss--jardine? ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes @JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81 @HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary) @JonathanBeardsley what?! i really liked that picture! i wonder why they removed it @HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world @HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)? i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the appendix here: arxiv.org/pdf/1510.03525.pdf as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit.
combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$ @JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat) I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open not put all my eggs in one basket, as it were I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality @JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak). There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k... @JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad It's enough to show everything works for generating cofaces and codegeneracies the codegeneracies are free, the 0 and nth cofaces are free all of those can be done treating frak{C} as a black box the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation). > Thus, using appropriate tags one can increase ones chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc) or worse just newly created tags, one might miss a chance to give visibility to ones question. I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers. 
You are asking posts far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself. (Other than possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.) I just wanted to mention this, in case it helps you when asking question here. (Although it seems that you're doing fine.) @MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable,. You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
Bayesian inference is a method of statistical inference that relies on treating the model parameters as random variables and applying Bayes' theorem to deduce subjective probability statements about the parameters or hypotheses, conditional on the observed dataset.

Overview

Bayesian inference is a method of statistical inference that treats model parameters as if they were random variables in order to rely on probability calculus and produce complete and unified probabilistic statements about these parameters. This approach starts by choosing a reference or prior probability distribution on the parameters and then applies Bayes' Theorem to deduce probability statements about parameters or hypotheses, conditional on the data, treating the likelihood function as a conditional density of the data given the (random) parameter. Bayes' Theorem asserts that the conditional density of the parameter $\theta$ given the data, $P(\theta|d)$, can be expressed in terms of the density of the data given $\theta$ as $$P(\theta|d) = \dfrac{P(d|\theta)P(\theta)}{P(d)}.$$ $P(\theta|d)$ is called the posterior probability. $P(d|\theta)$ is often called the likelihood function and denoted $L(\theta|d)$. The distribution of $\theta$ itself, given by $P(\theta)$, is called the prior or the reference measure. It encodes previous or prior beliefs about $\theta$ within a model appropriate for the data. There is necessarily an element of arbitrariness or subjectivity in the choice of that prior, which means that the resulting inference is impacted by this choice (or conditional on it). This also means that two different choices of priors lead to two different posterior distributions, which are not directly comparable.

The marginal distribution of the data, $P(d)$ (which appears as a normalization factor), is also called the evidence, as it is directly used for Bayesian model comparison through the notions of Bayes factors and model posterior probabilities. The comparison of two models (including two opposed hypotheses about the parameters) in the Bayesian framework indeed proceeds by taking the ratio of the evidences for the two models under comparison, $$B_{12} = P_1(d)\big/P_2(d)\,.$$ This is called the Bayes factor and it is usually compared to $1$.

Bayes' formula can be used as an updating procedure: as more data become available, the posterior can be updated successively, becoming the prior for the next step (a minimal numerical sketch of this updating is given after the references below).

References

The following threads contain lists of references:

What is the best introductory Bayesian statistics textbook?
Bayesian statistics tutorial
What is a good book about the philosophy behind Bayesian thinking?
What is an uninformative prior?
The meaning of marginals

The following journal is dedicated to research in Bayesian statistics: Bayesian Analysis (Open Access)
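As a minimal sketch of the updating procedure described above (the numbers are invented, and the Beta prior is chosen because it is conjugate to the binomial likelihood, so the posterior stays in the Beta family):

from scipy import stats

# Beta prior on a success probability theta, binomial data: after observing
# k successes in n trials, the posterior is Beta(a + k, b + n - k).
a, b = 2.0, 2.0            # prior Beta(2, 2), mildly concentrated around 1/2
k, n = 7, 10               # observed data: 7 successes in 10 trials

posterior = stats.beta(a + k, b + n - k)
print("posterior mean:", posterior.mean())            # (a + k)/(a + b + n) = 9/14
print("95% credible interval:", posterior.interval(0.95))

# Sequential updating: the posterior above becomes the prior for the next batch.
k2, n2 = 3, 5
posterior2 = stats.beta(a + k + k2, b + (n - k) + (n2 - k2))
print("updated mean:", posterior2.mean())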