I can't understand how to solve this issue: using an appropriate substitution, evaluate the integral $$ \int \frac{1+x^2}{\sqrt{1+x}}\,\mathrm{d}x. $$ Can anyone solve this so I can understand how to do it?

When you are facing a radical which contains a linear sum, it works well to use that sum as the basis of the substitution. Here, you would take $u = x + 1$, which gives $du = dx$. To deal with the numerator, you need to solve the substitution equation for $x$, giving $x = u - 1$. The integral becomes $$ \int \frac{1+ x^2}{\sqrt{1 + x }} \, dx \ \rightarrow \ \int \frac{1 + (u - 1)^2}{\sqrt{u}} \, du. $$ You would then multiply out the polynomial in the numerator. The point of doing this is that you now have a polynomial divided simply by the square root of the variable, which leaves you with a set of terms in the integrand that are all just fractional powers of $u$, something much easier to integrate.

Alternatively, let $u^2=1+x$. Then $2u\,du = dx$. Using the fact that $x=u^2-1$, we find that $x^2+1=u^4-2u^2+2$. Substituting, we find that our integral is $$\int \frac{(u^4-2u^2+2)(2u)}{u}\,du.$$ There is cancellation, and we have reached $$\int (2u^4-4u^2+4)\,du,$$ which is $$\frac{2}{5}u^5 -\frac{4}{3}u^3+4u+C.$$ Now replace $u$ by $(x+1)^{1/2}$.
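Both substitutions lead to the same antiderivative. As a quick sanity check (my own sketch using SymPy, not part of the original answers), differentiating the result should recover the integrand:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Antiderivative from the substitution u^2 = 1 + x above,
# with u replaced by sqrt(1 + x)
F = sp.Rational(2, 5)*(1 + x)**sp.Rational(5, 2) \
    - sp.Rational(4, 3)*(1 + x)**sp.Rational(3, 2) \
    + 4*sp.sqrt(1 + x)

# Differentiating should give back the original integrand
integrand = (1 + x**2)/sp.sqrt(1 + x)
assert sp.simplify(sp.diff(F, x) - integrand) == 0
```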
Here's a visual explanation of (1). Imagine that you have a perfectly separated set of points, with the separation occurring at zero in the picture (so a clump of $y=0$s to the left of zero and a clump of $y=1$s to the right). The sequence of curves I plotted is $$\frac{1}{1 + e^{-x}}, \frac{1}{1 + e^{-2x}}, \frac{1}{1 + e^{-3x}}, \ldots $$ so I'm just increasing the coefficient without bound. Which of the 20 curves would you choose? Each one hews ever closer to our imagined data. Would you keep going on to $$\frac{1}{1 + e^{-21x}}?$$ When would you stop?

For (2), yes. This is essentially by definition; you've implicitly assumed this in the construction of the binomial log-likelihood(*) $$ L = \sum_i t_i \log(p_i) + (1 - t_i) \log(1 - p_i) $$ In each term of the summation only one of $t_i \log(p_i)$ or $(1 - t_i) \log(1 - p_i)$ is non-zero, with a contribution of $\log(p_i)$ for $t_i = 1$ and $\log(1 - p_i)$ for $t_i = 0$.

Why is there no convergence mathematically? Here's a (more) formal mathematical proof. First some setup and notation. Let's write $$ S(\beta, x) = \frac{1}{1 + \exp(- \beta x)} $$ for the sigmoid function. We will need the two properties $$ \lim_{\beta \rightarrow \infty} S(\beta, x) = 0 \ \text{for} \ x < 0 $$ $$ \lim_{\beta \rightarrow \infty} S(\beta, x) = 1 \ \text{for} \ x > 0 $$ with each approaching its limit monotonically: the first limit is decreasing, the second is increasing. Each of these follows easily from the formula for $S$. Let's also arrange things so that our data is centered (this allows us to ignore the intercept, as it is zero) and the vertical line $x = 0$ separates our two classes. Now, the function that we are maximizing in logistic regression is $$ L(\beta) = \sum_i y_i \log(S(\beta, x_i)) + (1 - y_i) \log(1 - S(\beta, x_i)) $$ This summation has two types of terms. Terms in which $y_i = 0$ look like $\log(1 - S(\beta, x_i))$, and because of the perfect separation we know that for these terms $x_i < 0$.
By the first limit above, this means that $$ \lim_{\beta \rightarrow \infty} S(\beta, x_i) = 0$$ for every $x_i$ associated with a $y_i = 0$. Then, after applying the logarithm, we get the monotone increasing limit $$ \lim_{\beta \rightarrow \infty} \log(1 - S(\beta, x_i)) = 0$$ You can easily use the same ideas to show that for the other type of terms $$ \lim_{\beta \rightarrow \infty} \log(S(\beta, x_i)) = 0$$ again, the limit is a monotone increase. So no matter what $\beta$ is, you can always drive the objective function upwards by increasing $\beta$ towards infinity. So the objective function has no maximum, and attempting to find one iteratively will just increase $\beta$ forever.

It's worth noting where we used the separation. If we could not find a separator, then we could not partition the terms into two groups; we would instead have four types: terms with $y_i = 0$ and $x_i > 0$; terms with $y_i = 0$ and $x_i < 0$; terms with $y_i = 1$ and $x_i > 0$; terms with $y_i = 1$ and $x_i < 0$. In this case, when $\beta$ gets very large, the terms with $y_i = 1$ and $x_i < 0$ will drive to negative infinity. When $\beta$ gets very negative, the terms with $y_i = 0$ and $x_i < 0$ will do the same. So somewhere in the middle, there must be a maximum.

(*) I replaced your $y_i$ with $p_i$ because the number is a probability, and calling it $p_i$ makes it easier to reason about the situation.
I need to convert the formula below to conjunctive normal form for a homework question, and I am not entirely sure if it's correct. The last part is that I am not sure how to use the distributive laws in this scenario. Any guidance would be appreciated: $ (p \land q) \leftrightarrow (\lnot p \lor \lnot q)$

Step 1: Eliminate all operators except for negation, conjunction and disjunction by substituting logically equivalent formulas: $ (p \land q) \leftrightarrow (\lnot p \lor \lnot q) $ $ ((p \land q) \to (\lnot p \lor \lnot q)) \land ((\lnot p \lor \lnot q) \to (p \land q)) $ $ (\lnot(p \land q) \lor (\lnot p \lor \lnot q)) \land (\lnot(\lnot p \lor \lnot q) \lor (p \land q)) $

Step 2: Push negation inwards using De Morgan's laws: $ ((\lnot p \lor \lnot q) \lor (\lnot p \lor \lnot q)) \land ((\lnot\lnot p \land \lnot\lnot q) \lor (p \land q)) $

Step 3: Eliminate sequences of negations by deleting double negation operators, then simplify by idempotence: $ ((\lnot p \lor \lnot q) \lor (\lnot p \lor \lnot q)) \land ((p \land q) \lor (p \land q)) $ $ (\lnot p \lor \lnot q) \land (p \land q) $

Step 4: Use the distributive laws to eliminate conjunctions within disjunctions: This is where I am stuck. I am unsure whether I can apply the distributive law, given the final conjunct $(p \land q)$, since the goal is to eliminate conjunctions within disjunctions. Any advice would be greatly appreciated.
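A quick truth-table check (a sketch with helper names of my own) confirms that the formula obtained after Step 3 is still equivalent to the original biconditional; both are contradictions, false under every assignment:

```python
from itertools import product

def original(p, q):
    # (p AND q) <-> (NOT p OR NOT q)
    return (p and q) == ((not p) or (not q))

def step3(p, q):
    # (NOT p OR NOT q) AND (p AND q)
    return ((not p) or (not q)) and (p and q)

# Equivalent on all four assignments (and in fact always False)
for p, q in product([False, True], repeat=2):
    assert original(p, q) == step3(p, q) == False
```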
Mainly for pedagogical reasons, I'm considering the "simple" one-dimensional model $$x=\theta+\epsilon$$ where $\epsilon$ has a known distribution $p$ that is independent of $\theta$. This distribution can be anything, just assuming some regularity conditions, compact support... something that leaves a lot of freedom for the choice of $p$. Somehow I'm looking for properties that hold for "almost any $p$", ignoring special cases with strong underlying algebraic properties (like in an exponential family). Consider an independent sample $X=(x_1,x_2,x_3,\dots,x_n)$. I'm looking for a general way to construct the best unbiased estimator if it exists — not asymptotically, I mean, but for fixed finite $n$. It turns out to be more difficult than I thought. The questions I face are: What is a minimal sufficient statistic (is it just the reordered $X$)? The MLE has a constant bias (see note); is the debiased MLE the best unbiased estimator? How can an ancillary statistic be used to find the best unbiased estimator, and would it yield the same estimator? I guess I'm not the first person on earth to ask myself these questions. Have you done this kind of analysis, do you know results or the "right" method, or do you know of an article that studied something like this?

Note: proof that the MLE $\hat\theta$ has constant bias: $$E_\theta(\hat\theta)=E_\theta\Big(\arg\max_t \prod_ip(x_i-t)\Big)=E_\theta\Big(\arg\max_t \prod_ip(\epsilon_i-(t-\theta))\Big)=E_\theta\Big(\theta+\arg\max_u \prod_ip(\epsilon_i-u)\Big)=\theta+E_0(\hat\theta)=\theta+c.$$ When $p$ is symmetric around 0, this bias is 0, since $-\epsilon$ has the same distribution as $\epsilon$: $$E_0(\hat\theta)=E_0\Big(\arg\max_t \prod_ip(-\epsilon_i-t)\Big)=E_0\Big(\arg\max_t \prod_ip(\epsilon_i+t)\Big)=E_0\Big(-\arg\max_u \prod_ip(\epsilon_i-u)\Big)=-E_0(\hat\theta).$$
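To make the constant-bias claim in the note concrete, here is a small simulation sketch. The noise distribution is my own example (a shifted exponential, which ignores the compact-support caveat but has a closed-form MLE): for $p(e)=e^{-(e+1)}$ on $e>-1$, the likelihood $\prod_i p(x_i-t)$ is increasing in $t$ up to the boundary, so $\hat\theta=\min_i x_i + 1$, with bias exactly $1/n$ for every $\theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000

# Noise e = Exp(1) - 1, so p(e) = exp(-(e + 1)) on e > -1.
# The MLE of theta in x = theta + e is min(x_i) + 1.
for theta in (0.0, 3.0):
    eps = rng.exponential(1.0, size=(reps, n)) - 1.0
    mle = (theta + eps).min(axis=1) + 1.0
    bias = mle.mean() - theta
    # min of n Exp(1) variables is Exp(n) with mean 1/n,
    # so the bias is 1/n regardless of theta
    assert abs(bias - 1.0 / n) < 0.01
```

Here $\hat\theta - 1/n$ is an unbiased estimator, illustrating the "debiased MLE" in the question.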
Stoja Milovanovic — Publications: 13, Citations: 22, Influential Citations: 2.

PURPOSE To assess the potential risk factors for pneumothorax secondary to pulmonary radiofrequency (RF) ablation. MATERIALS AND METHODS Six electronic databases were searched from inception to…

Inferior vena cava (IVC) filters are commonly used in select high-risk patients for the prevention of pulmonary embolism. Potentially serious complications can arise from the use of IVC filters,…

Major bleeding remains an uncommon yet potentially devastating complication following percutaneous image-guided biopsy. This article reviews two cases of major bleeding after percutaneous biopsy and…

PURPOSE To report a single operator's experience using a modified single-puncture gastrostomy technique deploying up to three nonabsorbable gastropexy anchors. MATERIALS AND METHODS A retrospective…

The objectives of this study were to examine the effect of sonication and high-pressure carbon dioxide processing on proteolytic hydrolysis of egg white proteins and antioxidant activity of the…

Interventional radiologists (IR) play an important role in the acquisition of tissue for pathologic diagnosis with the use of image-guided percutaneous techniques. Tissue samples can be collected…

The increased use of percutaneous methods for the local management of malignancy has led to a larger role for interventional radiologists within the oncology multidisciplinary team. In particular,…

Image-guided minimally invasive percutaneous techniques for thoracic interventions allow interventional radiologists to perform diagnostic and curative treatments for a variety of thoracic…

We used the 8$\pi$ $\gamma$-ray spectrometer at the TRIUMF-ISAC radioactive ion beam facility to obtain high-precision branching ratios for $^{19}$Ne $\beta^+$ decay to excited states in $^{19}$F.…
You are used to the fact that the internal energy is minimized in equilibrium, but keep in mind that in this case your system is not isolated. Instead, you are dealing with a system in the canonical ensemble, so there is a giant temperature reservoir coupled to your system! The entire system of your spins + the bath is considered to be in the microcanonical ensemble, and its entropy is maximized in equilibrium. By introducing the Helmholtz free energy $$F=U-TS$$ you are able to solve the spin system while discarding the bath. Now observe that there is an interplay between the internal energy and the entropy. For low temperatures $F\approx U$, so the internal energy is minimized, while for high temperatures $F\approx-TS$ and the system tends to maximize its entropy. In between, the situation is more complex. Thus, as you said, $F$ is more fundamental.

EDIT 1: Why minimization? Just because entropy is always increasing! If $Q_{\rm bath}$ is the heat flowing into the heat bath, then $Q_{\rm bath}=\Delta U_{\rm bath}=-\Delta U$ since the total energy of the spins + bath is constant, and so the total change in entropy is $$0\leq\Delta S_{\rm tot}=\Delta S_{\rm bath}+\Delta S=\frac{Q_{\rm bath}}{T}+\Delta S=$$$$=\frac{T\Delta S-\Delta U}{T}=-\frac{\Delta\left(U-TS\right)}{T}=-\frac{\Delta F}{T}$$ Thus the change in the Helmholtz free energy satisfies $$\Delta F\leq 0$$ It means that this quantity is decreasing, attaining a minimum at equilibrium. I should note that I assumed the system does zero work.
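The interplay between $U$ and $TS$ can be made concrete with a minimal sketch for a single two-level spin (my own illustration, not from the original answer; units with $k_B=1$ and level splitting $\varepsilon$). Minimizing $F(p)=U-TS$ over the occupation probability $p$ of the excited state recovers the Boltzmann weight, with the energy term winning at low $T$ and the entropy term at high $T$:

```python
import math

eps = 1.0  # energy of the excited state; ground state has energy 0

def F(p, T):
    # Free energy of the trial distribution (p, 1 - p)
    U = p * eps
    S = -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return U - T * S

def argmin_F(T, grid=10_000):
    # Grid minimization over p in (0, 1); F is convex in p
    ps = [(i + 1) / (grid + 2) for i in range(grid)]
    return min(ps, key=lambda p: F(p, T))

for T in (0.1, 1.0, 100.0):
    boltz = math.exp(-eps / T) / (1 + math.exp(-eps / T))
    assert abs(argmin_F(T) - boltz) < 1e-3  # Boltzmann weight minimizes F

# Low T: minimizer near 0 (energy wins); high T: near 1/2 (entropy wins)
assert argmin_F(0.1) < 0.01 and abs(argmin_F(100.0) - 0.5) < 0.01
```

Setting $dF/dp=\varepsilon+T\ln\frac{p}{1-p}=0$ gives $p=e^{-\varepsilon/T}/(1+e^{-\varepsilon/T})$, which is what the grid search finds.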
Why does the following DFA have (to have) the state $b_4$? Shouldn't states $b_1,b_2,b_3$ already cover "exactly two 1s"? Wouldn't state $b_4$ mean "more than two 1s", even if it doesn't trigger an accept state?

$b_4$ is what is called a trap state, that is, a state that exists just so that all possible transitions are explicitly represented, even those that do not lead to a final state. It doesn't change the language that is being defined, and can be omitted for the sake of brevity.

$b_4$ exists to cover the entire alphabet ($\{0,1\}$, in this case) for each state. While this is not strictly necessary under every definition, conventions differ. By showing the complete graph, it is more obvious that a third '1' in your input string permanently moves you out of the accept state $b_3$.

The formal definition of a DFA is $M = (Q, \Sigma, \delta, q_0, F)$, where $Q$ is the finite set of states, $\Sigma$ is the alphabet, $\delta$ is the transition function, $q_0 \in Q$ is the start state, and $F \subseteq Q$ is the set of final states. Note that $\delta \colon Q \times \Sigma \to Q$ is specified to be a function, i.e., it has to be defined for all states and symbols. The graphical depiction of the DFA is complete in this sense with $b_4$. Often such dead states are just omitted for the sake of clarity of the diagram; the reader is surely capable of adding them if required.

Answering your question, I have to say (sadly) that it depends. It depends on the definition of a DFA that you are using, because there appears to be no consensus on a unique definition. For example, I use the definition of a DFA where $\delta$ is a function. The next question is: is $\delta$ a total function or a partial function? Personally, when I use the term function I am referring to total functions by default.
But someone may disagree with me. More importantly, when I studied the definition of a DFA, my teacher told me that $\delta$ is a total function. Summarizing: I use a particular definition of a DFA where the $b_4$ state has to exist. I can skip drawing it for the sake of laziness or clarity, but I know it exists. Finally, to answer your question more precisely, we would have to know which definition of a DFA you use.

Wouldn't state $b_4$ mean "more than two 1s", even if it doesn't trigger an accept state?

The state $b_4$ means that if a word $\sigma$ has more than two "1"s, it will never reach an accepting state, so $\sigma\notin L = \{w\mid w \text{ contains exactly two 1s}\}$.
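For concreteness, here is a sketch of the DFA under discussion as a transition table (the state names and the choice of $b_1$ as start state are assumptions about the figure, which is not reproduced here): $b_i$ records having seen $i-1$ ones, and $b_4$ is the trap:

```python
# Transition table for "exactly two 1s" over {0, 1}; b4 is the trap state
DELTA = {
    ('b1', '0'): 'b1', ('b1', '1'): 'b2',
    ('b2', '0'): 'b2', ('b2', '1'): 'b3',
    ('b3', '0'): 'b3', ('b3', '1'): 'b4',
    ('b4', '0'): 'b4', ('b4', '1'): 'b4',  # a third 1 traps forever
}

def accepts(word, start='b1', final=frozenset({'b3'})):
    state = start
    for sym in word:
        state = DELTA[(state, sym)]  # delta is total: always defined
    return state in final

assert accepts('0101')       # exactly two 1s
assert accepts('11')
assert not accepts('1')      # too few 1s
assert not accepts('10101')  # third 1 lands in the trap b4
```

Note that because every `(state, symbol)` pair appears in `DELTA`, the lookup can never fail; dropping the `b4` rows would make $\delta$ partial and the simulation would raise a `KeyError` on a third 1.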
Your approach, for this problem at least, will work, but there are interesting things happening in the background. Simply setting the two terms equal and solving for $m$ will give you $$m=\frac{n\log n}{n-\log n}$$ However, I doubt that you want to substitute this back into the two original expressions and try to find a $g(n)$ so that each of your original expressions is in $O(g(n))$. Let's try something else. For no particular reason, let's try letting $m=\log n$. Then your two expressions become $$\begin{align}n\log n+m\log n&=n\log n+\log^2n \in O(n\log n)\text{, and}\\n\:m&=n\log n\in O(n\log n)\end{align}$$ Hooray! If $m=f(n)=\log n$ we get a common upper bound with $g(n)=n\log n$. Let's do the same thing, now with $m=n$. The two expressions are now $$\begin{align}n\log n+m\log n&=n\log n+n\log n \in O(n\log n)\text{, and}\\n\:m&=n\; n\in O(n^2)\end{align}$$ Drat. No common upper bound, but by transitivity we have $n\log n+m\log n\in O(n^2)$, and so again we find a common upper bound, namely the asymptotically larger $g(n)=n^2$. You can do this for all kinds of other $f(n)$, like $f(n)=\log\log n$ and even $f(n)=1$, both of which have the common upper bound $g(n)=n\log n$. In fact, it seems that we can do this for any $f$ whatsoever.
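A quick numerical sanity check of the $m=\log n$ case (my own sketch): both expressions stay within a constant factor of $g(n)=n\log n$ as $n$ grows:

```python
import math

# With m = log n, both n log n + m log n and n*m are O(n log n)
for n in (2**10, 2**15, 2**20):
    m = math.log2(n)
    e1 = n * math.log2(n) + m * math.log2(n)   # first expression
    e2 = n * m                                 # second expression
    g = n * math.log2(n)
    assert e1 / g < 2      # ratio is 1 + (log n)/n, bounded
    assert e2 / g <= 1     # here n*m equals g exactly
```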
In this final section, we collect several auxiliary results concerning the two notions of stationarity introduced in Definitions 2.7 and 2.8. First of all, we observe that it is actually enough to consider Definition 2.8 for \(q=1\). Lemma A.1 Let \(k\in \mathbb {N}_0\), \(p,q\in [1,\infty )\). If \(\mathbf {U}\) is stationary on \(L^1_{\mathrm {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) in the sense of Definition 2.8 and \(\mathbf {U}\in L^q_{\mathrm {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\)\(\mathbb {P}\)-a.s. then \(\mathbf {U}\) is stationary on \(L^q_{\mathrm {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\). Proof According to the assumption, for all \(f\in C_b(L^1_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)))\) , it holds $$\begin{aligned} \mathbb {E}[f(\mathbf {U})]=\mathbb {E}[f(\mathcal {S}_\tau \mathbf {U})]. \end{aligned}$$ If \(f\in C_b(L^q_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)))\) then for all \(R\in \mathbb {N}\) $$\begin{aligned} \mathbf {U}\mapsto f(\mathbf {U}\,\mathbf {1}_{|\mathbf {U}|\le R})\in C_b(L^1_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))) \end{aligned}$$ hence $$\begin{aligned} \mathbb {E}[f(\mathbf {U}\,\mathbf {1}_{|\mathbf {U}|\le R})]=\mathbb {E}[f((\mathcal {S}_\tau \mathbf {U})\mathbf {1}_{|\mathcal {S}_\tau \mathbf {U}|\le R})]. \end{aligned}$$ Finally, since \(\mathbf {U}\in L^q_{\mathrm {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\)\(\mathbb {P}\) -a.s., we obtain that $$\begin{aligned} \mathbf {U}\,\mathbf {1}_{|\mathbf {U}|\le R}\rightarrow \mathbf {U}\quad \text {in}\quad L^q_{\mathrm {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\quad \mathbb {P}\text {-a.s.} \end{aligned}$$ and we conclude by the dominated convergence. \(\square \) Next, we show that for the case of stochastic processes with continuous trajectories, the two definitions are equivalent. Lemma A.2 Let \(k\in \mathbb {N}_0\), \(p\in [1,\infty )\). 
A \(W^{k,p}(\mathbb {T}^3)\)-valued measurable stochastic process \(\mathbf {U}\) with \(\mathbb {P}\)-a.s. continuous trajectories is stationary on \(W^{k,p}(\mathbb {T}^3)\) in the sense of Definition 2.7 if and only if it is stationary on \(L^1_{\mathrm {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) in the sense of Definition 2.8. Proof Let us first show that Definition 2.8 implies Definition 2.7. Let \(\tau \ge 0\) and \(t_1,\dots ,t_n\in [0,\infty )\). Let \((\psi _m)\) be a smooth and compactly supported approximation to the identity on \(\mathbb {R}\) and define $$\begin{aligned} \Psi _m(\mathbf {U})= \left( \int _0^\infty \mathbf {U}(s)\psi _m(t_1-s)\mathrm {d}s,\dots , \int _0^\infty \mathbf {U}(s)\psi _m(t_n-s)\mathrm {d}s\right) . \end{aligned}$$ If \(\varphi \in C_b([W^{k,p}(\mathbb {T}^3)]^n)\) then \(\varphi \circ \Psi _m\in C_b(L^1_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)))\) and therefore $$\begin{aligned} \mathbb {E}[\varphi \circ \Psi _m(\mathcal {S}_\tau \mathbf {U})]=\mathbb {E}[\varphi \circ \Psi _m(\mathbf {U})]. \end{aligned}$$ Sending \(m\rightarrow \infty \) we obtain, due to the continuity of \(\mathbf {U}\) and the dominated convergence theorem, that $$\begin{aligned} \mathbb {E}[\varphi (\mathbf {U}(t_1+\tau ),\dots ,\mathbf {U}(t_n+\tau ))]=\mathbb {E}[\varphi (\mathbf {U}(t_1),\dots ,\mathbf {U}(t_n))] \end{aligned}$$ and the claim follows. To show the converse implication, let us fix \(\tau \ge 0\) and an equidistant partition \(0=t_1<\cdots<t_n<\cdots <\infty \) with mesh size \(\Delta t=\frac{\tau }{m}\) for some \(m\in \mathbb {N}\). Observe that there is a one-to-one correspondence between sequences \({\hat{\mathbf {U}}}_m=(\mathbf {U}(t_1),\mathbf {U}(t_2),\dots )\in \ell ^1_{\mathrm{loc}}(W^{k,p}(\mathbb {T}^3))\) and piecewise constant functions in \(L^1_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) given by \({\tilde{\mathbf {U}}}_m(t)=\mathbf {U}(t_i)\) if \(t\in [t_i,t_{i+1})\).
Moreover, it is an isometry in the following sense $$\begin{aligned} \sum _{i=1}^N\Vert {\hat{\mathbf {U}}}_m(t_i)\Vert _{W^{k,p}(\mathbb {T}^3)}=\int _0^{N\Delta t}\Vert {\tilde{\mathbf {U}}}_m(t)\Vert _{W^{k,p}(\mathbb {T}^3)}\,\mathrm {d}t. \end{aligned}$$ Thus, if \(\Phi \) denotes this isometry and \(\varphi \in C_b (L^1_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)))\) , then \(\varphi \circ \Phi \in C_b(\ell ^1_{\mathrm{loc}}(W^{k,p}(\mathbb {T}^3)))\) . Consequently, $$\begin{aligned} \mathbb {E}[\varphi ({\tilde{\mathbf {U}}}_m)]=\mathbb {E}[\varphi (\mathcal {S}_\tau {\tilde{\mathbf {U}}}_m)] \end{aligned}$$ follows from Definition 2.7 . Due to the continuity of \(\mathbf {U}\) we may send \(m\rightarrow \infty \) which completes the proof. \(\square \) The following result proves that weak continuity together with a uniform bound is enough for the equivalence of Definitions 2.7 and 2.8 to hold true. Corollary A.3 The statement of Lemma A.2 remains valid if the trajectories of \(\mathbf {U}\) are \(\mathbb {P}\) -a.s. weakly continuous and for all \(T>0\) $$\begin{aligned} \sup _{t\in [0,T]}\Vert \mathbf {U}\Vert _{W^{k,p}(\mathbb {T}^3)}<\infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$ (A.1) Proof Let \((\varphi _\varepsilon )\) be an approximation to the identity on \(\mathbb {T}^3\). Since \(\mathbf {U}\) has weakly continuous trajectories in \(W^{k,p}(\mathbb {T}^3)\) and satisfies (A.1), the process \(\mathbf {U}^\varepsilon :=\mathbf {U}*\varphi _\varepsilon \) has strongly continuous trajectories in \(W^{k,p}(\mathbb {T}^3)\). Hence the equivalence of the two notions of stationarity from Lemma A.2 holds. Now, let \(\mathbf {U}\) be stationary on \(L^1_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) in the sense of Definition 2.8 . That is, for every \(f\in C_b(L^1_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)))\) we have $$\begin{aligned} \mathbb {E}[f(\mathcal {S}_\tau \mathbf {U})]=\mathbb {E}[f(\mathbf {U})]. 
\end{aligned}$$ Since \(\mathbf {U}\mapsto f(\mathbf {U}*\varphi _\varepsilon )\) also belongs to \(C_b(L^1_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)))\) we deduce that $$\begin{aligned} \mathbb {E}[f(\mathbf {U}^\varepsilon )]=\mathbb {E}[f([\mathcal {S}_\tau \mathbf {U}]*\varphi _\varepsilon )]=\mathbb {E}[f(\mathcal {S}_\tau \mathbf {U}^\varepsilon )]. \end{aligned}$$ So, \(\mathbf {U}^\varepsilon \) is stationary in the sense of Definition 2.8 and due to Lemma A.2 , \(\mathbf {U}^\varepsilon \) is stationary in the sense of Definition 2.7 . In addition, \(\mathbf {U}^\varepsilon (t)\rightarrow \mathbf {U}(t)\) strongly in \(W^{k,p}(\mathbb {T}^3)\) for every \(t\in [0,\infty )\) . Therefore, if \(g\in C_b([W^{k,p}(\mathbb {T}^3)]^n)\) , we may use dominated convergence in order to pass to the limit in expressions of the form $$\begin{aligned} \mathbb {E}[g(\mathbf {U}^\varepsilon (t_1),\dots , \mathbf {U}^\varepsilon (t_n))]=\mathbb {E}[g(\mathbf {U}^\varepsilon (t_1+\tau ),\dots , \mathbf {U}^\varepsilon (t_n+\tau ))]. \end{aligned}$$ Stationarity of \(\mathbf {U}\) in the sense of Definition 2.7 follows. To show the converse implication, assume that \(\mathbf {U}\) is stationary in the sense of Definition 2.7 . By the same argument as above, it follows that \(\mathbf {U}^\varepsilon \) is stationary in the sense of Definition 2.7 hence stationary in the sense of Definition 2.8 . In other words, for every \(f\in C_b(L^1_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)))\) , $$\begin{aligned} \mathbb {E}[f(\mathbf {U}^\varepsilon )]=\mathbb {E}[f(\mathcal {S}_\tau \mathbf {U}^\varepsilon )]. \end{aligned}$$ According to (A.1 ) we obtain that \(\mathbf {U}^\varepsilon \rightarrow \mathbf {U}\) in \(L^1_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) and the dominated convergence theorem yields the claim. \(\square \) As the next step, we show that both notions of stationarity introduced in Definitions 2.7 and 2.8 are stable under weak convergence. 
Lemma A.4 Let \(k\in \mathbb {N}_0, p,q\in [1,\infty )\) and let \((\mathbf {U}_m)\) be a sequence of random variables taking values in \(L^q_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\). If, for all \(m\in \mathbb {N}\), \(\mathbf {U}_m\) is stationary on \(L^q_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) in the sense of Definition 2.8 and $$\begin{aligned} \mathbf {U}_m\rightharpoonup \mathbf {U}\quad \text {in}\quad L^q_{\mathrm {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\quad {\mathbb {P}}\text {-a.s.,} \end{aligned}$$ then \(\mathbf {U}\) is stationary on \(L^q_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\). Proof Stationarity of \(\mathbf {U}_m\) implies that for every \(f\in C_b(L^q_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)))\) and every \(\tau \ge 0\) $$\begin{aligned} \mathbb {E}[ f(\mathcal {S}_\tau \mathbf {U}_m)]=\mathbb {E}[ f(\mathbf {U}_m)]. \end{aligned}$$ (A.2) Moreover, it follows from the above weak convergence and the weak continuity of $$\begin{aligned} \mathcal {S}_\tau :L^q_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\rightarrow L^q_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)) \end{aligned}$$ that for every \(g\in C_b((L^q_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)),{w}))\) it holds $$\begin{aligned} g(\mathcal {S}_\tau \mathbf {U}_m)\rightarrow g(\mathcal {S}_\tau \mathbf {U}),\qquad g(\mathbf {U}_m)\rightarrow g(\mathbf {U}). \end{aligned}$$ In particular, since every weakly continuous function is strongly continuous, (A.2) holds with \(f\) replaced by \(g\), and we deduce by the dominated convergence theorem that $$\begin{aligned} \mathbb {E}[ g(\mathcal {S}_\tau \mathbf {U})]=\mathbb {E}[ g(\mathbf {U})]. \end{aligned}$$ Now, it remains to verify the corresponding expression for a general strongly continuous function \(f\in C_b(L^q_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)))\).
To this end, let \((\varphi _\varepsilon )\) be a smooth approximation to the identity on \(\mathbb {R}\times \mathbb {T}^3\) . Since convolution with \(\varphi _\varepsilon \) is a compact operator on \(L^q_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) , we obtain that $$\begin{aligned} \mathbf {U}\mapsto f(\mathbf {U}*\varphi _\varepsilon )=:f(\mathbf {U}^\varepsilon )\in C_b((L^q_{\mathrm{loc}}([0,\infty );W^{k,p}(\mathbb {T}^3)),{w})) \end{aligned}$$ and consequently $$\begin{aligned} \mathbb {E}[ f(\mathbf {U}^\varepsilon )]=\mathbb {E}[ f([\mathcal {S}_\tau \mathbf {U}]*\varphi _\varepsilon )]=\mathbb {E}[f(\mathcal {S}_\tau \mathbf {U}^\varepsilon )], \end{aligned}$$ hence \(\mathbf {U}^\varepsilon \) is stationary. Since $$\begin{aligned} \mathbf {U}^\varepsilon \rightarrow \mathbf {U}\quad \text {in}\quad L^q_{\mathrm {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\quad {\mathbb {P}}\text {-a.s.,} \end{aligned}$$ we may pass to the limit \(\varepsilon \rightarrow 0\) and conclude using the dominated convergence theorem. \(\square \) Lemma A.5 Let \(k\in \mathbb {N}_0\) , \(p\in [1,\infty )\) and let \((\mathbf {U}_m)\) be a sequence of \(W^{k,p}(\mathbb {T}^3)\) -valued stochastic processes which are stationary on \(W^{k,p}(\mathbb {T}^3)\) in the sense of Definition 2.7 . If for all \(T>0\) $$\begin{aligned} \sup _{m\in \mathbb {N}} \mathbb {E} \left[ \sup _{t\in [0,T]}\Vert \mathbf {U}_m\Vert _{W^{k,p}(\mathbb {T}^3)} \right] <\infty \end{aligned}$$ (A.3) and $$\begin{aligned} \mathbf {U}_m\rightarrow \mathbf {U}\quad \text {in}\quad C_{\mathrm {loc}}([0,\infty );(W^{k,p}(\mathbb {T}^3),w))\quad {\mathbb {P}}\text {-a.s.,} \end{aligned}$$ then \(\mathbf {U}\) is stationary on \(W^{k,p}(\mathbb {T}^3)\) . Proof The claim is a consequence of Corollary A.3 and Lemma A.4 . 
Indeed, as a consequence of (A.3) we deduce that $$\begin{aligned} \mathbb {E} \left[ \sup _{t\in [0,T]}\Vert \mathbf {U}_m\Vert _{W^{k,p}(\mathbb {T}^3)} \right] <\infty \end{aligned}$$ thus \(\mathbf {U}_m\) satisfies the assumptions of Corollary A.3, and the same is true for \(\mathbf {U}\) due to lower semicontinuity of the corresponding norm. Accordingly, \(\mathbf {U}_m\) satisfies the assumptions of Lemma A.4, which implies that \(\mathbf {U}\) is stationary in the sense of Definition 2.8. Corollary A.3 then yields the claim. \(\square \) Let us conclude with a simple observation that stationarity is preserved under composition with measurable functions. Corollary A.6 Let \(k\in \mathbb {N}_0\), \(p\in [1,\infty )\). Let the stochastic process \(\mathbf {U}\) be stationary on \(W^{k,p}(\mathbb {T}^3)\) in the sense of Definition 2.7. Then for every measurable function \(F:W^{k,p}(\mathbb {T}^3)\rightarrow \mathbb {R}\), the stochastic process \(F(\mathbf {U})\) is stationary on \(\mathbb {R}\). Proof The proof follows immediately from the corresponding equality of joint laws of \((\mathbf {U}(t_1),\dots , \mathbf {U}(t_n))\) and \((\mathbf {U}(t_1+\tau ),\dots , \mathbf {U}(t_n+\tau ))\). \(\square \) Corollary A.7 Let \(k\in \mathbb {N}_0\), \(p,q\in [1,\infty )\). Let \(\mathbf {U}\) be stationary on \(L^q_{\mathrm {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) in the sense of Definition 2.8. Then for every measurable function \(F:W^{k,p}(\mathbb {T}^3)\rightarrow \mathbb {R}\) and a.e. \(s,t\in [0,\infty )\), the laws of \(\mathbf {U}(s)\) and \(\mathbf {U}(t)\) on \(W^{k,p}(\mathbb {T}^3)\) coincide. Proof The mapping \(\mathbf {U}\mapsto \mathbf {U}(t) \mapsto F(\mathbf {U}(t))\) is measurable on \(L^q_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) for a.e. \(t\in [0,\infty )\).
For the same reasons, the mapping \(\mathcal {S}_{s-t}:\mathbf {U}\mapsto \mathbf {U}(s) \mapsto F(\mathbf {U}(s))\) is measurable on \(L^q_{\text {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) for a.e. \(s,t\in [0,\infty )\). Hence the claim follows from the equality of laws of \(\mathbf {U}\) and \(\mathcal {S}_{s-t}\mathbf {U}\). \(\square \) Remark A.8 Note that in view of Corollary A.7 the stationarity in the sense of Definition 2.8 implies the following almost everywhere version of Definition 2.7 : if \(\mathbf {U}\) is stationary on \(L^q_{\mathrm {loc}}([0,\infty );W^{k,p}(\mathbb {T}^3))\) in the sense of Definition 2.8 then the joint laws $$\begin{aligned} \mathcal {L}(\mathbf {U}(t_1+\tau ),\dots , \mathbf {U}(t_n+\tau )),\quad \mathcal {L}(\mathbf {U}(t_1),\dots , \mathbf {U}(t_n)) \end{aligned}$$ on \([W^{k,p}(\mathbb {T}^3)]^n\) coincide for a.e. \(\tau \ge 0\) , for a.e. \(t_1,\dots ,t_n\in [0,\infty )\) .
For a more elementary approach, avoiding induction and Mackey theory, you might try a concrete construction. Realize Aakumadula's group $G$ as a $2 \times 2$ matrix group over $\mathbb{F}_p$ (say for $p$ an odd prime) consisting of all $\begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix}$. Here $a, b$ run over respectively the multiplicative group and the additive group of the field. This realizes a semidirect product $A \ltimes B$ having normal Sylow $p$-subgroup $B$ consisting of matrices with $a=1$, acted on by its (cyclic) automorphism group $A$ of order $p-1$ (the diagonal group acting by conjugation). Check first that the commutator subgroup is just $B$, so its index $p-1$ in $G$ counts the number of linear characters (those complex irreducible characters of degree 1). Since the sum of squares of degrees adds up to the group order $p(p-1)$, the problem is to see that there is only one more irreducible character (of degree necessarily $p-1$). As Aakumadula suggests, you might construct this directly by induction from a nontrivial linear character of $B$. (But then you'd have to check irreducibility.) On the other hand, another very classical fact is that the number of (distinct) irreducible characters equals the number of conjugacy classes in $G$. By linear algebra, conjugates must have the same eigenvalues. Sylow theory shows that elements of order $p$ (with both eigenvalues 1) are all in the normal subgroup $B$, and by conjugation with $A$ these $p-1$ elements are all conjugate. Along with the trivial class you have so far just 2 classes. But then it's easy to check that each of the $p-2$ elements with fixed $a \neq 1$ is conjugate in $G$ to precisely the $p$ elements sharing the same eigenvalues $a, 1$. Now you have $p$ classes, with no more possible eigenvalues (or group elements) to consider. P.S. As the answers (and comment about arbitrary finite fields) indicate, the question can be approached narrowly or more broadly. 
What works best depends heavily on what one already knows. The approach I've sketched is deliberately elementary, restricted to the most basic knowledge of character theory, linear algebra, groups and rings. Even here there are lots of shortcuts and variants. But what is the motivation other than curiosity?
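The class count above is easy to confirm by brute force for small odd primes (a sketch of my own; the matrix group is represented as pairs $(a,b)$ acting as $x \mapsto ax+b$ over $\mathbb{F}_p$):

```python
# The affine group over F_p has exactly p conjugacy classes, hence
# p irreducible characters: p-1 linear ones plus one of degree p-1.
def conjugacy_classes(p):
    G = [(a, b) for a in range(1, p) for b in range(p)]  # (a, b) ~ ax + b

    def mul(g, h):  # composition: (a1, b1)(a2, b2) = (a1 a2, a1 b2 + b1)
        return (g[0] * h[0] % p, (g[0] * h[1] + g[1]) % p)

    def inv(g):
        a_inv = pow(g[0], -1, p)  # modular inverse (Python 3.8+)
        return (a_inv, -a_inv * g[1] % p)

    return {frozenset(mul(mul(g, x), inv(g)) for g in G) for x in G}

for p in (3, 5, 7):
    assert len(conjugacy_classes(p)) == p
    # degrees check: (p-1) linear characters and one of degree p-1
    # account for the whole group order p(p-1)
    assert (p - 1) * 1**2 + (p - 1)**2 == p * (p - 1)
```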
Given $n$, the number of vertices, what is the number of labeled triangle-free graphs on $n$ vertices?

There shouldn't be any sensible exact formula, as Ira Gessel says. But there are very good asymptotics and a structural description. An old result of Erdős, Kleitman and Rothschild is that almost all triangle-free graphs are bipartite. Proemel, Schickinger and Steger refined this to show that almost all triangle-free graphs which are not bipartite can be made bipartite by removing one edge; almost all of the rest can be made bipartite by removing two edges; and so on. It's easy to count bipartite graphs (there are roughly $2^{n+n^2/4}$; you can easily enough find accurate asymptotics, which depend on the parity of $n$), and similarly the Proemel-Schickinger-Steger classes are not too hard to enumerate asymptotically (I don't know of these being explicitly in the literature). There are also similar results if you fix the number of edges (which get less precise for sparse graphs).

user36212 has already essentially answered the question; the state of the art has not yet been pointed out, though (by that I mean that the latest relevant publications, and the hypergraph container method, which the OP might find useful being told about, have not been mentioned): due to work of Balogh and coworkers, the asymptotics are also known for the class of maximal triangle-free graphs (and these differ from the asymptotics for all triangle-free graphs). In [BP2014] József Balogh, Šárka Petříková, Number of maximal triangle-free graphs, Bull. London Math. Soc. 46(5) (2014), 1003-1006, it was shown that the lower bound of $2^{\frac18 n^2 + o(n^2)}$ for labelled maximal triangle-free graphs does give the correct asymptotics (to within this precision).
Incidentally, while [BP2014] avers without reference that the lower bound was "known much earlier", the first public reference seems to be this answer of Douglas Zare, which gives a slightly more complicated construction than [BP2014], but in return gives a little more detail regarding the proof that the construction actually produces enough maximal triangle-free graphs. An exposition giving full detail does not exist as far as I know, though writing one would be easy: for the construction in [BP2014], that is, if one takes a perfect matching $M$ consisting of $n/4$ edges, adds a new independent set $S$ of $n/2$ vertices, and then decides independently for each $(u,vw)\in S\times M$ whether to join $u$ to $v$ or to $w$ (and does exactly one of those), then evidently in the labelled sense precisely $2^{\frac{n}{2}\cdot\frac{n}{4}}=2^{n^2/8}$ graphs are constructed, and what remains to be proved is that $2^{n^2/8 - r}$ of those, with $r\in o(n^2)$, are maximal triangle-free; this is an exercise. The proof of the upper bound in [BP2014] uses a result of Saxton and Thomason sometimes referred to as the method of hypergraph containers. One should note that, in view of the results from the 1970s already mentioned by user36212, [BP2014] implies $\text{# maximal triangle-free graphs}\qquad\sim_{n\to\infty}\qquad\sqrt{\text{# triangle free graphs}}$. Even more recently, in [BLPS2015] József Balogh, Hong Liu, Šárka Petříková, Maryam Sharifzadeh, The Typical Structure of Maximal Triangle-Free Graphs, Forum of Mathematics, Sigma (2015), Vol.
3, e20, 19 pages two structural statements were proved, 'structural' in the sense that they shed some light on what most of the members of the set $\mathbb{K}$ of finite maximal triangle-free graphs look like: (0) Almost every member of $\mathbb{K}$ can be constructed according to what arguably is the most straightforward construction of a maximal triangle-free graph: start with a perfect matching $M$, add an independent set $S$, and then add edges between $V(M)$ and $S$ until the graph is maximal triangle-free. (Evidently, not every member of $\mathbb{K}$ can be so constructed: e.g., the Petersen graph cannot.) (1) By [BLPS2015, Lemma 2.4], for every $n\in\omega$ and every maximal triangle-free $n$-vertex graph $G$, $$\#(\text{maximal independent sets in } G) \ \leq\ 2^{\frac12 n - \frac{1}{25}\,\#(\text{vertex-disjoint three-vertex paths in } G)}.$$ It seems not to be known whether there is any triangle-free graph for which the bound in the above lemma is attained.
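The [BP2014] construction described above is easy to simulate. The Python sketch below (function names are mine, not from the paper) builds one random outcome and checks that it is triangle-free; note that an individual outcome need not be maximal, the point of [BP2014] being that almost all outcomes are:

```python
import random

def bp2014_sample(n, rng=None):
    """One outcome of the [BP2014] construction: a perfect matching M on
    the vertices 0..n/2-1 (n/4 edges) plus an independent set S of n/2
    further vertices, each u in S joined to exactly one endpoint of each
    matching edge. Requires n divisible by 4."""
    rng = rng or random.Random(0)
    assert n % 4 == 0
    matching = [(2 * i, 2 * i + 1) for i in range(n // 4)]
    edges = set(matching)
    for u in range(n // 2, n):          # the independent set S
        for v, w in matching:
            edges.add((rng.choice((v, w)), u))  # exactly one of uv, uw
    return edges

def triangle_free(n, edges):
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    # a triangle exists iff some edge's endpoints share a neighbour
    return not any(adj[a] & adj[b] for a, b in edges)
```

Every outcome is triangle-free: $S$ is independent, the matching edges are disjoint, and no vertex of $S$ is joined to both endpoints of a matching edge.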
How Do I Compute Lift and Drag? In fluid flow simulations, it is often important to evaluate the forces that the fluid exerts onto the body — for example, lift and drag forces on an airfoil or a car. Engineers can use these body forces to quantify the efficiency and aerodynamic performance of designs. Today, we will discuss different ways to compute lift and drag in COMSOL Multiphysics. Defining Lift and Drag When fluid flow passes a body, it will exert a force on the surface. As shown in the figure below, the force component that is perpendicular to the flow direction is called lift. The force component that is parallel to the flow direction is called drag. For simplicity, let’s assume that the flow direction is aligned with the coordinate system of the model. Later on, we will show you how to compute the lift and drag forces in a direction that is not aligned with the model coordinate system. Schematic of lift and drag components when fluid flow passes a body. There are two distinct contributors to lift and drag forces — pressure force and viscous force. The pressure force, often referred to as pressure-gradient force, is the force due to the pressure difference across the surface. The viscous force is the force due to friction that acts in the opposite direction of the flow. The magnitudes of pressure force and viscous force can vary significantly, depending on the type of flow. The flow around a moving car, for instance, is often dominated by the pressure force. Computing Lift and Drag Using Total Stress COMSOL Multiphysics offers complete access to all of the internal variables and makes it very easy to compute surface forces via integration on a boundary. Here, we will demonstrate how to compute the drag forces on an Ahmed body. You can download this model from our Application Gallery. Simulation of airflow over an Ahmed body. The surface plot shows the pressure distribution, and the streamlines are colored by the velocity magnitude. 
The arrow surface behind the Ahmed body shows the circulation in the wake zone. There are several ways to compute drag depending on the physics. The most straightforward way is to integrate the total stress — which includes contributions from the pressure force and the viscous force — in each direction. To do so, we first need to define a surface integration operator under the Derived Values node, as illustrated below. TIP: Alternatively, you can use a boundary probe or an integration operator in the component couplings to define such an integration. The difference is that operations defined in the physics settings can be used during the simulation — for example, a drag force computed with a boundary probe can serve as an objective or a constraint in an optimization study. Next, we can select the boundaries to perform the integration. In this example, we chose all of the boundaries on the body. Drag in this model is in the y-direction. We can type in the expression: spf.T_stressy, which represents the total stress in the y-direction. Computing Pressure Force and Viscous Force Separately Sometimes, engineers can obtain greater insight into designs by examining the pressure force and viscous force separately. COMSOL Multiphysics features a predefined variable, spf.K_stressy, for viscous stress in the y-direction. We can readily evaluate the viscous force by integrating the viscous stress. What about the pressure force? Pressure, denoted by the variable p, is a scalar. To project it in the direction of drag, we need to multiply the pressure by the y-component of the normal vector on the surface, spf.nymesh. Therefore, we can evaluate the pressure force by integrating spf.nymesh*p on the surface. In some special turbulent flow cases where the wall function is used, it is more accurate to compute the viscous force using the friction velocity, spf.u_tau. In COMSOL Multiphysics, the k-epsilon and k-omega turbulence models use the wall function.
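Outside any particular solver, these boundary integrals amount to summing stress times area over a discretized surface. A rough Python sketch of the pressure and viscous contributions to drag (the facet data structure and names here are hypothetical, not COMSOL API calls):

```python
def drag_contributions(facets):
    """Sum pressure and viscous contributions to drag over boundary facets.

    Each facet is a tuple (p, tau_y, n_y, dA): pressure, viscous stress
    in the drag (y) direction, y-component of the outward normal, and
    facet area. This mirrors integrating spf.nymesh*p and the viscous
    stress as in the post; the data layout is an assumption.
    """
    pressure_force = sum(p * n_y * dA for p, tau_y, n_y, dA in facets)
    viscous_force = sum(tau_y * dA for p, tau_y, n_y, dA in facets)
    return pressure_force, viscous_force

# Two opposing facets with equal pressure: pressure contributions cancel.
facets = [(100.0, 0.5, 1.0, 2.0), (100.0, 0.5, -1.0, 2.0)]
# drag_contributions(facets) -> (0.0, 2.0)
```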
To learn more about turbulence models in COMSOL Multiphysics, read our blog post "Which Turbulence Model Should I Choose for My CFD Application?". We can obtain the magnitude of the local shear stress at the wall from the friction velocity: $$\tau_w = \rho u_\tau^2.$$ Therefore, the local shear stress in the y-direction is $$\tau_{w,y} = \rho u_\tau^2 \, \frac{u^T_y}{|u^T|},$$ where $u^T$ is the tangential velocity at the wall. We can further rewrite $|u^T|$ as $u_\tau u^+$, where $u^+$ is the tangential dimensionless velocity, which gives $\tau_{w,y} = \rho u_\tau u^T_y / u^+$. Without going into too many details on the derivation, we can translate the previous equations into COMSOL variables. We can integrate the local wall shear stress in the direction of drag (the y-direction) with the following expression: spf.rho*spf.u_tau*spf.u_tangy/spf.uPlus. In this expression, spf.rho is the density of the fluid, spf.u_tangy is the velocity in the y-direction at the wall, and spf.uPlus is the tangential dimensionless velocity. The table below summarizes the expressions used to compute each force.

Force | Fluid Flow Without Wall Function | Turbulent Flow with Wall Function
Pressure Force | spf.nymesh*p | spf.nymesh*p
Viscous Force | -spf.K_stressy | spf.rho*spf.u_tau*spf.u_tangy/spf.uPlus
Total Force | -spf.T_stressy | spf.nymesh*p + spf.rho*spf.u_tau*spf.u_tangy/spf.uPlus

Note: In this example, the drag force is in the y-direction. You may need to change the projection direction based on the orientation of your model. Correction for Angle of Attack It is common that the geometry is not aligned perfectly with the flow direction. The angle between the center reference line of the geometry and the incoming flow is called the angle of attack (often denoted by the Greek letter $\alpha$). In aerospace engineering, the angle of attack is frequently used as the angle between the chord line of the airfoil and the free-stream direction. The following figure shows the relationship between lift, drag, and angle of attack on an airfoil. Schematic illustrating lift, drag, and angle of attack on an airfoil. There are two ways to change the angle of attack of the model.
We can either rotate the geometry itself or keep the geometry fixed and modify the flow direction at the inlet. Here, we will use the second approach. It is much simpler to adjust the velocity field at the inlet boundary condition, as we then do not need to remesh the model for each angle of attack. As shown in the figure below, the airfoil is fixed while the streamlines show the flow at an angle of attack due to the adjusted inlet velocity direction. Simulation of flow passing a NACA 0012 airfoil at a 14-degree angle of attack. The surface plot shows the velocity magnitude along with the streamlines (shown in black). This example uses the SST turbulence model, which does not use the wall function. Therefore, we will use the total stress to compute lift. At a zero angle of attack, the lift is simply -spf.T_stressy. If the angle of attack is nonzero, we can project the force onto the direction of the lift using the following expression: spf.T_stressx*sin(alpha*pi/180)-spf.T_stressy*cos(alpha*pi/180). Here, alpha represents the angle of attack in degrees. What About Lift or Drag Coefficients? You may also be interested in the nondimensionalized forms of lift and drag — the lift coefficient and the drag coefficient. It is often easier to use the coefficients instead of the dimensional forces for the purpose of validating against experimental data or comparing different designs. The lift coefficient in 2D is defined as: $$C_L = \frac{L}{\frac{1}{2} \rho u^2_\infty c},$$ where $c$ is the chord length. Since we have already calculated the dimensional lift, we can simply normalize the lift by the dynamic pressure and the chord length. With the dimensionless lift coefficient, we can compare our simulation results with experimental data (Ref. 1). Note: In 3D, the lift coefficient is nondimensionalized by an area instead of a length: C_L = \frac{L}{\frac{1}{2} \rho u^2_\infty A} Graph comparing simulation results and experimental data of the lift coefficient on a NACA 0012 airfoil at various angles of attack.
As illustrated in the above graph, no discernible discrepancy between the computational and experimental results occurs within the range of angle of attack values used in this simulation. The experimental results continue into the high angle of attack regime, where the airfoil stalls. Concluding Remarks In this blog post, we have explored ways to compute lift and drag on an Ahmed body and a NACA 0012 airfoil. We have demonstrated how to compute the pressure force and the viscous force, while also examining the special case where a wall function is used in the turbulence model. The approaches we have presented here are certainly not limited to these specific simulations. You can compute the body forces on any boundaries or surfaces, thereby gaining insight into designs through multiphysics simulations. With the Optimization Module, you can take this analysis one step further and optimize lift or drag. References C.L. Ladson, "Effects of Independent Variation of Mach and Reynolds Numbers on the Low-Speed Aerodynamic Characteristics of the NACA 0012 Airfoil Section," NASA TM 4074, 1988.
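As a closing illustration, the angle-of-attack projection and the 2D lift-coefficient normalization discussed in this post can be collected in a short Python sketch. The sign convention mirrors the post's expression spf.T_stressx*sin(alpha*pi/180)-spf.T_stressy*cos(alpha*pi/180); the function and variable names are mine:

```python
import math

def lift_from_stress(Fx, Fy, alpha_deg):
    """Project the integrated total stress (Fx, Fy) onto the lift
    direction for an angle of attack alpha_deg (degrees), as in the
    post's expression; reduces to -Fy at alpha = 0."""
    a = math.radians(alpha_deg)
    return Fx * math.sin(a) - Fy * math.cos(a)

def lift_coefficient_2d(L, rho, u_inf, chord):
    """Nondimensionalize lift by dynamic pressure and chord length:
    C_L = L / (0.5 * rho * u_inf**2 * chord)."""
    return L / (0.5 * rho * u_inf ** 2 * chord)
```

At alpha_deg = 0 the projection returns -Fy exactly, matching the zero-angle case described in the post.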
Questions and Answers in Geometry and Analysis Below are the 10 most recent journal entries recorded in Questions and Answers in Geometry and Analysis' LiveJournal: Tuesday, August 29th, 2006 11:45 am [pasha_m] Is there a geometrical meaning to the following number associated to a finite-dimensional semisimple Lie algebra g: the integral over g of the density rho(x) = Det(Sinh(ad_x)/ad_x) for x in g, calculated on the matrices of the adjoint representation of g? Tuesday, April 4th, 2006 1:35 pm [nekura_ca] Questions about Terminology. Hello, I'm doing some work and am having some difficulty with terminology, so I want to get it right. I have two questions for now. 1. I have a surface in three-space, and am looking for a term to describe whether it's possible to determine which side a viewer is on. I was thinking about the 33336 tiling: viewed from one side it has a left twist, and from the other side a right twist, so I thought chiral vs. achiral would work; but if I have a surface that is all black on one side and all white on the other, compared to one that is white on both sides, that terminology is wrong. So is there a term to describe whether a surface looks the same from both sides or has a different appearance? 2. In a tiling, if A and B both belong to the same transitivity class, is there an equivalence term to say that? When I've been writing I've been saying that A and B are transitive, but I'm not sure if that's valid or correct, so I want to make sure before I use it any more. Thanks, Nekura Current Mood: curious Friday, January 13th, 2006 3:32 pm [bravchick] Relationship between the L-polynomial and the Stiefel-Whitney class Let $M$ be a closed manifold of odd dimension $n=2k+1$. Let $L(p)$ denote the Hirzebruch L-polynomial in the Pontrjagin classes of $M$. Denote by $L_{2k}(p)$ the component of $L(p)$ in $H^{2k}(M, \mathbb{Z})$, and by $w_{2k} \in H^{2k}(M, \mathbb{Z}_2)$ the $2k$-th Stiefel-Whitney class of $M$.
I have strong reasons to believe that the reduction of $L_{2k}(p)$ modulo 2 is equal to $w_{2k}$. Is it really so? And, if it is, how does one prove it? Remark: if $k$ is odd, so that $2k$ is not divisible by 4, then, of course, $L_{2k}(p) = 0$ for dimensional reasons. In this case, Massey [Amer. Journal of Math. 82, 92-102] showed that $w_{2k}=0$. Thus, if $k$ is odd, the answer to my question is positive. Monday, October 10th, 2005 8:01 pm [_vopros_otvet] Dear ALL! Do you perhaps know how to: 1. Find the furthest-neighbor or furthest-site diagram from a Voronoi diagram? (Is there a fast and efficient way for each site to know which site is farthest from it?) 2. Let P1...Pk be a collection of pairwise disjoint simple polygons with a total of n edges, all enclosed in a given square. Find in O(n*log n) time a largest disk that can be inscribed in this square so that it is disjoint from the interiors of all the polygons Pi ------------ how do I solve this? Provided I know how to find the largest empty circle in a Voronoi diagram, how can I take care that it does not intersect the EDGES of the polygons (staying disjoint from their interiors)? Thanks a lot!!! Saturday, August 13th, 2005 11:32 am [dhilbert83] Question on Haar Measure/Integration Ok well you guys said this is for analysis, so don't yell at me since this post is not even remotely geometric :p If G is a locally compact group and K ⊆ G is compact, and lim_{n→∞} ∫_K f_n(x) dx = 0, then does lim_{n→∞} ∫_{K^{-1}} f_n(x^{-1}) dx = 0? (Here f_n ∈ L^p(G) for some p ≥ 1.) I actually ultimately need this question answered for Bochner integration, but is it true even for Lebesgue integrals? I don't think it is, since the modular function will screw things up even if it's a 'mild' function. Monday, March 28th, 2005 10:53 pm [mostconducive] Non-Metrical Geometry What is Non-Metrical Geometry? It is mentioned in A. N.
Whitehead's "Process and Reality" (pages 490s), in discussion of infinitesimals (Weierstrass), presentational immediacy, strain locus, projection (the opposite of injection, I think), God and mentality. Intuitively, it brings me back to the 470s (pages): "It is to be remembered that two points determine a complete straight line, that three non-collinear points determine a complete plane, and that four non-coplanar points determine a complete three-dimensional flat locus." ("Process and Reality", page 472) Monday, March 14th, 2005 4:17 pm [philosophking] I'm really sorry, but I have questions that aren't directly pertinent to the group. How are you using the special math text that you used in your first post to this community? Can you give me a link to how to use it? Would graph theory be included in your interests? Thank you very much! Thursday, March 10th, 2005 8:45 pm [bbixob] naive definition of normalisation of complex manifolds I need a reference for these simple statements about normalisation of complex spaces; are they correct at all? I am willing to assume everything lies in a very nice ambient space, say a holomorphically convex manifold. \def\hatZ{{\hat Z}} Let $\nnn:\hatZ \ra Z$ be the normalisation of a variety $Z$, and let $\Zz k(\C) = \{z\in Z(\C): \#\nnn\inv(z)\geq k\}$ be the closed subvariety of points which have $k$ preimages or more. \begin{fact}\label{fact:zknrm} $\Zz k$ is a closed subvariety of $Z$, for all $k$; $\Zz {\deg \nnn + 1}$ is empty, $\Zz 1 = Z$. \end{fact} \bp Reference!!!! \ep \begin{fact}\label{fact:214} Around each point $z\in Z(\C)$ there exists a sufficiently small neighbourhood $z\in V$, open in the complex topology, such that $$V\cap \Zz k= \bigcup\limits_{i_1<..<i_k} V_{i_1}\cap ...\cap V_{i_k},$$ where $$Z\cap V=V_1\cup ...\cup V_n$$ is the irreducible decomposition of $Z\cap V$. \end{fact} \bp not [Grauert, ???] [Abhyankar, Local Analytic Geometry, p.402, Cl 7] \ep Saturday, March 5th, 2005 10:59 am [novichyok] Flat connection vs.
representation of the fundamental group Let $M$ be a closed manifold. A complex representation $\alpha: \pi_1(M) \to GL_n(\mathbb{C})$ of its fundamental group gives rise to a flat vector bundle $E_\alpha = \widetilde{M}\times_\alpha \mathbb{C}^n$ whose monodromy is isomorphic to $\alpha$. Given a continuous family of representations $\alpha(t)$, the bundles $E_{\alpha(t)}$ are isomorphic (as vector bundles without a connection). Thus it is not difficult to see that there exists one bundle $E \to M$ and a continuous family of connections $\nabla(t)$ such that the monodromy of $\nabla(t)$ is isomorphic to $\alpha(t)$. Moreover, if $\alpha(t)$ is differentiable, then $\nabla(t)$ can be chosen to be differentiable. I have 2 questions: 1. Is there any book or paper where these simple facts are proven? 2. How far can this relationship between families of representations and families of connections be pushed? For example, if $\alpha(t)$ is analytic, can one find an analytic family $\nabla(t)$? Friday, March 4th, 2005 12:53 am [bravchick] Reference for the variational formula for the determinant of an elliptic operator Let $A$ be a (not necessarily self-adjoint) invertible elliptic operator. The following variational formula for its determinant is well known: $$\delta\,\mathrm{Det}(A) := \delta\bigl(-\partial_s \operatorname{Tr}(A^{-s})\big|_{s=0}\bigr) = \partial_s\bigl(s \operatorname{Tr}[(\delta A)\, A^{-s-1}]\bigr)\big|_{s=0},$$ cf., for example, Burghelea, Friedlander, Kappeler, Meyer-Vietoris type formula for determinants of elliptic differential operators, J. Funct. Anal. 107 (1992), 34--65, or Kontsevich, Vishik, Geometry of determinants of elliptic operators. (Of course, to define $A^{-s}$ one uses a spectral cut, which I did not write explicitly to simplify the notation.) Is it proven anywhere in the literature? (I do know how to prove it, but I need a reference.)
I am confused, as this is not clear to me except by multiplying out both factors on the r.h.s. and showing that everything cancels except the two terms on the l.h.s., as below: $(x)(x^{k-1}+x^{k-2}+\ldots+1) - (x^{k-1}+x^{k-2}+\ldots+1)= x^k -1$ Here's another approach using roots of unity and the fundamental theorem of algebra. The zeros of $x^k - 1$ are $\{e^{\frac{2\pi i n}{k}}\}$ for $0 \leq n < k$. Therefore, we can rewrite $x^k - 1$ as: $$x^k - 1 = M\prod_{n=0}^{k-1} (x-e^{\frac{2\pi i n}{k}})$$ for some constant $M$. Now, let's find the zeros of $P(x) = (x-1)(x^{k-1} + \ldots + 1)$. Clearly, $x = 1 = e^{\frac{0\cdot 2\pi i}{k}}$ is a zero. What about $Q(x) = (x^{k-1} + \ldots + 1)$? Let's substitute the remaining $k-1$ roots of unity, $r_n = e^{\frac{2\pi i n}{k}}$ for $1 \leq n \leq k-1$: $$Q(r_n) = \sum_{j=0}^{k-1} (r_n)^j = \frac{1 - (r_n)^k}{1 - r_n} = \frac{0}{1 - r_n} = 0,$$ using the sum of a finite geometric series and the fact that $(e^{\frac{2\pi i n}{k}})^k = e^{2\pi i n} = 1$. Therefore, we can rewrite $Q(x)$ as: $$Q(x) = D' \prod_{n=1}^{k-1} (x-e^{\frac{2\pi i n}{k}})$$ and $P(x)$ as: $$P(x) = D\prod_{n=0}^{k-1} (x-e^{\frac{2\pi i n}{k}})$$ for some constant $D$.
Now, we have that $P(x)$ agrees with $x^k - 1$ at $k$ points, which are the $k$ roots of unity. However, we also have that $P(0) = -1$ and $0^k - 1 = -1$ We can sub this back into the factored expressions to see that $M = D$, or we can also conclude that because $(x-1)(x^{k-1} + ... + 1)$ and $x^k - 1$ agree at $k + 1$ distinct points, $(x-1)(x^{k-1} + ... + 1) = x^k - 1$ for all $x$. For fun: Let $x$ be real, and consider the geometric series: 1)$\sum_{i=0}^{k-1} x^i = 1+x +x^2 +..+x^{k-1};$ 2) $ x \sum_{i=0}^{k-1}x^i = \ \ x+x^2+...+x^{k-1} +x^k;$ Subtract : 2)-1): $(x-1)\sum_{i=0}^{k-1} x^{i}= x^k-1$. This one cries out for a simple proof by induction: If $k = 1$ we evidently have $x^1 - 1 = (x^1 - 1)(1), \tag 1$ and if $k = 2$: $x^2 - 1 = (x -1)(x + 1) = (x - 1)\left ( \displaystyle \sum_0^1 x^i \right ); \tag 2$ if we now suppose that the formula holds for some positive $m \in \Bbb Z$, $x^m - 1 = (x - 1) \left ( \displaystyle \sum_0^{m - 1} x^i \right ), \tag 3$ then $x^{m + 1} - 1 = x^{m + 1} - x^m + x^m - 1$ $= x^m(x - 1) + (x - 1) \left ( \displaystyle \sum_0^{m - 1} x^i \right ) = (x - 1) \left ( x^m + \displaystyle \sum_0^{m - 1} x^i \right ) = (x - 1) \left ( \displaystyle \sum_0^m x^i \right ), \tag 4$ which shows that the formula $x^k - 1 = (x - 1) \left ( \displaystyle \sum_0^{k -1} x^i \right ) \tag 5$ is valid for all positive integers $k$. It's like this: $(x-1)(x^{k-1}+x^{k-2}+\ldots+1)$ $=x(x^{k-1}+x^{k-2}+\ldots+1) - (x^{k-1}+x^{k-2}+\ldots+1)$ $=x^{k-1+1}+x^{k-2+1}+x^{k-3+1}+x^{k-4+1}...+x^3+x^2+x-x^{k-1}-x^{k-2}-x^{k-3}-...-x-1$ $=x^{k}+x^{k-1}+x^{k-2}+x^{k-3}...+x^3+x^2+x-x^{k-1}-x^{k-2}-x^{k-3}-...-x^2-x-1$ $=x^{k}+(x^{k-1}+x^{k-2}+x^{k-3}...+x^3+x^2+x)-(x^{k-1}+x^{k-2}+x^{k-3}+...+x^2+x)-1$ $=x^k-1$ You can also prove it by reversing the solution above.
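All of the arguments above can be sanity-checked numerically. A small Python sketch evaluating both sides of the identity at integer points, where the arithmetic is exact:

```python
def geom_factor(x, k):
    """Evaluate (x - 1) * (x^{k-1} + ... + x + 1)."""
    return (x - 1) * sum(x ** i for i in range(k))

# Compare against x^k - 1 for a few integer samples.
for x in (-3, 0, 1, 2, 10):
    for k in (1, 2, 5):
        assert geom_factor(x, k) == x ** k - 1
```

This is only a spot check, of course; the induction and root-counting arguments above are what establish the identity for all $x$.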
Suppose $X=Y_1\cup Y_2\cup \dots \cup Y_n$ where the $Y_i$s are disjoint. In the notes provided by the professor, he mentioned that the symmetric group $\operatorname{Sym}(Y_i)$ can be viewed as a subgroup of $\operatorname{Sym}(X)$ by extending the functions from $\operatorname{Sym}(Y_i)$. Precisely: if $f \in \operatorname{Sym}(Y_i)$, then let $g(x)=f(x)$ for $x\in Y_i$ and $g(x)=x$ otherwise. Then the external direct product $\operatorname{Sym}(Y_1)\times \operatorname{Sym}(Y_2)\times \dots\times \operatorname{Sym}(Y_n)$ can be embedded into $\operatorname{Sym}(X)$ by $(f_1,f_2,\dots,f_n)\mapsto f_1\circ f_2\circ \dots \circ f_n$, where the $f_i$s are extended as above. He proceeds to say that if each $\operatorname{Sym}(Y_i)$ is viewed as a subgroup of $\operatorname{Sym}(X)$, then the product of these subgroups is an internal direct product. This is the statement which I do not get. Don't these subgroups have to be normal before the internal direct product can be considered? However, this need not be the case here. Could someone please explain what he meant by internal direct product?
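The embedding described above can be made concrete with permutations modeled as Python dicts (the helper names below are mine). Extending each $f_i$ by the identity outside $Y_i$ gives permutations with disjoint supports, which therefore commute; this is the key point behind the internal direct product:

```python
def extend(f, X):
    """Extend a permutation f of a block Y (given as a dict) to all of X
    by fixing every point outside Y."""
    g = {x: x for x in X}
    g.update(f)
    return g

def compose(f, g):
    """(f o g)(x) = f(g(x)) for permutations of the same set."""
    return {x: f[g[x]] for x in g}

X = {1, 2, 3, 4}
f1 = extend({1: 2, 2: 1}, X)   # a permutation of Y1 = {1, 2}
f2 = extend({3: 4, 4: 3}, X)   # a permutation of Y2 = {3, 4}
# Disjoint supports commute: compose(f1, f2) == compose(f2, f1).
```

In particular, each extended subgroup is normalized by the others because they commute elementwise, which is one standard way the internal direct product condition is met here.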
In order to insert blocks of LaTeX notation in your document, click on Insert from the toolbar menu and then click LaTeX. You can choose to set your default to LaTeX by changing your editor settings in your profile. A new LaTeX block in your document is created where the cursor is. It looks like this: You can now type some LaTeX syntax. Hovering with your mouse on Preview, you will preview the rendered markdown content. Clicking outside of the block, you will render the content (a latex flag will be added next to the block for you). If your LaTeX contains any errors, we will let you know (an error flag will show next to Preview). What type of LaTeX is supported? Authorea supports two engines for rendering LaTeX: LaTeXML and Pandoc. LaTeXML, the most powerful way to render LaTeX notation to the web, supports a large chunk of LaTeX packages and styles. This is an up to date list. IMPORTANT: How to write LaTeX DO NOT paste an entire LaTeX article in a LaTeX block! You can import a LaTeX document from your homepage if that is what you intend to do. Only type LaTeX content in a LaTeX block, i.e. DO NOT include documentclass, preamble, frontmatter, macros or figures. In other words, only include what you would write after \begin{document}. To add macros (newcommands) and packages, click Settings -> Edit Macros Use the Insert Figure button to insert images (and data). Use math mode for equations, e.g. $\mathcal L_{EM}=-\frac14F^{\mu\nu}F_{\mu\nu}$ Try the citation tool (click cite) to find and add citations. You can use sectioning commands like \section{} and \subsection{} to add headings.
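As a minimal illustration of the rules above, the body of a LaTeX block might look like this (the content itself is just an example):

```latex
% Body-level content only: no \documentclass, no preamble.
\subsection{Field strength}
The electromagnetic Lagrangian is
\[
  \mathcal{L}_{EM} = -\tfrac{1}{4} F^{\mu\nu} F_{\mu\nu}.
\]
```

Everything here is material you would write after \begin{document}; macros and packages would instead go under Settings -> Edit Macros.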
Published on: 12 / 12 / 2017
ä is in the extended latin block and n is in the basic latin block, so there is a transition there, but you would have hoped \setTransitionsForLatin would not have inserted any code at that point, as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle, 12 secs ago @egreg you are credited in the file, so you inherit the blame:-) @UlrikeFischer I was leaving it for @egreg to trace, but I suspect the package makes some assumptions about what is safe. It offers the user "enter" and "exit" code for each block, but xetex only has a single insert, the interchartoken at a boundary; the package isn't clear about what happens at a boundary if the exit code of the left class and the entry code of the right are both specified, nor about whether anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, e.g., worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish, and given that the rest of the network regularly downvotes, lots of new users will not know or not agree with a "-1" policy. I don't think it was ever really that regularly enforced, just that a few regulars voted bad questions back up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still, the general principle applies: when you're new and your question gets downvoted too much, this might cause the wrong impression.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for it, and I think that there should be a bit of reasoning with the opinion-based answers, some objective arguments for why this is good. See for example the other Emacs answer (still subjectively a bad answer); that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes, but many people just join for a while and come from other sites where downvoting is more common, so I think it is impossible to expect that there is no multiple downvoting; the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe", while it does have a technical meaning close to what you want, is almost always used more casually to mean "talk about". I think I would say "Let k be a circle with centre M and radius r". @AlanMunn definitions.net/definition/describe gives a Webster's definition: to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle. If you are really looking for alternatives to "draw" in "draw a circle", I strongly suggest you hop over to english.stackexchange.com, create an account there, and ask ... at least the number of native speakers of English will be bigger there, and the gamification aspect of the site will ensure someone rushes to help you out. Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have the idea that because a word is informal in German, its translation in English is also informal. This is simply wrong.
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is possibly still harder; that probably depends heavily on the translation. @AlanMunn I am very well aware of the differences in word use between languages (and of my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin are kind of a hobby of mine. Needless to say, more than once the contemporary meanings didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx:

\documentclass{book}
\usepackage{showidx}
\usepackage{imakeidx}
\makeindex
\begin{document}
Test\index{xxxx}
\printindex
\end{document}

generates the error:

! Undefined control sequence.
<argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{...

@EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters, as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission, as you want the authoring date, not the date it was last run through tex and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed.
arXiv caches the PDFs for several years, but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again; and, when that happens, they get imprinted with the date at which the PDF was requested, which then gets cached. Do any of you on Linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable PDF files? My box says pdfinfo does not exist, but it clearly does when I run it on a single PDF file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough TeX; still, not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in the INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a TeX-specific solution @yo' that's unlikely to help with prints where the class itself calls from the system time. @EmilioPisanty well the environment vars do more than TeX (they affect the internal id in the generated PDF or DVI and so produce reproducible output), but you could, as @yo' showed, redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year, \month and \day, which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben", engl. "describe" comes from the math. technical-language of the 16th Century, that means from Middle High German, and means "construct" as much. And that from the original meaning: describe "making a curved movement". In the literary style of the 19th to the 20th century and in the GDR, this language is used. You can have that in englisch too: scribe(verb) score a line on with a pointed instrument, as in metalworking https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which holds the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with the base $|M_b B|$ and the angle $\alpha$ at the point $P$. The altitude of the base of the isosceles triangle bisects both $|M_b B|$ at $ M_ {s_b}$ and the angle $\alpha$ at the top. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the corner. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described by $S$ of radius…
I have a plane, $z=0$, as shown in the image below where $\hat{s}$ is a direction unit vector displayed by the red arrow and $\hat{n}$ is the normal unit vector to the plane. The angle between $\hat{s}$ and $\hat{n}$ is given as: $$\zeta = \arccos\left(\frac{\boldsymbol{\hat{s}}\cdot\boldsymbol{\hat{n}}}{\lVert \boldsymbol{\hat{s}} \rVert \lVert \boldsymbol{\hat{n}} \rVert} \right) \tag{1}$$ The projection of $\hat{s}$ on the plane is given as: $$\boldsymbol{s'} = \boldsymbol{\hat{s}}-\frac{\langle \boldsymbol{\hat{s}},\boldsymbol{\hat{n}} \rangle}{ \lVert \boldsymbol{\hat{n}} \rVert^2} \boldsymbol{\hat{n}} \tag{2}$$ and the angle between $\boldsymbol{x}$ and $\boldsymbol{s'}$ is given as: $$ \chi = \arg(s'_1 + is'_2) \tag{3}$$ I use the arrangement in the first image to measure the angles $\zeta$ and $\chi$ formed by the $\hat{s}$ direction vector, and the arrangement in the second image to replicate these angles ($\zeta$ and $\chi$) by placing the $\hat{s}$ vector parallel to the $\boldsymbol{z}$ axis, as shown in the image below, and then rotating the plane. EDIT: I have changed the second illustration. As shown in the second image, I first align the plane orthogonal to $\hat{s}$ (this is represented by the green outlined plane), next I rotate the plane by an angle $\xi$ about the $\boldsymbol{x}$ axis (this is represented by the blue outlined plane) and finally by an angle $\eta$ about the $\boldsymbol{z'}$ axis. I am finding it difficult to visualize this rotation strategy. I am looking for a consistent way to do this so that I can relate $\xi$ with $\zeta$ and $\eta$ with $\chi$, or in other words, I want to use the angles $\zeta$ and $\chi$ to rotate the plane.
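For reference, the two measured angles follow directly from formulas (1)-(3). Below is a small Python sketch (the helper name `plane_angles` is mine; it assumes $\hat{s}$ and $\hat{n}$ are already unit vectors, as stated in the question) that evaluates $\zeta$ and $\chi$ for the plane $z=0$:

```python
import math

def plane_angles(s, n=(0.0, 0.0, 1.0)):
    """Return (zeta, chi) per eqs. (1)-(3) for a unit direction vector s
    and the unit normal n of the plane z = 0."""
    dot = sum(si * ni for si, ni in zip(s, n))
    zeta = math.acos(dot)                               # eq. (1), unit vectors assumed
    s_proj = [si - dot * ni for si, ni in zip(s, n)]    # eq. (2), projection onto the plane
    chi = math.atan2(s_proj[1], s_proj[0])              # eq. (3): arg(s'_1 + i s'_2)
    return zeta, chi

# Example: s tilted 30 degrees away from the normal, toward the +x axis
s = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
zeta, chi = plane_angles(s)
```

With this test vector, $\zeta$ comes out as 30° and $\chi$ as 0, which matches the geometric picture.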
In this paper, we propose a feasible scheme for purifying two-qubit entanglement in the presence of decoherence by employing weak measurements. As long as the entanglement parameter $\alpha$ and the measurement strength $p$ satisfy a certain condition, we can always achieve the purification without reference to the initial state. Furthermore, an arbitrary initial state can be directly purified into the maximally entangled state by setting the measurement strength $p=1-\frac{\left| \alpha \right| }{\sqrt{1-\left| \alpha \right| ^{2}}}$. The success probability of our scheme not only depends on the measurement strength, but is also closely linked to the initial state. Quantum Information Processing – Springer Journals. Published: Jan 23, 2015
Witt vector An element of an algebraic construct, first proposed by E. Witt [1] in 1936 in the context of the description of unramified extensions of $p$-adic number fields. Witt vectors were subsequently utilized in the study of algebraic varieties over a field of positive characteristic [3], in the theory of commutative algebraic groups [4], [5], and in the theory of formal groups [6]. Let $A$ be an associative, commutative ring with unit element. Witt vectors with components in $A$ are infinite sequences $a = (a_0,a_1,\ldots)$, $a_i \in A$, which are added and multiplied in accordance with the following rules: $$ (a_0,a_1,\ldots) \oplus (b_0,b_1,\ldots) = (S_0(a_0;b_0), S_1(a_0,a_1;b_0,b_1), \ldots) $$ $$ (a_0,a_1,\ldots) \otimes (b_0,b_1,\ldots) = (M_0(a_0;b_0), M_1(a_0,a_1;b_0,b_1), \ldots) $$ where $S_n,M_n$ are polynomials in the variables $X_0,\ldots,X_n$, $Y_0,\ldots,Y_n$ with integer coefficients, uniquely defined by the conditions $$ \Phi_n(S_0,\ldots,S_n) = \Phi_n(X_0,\ldots,X_n) + \Phi_n(Y_0,\ldots,Y_n) $$ $$ \Phi_n(M_0,\ldots,M_n) = \Phi_n(X_0,\ldots,X_n) \cdot \Phi_n(Y_0,\ldots,Y_n) $$ where $$ \Phi_n(Z_0,\ldots,Z_n) = Z_0^{p^n} + p Z_1^{p^{n-1}} + \cdots + p^n Z_n $$ are polynomials, $n \in \mathbf{N}$ and $p$ is a prime number. In particular, $$ S_0 = X_0 + Y_0 \ ;\ \ \ S_1 = X_1 + Y_1 - \sum_{i=1}^{p-1} \frac{1}{p} \binom{p}{i} X_0^i Y_0^{p-i} $$ $$ M_0 = X_0 \cdot Y_0 \ ;\ \ \ M_1 = X_0^p Y_1 + X_1 Y_0^p + p X_1 Y_1 \ . $$ The Witt vectors with the operations introduced above form a ring, called the ring of Witt vectors and denoted by $W(A)$. For any natural number $n$ there also exists a definition of the ring $W_n(A)$ of truncated Witt vectors of length $n$. The elements of this ring are finite tuples $a = (a_0,\ldots,a_{n-1})$, $a_i \in A$, with the addition and multiplication operations described above.
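The defining conditions on $S_n$ and $M_n$ can be checked numerically in small cases. The following Python sketch (illustrative only; it takes $p = 2$ and length-2 vectors, with helper names of my choosing) implements $S_0, S_1, M_0, M_1$ from the formulas above and verifies that the ghost map $\Phi_1$ turns Witt addition and multiplication into ordinary addition and multiplication:

```python
from math import comb

p = 2  # any prime works; p = 2 keeps the polynomials short

def phi1(z0, z1):
    """Ghost component Phi_1(z0, z1) = z0^p + p*z1."""
    return z0 ** p + p * z1

def witt_add(a, b):
    """(S_0, S_1): S_1 = X1 + Y1 - sum_{i=1}^{p-1} (1/p)*C(p,i)*X0^i*Y0^(p-i)."""
    a0, a1 = a
    b0, b1 = b
    # C(p, i)/p is an integer for 1 <= i <= p-1 and p prime, so // is exact
    s1 = a1 + b1 - sum(comb(p, i) // p * a0 ** i * b0 ** (p - i) for i in range(1, p))
    return (a0 + b0, s1)

def witt_mul(a, b):
    """(M_0, M_1): M_1 = X0^p*Y1 + X1*Y0^p + p*X1*Y1."""
    a0, a1 = a
    b0, b1 = b
    return (a0 * b0, a0 ** p * b1 + a1 * b0 ** p + p * a1 * b1)

a, b = (3, 5), (2, 7)  # arbitrary sample vectors
```

For these sample vectors, $\Phi_1(a \oplus b) = \Phi_1(a) + \Phi_1(b)$ and $\Phi_1(a \otimes b) = \Phi_1(a)\,\Phi_1(b)$, as the characterization requires.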
The canonical mappings$$R : W_{n+1}(A) \rightarrow W_n(A)$$$$R : (a_0,\ldots,a_n) \mapsto (a_0,\ldots,a_{n-1})$$$$T : W_n(A) \rightarrow W_{n+1}(A)$$$$T : (a_0,\ldots,a_{n-1}) \mapsto (0,a_0,\ldots,a_{n-1})$$are homomorphisms. The rule $A \to W(A)$ (or $A \to W_n(A)$) defines a covariant functor from the category of commutative rings with unit element into the category of rings. This functor may be represented by the ring of polynomials $\mathbf{Z}[X_0,X_1,\ldots]$ (or $\mathbf{Z}[X_0,X_1,\ldots,X_n]$) on which the structure of a ring object has been defined. The spectrum $\mathrm{Spec}\mathbf{Z}[X_0,X_1,\ldots]$ (or $\mathrm{Spec}\mathbf{Z}[X_0,X_1,\ldots,X_n]$) is known as a Witt scheme (or a truncated Witt scheme) and is a ring scheme [3]. Each element $a \in A$ defines a Witt vector $$ a^T = (a,0,0,\ldots) \in W(A) $$ called the Teichmüller representative of the element $a$. If $A = k$ is a perfect field of characteristic $p>0$, then $W(k)$ is a complete discrete valuation ring of zero characteristic with field of residues $k$ and maximal ideal $pW(k)$. Each element $w \in W(k)$ can be uniquely represented as $$ w = w_0^T + pw_1^T + p^2 w_2^T + \cdots $$ where $w_i \in k$. Conversely, each such ring $A$ with field of residues $k = A/(p)$ is canonically isomorphic to the ring $W(k)$. The Teichmüller representation makes it possible to construct a canonical multiplicative homomorphism $k \to W(k)$, splitting the mapping $W(k) \to W(k)/(p)$. If $k = \mathbf{F}_p$ is the prime field of $p$ elements, $W(k)$ is the ring of integral $p$-adic numbers $\mathbf{Z}_p$. References [1] E. Witt, "Zyklische Körper und Algebren der characteristik $p$ vom Grad $p^n$. Struktur diskret bewerteter perfekter Körper mit vollkommenem Restklassen-körper der Charakteristik $p$" J. Reine Angew. Math. , 176 (1936) pp. 126–140 Zbl 0016.05101 [2] S. Lang, "Algebra" , Addison-Wesley (1974) MR0783636 Zbl 0712.00001 [3] D. 
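The statement that $W(\mathbf{F}_p) \cong \mathbf{Z}_p$ can be made concrete for truncated vectors: $W_2(\mathbf{F}_2)$ is isomorphic to $\mathbf{Z}/4$, with $S_1$ playing the role of the carry digit. A minimal Python sketch (the function name `value` is mine; it implements the map $(a_0,a_1) \mapsto a_0^T + 2\,a_1^T$; note that $-a_0 b_0 \equiv a_0 b_0 \bmod 2$, so the sign in $S_1$ disappears):

```python
# Length-2 truncated Witt vectors over F_2, added with S_0, S_1 reduced mod 2.

def add_w2f2(a, b):
    """Witt addition in W_2(F_2): S0 = a0 + b0, S1 = a1 + b1 + a0*b0 (mod 2)."""
    a0, a1 = a
    b0, b1 = b
    return ((a0 + b0) % 2, (a1 + b1 + a0 * b0) % 2)

def value(a):
    """The isomorphism W_2(F_2) -> Z/4: (a0, a1) |-> a0 + 2*a1 (Teichmueller digits)."""
    return (a[0] + 2 * a[1]) % 4

elements = [(0, 0), (1, 0), (0, 1), (1, 1)]  # all of W_2(F_2)
```

Checking all sixteen pairs confirms that `value` is an additive isomorphism onto $\mathbf{Z}/4$; the term $a_0 b_0$ in $S_1$ is exactly the binary carry.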
Mumford, "Lectures on curves on an algebraic surface" , Princeton Univ. Press (1966) MR0209285 Zbl 0187.42701 [4] J.-P. Serre, "Groupes algébrique et corps des classes" , Hermann (1959) MR0103191 [5] M. Demazure, P. Gabriel, "Groupes algébriques" , 1 , North-Holland (1971) MR1611211 MR0302656 MR0284446 Zbl 0223.14009 Zbl 0203.23401 Zbl 0134.16503 [6] J. Dieudonné, "Groupes de Lie et hyperalgèbres de Lie sur un corps de charactéristique $p$ VII" Math. Ann. , 134 (1957) pp. 114–133 DOI 10.1007/BF01342790 Zbl 0086.02605 Comments There is a generalization of the construction above which works for all primes $p$ simultaneously, [a3]: a functor $W : \mathsf{Ring} \to \mathsf{Ring}$ called the big Witt vector. Here, $\mathsf{Ring}$ is the category of commutative, associative rings with unit element. The functor described above, of Witt vectors of infinite length associated to the prime $p$, is a quotient of $W$ which can be conveniently denoted by $W_{p^\infty}$. For each $n \in \{1,2,\ldots\}$, let $w_n(X)$ be the polynomial $$ w_n(X) = \sum_{d | n} d X_d^{n/d} \ . $$ Then there is the following characterization theorem for the Witt vectors. There is a unique functor $W : \mathsf{Ring} \to \mathsf{Ring}$ satisfying the following properties: 1) as a functor $W : \mathsf{Ring} \to \mathsf{Set}$, $W(A) = \{(a_1,a_2,\ldots) : a_i \in A\}$ and $W(\phi)((a_1,a_2,\ldots)) = (\phi(a_1),\phi(a_2),\ldots)$ for any ring homomorphism $\phi : A \to B$; 2) $w_{n,A} : W(A) \to A$, $w_{n,A} : (a_1,a_2,\ldots) \mapsto w_n(a_1,a_2,\ldots)$ is a functorial homomorphism of rings for every $n$ and $A$. The functor $W$ admits functorial ring endomorphisms $\mathbf{f}_n : W \to W$, for every $n \in \{1,2,\ldots\}$, that are uniquely characterized by $w_n \mathbf{f}_m = w_{nm}$ for all $m,n \in \{1,2,\ldots\}$. Finally, there is a functorial homomorphism $\Delta : W({-}) \to W(W({-}))$ that is uniquely characterized by the property $w_{n,W(A)} \Delta_A = \mathbf{f}_{n,A}$ for all $n$, $A$.
To construct $W(A)$, define polynomials $\Sigma_n$; $\Pi_n$; $r_n$ for $n \in \{1,2,\ldots\}$ by the requirements $$ w_n(\Sigma_1,\ldots,\Sigma_n) = w_n(X) + w_n(Y) \ ; $$ $$ w_n(\Pi_1,\ldots,\Pi_n) = w_n(X) \cdot w_n(Y) \ ; $$ $$ w_n(r_1,\ldots,r_n) = - w_n(X) \ . $$ The $\Sigma_n$ and $\Pi_n$ are polynomials in $X_1,\ldots,X_n$; $Y_1,\ldots,Y_n$ and the $r_n$ are polynomials in the $X_1,\ldots,X_n$, and they all have integer coefficients. Now $W(A)$ is defined as the set $W(A) = \{ a = (a_1,a_2,\ldots) : a_i \in A \}$ with operations : $$ (a_1,a_2,\ldots) + (b_1,b_2,\ldots) = (\Sigma_1(a,b), \Sigma_2(a,b), \ldots) \ ; $$ $$ (a_1,a_2,\ldots) \cdot (b_1,b_2,\ldots) = (\Pi_1(a,b), \Pi_2(a,b), \ldots) \ ; $$ $$ - (a_1,a_2,\ldots) = (r_1(a), r_2(a), \ldots) \ . $$ The zero of $W(A)$ is $(0,0,0,\ldots)$ and the unit element is $(1,0,0,\ldots)$. The Frobenius endomorphisms $\mathbf{f}_n$ and the Artin–Hasse exponential $\Delta$ are constructed by means of similar considerations, i.e. they are also given by certain universal polynomials. In addition there are the Verschiebung morphisms $\mathbf{V}_n : W({-}) \to W({-})$, which are characterized by $$ w_n \mathbf{V}_m = \begin{cases} 0 & \text{if}\, m \not\mid n \\ m w_{n/m} & \text{if}\, m \mid n \end{cases} \ . $$ The $\mathbf{V}_n$ are group endomorphisms of $W(A)$ but not ring endomorphisms. The ideals $I_n = \{(0,\ldots,0,a_{n+1},a_{n+2},\ldots)\}$ define a topology on $W(A)$ making it a separated complete topological ring. For each $A \in \mathsf{Ring}$, let $\Lambda(A)$ be the Abelian group $1 + t A[[t]]$ under multiplication of power series; $$ \bar E : W(A) \rightarrow \Lambda(A) $$ $$ \bar E : (a_1,a_2,\ldots) \mapsto \prod_{i=1}^\infty \left({ 1 - a_i t^i }\right) $$ defines a functorial isomorphism of Abelian groups, and using the isomorphism $\bar E$ there is a commutative ring structure on $\Lambda(A)$.
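A convenient way to connect the ghost components $w_n$ with $\bar E$ is the standard generating-function identity $-t\,f'(t)/f(t) = \sum_{n \ge 1} w_n(a)\,t^n$ for $f = \bar E(a)$, which follows by logarithmic differentiation of $\prod_i (1 - a_i t^i)$ term by term. This is not stated above, but it can be checked with truncated power series in a few lines of Python (helper names are mine; all coefficients are exact integers):

```python
def ghost(a, N):
    """Ghost components w_n(a) = sum_{d|n} d * a_d^(n/d) for n = 1..N."""
    return [sum(d * a[d - 1] ** (n // d) for d in range(1, n + 1) if n % d == 0)
            for n in range(1, N + 1)]

def series_E(a, N):
    """Coefficients of E-bar(a) = prod_i (1 - a_i t^i), truncated at t^N."""
    f = [1] + [0] * N
    for i in range(1, N + 1):
        g = f[:]
        for k in range(i, N + 1):
            g[k] -= a[i - 1] * f[k - i]   # multiply f by (1 - a_i t^i)
        f = g
    return f

def log_deriv(f, N):
    """Coefficients h_1..h_N of -t*f'(t)/f(t), assuming f[0] == 1."""
    h = [0] * (N + 1)
    for n in range(1, N + 1):
        # from f * h = -t*f': h_n = -n*f_n - sum_{j=1}^{n-1} f_j h_{n-j}
        h[n] = -n * f[n] - sum(f[j] * h[n - j] for j in range(1, n))
    return h[1:]

a = [2, 3, 1]  # a sample Witt vector of length 3
```

For the sample vector, both routes give $w_1 = 2$, $w_2 = a_1^2 + 2a_2 = 10$, $w_3 = a_1^3 + 3a_3 = 11$.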
Using $\bar E$ the Artin–Hasse exponential $\Delta$ defines a functorial homomorphism of rings $W(A) \to \Lambda(W(A))$ making $W(A)$ a functorial special $\lambda$-ring. The Artin–Hasse exponential $\Delta : W \to W \circ W$ defines a cotriple structure on $W$ and the co-algebras for this cotriple are precisely the special $\lambda$-rings (cf. also Category and Triple). On $\Lambda(A)$ the Frobenius and Verschiebung endomorphisms satisfy $$ \mathbf{f}_n (1 - at) = 1 - a^n t $$ $$ \mathbf{V}_n(f(t)) = f(t^n) $$ and are completely determined by this (plus functoriality and additivity in the case of $\mathbf{f}_n$). For each supernatural number $\mathbf{n} = (\alpha_p)$, $\alpha_p \in \{0,1,2,\ldots\} \cup \{\infty\}$, $p$ prime, one defines $N(\mathbf{n}) = \{ n \in \{1,2,\ldots\} : \nu_p(n) \le \alpha_p \}$, where $\nu_p(n)$ is the $p$-adic valuation of $n$, i.e. the number of prime factors $p$ in $n$. Let $$ \mathfrak{a}_{\mathbf{n}}(A) = \{ (a_1,a_2,\ldots) : a_i \in A \,,\, a_d = 0 \,\text{for all}\, d \in N(\mathbf{n}) \} \ . $$ Then $\mathfrak{a}_{\mathbf{n}}(A)$ is an ideal in $W(A)$ and for each supernatural $\mathbf{n}$ a corresponding ring of Witt vectors is defined by $$ W_{\mathbf{n}}(A) = W(A) / \mathfrak{a}_{\mathbf{n}}(A) \ . $$ In particular, one thus finds $W_{p^\infty}$, the ring of infinite-length Witt vectors for the prime $p$, discussed in the main article above, as a quotient of the ring of big Witt vectors $W(A)$. The Artin–Hasse exponential $\Delta : W \to W \circ W$ is compatible in a certain sense with the formation of these quotients, and using also the isomorphism $\bar E$ one thus finds a mapping $$ \mathbf{Z}_p = W_{p^\infty}(\mathbf{F}_p) \to \Lambda(W_{p^\infty}(\mathbf{F}_p)) = \Lambda(\mathbf{Z}_p) $$ where $\mathbf{Z}_p$ denotes the $p$-adic integers and $\mathbf{F}_p$ the field of $p$ elements, which can be identified with the classical morphism defined by Artin and Hasse [a1], [a2], [a3].
As an Abelian group, $\Lambda(A)$ is isomorphic to the group of curves in the one-dimensional multiplicative formal group. In this way there is a Witt-vector-like Abelian-group-valued functor associated to every one-dimensional formal group. For special cases, such as the Lubin–Tate formal groups, this gives rise to ring-valued functors called ramified Witt vectors, [a3], [a4]. The Cartier ring of $A$ is the ring of all formal expressions in the symbols $\mathbf{V}_m$ and $\mathbf{f}_n$ with coefficients in $A$, subject to suitable calculation rules. Commutative formal groups over $A$ are classified by certain modules over the Cartier ring. In case $A$ is a $\mathbf{Z}_{(p)}$-algebra, a simpler ring can be used for this purpose: it consists of the analogous expressions in which the indices run only over the powers of the prime $p$, with the analogous calculation rules. In case $k$ is a perfect field of characteristic $p$ and $\sigma$ denotes the Frobenius endomorphism (in this case given by raising to the $p$-th power), the corresponding ring can be described as the ring of all expressions in two symbols $\mathbf{F}$ and $\mathbf{V}$ with coefficients in $W(k)$, with the extra condition $\mathbf{F}\mathbf{V} = \mathbf{V}\mathbf{F} = p$ and calculation rules in which the coefficients are twisted by $\sigma$. This ring is known as the Dieudonné ring, and certain modules (called Dieudonné modules) over it classify unipotent commutative affine group schemes over $k$, cf. [a5]. References [a1] E. Artin, H. Hasse, "Die beiden Ergänzungssätze zum Reciprozitätsgesetz der $\ell$-ten Potenzreste im Körper der $\ell$-ten Einheitswurzeln" Abh. Math. Sem. Univ. Hamburg , 6 (1928) pp. 146–162 [a2] G. Whaples, "Generalized local class field theory III: Second form of the existence theorem, structure of analytic groups" Duke Math. J. , 21 (1954) pp. 575–581 MR73645 [a3] M. Hazewinkel, "Twisted Lubin–Tate formal group laws, ramified Witt vectors and (ramified) Artin–Hasse exponentials" Trans. Amer. Math. Soc. , 259 (1980) pp. 47–63 MR0561822 Zbl 0437.13014 [a4] M. Hazewinkel, "Formal group laws and applications" , Acad. Press (1978) MR506881 [a5] M. Demazure, P.
Gabriel, "Groupes algébriques" , 1 , North-Holland (1971) MR1611211 MR0302656 MR0284446 Zbl 0223.14009 Zbl 0203.23401 Zbl 0134.16503 How to Cite This Entry: Witt vector. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Witt_vector&oldid=37687
TensorFlow 1 version. Solves one or more linear least-squares problems. Aliases: tf.compat.v1.linalg.lstsq, tf.compat.v1.matrix_solve_ls, tf.compat.v2.linalg.lstsq

tf.linalg.lstsq(
    matrix, rhs, l2_regularizer=0.0, fast=True, name=None
)

matrix is a tensor of shape [..., M, N] whose inner-most 2 dimensions form M-by-N matrices. rhs is a tensor of shape [..., M, K] whose inner-most 2 dimensions form M-by-K matrices. The computed output is a Tensor of shape [..., N, K] whose inner-most 2 dimensions form N-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least-squares sense. Below we will use the following notation for each pair of matrix and right-hand sides in the batch: matrix=\(A \in \Re^{m \times n}\), rhs=\(B \in \Re^{m \times k}\), output=\(X \in \Re^{n \times k}\), l2_regularizer=\(\lambda\). If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 + \lambda ||Z||_F^2\). If \(m \lt n\) then output is computed as \(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2\), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\), or \(\lambda\) is sufficiently large. If fast is False, an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If fast is False then l2_regularizer is ignored.

Args:
matrix: Tensor of shape [..., M, N].
rhs: Tensor of shape [..., M, K].
l2_regularizer: 0-D double Tensor. Ignored if fast=False.
fast: bool. Defaults to True.
name: string, optional name of the operation.

Returns:
output: Tensor of shape [..., N, K] whose inner-most 2 dimensions form N-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least-squares sense.

Raises:
NotImplementedError: linalg.lstsq is currently disabled for complex128 and l2_regularizer != 0 due to poor accuracy.
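For intuition, the fast path for \(m \ge n\) is just the regularized normal equations. Here is a rough plain-Python illustration (not TensorFlow code; the helper names `solve` and `lstsq_fast` are mine, there is no batching, and Gauss-Jordan elimination stands in for the Cholesky factorization):

```python
def solve(M, B):
    """Solve M X = B by Gauss-Jordan elimination with partial pivoting."""
    n, k = len(M), len(B[0])
    A = [row_m[:] + row_b[:] for row_m, row_b in zip(M, B)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                for c in range(col, n + k):
                    A[r][c] -= f * A[col][c]
    return [[A[r][n + j] / A[r][r] for j in range(k)] for r in range(n)]

def lstsq_fast(A, B, l2_regularizer=0.0):
    """Sketch of the m >= n fast path: X = (A^T A + lambda*I)^{-1} A^T B."""
    m, n = len(A), len(A[0])
    At = [[A[i][j] for i in range(m)] for j in range(n)]
    AtA = [[sum(At[i][t] * A[t][j] for t in range(m))
            + (l2_regularizer if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    AtB = [[sum(At[i][t] * B[t][j] for t in range(m)) for j in range(len(B[0]))]
           for i in range(n)]
    return solve(AtA, AtB)

# Overdetermined 3x2 system with an exact solution [1, 1]
X = lstsq_fast([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [[1.0], [1.0], [2.0]])
```

With \(\lambda = 0\) and a consistent system, the normal-equations solution reproduces the exact answer; increasing `l2_regularizer` shrinks it toward zero.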
How to Use the Beam Envelopes Method for Wave Optics Simulations In the wave optics field, it is difficult to simulate large optical systems in a way that rigorously solves Maxwell's equations. This is because the waves that appear in the system need to be resolved by a sufficiently fine mesh. The beam envelopes method in the COMSOL Multiphysics® software is one option for this purpose. In this blog post, we discuss how to use the Electromagnetic Waves, Beam Envelopes interface and handle its restrictions. Comparing Methods for Solving Large Wave Optics Models In electromagnetic simulations, the wavelength always needs to be resolved by the mesh in order to find an accurate solution of Maxwell's equations. This requirement makes it difficult to simulate models that are large compared to the wavelength. There are several methods for stationary wave optics problems that can handle large models. These methods include the so-called diffraction formulas, such as the Fraunhofer, Fresnel-Kirchhoff, and Rayleigh-Sommerfeld diffraction formulas, and the beam propagation methods (BPM), such as paraxial BPM and the angular spectrum method (Ref. 1). Most of these methods use certain approximations to the Helmholtz equation. They can handle large models because they are based on a propagation approach that solves for the field in one plane from a known field in another plane; you don't have to mesh the entire domain, you just need a 2D mesh for the desired plane. Compared to these methods, the Electromagnetic Waves, Beam Envelopes interface in COMSOL Multiphysics (which we will refer to as the Beam Envelopes interface for the rest of the blog post) solves for the exact solution of the Helmholtz equation in a domain. It can handle large models; i.e., the meshing requirement can be significantly relaxed if a certain restriction is satisfied. A beam envelopes simulation for a lens with a millimeter-range focal length for a 1-um wavelength beam.
We discuss the Beam Envelopes interface in more detail below. Theory Behind the Beam Envelopes Interface Let's take a look at the math that the Beam Envelopes interface computes "under the hood". If you add this interface to a model, click the Physics Interface node, and change Type of phase specification to User defined, you'll see the following in the Equation section: Here, \bf E1 is the dependent variable that the interface solves for, called the envelope function. In the phasor representation of a field, \bf E1 corresponds to the amplitude and \phi_1 to the phase, i.e., The first equation, the governing equation for the Beam Envelopes interface, can be derived by substituting the second definition of the electric field into the Helmholtz equation. If we know \phi_1, the only unknown is \bf E1 and we can solve for it. The phase, \phi_1, needs to be given a priori in order to solve the problem. With the second equation, we assume a form such that the fast oscillation part, the phase, can be factored out from the field. If that's true, the envelope \bf E1 is "slowly varying", so we don't need to resolve the wavelength. Instead, we only need to resolve the slow wave of the envelope. Because of this, simulating large-scale wave optics problems is possible on personal computers. A common question is: "When do you want the envelope rather than the field itself?" Lens simulation is one example. Sometimes you may need the intensity rather than the complex electric field. Actually, the square of the norm of the envelope gives the intensity. In such cases, it suffices to get the envelope function. What Happens If the Phase Function Is Not Accurately Known? The math behind the beam envelopes method introduces more questions: What if the phase is not accurately known? Can we use the Beam Envelopes interface in such cases? Are the results correct? To answer these questions, we need to do a little more math.
1D Example Let’s take the simplest test case: a plane wave, Ez = \exp(-i k_0 x), where k_0 = 2\pi / \lambda_0 for wavelength \lambda_0 = 1 um, it propagates in a rectangular domain of 20 um length. (We intentionally use a short domain for illustrative purposes.) The out-of-plane wave enters from the left boundary and transmits the right boundary without reflection. This can be simulated in the Beam Envelopes interface by adding a Matched boundary condition with excitation on the left and without excitation on the right, while adding a Perfect Magnetic Conductor boundary condition on the top and bottom (meaning we don’t care about the y direction). The correct setting for the phase specification is shown in the figure below. We have the answer Ez = \exp(-i k_0 x), knowing that the correct phase function is k_0 x or the wave vector is (k_0,0) a priori. Substituting the phase function in the second equation, we inversely get E1z = 1, the constant function. How many mesh elements do we need to resolve a constant function? Only one! (See this previous blog post on high-frequency modeling.) The following results show the envelope function \bf E1 and the norm of \bf E, ewbe.normE, which is equal to |{\bf E1}|. Here, we can see that we get the correct envelope function if we give the exact phase function, constant one, for any number of meshes, as expected. For confirmation purposes, the phase of \bf E1z, arg(E1z), is also plotted. It is zero, also as expected. Now, let’s see what happens if our guess for the phase function is a little bit off — say, (0.95k_0,0) instead of the exact (k_0,0). What kind of solutions do we get? Let’s take a look: What we see here for the envelope function is the so-called beating. It’s obvious that everything depends on the mesh size. To understand what’s going on, we need a pencil, paper, and patience. We knew the answer was Ez = \exp(-i k_0 x), but we had “intentionally” given an incorrect estimate in the COMSOL® software. 
Substituting the wrong phase function in the second equation, we get \exp(-i k_0 x)={\bf E1z} \exp(-0.95i k_0 x). This results in {\bf E1z} = \exp(-0.05i k_0 x), which is no longer the constant one. This is a wave with a wavelength of \lambda_b= 2\pi/0.05k_0 = 20 um, which is called the beat wavelength. Let's take a look at the plot above for six mesh elements. We get exactly what is expected (red line), i.e., {\bf E1z} = \exp(-0.05i k_0 x). The plot automatically takes the real part, showing {\bf E1z} = \cos(-0.05 k_0 x). The plots for the lower resolutions still show an approximate solution of the envelope function. This is as expected for finite element simulations: a coarser mesh gives more approximate results. This shows that if we make a wrong guess for the phase function, we get a wrong (beat-convoluted) envelope function. Because of the wrong guess, the envelope function acquires the phase of the beating (green line), which is -0.05 k_0 x. What about the norm of \bf E? Look at the blue line in the plots above. It looks like the COMSOL Multiphysics software generated a correct solution for ewbe.normE, which is the constant one. Let's calculate: Substituting both the wrong (analytical) phase function and the wrong (beat-convoluted) envelope function in the second equation, we get {\bf Ez} = \exp(-0.05i k_0 x) \times \exp(-0.95i k_0 x) = \exp(-i k_0 x), which is the correct fast field! If we take the norm of \bf E, we get the correct solution, the constant one. This is what we wanted. Note that we can't display \bf E itself because the domain can be too large, but we can find \bf E analytically and display the norm of \bf E with a coarse mesh. This is not a trick. Instead, we see that if the phase function is off, the envelope function will also be off, since it becomes beat-convoluted. However, the norm of the electric field can still be correct.
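The 1D bookkeeping above is easy to reproduce numerically. A short Python sketch (variable names are mine) builds the beat-convoluted envelope for the wrong guess 0.95 k_0, then confirms that the reassembled field still has unit norm, equals the exact plane wave, and that the beat wavelength is 20 um:

```python
import cmath
import math

lam0 = 1.0                        # wavelength lambda_0 in um
k0 = 2 * math.pi / lam0
guess = 0.95 * k0                 # the intentionally wrong phase-function slope

def envelope(x):
    """Beat-convoluted envelope E1z = exp(-0.05i*k0*x) from the calculation above."""
    return cmath.exp(-1j * 0.05 * k0 * x)

def field(x):
    """Reassembled fast field Ez = E1z * exp(-i*guess*x); should equal exp(-i*k0*x)."""
    return envelope(x) * cmath.exp(-1j * guess * x)

beat_wavelength = 2 * math.pi / (0.05 * k0)   # = 20 um
```

The product of the two wrong factors recovers the right field exactly, which is the whole point of the argument in the text.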
Therefore, it is important that the beat-convoluted envelope function be correctly computed in order to get the correct electric field. The above plots clearly show that. The six-element mesh case gives the completely correct electric field norm because it fully resolves the beat-convoluted envelope function. The other meshes give an approximate solution to the beat-convoluted envelope function depending on the mesh size. They also do so for the field norm. This is a general consequence that holds true for arbitrary cases. No matter what phase function we use in COMSOL Multiphysics, we are okay as long as we correctly solve the first equation for \bf E1 and as long as the phase function is continuous over the domain. When there are multiple materials in a domain, the continuity of the phase function is also critical to the solution accuracy. We may discuss this in a future blog post, but it is also mentioned in this previous blog post on high-frequency modeling. 2D Example So far, we have discussed a scalar wave number. More generally, the phase function is specified by the wave vector. When the wave vector is not guessed correctly, it will have vector-valued consequences. Suppose we have the same plane wave from the first example, but we make a wrong guess for the phase, i.e., k_0(x \cos \theta + y \sin \theta) instead of k_0 x. In this case, the wave number is correct but the wave vector is off. This time, the beating takes place in 2D. Let's start by performing the same calculations as in the 1D example. We have \exp(-i k_0 x)= {\bf E1z}(x,y) \exp(-i k_0 (x \cos \theta+y \sin \theta) ) and the envelope function is now calculated to be {\bf E1z}(x,y) = \exp(-i k_0 (x (1-\cos \theta) -y \sin \theta) ), which is a tilted wave propagating in the direction (1-\cos \theta, -\sin \theta), with the beat wave number k_b = 2 k_0 \sin (\theta/2) and the beat wavelength \lambda_b=\lambda_0/(2\sin (\theta/2)).
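The 2D beat quantities can be checked the same way. A small Python sketch (variable names are mine; \lambda_0 = 1 um and θ = 15°, the values used below) evaluates the wrong-guess envelope wave vector and confirms the identity |k_0(1-\cos\theta, -\sin\theta)| = 2 k_0 \sin(\theta/2):

```python
import math

lam0 = 1.0                      # wavelength in um
k0 = 2 * math.pi / lam0
theta = math.radians(15)

# Envelope wave vector left over after the wrong guess k0*(x*cos(theta) + y*sin(theta)):
kb_vec = (k0 * (1 - math.cos(theta)), -k0 * math.sin(theta))
kb = math.hypot(*kb_vec)        # beat wave number, equals 2*k0*sin(theta/2)
lam_b = 2 * math.pi / kb        # beat wavelength, lam0 / (2*sin(theta/2))
```

For θ = 15° this gives a beat wavelength of about 3.83 um, so the beating is almost four times coarser than the optical wavelength.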
The following plots are the results for θ = 15° for a domain of 3.8637 um x 29.348 um for different max mesh sizes. The same boundary conditions are given as in the previous 1D example. The only difference is that the incident wave on the left boundary is {\bf E1z}(0,y) = \exp(i k_0 y \sin \theta). (Note that we have to give the correspondingly wrong boundary condition because our phase guess is wrong.) In the result for the finest mesh (rightmost), we can confirm that \bf E1z is computed just as we analyzed in the above calculation and the norm of \bf Ez is computed to be the constant one. These results are consistent with the 1D example case. The electric field norm (top) and the envelope function (bottom) for the wrong phase function k_0(x \cos\theta +y \sin\theta ), computed for different mesh sizes. The color range represents the values from -1 to 1. Simulating a Lens Using the Beam Envelopes Interface The ultimate goal here is to simulate an electromagnetic beam through optical lenses in a millimeter-scale domain with the Beam Envelopes interface. How can we achieve this? We already discussed how to compute the right solution. The following example is a simulation for a hard-apertured flat-top incident beam on a plano-convex lens with a radius of curvature of 500 um and a refractive index of 1.5 (approximately 1 mm focal length). Here, we use \phi_1 = k_0 x, which is not accurate at all. In the region before the lens, there is a reflection, which creates an interference. In the lens, there are multiple reflections. After the lens, the phase is spherical so that the beam focuses into a spot. So this phase function is far different from what is happening around the lens. Still, we have a clue. If we plot \bf E1z, we see the beating. Plot of \bf E1z. The inset shows the finest beat wavelength inside the lens. As can be seen in the plot, a prominent beating occurs in the lens (see the inset).
Actually, the finest beat wavelength is \lambda_0/2, in front of the lens. To see this, we can perform the same calculations as in the previous examples. This finest beating is due to the interference between the incident beam and the reflected beam, but we can ignore it because it doesn't contribute to the forward propagation. We can see that the mesh doesn't resolve the beating before the lens, but let's ignore this for now. The beat wavelength in the lens is 2\lambda_0/5 for the backward beam and 2\lambda_0 for the forward beam for n = 1.5, which we can also show in the same way as in the previous examples. Again, we ignore the backward beam. In the plot, what's visible is the 2\lambda_0 beating of the forward beam. (The backward beam carries only a small fraction, approximately 4% for n = 1.5, of the incident beam, so it's not visible.) The following figure shows the mesh resolving the beat inside the lens with 10 mesh elements. The beat wavelength inside the lens. The mesh resolves the beat with 10 mesh elements. Apart from the beating of the propagating beam in the lens, the beat wavelength in the subsequent air domain is fairly long, so we can use a coarse mesh there. This may not hold for faster lenses, which have a more rapid quadratic phase and can therefore have a very short beat wavelength. In this example, we need a finer mesh only in the lens domain to resolve the fastest beating. The computed field norm is shown at the top of this blog post. To verify the result, we can compute the field at the lens exit surface by using the Frequency Domain interface and then use the Fresnel diffraction formula to calculate the field at the focus. The result for the field norm agrees very well. Comparison between the Beam Envelopes interface and the Fresnel diffraction formula. The mesh resolves the beat inside the lens with 10 mesh elements. The following comparison shows the mesh size dependence.
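Before looking at the mesh-size comparison, the beat wavelengths around the lens can be sanity-checked with simple wave-number arithmetic. In the sketch below (my own code, with assumed values λ0 = 1 and n = 1.5), each wave's envelope beats at the difference between its true propagation constant and the guessed one; by this counting the backward-beam value works out to λ0/(n+1) = 2λ0/5, and the backward fraction to the ~4% Fresnel reflectance:

```python
import numpy as np

lam0 = 1.0   # vacuum wavelength (assumed value)
n = 1.5      # lens refractive index
k0 = 2 * np.pi / lam0

# Phase guess is k0*x everywhere; each wave's envelope beats at the
# difference between its true propagation constant and the guess:
kb_front = abs(-k0 - k0)      # reflected wave in air        -> 2 k0
kb_fwd   = abs(n * k0 - k0)   # forward wave in the lens     -> (n - 1) k0
kb_back  = abs(-n * k0 - k0)  # backward wave in the lens    -> (n + 1) k0

assert np.isclose(2 * np.pi / kb_front, lam0 / 2)     # lambda0/2 in front
assert np.isclose(2 * np.pi / kb_fwd, 2 * lam0)       # 2*lambda0, forward
assert np.isclose(2 * np.pi / kb_back, 2 * lam0 / 5)  # lambda0/(n+1), backward

# Normal-incidence Fresnel power reflectance at the air/glass interface
R = ((n - 1) / (n + 1)) ** 2
assert np.isclose(R, 0.04)    # ~4% of the incident beam
```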
We get a pretty good result with our standard recommendation of \lambda_b/6, which here equals \lambda_0/3. This makes the lens domain easy to mesh. Mesh size dependence of the field norm at the focus. As of version 5.3a of the COMSOL® software, the Fresnel Lens tutorial model includes a computation with the Beam Envelopes interface. Fresnel lenses are typically extremely thin (on the order of a wavelength). Even though there is diffraction in and around the lens surface discontinuities, the fine mesh around the lens does not significantly increase the total number of mesh elements. Concluding Remarks In this blog post, we discussed what the Beam Envelopes interface does "under the hood" and how we can get accurate solutions for wave optics problems. Even if we get beating, the beat wavelength can be much longer than the wavelength, which makes it possible to simulate large optical systems. Although it may seem tedious to check the mesh size against the beating, this is not extra work required only for the Beam Envelopes interface: whenever you use the finite element method, you need to check the mesh size dependence to get accurately computed solutions. Next Steps Try it yourself: Download the file for the millimeter-range focal length lens by clicking the button below. References J. Goodman, Introduction to Fourier Optics, Roberts and Company Publishers, 2005.
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Fourier transforms are quite useful in solving differential equations. By decomposing functions with the Fourier transform, we can often simplify a differential equation. Suppose we have a differential equation To solve the equation, we decompose \(f(x)\) using its Fourier transform, Then we get The equation is then simplified into Note To summarize, we simply replace the differential operators. Similar to the Fourier transform, the Laplace transform is also useful in equation solving. The Laplace transform takes a function of \(t\), e.g. \(f(t)\), to a function of \(s\), Some useful properties: \(\mathscr{L}[\frac{d}{dt}f(t)] = s \mathscr{L}[f(t)] - f(0)\); \(\mathscr{L}[\frac{d^2}{dt^2}f(t)] = s^2 \mathscr{L}[f(t)] - s f(0) - \frac{d f(0)}{dt}\); \(\mathscr{L}[\int_0^t f(\tau) d\tau ] = \frac{\mathscr{L}[f(t)]}{s}\); \(\mathscr{L}[f(\alpha t)](s) = \frac{1}{\alpha} \mathscr{L}[f(t)](s/\alpha)\); \(\mathscr{L}[e^{at}f(t)](s) = \mathscr{L}[f(t)](s-a)\); \(\mathscr{L}[tf(t)] = - \frac{d}{ds} \mathscr{L}[f(t)]\). Some useful results: \(\mathscr{L}[1] = \frac{1}{s}\); \(\mathscr{L}[\delta] = 1\); \(\mathscr{L}[\delta^{(k)}] = s^k\); \(\mathscr{L}[t] = \frac{1}{s^2}\); \(\mathscr{L}[t^n] = \frac{n!}{s^{n+1}}\); \(\mathscr{L}[e^{at}]= \frac{1}{s-a}\). A very nice property of the Laplace transform is which is very useful when dealing with master equations. Two useful results are and where \(I_0(2Ft)\) is the modified Bessel function of the first kind and \(J_0(2Ft)\) is the ordinary Bessel function of the first kind. Using the property above, we can find out Example: Solving Differential Equations For a first-order differential equation we apply the Laplace transform, from which we solve Then, looking it up in the transform table, we find that The geometrical meaning of the Legendre transformation in thermodynamics can be illustrated by the following graph. In the above example, we know that entropy \(S\) is actually a function of temperature \(T\).
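As an aside, the Laplace-transform table above can be spot-checked with sympy. This is a minimal sketch (symbol names are my own):

```python
from sympy import symbols, laplace_transform, exp, factorial, simplify

t, s, a = symbols('t s a', positive=True)

# L[t^n] = n!/s^(n+1): check the first few entries of the table
for n in range(4):
    F = laplace_transform(t**n, t, s, noconds=True)
    assert simplify(F - factorial(n) / s**(n + 1)) == 0

# Frequency shift L[e^{a t} f(t)](s) = F(s - a), with f(t) = t, F(s) = 1/s^2
F_shift = laplace_transform(exp(a * t) * t, t, s, noconds=True)
assert simplify(F_shift - 1 / (s - a)**2) == 0
```

The same pattern works for the other entries of the table.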
For simplicity, we assume that they are monotonically related, as in the graph above. When we talk about the quantity \(T \mathrm d S\), we actually mean the area shaded with blue grid lines, while the area shaded with orange lines means \(S \mathrm d T\). Let's think about the change in internal energy. For this example, we only consider the thermal part, The internal energy change is equal to the area shaded with blue lines. The area shaded with orange lines is the Helmholtz free energy, The two quantities \(T \mathrm d S\) and \(S \mathrm d T\) sum up to \(\mathrm d(TS)\). This is also the change in area of the rectangle determined by the two edges \(0\) to \(T\) and \(0\) to \(S\). This is a Legendre transform, or The point is that \(S(T)\) is a function of \(T\). However, if we know the blue area, we can find out the orange area. This means that the two functions \(A(T)\) and \(U(S)\) form a pair: choosing one of them for a specific calculation is a matter of convenience, since either one carries all the information once the relation between \(T\) and \(S\) is known. The above example sheds light on the Legendre transform. The mathematical form is a little bit tricky, so we will illustrate it using an example. For a function \(U(T, X)\), we find its differential as For convenience, we define The differential of the function becomes where \(S\) (\(Y\)) and \(T\) (\(X\)) form a conjugate pair. A Legendre transform changes the variable of the differential from \(T\) (\(X\)) to \(S\) (\(Y\)). For example, we know that Plugging this into \(\mathrm d U\), we get The left-hand side is defined as a new differential In these calculations, \(U\) is the internal energy and \(A\) is the Helmholtz free energy. The transform that changes the variable from \(X\) to \(Y\) gives us the enthalpy \(H\); if we transform both variables, we get the Gibbs free energy \(G\). More about these thermodynamic potentials will be discussed in the following chapters. Zia, R. K. P., E. F. Redish, and S. R. McKay, "Making sense of the Legendre transform," American Journal of Physics 77, 614 (2009).
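As a small worked example of the Legendre transform just described, the sympy sketch below uses the toy choice U(S) = S²/2 (an assumption for illustration only, not a physical model), for which T = dU/dS = S and A(T) = U − TS = −T²/2:

```python
from sympy import symbols, diff, solve, simplify

# Toy model (assumed for illustration): U(S) = S**2 / 2, so T = dU/dS = S.
S, T = symbols('S T', positive=True)
U = S**2 / 2

T_of_S = diff(U, S)                 # temperature as a function of entropy
S_of_T = solve(T_of_S - T, S)[0]    # invert to get S(T); here simply S = T

# Legendre transform: A(T) = U - T*S, expressed in the new variable T
A = simplify(U.subs(S, S_of_T) - T * S_of_T)
assert A == -T**2 / 2

# Consistency check: S = -dA/dT recovers the conjugate relation
assert simplify(-diff(A, T) - S_of_T) == 0
```

Either potential carries the full information: differentiating A(T) gives back S(T), just as differentiating U(S) gives T(S).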
Sandbox From AbInitio Revision as of 03:37, 9 January 2008 This is a "sandbox" page where you can practice editing. Feel free to scribble any nonsense you want. (The contents of this page may be deleted at any time.) Please only scribble below this line, however. Scribble below You can put text in '''bold''', ''italics'', etcetera (see also the buttons at the top of the editing window). You can insert a link by using <nowiki>[[double brackets]]</nowiki> like [[MPB]] or <nowiki>[[double brackets|name]]</nowiki> like [[MIT Photonic Bands|this]]. <math>a \simeq b</math> MediaWiki allows you to enter LaTeX equations using the <math> tag.
For example: "\\copyright" [Print "©"]; "\\lang" [Print "〈"]; "\\rang" [Print "〉"]; "\\lceil" [Print "⌈"]; "\\rceil" [Print "⌉"]; "\\lfloor" [Print "⌊"]; "\\rfloor" [Print "⌋"]; "\\le" [Print "≤"]; "\\leq" [Print "≤"]; "\\ge" [Print "≥"]; "\\geq" [Print "≥"]; "\\neq" [Print "≠"]; "\\approx" [Print "≈"]; "\\cong" [Print "≅"]; "\\equiv" [Print "≡"]; "\\propto" [Print "∝"]; "\\subset" [Print "⊂"]; "\\subseteq" [Print "⊆"]; "\\supset" [Print "⊃"]; "\\supseteq" [Print "⊇"]; "\\ang" [Print "∠"]; "\\perp" [Print "⊥"]; "\\therefore" [Print "∴"]; "\\bigcirc" [Print "◯"]; "\\sim" [Print "∼"]; "\\times" [Print "×"]; "\\ast" [Print "∗"]; "\\otimes" [Print "⊗"]; "\\oplus" [Print "⊕"]; "\\lozenge" [Print "◊"]; "\\diamond" [Print "◊"]; "\\neg" [Print "¬"]; "\\pm" [Print "±"]; "\\dagger" [Print "†"]; "\\ne" [Print "≠"]; "\\in" [Print "∈"]; "\\notin" [Print "∉"]; "\\ni" [Print "∋"]; "\\forall" [Print "∀"]; "\\exists" [Print "∃"]; "\\Re" [Print "ℜ"]; "\\Im" [Print "ℑ"]; "\\aleph" [Print "ℵ"]; "\\wp" [Print "℘"]; "\\emptyset" [Print "∅"]; "\\nabla" [Print "∇"]; "\\rightarrow" [Print "→"]; "\\to" [Print "→"]; "\\longrightarrow" [Print "→"]; "\\Rightarrow" [Print "⇒"]; "\\leftarrow" [Print "←"]; "\\longleftarrow" [Print "←"]; "\\Leftarrow" [Print "⇐"]; "\\leftrightarrow" [Print "↔"]; "\\sum" [Print "∑"]; "\\prod" [Print "∏"]; "\\int" [Print "∫"]; "\\partial" [Print "∂"]; "\\vee" [Print "∨"]; "\\lor" [Print "∨"]; "\\wedge" [Print "∧"]; "\\land" [Print "∧"]; "\\cup" [Print "∪"]; "\\infty" [Print "∞"]; "\\mapsto" [Print " |->"]; "\\sqrt" [Print "√("; Print_arg; Print ")"]; "\\frac" [Print "("; Print_arg; Print ")/("; Print_arg; Print ")"]; "\\Vert" [Print "||"]; "\\circ" [Print "o"]; "\\^circ" [Print "°"]; "\\tm" [Print "™"]; "\\simeq" [Print "≅"]; "\\cdot" [Print "⋅"]; "\\cdots" [Print "⋅⋅⋅"]; "\\varepsilon" [Print "ɛ"]; "\\vartheta" [Print "ϑ"];
I have been given the following definition: $\rule{17cm}{0.4pt}$ Let $\{a_n\}$ be a sequence in $\mathbb{R}$. The series: $$\sum_{n=0}^\infty a_n$$ is $\textbf{convergent}$ if the sequence $\{s_m\}$ of the $\textbf{partial sums}$ $$s_m=\sum_{n=0}^m a_n$$ converges, that is, for all $\varepsilon >0$ there exists $N=N(\varepsilon)\in \mathbb{N},$ such that $$\left| s_m-s_k\right| = \left| \sum_{n=k+1}^m a_n \right| <\varepsilon$$ for all $m>k\geq N$. $\{a_n\}$ $\textbf{converges absolutely}$ if the series: $$\sum_{n=0}^\infty |a_n|$$ converges. $\rule{17cm}{0.4pt}$ I was a bit confused about what is meant by partial sums, and why, if the sequence of partial sums converges, we know that $\sum a_n$ converges. I am thinking that for any finite $m$, $s_m$ would be a subsequence of $a_n$, and I think I am right in saying that if a subsequence is convergent then the sequence must also be convergent. This is a complete guess though. If someone could let me know if I am on the right track in understanding this, that'd be great. Thanks.
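As a concrete illustration of partial sums (my own example, not part of the definition above): for $a_n = (1/2)^n$ the partial sums $s_m$ form a new sequence converging to 2. Note that $\{s_m\}$ is a sequence built *from* $\{a_n\}$, not in general a subsequence of it:

```python
# Partial sums s_m = sum_{n=0}^{m} a_n for the geometric series a_n = (1/2)^n.
# The sequence {s_m} is a NEW sequence derived from {a_n}; convergence of the
# series means convergence of {s_m}, here to 1/(1 - 1/2) = 2.
partial_sums = []
s = 0.0
for n in range(50):
    s += 0.5 ** n
    partial_sums.append(s)

print(partial_sums[:4])   # [1.0, 1.5, 1.75, 1.875]
assert abs(partial_sums[-1] - 2.0) < 1e-12
```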
Can photons push the source which is emitting them? Yes. If yes, will a more intense flashlight accelerate me more? Yes. Does the wavelength of the light matter? No. Is this practical for space propulsion? Probably not. Doesn't it defy the law of momentum conservation? No. In fact that last question is the key one, because photons do carry momentum (even though they have no mass). Photons, like all particles, obey the relativistic equation: $$ E^2= p^2c^2 + m^2c^4 $$ where for a photon the mass, $m$, is zero. That means the momentum of the photon is given by: $$ p = \frac{E}{c} = \frac{h\nu}{c} $$ where $\nu$ is the frequency of the light. Let's suppose you have a flashlight that emits light with a power $W$ and a frequency $\nu$. The number of photons per second is the total power divided by the energy of a single photon: $$ n = \frac{W}{h\nu} $$ The momentum change per second is the number of photons multiplied by the momentum of a single photon: $$ \frac{dp}{dt} = \frac{W}{h\nu} \, \frac{h\nu}{c} = \frac{W}{c} $$ But the rate of change of momentum is just the force, so we end up with an equation for the force created by your flashlight: $$ F = \frac{W}{c} $$ Now you can see why I've answered your questions above as I have. The force is proportional to the flashlight power, but the frequency $\nu$ cancels out, so the frequency of the light doesn't matter. Momentum is conserved because it's the momentum carried by the photons that creates the force. As for powering spaceships, your 1 W flashlight creates a force of about $3 \times 10^{-9}$ N. You'd need a staggeringly intense light source to power a rocket.
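To put numbers on this, a small script (constants rounded as in the text; the chosen frequency is an arbitrary illustration) reproduces the F = W/c result and shows that the frequency really does cancel:

```python
# Radiation-pressure force F = W/c for a light source of power W.
h = 6.626e-34       # Planck constant, J*s
c = 3.0e8           # speed of light, m/s (approximate, as in the text)

W = 1.0             # flashlight power, W
nu = 5.0e14         # green-ish light frequency, Hz (the choice doesn't matter)

photons_per_sec = W / (h * nu)          # n = W / (h*nu)
p_per_photon = h * nu / c               # p = h*nu / c
F = photons_per_sec * p_per_photon      # the h*nu factors cancel: F = W/c

assert abs(F - W / c) < 1e-20
print(f"force from a {W:.0f} W flashlight: {F:.2e} N")   # ~3.33e-09 N
```

Changing `nu` to any other value leaves `F` unchanged, which is the point of the derivation above.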
I am not sure if I am using the correct terminology; something must have been written about the following problem, but I cannot find it by searching. I am presently analyzing data on the effect of the introduction of a new vaccination program. The data is "ecological data", that is, population-level data: we have no data at the individual level, so no subject-level covariates. So, basically, I have a count time series of the total number of deaths and the number of deaths from cause A. But we do not know how good the surveillance system is, that is, what percentage of cases is actually reported. That percentage is surely changing with time; hopefully it is improving. But we do assume, realistically, that the probability of a death being reported does not depend on its cause. So, I model $$ Y_{1t} \sim \text{Po}(\lambda_t) \\ Y_{2t} \sim \text{Po}(\mu_t) $$ where $Y_{1t}$ is the number of deaths from cause A at time $t$, while $Y_{2t}$ is the number from all other causes. In reality, those variables are overdispersed, but here I will just assume Poisson (because the overdispersion issues are unimportant for my question). So, I use GLMs (actually, vglms from the R package VGAM) to represent the Poisson regressions, with some common terms in the linear predictors $\eta_1, \eta_2$. I introduce a 0/1 dummy for the introduction of the vaccine at time $t_0$, a polynomial of low degree to represent the observed decrease of deaths from a certain cause relative to other causes, a common natural spline term to represent the effect of the varying coverage (assumed the same for both processes), and some other terms unimportant for the question. The model is then, more explicitly: $$ Y_{1t} \sim \text{Po}(\lambda_t=\exp(\beta_{10}+\beta_{11}t+\beta_{12}t^2+\beta_{13}t^3+s(t)+\beta_{14}I(t-t_0\ge 0)) ) \\ Y_{2t} \sim \text{Po}(\mu_t=\exp(\beta_{20}+s(t))) $$ where $s(t)$ is a common spline term. Since the two regressions have some common parameters, we use VGAM for the estimation.
My problem is that by introducing too much flexibility (too many df for the natural spline term), that term will eventually absorb the effect of the dummy term! So, how do I choose how much flexibility to allow? As we do not really have prior info about the correct number of df, it must in some way be inferred from the fit. Some observations: Since the smooth term is common to both linear predictors, too little flexibility will cause underfitting, which can be seen as autocorrelated residuals, and even as correlation between the two sets of residuals. So that will give some information. But I am only concerned with good estimation (unbiasedness, small bias, consistency) of the treatment effect. How can I be sure about that, in the presence of many other badly identified parameters, with little prior information? That is, should I underfit or overfit? (My intuition says that I should avoid overfitting, so erring in the direction of some underfitting.) There must be some papers studying this or some related questions? Any thoughts, or references? EDIT I found one paper which seems relevant, http://ac.els-cdn.com/S0304414913000811/1-s2.0-S0304414913000811-main.pdf?_tid=5ddf25ce-ac17-11e6-a0a4-00000aacb35d&acdnat=1479312884_12ea86a56c9581553b7607211e21941d but there must be something more specific? Also, this question has had very few views; is there something I can do to make it clearer?
ISSN: 1078-0947 eISSN: 1553-5231 Discrete & Continuous Dynamical Systems - A January 1999, Volume 5, Issue 1 Abstract: This paper deals with various applications of two basic theorems in order-preserving systems under a group action -- the monotonicity theorem and the convergence theorem. Among other things we show symmetry properties of stable solutions of semilinear elliptic equations and systems. Next we apply our theory to traveling waves and pseudo-traveling waves for a certain class of quasilinear diffusion equations and systems, and show that stable traveling waves and pseudo-traveling waves have monotone profiles and, conversely, that monotone traveling waves and pseudo-traveling waves are stable with asymptotic phase. We also discuss pseudo-traveling waves for equations of surface motion. Abstract: We establish the existence of solutions to an anti-periodic non-monotone boundary value problem. Our approach relies on a combination of monotonicity and compactness methods. Abstract: This paper is a study of the global structure of the attractors of a dynamical system. The dynamical system is associated with an oriented graph called a Symbolic Image of the system. The symbolic image can be considered as a finite discrete approximation of the dynamical system flow. Investigation of the symbolic image provides an opportunity to localize the attractors of the system and to estimate their domains of attraction. A special sequence of symbolic images is considered in order to obtain precise knowledge about the global structure of the attractors and to get filtrations of the system. Abstract: We study special symmetric periodic solutions of the equation $\dot x(t) =\alpha f(x(t), x(t-1))$ where $\alpha$ is a positive parameter and the nonlinearity $f$ satisfies the symmetry conditions $f(-u, v) = -f(u,-v) = f(u, v).$ We establish the existence and stability properties for such periodic solutions with small amplitude.
Abstract: Topological transitivity, weak mixing and non-wandering are definitions used in topological dynamics to describe the ways in which open sets feed into each other under iteration. Using finite directed graphs, these definitions are generalized to obtain topological mapping properties. The extent to which these mapping properties are logically distinct is examined. There are three distinct properties which entail "interesting" dynamics. Two of these, transitivity and weak mixing, are already well known. The third does not appear in the literature but turns out to be close to weak mixing in a sense to be discussed. The remaining properties comprise a countably infinite collection of distinct properties entailing somewhat less interesting dynamics and including non-wandering. Abstract: We study the Cauchy problem for a nonlinear Schrödinger equation which is a generalization of one arising in plasma physics. We focus on the so-called subcritical case and prove that when the initial datum is "small", the solution exists globally in time and decays in time just as in the linear case. For a certain range of the exponent in the nonlinear term, we prove that the solution is asymptotic to a "final state" and show the nonexistence of asymptotically free solutions. The method used in this paper is based on a gauge transformation and on a certain phase function. Abstract: The rich diversity of patterns and concepts intrinsic to the Julia and Mandelbrot sets of the quadratic map in the complex plane invites a search for higher-dimensional generalisations. Quaternions provide a natural framework for such an endeavour. The objective of this investigation is to provide explicit formulae for the domain of stability of multiple cycles of classes of quaternionic maps $F(Q)+C$ or $CF(Q)$ where $C$ is a quaternion and $F(Q)$ is an integral function of $Q$.
We introduce the concept of quaternionic differentials and employ this in the linear stability analysis of multiple cycles. Abstract: Nonlinear stability and some other dynamical properties for a KS type equation in space dimension two are studied in this article. We consider here a variation of the KS equation where the derivatives in the nonlinear and the antidissipative linear terms are in one single direction. We prove the nonlinear stability for all positive times and study the corresponding attractor. Abstract: Given a control system (formulated as a nonconvex and unbounded differential inclusion) we study the problem of reaching a closed target with trajectories of the system. A controllability condition around the target allows us to construct a path that steers each point nearby into it in finite time and using a finite amount of energy. In applications to minimization problems, limits of such trajectories could be discontinuous. We extend the inclusion so that all the trajectories of the extension can be approached by (graphs of) solutions of the original system. In the extended setting the value function of an exit time problem with Lagrangian affine in the unbounded control can be shown to coincide with the value function of the original problem, to be continuous and to be the unique (viscosity) solution of a Hamilton-Jacobi equation with suitable boundary conditions. Abstract: We study the regularity of the composition operator $((f, g)\to g \circ f)$ in spaces of Hölder differentiable functions. Depending on the smooth norms used to topologize $f, g$ and their composition, the operator has different differentiability properties. We give complete and sharp results for the classical Hölder spaces of functions defined on geometrically well behaved open sets in Banach spaces. 
We also provide examples that show that the regularity conclusions are sharp and also that if the geometric conditions fail, even in finite dimensions, many elements of the theory of functions (smoothing, interpolation, extensions) can have somewhat unexpected properties. Abstract: In this paper, we give some existence results for equilibrium problems by proceeding to a perturbation of the initial problem and using techniques of recession analysis. We develop and describe thoroughly recession conditions which ensure existence of at least one solution for hemivariational inequalities introduced by Panagiotopoulos. Then we give two applications to the resolution of concrete variational inequalities. We shall examine two examples. The first one concerns the unilateral boundary condition. In the second, we shall consider the contact problem with given friction on part of the boundary. Abstract: In this paper we consider the notion of determining projections for two classes of stochastic dissipative equations: a reaction-diffusion equation and a 2-dimensional Navier-Stokes equation. We define certain finite dimensional objects that can capture the asymptotic behavior of the related dynamical system. They are projections on a space of polynomial functions, generalizing the classical (but not very much studied in a stochastic context) concepts of determining modes, nodes and volumes. Abstract: We show the local in time solvability of the Cauchy problem for nonlinear wave equations in the Sobolev space of critical order with a nonlinear term of exponential type.
There and Back Again: Time of Flight Ranging between Two Wireless Nodes With the growth in Internet of Things (IoT) products, the number of applications requiring an estimate of the range between two wireless nodes in indoor channels is growing very quickly as well. Localization is therefore becoming a red-hot market today and will remain so in the coming years. One perplexing question is how many companies are nowadays offering cm-level accurate solutions using RF signals. Conventional wireless nodes usually implement synchronization techniques which provide around $\mu s$-level accuracy; if they tried to find the range through timestamps, the estimate would be off by $$1 \mu s \times 3 \times 10^8 m/s = 300 m$$ where $3\times 10^8 m/s$ is the approximate speed of an electromagnetic wave. So how are cm-level accurate solutions being claimed and actually delivered? This is a classic example of the simplest of signals solving the most complex of problems. In this article, my target is to explain the fundamentals behind this high-resolution ranging in the easiest manner possible. Needless to say, while each product has its own unique signal processing algorithms, the fundamentals remain the same. The Big Picture For the sake of the big picture, remember that there are other methods available, the best of which are based on optical interferometry. There are also ultrasound, optical and hybrid options. RF is the cheapest solution though, and there is nothing better than getting accurate measurements using RF waves. The following techniques are the most widely used in the RF domain.
Rx Signal Strength Indicator (RSSI) Time of Arrival (ToA) Phase of Arrival (PoA), a special case of ToA Time Difference of Arrival (TDoA) Angle of Arrival (AoA)

RSSI: Pros: simple hardware, no synchronization required, info provided by most PHY chips. Cons: highly inaccurate and environment-specific.
ToA: Pros: highly accurate. Cons: time synchronization required among anchors and target node.
PoA: Pros: extremely accurate, low cost. Cons: sensitive to phase noise and impairments.
TDoA: Pros: great accuracy, no target node synchronization. Cons: tight synchronization among all anchors.
AoA: Pros: extra dimension relaxes timing and phase constraints. Cons: expensive hardware and less accurate.

As a final comment, all range estimation methods need a reference point. Anchors provide this reference when an accurate measurement of position is needed. If it is just the range from another node that is of interest, any node can use its own reference. This is the situation we assume in this article. What is a Timestamp? A typical embedded device comes with a counter and a register. The value of the counter increments/decrements as driven by an oscillator. When an increment counter reaches the maximum value (0xF...FF), or a decrement counter reaches the minimum value (0x0...00), it overflows and starts counting again. If a desirable event occurs, say a message arrival event driven by an Rx start interrupt, the value of the counter can be captured and stored in a register that can be accessed later to find the time of that event, according to the node's own reference clock. As an example, consider the following Figure, where: the timestamp value is captured in Register; the Counter is an incremental counter; Tx Start is an event that resets the counter; and Rx Start is an event that captures the Counter value to Register. Figure 1: The counter, register and Tx and Rx start events If you don't know much about electronics, it is enough to know that event times can be recorded at a node and accessed for processing later.
Setup The ranging setup in this discussion consists of two nodes that can exchange timestamps with each other through the wireless medium, as shown in the Figure below. Figure 2: Two nodes exchanging timestamps with each other The distance between the two nodes is $R$, while the time of flight from one node to the other is $\tau$. Consequently, $$R = \tau \cdot c$$ We denote the real time by $t$, Node A's time by $T_A$ and Node B's time by $T_B$. Since each node starts at a random time, there is a clock offset between its time and the real time: $$T_A = t + \phi_A$$ $$T_B = t + \phi_B$$ Refer to the next Figure to observe how the chain of events unfolds. Figure 3: The chain of events with their corresponding timestamps exchanged between Node A and Node B Any node can start its counter at any given time. So, setting a reference point at an arbitrary real time 0, the time offset of Node A is $\phi_A$ while that of Node B is $\phi_B$. 1. Node A sends its local timestamp $T_1$ to Node B at real time $t_1$, where $$T_1 = t_1 + \phi_A$$ 2. Node B receives this packet at real time $t_2$ and records its local time $T_2$, where $$T_2 =t_2 + \phi_B$$ Clearly, $$t_2 = t_1 + \tau,\quad \text{or} \quad \tau = t_2-t_1$$ Therefore, we can write $$T_2-T_1 = t_2 +\phi_B- t_1-\phi_A $$ Defining $T_2-T_1$ as $\Delta T_{A->B}$ and $\Delta \phi$ as $\phi_B - \phi_A$ (the clock offset between the two nodes), $$\Delta T_{A->B} = \tau + \Delta \phi \quad ------ \quad \text{Eq (1)}$$ It is important to write the equation in the above form because all we know is the observation $\Delta T_{A->B}$. We do not know $t_1$, $t_2$, $\tau$, $\phi_A$ or $\phi_B$. 3. After a processing delay, Node B sends its local timestamp $T_3$ at real time $t_3$ to Node A. 4. Node A records it at $T_4$ at actual time $t_4$.
Since $t_4 = t_3+\tau$, $$T_4 -T_3 = t_4+\phi_A - t_3 - \phi_B$$ which can be written in terms of $\Delta T_{B->A}=T_4-T_3$ as $$\Delta T_{B->A} = \tau - \Delta \phi \quad ------ \quad \text{Eq (2)}$$ Adding Eq (1) and Eq (2) yields the estimate of the delay. $$\hat \tau = \frac{1}{2}\Big(\Delta T_{A->B} + \Delta T_{B->A}\Big)$$ Now it is clear that the time base of Node A serves as the reference for estimating this delay. Research literature refers to this approach as a 'two-way message exchange'. To pay tribute to Tolkien, I call it 'There and Back Again'.

Performance

I performed some ranging experiments with a wireless device with a clock rate of 8 MHz. That implies that one tick takes $1/(8 \times 10^6)$ $=$ $125$ ns. In terms of distance, this is $125\ \text{ns} \times 3\times 10^8\ \text{m/s}$ $=$ $37.5$ m. Gradually increasing the distance, then halving and rounding off the measured round-trip counts, generated the following results.

Figure 4: Results for a ranging experiment with an 8 MHz clock

Assume that a 100x better accuracy, say $37.5$ cm, is needed. Then, we need a clock generating timestamps at a rate of 800 MHz. That kind of expense and power, however, is more suited to computing applications than to an embedded device. In conclusion, we cannot afford a high rate clock but still desire a high resolution.

The Arrival of the Phase of Arrival

In the spirit of time of arrival, this method is known as the phase of arrival. First, observe that we already have access to something similar to a high resolution clock – a continuous wave (CW). Consider a simple sinusoid at a GHz frequency and just plot its sign. It looks very much like a very high rate clock.

Figure 5: Sign of a simple continuous wave is similar to a high rate clock

Now again consider two wireless nodes that exchange continuous waves instead of timestamps in the following manner.

1. Node A sends a continuous wave $\cos (2\pi F_1 t)$ of frequency $F_1$ at its time $T_1$ (real time $t_1$) to Node B.
Using $T_1 = t_1 + \phi_A$, its phase is given by $$2\pi F_1 T_1 = 2\pi F_1 t_1 + 2\pi F_1 \phi_A$$ where $2\pi F_1 \phi_A$ is just a constant and could easily be expressed as a single term $\phi'_A$. As opposed to the timestamp case, it is neither required nor easy to measure the phase $2\pi F_1 T_1$ explicitly.

2. Node B receives this continuous wave at real time $t_2$ when the phase of its own local reference at frequency $F_1$ at its local time $T_2$, where $T_2 =t_2 + \phi_B$, is $$2\pi F_1 T_2 = 2\pi F_1 t_2 + 2\pi F_1 \phi_B$$ Using $t_2 = t_1 + \tau$, Node B employs some signal processing algorithm to measure the phase difference between the two continuous waves as $$\Delta \theta_{A->B} = 2\pi F_1(T_2-T_1) = 2\pi F_1 \tau + 2\pi F_1\Delta \phi\quad ------ \quad \text{Eq (3)}$$ It is important to write the equation in the above form because all we know is the phase difference $\Delta \theta _{A->B}$. We do not know anything else.

3. After a processing delay, Node B sends a continuous wave in the reverse direction.

4. Node A measures the phase difference $$\Delta \theta_{B->A} = 2\pi F_1(T_4-T_3) = 2\pi F_1 \tau - 2\pi F_1\Delta \phi\quad ------ \quad \text{Eq (4)}$$

Adding Eq (3) and Eq (4) yields the estimate of the delay. $$\hat \tau = \frac{1}{2\cdot 2\pi F_1}\Big(\Delta \theta_{A->B} + \Delta \theta_{B->A}\Big)\quad ------ \quad \text{Eq (5)}$$ That was so easy, so fast and so accurate. But the world is not that simple.

The Rollover Problem

The solution to the accuracy problem creates a problem of its own. Remember we said that when an increment counter reaches the maximum value (0xF...FF), or a decrement counter reaches the minimum value (0x0...00), it overflows and starts counting again. So if a clock is very fast, it overflows more quickly and resets again. It might even do so before the signal on the reverse path has returned! The same is the case with the sinusoids. For example, a continuous wave at 2.4 GHz would roll over every $1/(2.4 \times 10^9) \times 3 \times 10^8$ $=$ $12.5$ cm. Any distance greater than 12.5 cm would be impossible to measure.
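Both the Eq (5) estimate and the rollover can be seen in a few lines of Python. The numbers are hypothetical, and `offset_phase` simply stands in for the unknown $2\pi F_1 \Delta\phi$ modulo $2\pi$:

```python
import math

c = 3e8
F1 = 2.4e9      # 2.4 GHz carrier, wavelength 12.5 cm
lam = c / F1


def wrap(theta):
    # A receiver can only measure a phase modulo 2*pi.
    return theta % (2 * math.pi)


def estimate_range(R, offset_phase=0.9):
    # offset_phase stands in for 2*pi*F1*(phi_B - phi_A), modulo 2*pi.
    two_way = 2 * math.pi * F1 * (R / c)  # 2*pi*F1*tau
    d_ab = wrap(two_way + offset_phase)   # Eq (3), as measured
    d_ba = wrap(two_way - offset_phase)   # Eq (4), as measured
    return c * (d_ab + d_ba) / (2 * 2 * math.pi * F1)  # Eq (5) times c


print(estimate_range(0.04))        # 4 cm: recovered correctly
print(estimate_range(0.04 + lam))  # 16.5 cm: aliases back to 4 cm (rollover)
```

A true range of 4 cm comes out right, but adding one full 12.5 cm wavelength produces exactly the same measured phases and hence the same 4 cm estimate.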
Introducing More Carriers

To solve this rollover problem, define $\Delta \theta = \Delta \theta_{A->B} + \Delta \theta_{B->A}$ and start by plugging Eq (5) into the range expression. $$R = c\cdot \hat \tau = c \cdot \frac{1}{2\cdot 2\pi F_1} \Delta \theta$$ This can be simplified using $c=F_1 \lambda_1$ as $$R = \frac{\lambda_1}{2} \frac{\Delta \theta}{2\pi}$$ Now we can break the phase $\Delta \theta$ into an integer part and a fractional part because $\Delta \theta = 2\pi n + \Delta \theta_{\text{frac},F_1}$, where $n$ is the number of integer wavelengths spanning the distance $R$ while $\Delta \theta_{\text{frac},F_1}$ is the phase corresponding to the remaining fractional distance. Thus, the above equation can be written as $$R = \frac{\lambda_1}{2}\left(n + \frac{\Delta \theta_{\text{frac},F_1}}{2\pi}\right)$$ Writing the fractional phase as a function of range, $$\Delta \theta_{\text{frac},F_1} = 2\pi\left(2R\frac{F_1}{c} - n\right)\quad ------ \quad \text{Eq (6)}$$ The rollover unwrapping problem is now reduced to cancelling $n$ from the above equation. This can easily be accomplished by sending another tone at frequency $F_2$ that would generate the result $$\Delta \theta_{\text{frac},F_2} = 2\pi\left(2R\frac{F_2}{c} - n\right)$$ The above two equations can now be solved to cancel $n$ and create an effect equivalent to sending a single tone with a very large wavelength, or very low frequency $F_2-F_1$. $$\Delta \theta_{\text{frac},F_2} - \Delta \theta_{\text{frac},F_1} = 2\pi\left(2R\frac{F_2-F_1}{c} \right)$$ The range is now found to be $$R = \frac{c}{4\pi}\cdot \frac{\Delta \theta_{\text{frac},F_2} - \Delta \theta_{\text{frac},F_1}}{F_2-F_1}$$ Having eliminated the phase rollover, we are interested in the maximum range that can be unambiguously estimated through the above equation. Clearly, this depends on the frequency difference between the two continuous waves.
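A quick numeric check of this two-tone cancellation in Python; the carrier choices and the 41.7 m true range are hypothetical:

```python
import math

c = 3e8
F1, F2 = 2.400e9, 2.402e9  # two tones, 2 MHz apart
R_true = 41.7              # metres, far beyond a single-tone rollover


def frac_phase(F, R):
    # Fractional part of the two-way phase at frequency F (Eq (6) mod 2*pi);
    # the integer count n is lost in the wrapping.
    return (2 * math.pi * 2 * R * F / c) % (2 * math.pi)


d1 = frac_phase(F1, R_true)
d2 = frac_phase(F2, R_true)

# Differencing the two measurements cancels n:
R_hat = (c / (4 * math.pi)) * ((d2 - d1) % (2 * math.pi)) / (F2 - F1)
print(R_hat)  # recovers 41.7 m; the ambiguity is now set by F2 - F1
```

Each tone's phase on its own has wrapped hundreds of times over 41.7 m, yet the difference of the two wrapped phases pins down the range.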
Also, remember that $\Delta \theta_{\text{frac},F_2} - \Delta \theta_{\text{frac},F_1}$ can attain a maximum value of $2\pi$. Then, for example, for a 2 MHz difference, i.e., $F_2-F_1=2\times 10^6$, the unambiguous range is $$R = \frac{3\times 10^8}{4\pi}\cdot \frac{2\pi}{2\times 10^6}=75\ \text{m}$$

The Phase Slope Method

To combat interference and multipath in indoor channels, a number of different continuous waves can be used and their results can be stitched together to form a precise range estimate. This is plotted in the figure below.

Figure 6: Phase vs frequency plot

After taking a number of measurements, a plot of phases versus frequencies is drawn. Similar to Eq (6), we can write $$\Delta \theta_{\text{frac},F_k} = 2\pi\left(2R\frac{F_k}{c} + \text{constant}\right)$$ where a constant term arises instead of $n$ as it might not be the same for all frequencies. However, the slope of the curve is still given by $$\text{slope} = \frac{4\pi}{c}\cdot R$$ from which the range can be found as $$R = \frac{c}{4\pi} \cdot \text{slope}$$ This is why it is known as the Phase Slope method. It is relatively costly to implement due to the number of back and forth transmissions (equal to the number of CWs employed), but it is very accurate. Since indoor channels are frequently susceptible to interference, the wider range of frequencies ensures resilience through the added redundancy. More importantly, a wider bandwidth combats the multipath problem through higher resolution of the arriving echoes in the time domain after taking the transform of this phase data.

Previous post by Qasim Chaudhari: A Beginner's Guide to OFDM

Thanks for this great article on ToF.

I'm glad that you liked it.

Thank you for introducing the slope phase method. I have developed a couple of systems for Wi-Fi and proprietary radio distance measurement using ToA but still had not heard about the method. Do you have any plan to describe the slope phase method in detail in your next article?
Perhaps you have ideas how to implement the method in the HW? Thank you in advance.

The phase slope method is based on the phase of arrival principle. The initial details are exactly the same as ranging with a set of CWs. In the next stage, signal processing algorithms are implemented to extract the exact range. In your Wi-Fi and proprietary implementation, did you employ the time of arrival or the phase of arrival technique?

I used Round Trip Time of arrival for the ranging task. That was quite a simple method that required neither signal processing development nor extra HW. But now I am working on Time Difference of Arrival for a localization task, and I want to use both time of arrival for the high frequency bands (Sub-1GHz and 2.4GHz) and phase of arrival for the low frequency band (13.56MHz).
My question is mainly about getting an explanation of Gauss's third note (Note III) in his writing "Zur Astralgeometrie" (see Cubirung der Tetraeder, pages 228-229), and about placing it in the right historical context. In this note Gauss writes the following:

"In the tetrahedron 1234 (1,2,3,4 are its vertices), whose faces 124 and 134 are orthogonal, denote the volume as $$\Delta,$$ then it holds that: $$\partial \Delta = -24.\partial341,$$ and if the face angles at vertex 3 are constant, then the following also holds: $$\alpha\alpha\cdot cotg^2341 - \beta\beta\cdot (tgi.24/i)^2 = 1$$ when: $$\alpha = cotg431,\quad \beta = cotg 234.$$"

Two things I didn't understand about Gauss's notation immediately came to my mind:

What does the notation 24 point "something" mean? Does the point mean product, or something else?

What is the "$i$" that appears in his formulas? How is it defined?

The questions I asked are of great interest to me since I think the problem of determining the volume of the non-euclidean tetrahedron really represents the peak of Gauss's work on hyperbolic geometry, and I really want to get a formal statement of his result, and to understand how it compares with the later results of Lobachevsky and Schlafli.

Update: recently I became aware of a certain interconnection between this fragment (Cubirung der Tetraeder) and a second fragment of this kind - "Volumenbestimmungen in der Nichteuklidischen Geometrie" (Gauss's Werke, volume 8, p. 232-233). This fragment was found among the pages of Gauss's copy of Lobachevsky's 1840 publication (these two fragments are the only ones that deal with the problem of the hyperbolic tetrahedron). As a matter of fact, Stackel mentions, in his commentary on Gauss's "Cubirung der Tetraeder", that Gauss used the results of the second fragment implicitly when he derived his first displayed formula from p.
228 ($\partial \Delta = -\frac{1}{2}(24)\,d(341)$) - he used the two formulas: $$d(341) = -\sinh(14)\cdot d(13)$$ $$\partial \Delta = \frac{1}{2}d(13)\cdot (24)\cdot \sinh(14)$$ to set up this relation ($\partial \Delta = -\frac{1}{2}(24)\,d(341)$). The second of these formulas was also mentioned in his later fragment, and though this fragment was written later (Gauss wrote it after Lobachevsky's publication captured his attention), it seems that he was aware of it earlier. I made this update because maybe deciphering Gauss's second fragment might serve as a breakthrough in understanding what Gauss knew about hyperbolic tetrahedra.
Since the cable is not moving horizontally, you know the horizontal component of tension is the same at both ends. The total tension is the horizontal component divided by the cosine of the angle. So the ratio between the tensions is the (inverse) ratio of the cosines. Since you know the shape of the curve you should be able to take it from here.

UPDATE

The general equation for a catenary (with lowest point at $x=0$) is $$y = a \cosh \frac{x}{a}$$ where $$a = \frac{H}{w}\\H = \text{horizontal tension}\\w = \text{weight per unit length}$$ For a given horizontal distance and vertical displacement, we have to figure out the location of the lowest point and the tension - two equations, two unknowns. From wikipedia.org/wiki/Catenary#Determining_parameters: Given $s$, $v$, and $h$, then $a$ can be solved for numerically: $$\sqrt{s^2 - v^2} = 2a \sinh \frac{h}{2a}, \quad a > 0$$ where $h$ is the horizontal distance between the ends, $v$ is the vertical distance between the ends, $s$ is the length of the cable, and $a$ is the $y$ coordinate of the lowest point. Next, we just need to find the position of the lowest point relative to the ends. To get the actual locations of $x_1$ and $x_2$ (the horizontal distances from the lowest point to the left and right ends, respectively) you now have to solve $$\begin{align}v &= a \left(\cosh \frac{x_2}{a} - \cosh \frac{x_1}{a}\right)\\h &= x_2 + x_1 \tag1 \\v &= a\left(\cosh \frac{x_2}{a} - \cosh \frac{x_2-h}{a}\right) \tag2\end{align}$$ Solve (2) for $x_2$, then substitute into (1) to get $x_1$. Finally, the ratio of tensions comes from the ratio of the cosines of the angles at the points of suspension: $$\frac{T_2}{T_1} = \frac{\cos\theta_1}{\cos\theta_2}$$ We know the tangent at $x$ is given by $$\tan\theta = \frac{dy}{dx} = \sinh \frac{x}{a}$$ Combine with the trig identity $$\cos\theta = \frac{1}{\sqrt{1+\tan^2\theta}}$$ You finally obtain $$\frac{T_1}{T_2} = \sqrt{\frac{1+\sinh^2\frac{x_1}{a}}{1+\sinh^2\frac{x_2}{a}}}$$
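The recipe above can be carried out numerically. Here is a Python sketch with made-up numbers ($h = 10$ m, $v = 3$ m, $s = 14$ m), using bisection for $a$ and a closed form for the end positions:

```python
import math

# Hypothetical span: ends 10 m apart horizontally, right end 3 m higher,
# cable length 14 m.
h, v, s = 10.0, 3.0, 14.0


def solve_a(h, v, s):
    """Bisection on sqrt(s^2 - v^2) = 2a*sinh(h/(2a)).

    The left-hand side must exceed h (cable longer than the straight line);
    2a*sinh(h/(2a)) decreases monotonically in a, from infinity down to h.
    """
    target = math.sqrt(s * s - v * v)
    lo, hi = 1e-6, 1e6
    for _ in range(100):
        a = 0.5 * (lo + hi)
        if 2 * a * math.sinh(h / (2 * a)) > target:
            lo = a  # curve still too long: move toward larger a
        else:
            hi = a
    return 0.5 * (lo + hi)


a = solve_a(h, v, s)

# Closed form for the right-end position, from the identity
# cosh(p) - cosh(q) = 2*sinh((p+q)/2)*sinh((p-q)/2) applied to equation (2):
x2 = h / 2 + a * math.asinh(v / (2 * a * math.sinh(h / (2 * a))))
x1 = h - x2

# Tension ratio from T = H*cosh(x/a), since cos(theta) = 1/cosh(x/a):
ratio = math.cosh(x1 / a) / math.cosh(x2 / a)  # T1 / T2
print(a, x1, x2, ratio)
```

With the higher end on the right, $x_2 > x_1$ and the ratio $T_1/T_2$ comes out below 1, i.e. the upper support carries the larger tension, as expected.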
I am studying Gaussian processes and I already have a fair amount of knowledge of Gaussian mixture models. I am here to understand whether with a Gaussian process you can fit a Gaussian mixture model. Formally, a GMM is a linear combination of Gaussians such that $$ \phi(x) = \sum_{i=1}^k \alpha_i \phi_i(x \mid \mu_i, \Sigma_i) $$ where each $\phi_i$ is a Gaussian centered at $\mu_i$ with covariance $\Sigma_i$. Computationally this is solved using EM. A GP is (roughly) a set of functions distributed with a multivariate Gaussian probability distribution that models your data. Computationally this is solved by Cholesky decomposition and linear systems. So I am wondering if with a GP you can hope to solve GMM models, or if there is a link whatsoever. To me, they are two completely different things. Am I right? Thanks.
I have just been playing with this and thought my solution just might help somebody else at some point. I wanted the following:

- table notes, i.e. notes at the bottom of the tabular, within the table environment - not at the bottom of the page;
- automatic numbering of notes within the list of notes;
- automatic numbering of note markers within the table itself;
- numbering with small letters, to avoid any confusion with the Arabic numerals used to number footnotes and used in the table and text to track content;
- note markers in the list of notes to be left aligned with text in the first column of the tabular.

My solution involves an unholy mixture of:

- threeparttablex with option referable: this manages the automatic numbering of the note markers, on the basis of labels inserted into the list of notes;
- enumitem: to customise the list of notes.

This is a bit complex in terms of the number of cooks responsible for the broth. To say that enumitem is used to 'customise' the list is a bit misleading. Essentially, my solution redefines it. More specifically, threeparttable provides tablenotes. threeparttablex redefines it and provides \tnotex{} and some other enhancements. enumitem is then used to redefine tablenotes again. Caveat emptor...
Anyway, for what it is worth:

\documentclass{article}
\usepackage{enumitem,booktabs,cfr-lm}
\usepackage[referable]{threeparttablex}
\renewlist{tablenotes}{enumerate}{1}
\makeatletter
\setlist[tablenotes]{label=\tnote{\alph*},ref=\alph*,itemsep=\z@,topsep=\z@skip,partopsep=\z@skip,parsep=\z@,itemindent=\z@,labelindent=\tabcolsep,labelsep=.2em,leftmargin=*,align=left,before={\footnotesize}}
\makeatother
\begin{document}
\begin{table}
  \centering\tlstyle
  \begin{threeparttable}
    \begin{tabular}{lcccc}
      \toprule
      & \multicolumn{4}{c}{Great Value}\\
      \cmidrule(lr){2-5}
      Option & Robot 1 & Robot 2 & Robot 3 & Total\\
      \midrule
      Develop Robot 1 brilliant eye\tnotex{tnote:robots-r1} & 5 & 78 & 54 & 56\\
      Develop Robot 2 extended ears\tnotex{tnote:robots-r2} & 24 & 87 & 42 & 23\\
      Develop Robot 3 brilliant eye\tnotex{tnote:robots-r3} & 0.5 & $\pi$ & 61 & $<19.3$\\
      \bottomrule
    \end{tabular}
    \begin{tablenotes}
      \item\label{tnote:robots-r1}That is, $360^\circ$ vision, as proposed by Noddy Norris.
      \item\label{tnote:robots-r2}As recommended by \emph{Robot Review}.
      \item\label{tnote:robots-r3}That is, X-Ray vision, as proposed by \emph{Mechanical Maniacs}.
    \end{tablenotes}
  \end{threeparttable}
  \caption{\label{tab:robots}Total values of Jim's technological options for robot projects he thinks possible.}
\end{table}
\end{document}
wait, well i guess I learned something new today: Google is an untrusted source no its just you divided by .33 which is not the same as 1/3 when you divide hmmmm (does the math again) ahhhhh k guess I learned something else today: MY WHOLE LIFE IS FILLED WITH LIES!!!!! WHY DO TEACHERS SAY CONVERT FROM FRACTIONS TO DECIMALS WHEN IT'S REALLY IMPOSSIBLE!!!! Yeah Rome. YOU GONNA LEARN TODAY! Im not a rosala here. I got 11 years of school backin me up! XD poor misguided fool! you never know how tricky I can be! I said -18.1818181818! and btw, that's incorrect! Zegroes did it with -2! poor misguided fool! maybe a bit of titanium Magic will work things out! wait, Zegroes said 2 which means: NO ONE IS CORRECT! the real answer is: -2 zegroes, if only you placed the - sign, you would've won victory at the question, but without the - sign you failed :( TR you dont want to go down this road! Yuo can ask rosala how it ended up for her when she challenged the master! i did work out i do not care if i got fool the answer is -18 plan -18 accorring to the cal... she lost when she challenged we TIED when we clashed each other that's a huge difference :p You lot are hilarious! I have only just seen this thread. I laughed so hard I almost fell off my chair. Thanks for the entertainment :)) $$\\-6\div \frac{1}{3}\\\\ =\frac{-6}{1}\div \frac{1}{3}\\\\ =\frac{-6}{1}\times \frac{3}{1}\\\\ =-6 \times 3\\\\ =-18$$ It is just as well that Shaniab's fingers work well on her trusty calculator. Thanks Shaniab :)) hehehehehehehe!!!! YES! I BEATED ZEGROES!!!! DANG, I'M SO GOOD AT DEFEATING PEOPLE! XD I FEEL SO ALIVE RIGHT NOW!!!!!! LOL!!!!!! Mother Nature: okay, calm down, Titanium Rome! No bragging Titanium!!!! I know you're hyped up, but keep the tone down. Titanium Rome: YES!!! I defeated 1 of the Top Answerers in the Forums! I expand my territory and I spread my "wisdom" to the Forums! Go run and hide, Rosala! You're next! 
The 1st Phase and then we will go after kitty them happy7 then maybe attack heureka, then omi67 then radix The 2nd Phase then geno3141 (however that will take FOREVER unless he agrees to ally or to surrender) then Alan (same as geno) then maybe Mr. Columbus (Christopher) then we will attack Singing Birds (Melody) then I will be known in Forums history, as the best math genius in the game "The Forums" and by then I would beat Clash of Clans for the 10th time and MineCraft for the 20th time then we shall beat some other games and beat YOUTUBE and beat THE WORLD!!!!! wait, if zegroes considers himself "the genius", and I've beaten the genius this means... I AM ALREADY A GENIUS!!!!!!!! Creator, plz allow me and zegroes to switch places in the top answerers pernamently for 24 hours in the new update! yes im a genius! I've finally BEATEN the Fourms! I start my own reign...my major Reich...a new empire i must be promoted now from member to Moderator, so now i candelete message of anyone that dares to oppose me beaten the genius, beated the website! thank you all for this amazing journey to beat the forums i will still stay active, the only difference is now I gain power Melody I don't know what's wrong with me I'm reading every question wrong! Lol!
ISSN: 1078-0947 eISSN: 1553-5231

Discrete & Continuous Dynamical Systems - A, October 1999, Volume 5, Issue 4

Abstract: The paper is devoted to the investigation of a number of difference-differential equations, among which the following one plays the central role: $dF_n/dt=\varphi(F_n)(F_{n-1}-F_n)\quad\qquad\qquad (\star)$ where, for every $t$, $\{F_n(t), n=0,1,2,\ldots\}$ is a probability distribution function, and $\varphi$ is a positive function on $[0, 1]$. The equation $(\star)$ arose as a description of industrial economic development taking into account processes of creation and propagation of new technologies. The paper contains a survey of the earlier results, including a multi-dimensional generalization and an application to economic growth theory. If $\varphi$ is decreasing then solutions of the Cauchy problem for $(\star)$ approach a family of wave-trains. We show that diffusion-wise asymptotic behavior takes place if $\varphi$ is increasing. For the nonmonotonic case a general hypothesis about asymptotic behavior is formulated and an analogue of a theorem of Weinberger (1990) is proved. It is argued that the equation can be considered as an analogue of Burgers' equation.

Abstract: In this paper, we consider the initial value problem with periodic boundary condition for a class of general systems of the ferromagnetic chain $z_t=-\alpha z\times (z\times z_{xx})+ z\times z_{xx}+z\times f(z), \qquad (\alpha \geq 0).$ The existence of unique smooth solutions is proved by using the technique of spatial difference and a priori estimates of higher-order derivatives in Sobolev spaces.

Abstract: The general topic is the connection between a change of stability of an equilibrium point or invariant set $M$ of a (semi-)dynamical system depending on a parameter and a bifurcation of $M$ (generalizing the Hopf bifurcation).
In particular, we address the case where $M$ is unstable (for instance a saddle) for a certain value $\lambda_0$ of a parameter $\lambda$, and stable for certain nearby values. Two kinds of bifurcations are considered: "extracritical", i.e. splitting of the set $M$ as $\lambda$ passes the value $\lambda_0$, and "critical" (also called "vertical"), a term which refers to the accumulation of closed invariant sets at $M$ for $\lambda=\lambda_0$. Also, two kinds of change of stability are considered, corresponding to the presence or absence of a certain generalized equistability property for $\lambda\ne\lambda_0$. Connections are established between the type of change of stability and the types of bifurcation arising from them.

Abstract: We study the scattering theory for nonlinear Klein-Gordon equations $u_{tt} + (m^2-\Delta)u = f_1(u) + f_2(u)$. We show that the scattering operator carries a band in $H^s \times H^{s-1}$ into $H^s \times H^{s-1}$ for all $s\in [1/2,\ \infty)$ if $f_i(u)\ (i = 1,\ 2)$ have $H^s$-critical or $H^s$-subcritical powers.

Abstract: We present results on homoclinic and multibump solutions for perturbed second order systems. Using topological degree, we generalize results recently obtained by variational methods. We give Melnikov type conditions for the existence of one homoclinic solution and for the existence of infinitely many multibump solutions. We also give an example for which the set of zeros of the Poincaré-Melnikov function contains an interval, so that results requiring a simple zero of this function cannot be applied. In the case of multibump solutions, when the perturbation is periodic, we prove the existence of approximate Bernoulli shift structures leading to some form of chaos.

Abstract: In this paper we give new properties of the dimension introduced by Afraimovich to characterize Poincaré recurrence and which we proposed to call Afraimovich-Pesin's (AP's) dimension.
We will show in particular that AP's dimension is a topological invariant and that it often coincides with the asymptotic distribution of periodic points: deviations from this behavior could suggest that the AP's dimension is sensitive to some "non-typical" points.

Abstract: In this article, we study harmonic maps between two complete noncompact manifolds $M$ and $N$ by a heat flow method. We find some new sufficient conditions for the uniform convergence of the heat flow, and hence the existence of harmonic maps. Our conditions are: the Ricci curvature of $M$ is bounded from below by a negative constant, $M$ admits a positive Green's function, and $\int_M G(x, y)|\tau(h(y))|dV_y$ is bounded on each compact subset. $\qquad$ (1) Here $\tau(h(x))$ is the tension field of the initial data $h(x)$. Condition (1) is somewhat sharp, as is shown by examples in the paper.

Abstract: We give a uniform rate function for large deviations of the random occupational measures of an expanding random dynamical system.

Abstract: This article discusses the relationship between the inertial manifolds "with delay" introduced by Debussche & Temam, and the standard definition. In particular, the "multi-valued" manifold of the same paper is shown to arise naturally from the manifolds "with delay" when considering issues of convergence as the delay time tends to infinity. This leads to a new characterisation of the multi-valued manifold, which allows a fuller understanding of its structure.

Abstract: The complex Ginzburg-Landau equation (CGL, for short) $\partial_t u = (1 + i\nu)\Delta u + Ru- (1 + i\mu) |u|^2 u; \quad 0\le t < \infty,\ x\in\Omega$, is investigated in a bounded domain $\Omega\subset \mathbb R^n$ with sufficiently smooth boundary. Standard boundary conditions are considered: Dirichlet, Neumann or periodic.
Existence and uniqueness of global smooth solutions are established for all real parameter values $\mu$ and $\nu$ if $n\le 2$, and for certain parameter values $\mu$ and $\nu$ if $n\ge 3$. Furthermore, dynamical properties of the CGL equation, such as the existence of determining nodes, are shown. The proof of existence of smooth solutions hinges on the following inequality, using the $L^2(\Omega)$-duality: $$\left|\mathfrak{Im}\,\langle \Delta u,\ |u|^{p-2}u\rangle\right| \le \frac{|p-2|}{2\sqrt{p-1}}\,\mathfrak{Re}\,\langle -\Delta u,\ |u|^{p-2}u\rangle.$$

Abstract: We study here the blow-up set of the maximal classical solution of $u_t -\Delta u = g(u)$ on a ball of $\mathbb R^N$, $N \geq 2$, for a large class of nonlinearities $g$, with $u(x,0) = u_0(|x|)$. Numerical experiments show the interesting behaviour of the blow-up set with respect to $u_0$. As a theoretical background to the method used in this work, we prove an important monotonicity property, namely that for a fixed positive radius $r_0$, when the solution gets large enough at a certain time $t_0$, then $u$ is monotone increasing at $r_0$ after $t_0$. Finally, a single radius blow-up property is proved for some large initial conditions.

Abstract: It is shown that the complex Ginzburg-Landau (CGL) equation on the real line admits nontrivial $2\pi$-periodic vortex solutions that have $2n$ simple zeros ("vortices") per period. The vortex solutions bifurcate from the trivial solution and inherit their zeros from the solution of the linearized equation. This result rules out the possibility that the vortices are determining nodes for vortex solutions of the CGL equation.
Abstract: The wellposedness of the delay problem in a Banach space $X$ $$u'(t) = Au(t) + \int_{-r}^0 k(s)A_1 u(s)\, ds + f(t),\quad t\ge 0;\quad u(t) = z(t), \quad t\in [-r,0]$$ (where $A : D(A)\subset X \to X$ is a closed operator and $A_1 : D(A)\to X$ is continuous) is proved and applied to get a classical solution of the wave equation with memory effects $$w_{tt}(t,x) = w_{xx}(t, x) + \int_{-r}^0 k(s) w_{xx}(t + s, x)\,ds + f(t, x), \quad t\ge 0,\quad x\in [0,l]$$ To also include the Dirichlet boundary conditions and to get $C^2$-solutions, $D(A)$ is not supposed to be dense, hence $A$ is only a Hille-Yosida operator. The methods used are based on a reduction of the inhomogeneous equation to a homogeneous system of the first order and then on an immersion of $X$ in its extrapolation space, where the regularity and perturbation results of the classical semigroup theory can be applied.

Abstract: We consider hyperbolic flows on one-dimensional basic sets. Any such flow is conjugate to a suspension of a shift of finite type. We consider compact Lie group skew-products of such symbolic flows and prove that they are stably ergodic and stably mixing, within certain naturally defined function spaces.

Abstract: The aim of this paper is to study the blow up behavior of a radially symmetric solution $u$ of the semilinear parabolic equation $$u_t - \Delta u = |u|^{p-1} u, \quad x\in\Omega,\quad t\in [0,T],$$ $$u(t,x)=0, \quad x\in\partial\Omega, \quad t\in [0,T],$$ $$u(0,x) = u_0(x),\quad x\in\Omega,$$ around a blow up point other than its centre of symmetry. We assume that $\Omega$ is a ball in $\mathbb R^N$ or $\Omega =\mathbb R^N$, and $p>1$. We show that $u$ behaves as if a one-dimensional problem were concerned, that is, the possible asymptotic behaviors and final time profiles around an unfocused blow up point are the ones corresponding to the case of dimension $N=1$.
Abstract: In this paper we establish several results concerning the asymptotic behavior of (random) infinite products of generic sequences of homogeneous order-preserving mappings on a cone in an ordered Banach space. In addition to weak ergodic theorems we also obtain convergence to an operator $f(\cdot)\eta$ where $f$ is a functional and $\eta$ is a common fixed point.
Nonzero $\phi$'s and $\psi$'s denote the temporal influence from stimulus to mediator/outcome, and so on. $A$, $B$, $C$ are causal, following a similar proof in Sobel & Lindquist '04.

Causal Conditions

The treatment randomization regime is the same across time and participants. Models are correctly specified, and there is no treatment-mediator interaction. At each time point $t$, the observed outcome is one realization of the potential outcome with observed treatment assignment $\mathbf{\bar S}_{t}$, where $\mathbf{\bar S}_{t}=(\mathbf{S}_{1},\dots,\mathbf{S}_{t})$. The treatment assignment is random across time. The causal effects are time-invariant. The time-invariant covariance matrix is not affected by the treatment assignments.

Estimation: Conditional Likelihood

The full likelihood for our model is too complex. Given the initial $p$ time points, the conditional likelihood is $$ \begin{align*} & \ell\left(\boldsymbol{\Theta},\delta~|~\mathbf{Z},\mathcal{I}_{p}\right) = \sum_{t=p+1}^{T}\log f\left((M_{t},R_{t})~|~\mathbf{X}_{t}\right) \\& = -\frac{T-p}{2}\log\sigma_{1}^{2}\sigma_{2}^{2}(1-\delta^{2})-\frac{1}{2\sigma_{1}^{2}}\|\mathbf{M}-\mathbf{X}\boldsymbol{\theta}_{1}\|_{2}^{2} \\& \quad -\frac{1}{2\sigma_{2}^{2}(1-\delta^{2})}\|(\mathbf{R}-\mathbf{M}B-\mathbf{X}\boldsymbol{\theta}_{2})-\kappa(\mathbf{M}-\mathbf{X}\boldsymbol{\theta}_{1})\|_{2}^{2} \end{align*} $$

Multilevel Data: Two-level Likelihood

Second level model, for each subject $i$: $$(A_i,B_i,C_i) = (A,B,C) + (\eta^A_i, \eta^B_i, \eta^C_i)$$ where the errors $\eta$ are normally distributed. The two-level likelihood is conditionally convex. Two-stage fitting: plug in estimates from the first level. Block coordinate fitting: jointly optimize the first level likelihood + second level likelihood.

Theorem: Assume assumptions (A1)-(A6) are satisfied. Assume $\mathbb{E}(Z_{i_{t}}^{2})=q\lt \infty$, for $i=1,\dots,N$. Let $T=\min_{i}T_{i}$. 1.
If $\boldsymbol{\Lambda}$ is known, then the two-stage estimator $\hat{\delta}$ maximizes the profile likelihood of the model asymptotically, and $\hat{\delta}$ is $\sqrt{NT}$-consistent. 2. If $\boldsymbol{\Lambda}$ is unknown, then the profile likelihood of the model has a unique maximizer $\hat{\delta}$ asymptotically, and $\hat{\delta}$ is $\sqrt{NT}$-consistent, provided that $1/\varpi=\bar{\kappa}^{2}/\varrho^{2}=\mathcal{O}_{p}(1/\sqrt{NT})$, where $\kappa_{i}=\sigma_{i_{2}}/\sigma_{i_{1}}$, $\bar{\kappa}=(1/N)\sum\kappa_{i}$, and $\varrho^{2}=(1/N)\sum(\kappa_{i}-\bar{\kappa})^{2}$. Using the two-stage estimator $\hat{\delta}$, the CMLE of our model is consistent, as is the estimator for $\mathbf{b}=(A,B,C)$.

Theory: Summary

Under regularity conditions, with $N$ subjects and $T$ time points, our $\hat \delta$ is $\sqrt{NT}$-consistent. This relaxes the no-unmeasured-confounding assumption in mediation analysis. Our $(\hat{A},\hat{B}, \hat{C})$ is also consistent.

Simulations & Real Data Comparison

Our methods: GMA-h and GMA-ts. Previous methods: BK (Baron & Kenny), MACC (Zhao and Luo), KKB (Kenny et al). The other methods do not model the temporal correlations or time series like ours.

Simulations

Low bias for $AB$. Low bias for the temporal correlation. Gray dashed lines are the truth. GMA performs the best, and recovers the temporal correlations.

Real Data Experiment

Public data: OpenFMRI ds30. Stop-go experiment: withhold (STOP) from pressing buttons. We expect "STOP" stimuli to deactivate brain region M1. Goal: quantify the role of region preSMA.

Result

STOP deactivates M1 directly ($C$) and indirectly ($AB$). preSMA mediates a good portion of the total effect. This helps resolve the debates among neuroscientists. Other methods under-estimate the effects. Novel feedback findings: M1 → preSMA after lags 1 and 2 (not shown).

Discussion

Mediation analysis for multiple time series. Method: Granger causality + mediation. Optimizing a complex likelihood. Theory: identifiability, consistency. Result: low bias and improved accuracy.
Extension: functional mediation
Paper in Biometrics 2019; CRAN pkg: gma and references within

Covariate Assisted Principal Regression

Co-Authors: Yi Zhao (Indiana Univ Biostat), Bingkai Wang (Johns Hopkins Biostat), Stewart Mostofsky (Johns Hopkins Medicine), Brian Caffo (Johns Hopkins Biostat)

Statistics/Data Science Focuses

Motivating Example

Brain network connections vary by covariates (e.g. age/sex).
Goal: model how covariates change network connections.

Proposition: When (C1) $H=\boldsymbol{\mathrm{I}}$ in the optimization problem, for any fixed $\boldsymbol{\beta}$, the solution of $\boldsymbol{\gamma}$ is the eigenvector corresponding to the minimum eigenvalue of the matrix$$ \sum_{i=1}^{n}\frac{\Sigma_{i}}{\exp(x_{i}^\top\boldsymbol{\beta})} $$ We will focus on the constraint (C2).

Algorithm

Iteratively update $\beta$ and then $\gamma$; prove explicit updates.
Extension to multiple $\gamma$: after finding $\gamma^{(1)}$, we update $\Sigma_i$ by removing its effect, then search for the next principal direction (PD) $\gamma^{(k)}$, $k=2, \dotsc$, imposing orthogonality constraints such that $\gamma^{(k)}$ is orthogonal to all $\gamma^{(t)}$ for $t\lt k$.

Theory for $\beta$

Theorem: Assume $\sum_{i=1}^{n}x_{i}x_{i}^\top/n\rightarrow Q$ as $n\rightarrow\infty$. Let $T=\min_{i}T_{i}$ and $M_{n}=\sum_{i=1}^{n}T_{i}$; under the true $\boldsymbol{\gamma}$, we have\begin{equation}\sqrt{M_{n}}\left(\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\right)\overset{\mathcal{D}}{\longrightarrow}\mathcal{N}\left(\boldsymbol{\mathrm{0}},2 Q^{-1}\right),\quad \text{as } n,T\rightarrow\infty,\end{equation}where $\hat{\boldsymbol{\beta}}$ is the maximum likelihood estimator when the true $\boldsymbol{\gamma}$ is known.

Theory for $\gamma$

Theorem: Assume $\Sigma_{i}=\Gamma\Lambda_{i}\Gamma^\top$, where $\Gamma=(\boldsymbol{\gamma}_{1},\dots,\boldsymbol{\gamma}_{p})$ is an orthogonal matrix and $\Lambda_{i}=\mathrm{diag}\{\lambda_{i1},\dots,\lambda_{ip}\}$ with $\lambda_{ik}\neq\lambda_{il}$ ($k\neq l$), for at least one $i\in\{1,\dots,n\}$.
There exists $k\in\{1,\dots,p\}$ such that for all $i\in\{1,\dots,n\}$, $\boldsymbol{\gamma}_{k}^\top\Sigma_{i}\boldsymbol{\gamma}_{k}=\exp(x_{i}^\top\boldsymbol{\beta})$. Let $\hat{\boldsymbol{\gamma}}$ be the maximum likelihood estimator of $\boldsymbol{\gamma}_{k}$ in Flury ('84). Then, provided the assumptions are satisfied, $\hat{\boldsymbol{\beta}}$ from our algorithm is a $\sqrt{M_{n}}$-consistent estimator of $\boldsymbol{\beta}$.

Simulations

PCA and common PCA do not find the first principal direction, because they do not model covariates.

Resting-state fMRI

Regression coefficients: Age, Sex, Age*Sex. No statistically significant changes were found by massive edgewise regression.
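For the (C1) case, the proposition above reduces the $\gamma$-update to an eigen-problem. A minimal sketch (the function and variable names are my own, not the package's API):

```python
import numpy as np

def gamma_update(Sigmas, X, beta):
    """C1 case: gamma is the eigenvector of the smallest eigenvalue of
    sum_i Sigma_i / exp(x_i^T beta)."""
    p = Sigmas[0].shape[0]
    W = np.zeros((p, p))
    for Sigma_i, x_i in zip(Sigmas, X):
        W += Sigma_i / np.exp(x_i @ beta)
    vals, vecs = np.linalg.eigh(W)   # eigh returns eigenvalues in ascending order
    return vecs[:, 0]                # eigenvector of the minimum eigenvalue
```

With diagonal $\Sigma_i$ and zero covariates, the update recovers the coordinate direction of smallest pooled variance, as expected.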
Abbreviation: All An is an expanded category $\mathbf{M}=\langle M,\circ,\text{dom},\text{rng},\text{id},\vee,\wedge,^\smile\rangle$ such that allegory $...$ is …: $...$ $...$ is …: $...$ Remark: This is a template. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes. Let $\mathbf{A}$ and $\mathbf{B}$ be allegories. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a functor $F:A\rightarrow B$ that also preserves the new operations: $h(x ... y)=h(x) ... h(y)$ An is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that … $...$ is …: $axiom$ $...$ is …: $axiom$ Example 1: Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described. $\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$ [[...]] subvariety [[...]] expansion [[...]] supervariety [[...]] subreduct
The logarithm $\log_{b} (x)$ can be computed from the logarithms of $x$ and $b$ with respect to a positive base $k \neq 1$ using the following formula: $$\log_{b} (x) = \frac{\log_{k} (x)}{\log_{k} (b)}.$$ So your examples can be solved in the following way with a calculator: $$x = \log_{1.03} (2) = \frac{\log_{10} (2)}{\log_{10} (1.03)} = \frac{0.30103}{0.012837} = 23.450, $$ $$x = \log_{8} (33) = \frac{\log_{10} (33)}{\log_{10} (8)} = \frac{1.5185}{0.9031} = 1.681.$$ If you know that $b$ and $x$ are both powers of some $k$, then you can evaluate the logarithm without a calculator by the power identity of logarithms, e.g., $$x = \log_{81} (27) = \frac{\log_{3} (27)}{\log_{3} (81)} = \frac{\log_{3} (3^3)}{\log_{3} (3^4)} = \frac{3 \cdot \log_{3} (3)}{4 \cdot \log_{3} (3)} =\frac{3}{4}.$$
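The change-of-base rule is one line of code (a sketch; `log_base` is a made-up helper name, not a standard library function):

```python
import math

def log_base(x, b, k=10.0):
    # change of base: log_b(x) = log_k(x) / log_k(b), for any positive k != 1
    return math.log(x, k) / math.log(b, k)
```

Any intermediate base $k$ gives the same answer, which is exactly the point of the identity.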
The metric is $$d\bar{s}^2=2GJ\Omega^2\left(-(1+r^2)d\tau^2+\frac{dr^2}{1+r^2}+d\theta^2+\Lambda^2(d\varphi+rd\tau)^2\right)$$ where $$\Omega^2\equiv\frac{1+\cos^2\theta}{2},\quad\Lambda\equiv\frac{2\sin\theta}{1+\cos^2\theta}$$ It is said that the metric has an enhanced $SL(2,\mathbb{R})\times U(1)$ isometry group. Now what exactly is an enhanced symmetry? I only find it mentioned in the context of string theory, so I'm not sure what to do with it. If we ignore that for a moment and look at the groups in question, $SL(2,\mathbb{R})$ has 3 generators ($SL(n,\mathbb{R})$ has $n^2-1$ generators), and $U(1)$ has one. The rotational $U(1)$ symmetry is generated by the Killing vector: $$\xi_0=-\partial_\varphi$$ while time translations become part of an enhanced $SL(2,\mathbb{R})$ isometry group generated by the Killing vectors $$\xi_1=\frac{2r\sin\tau}{\sqrt{1+r^2}}\partial_\tau-2\sqrt{1+r^2}\cos\tau\partial_r+\frac{2\sin\tau}{\sqrt{1+r^2}}\partial_\varphi$$ $$\xi_2=-\frac{2r\cos\tau}{\sqrt{1+r^2}}\partial_\tau-2\sqrt{1+r^2}\sin\tau\partial_r-\frac{2\cos\tau}{\sqrt{1+r^2}}\partial_\varphi$$ $$\xi_3=2\partial_\tau$$ Now I wanted to try and find them, but that proved to be quite a challenge (I may try to do it in the end). So instead I wanted to check that they satisfy the Killing equation. Another way of checking whether they are Killing vectors is to check whether the Lie derivative of the metric along them vanishes: $$\mathcal{L}_\xi g_{\mu\nu}=\xi^\sigma\partial_\sigma g_{\mu\nu}+g_{\sigma\nu}\partial_\mu\xi^\sigma+g_{\mu\sigma}\partial_\nu\xi^\sigma=0$$ So I plug in the two simplest ones ($\xi_0$ and $\xi_3$), and they give 0 for each component. Nice. I try with $\xi_2$, and I get nonzero components. So what is wrong with my interpretation? I did the calculation by hand and with the RGTC package in Mathematica, using LieD, which calculates the Lie derivative; I also wrote code that checks the Killing equation directly, and still got a nonzero result.
Edit: I'll write out what I get for the $\tau\tau$ component. My Killing vector $\xi_2$ has three nonzero components: $\xi^\tau=\frac{2r\sin\tau}{\sqrt{1+r^2}}$, $\xi^r=-2\sqrt{1+r^2}\cos\tau$, $\xi^\varphi=\frac{2\sin\tau}{\sqrt{1+r^2}}$ and the $\tau\tau$ part of the Lie derivative is $$\mathcal{L}_\xi g_{\tau\tau}=\xi^\sigma\partial_\sigma g_{\tau\tau}+g_{\sigma\tau}\partial_\tau\xi^\sigma+g_{\tau\sigma}\partial_\tau\xi^\sigma$$ The only nonzero metric components that can enter are $g_{\tau\tau}$ and $g_{\tau\varphi}=g_{\varphi\tau}$. They are $$g_{\tau\tau}=-2GJ\Omega(\theta)^2(1+r^2(1-\Lambda(\theta)^2))$$ $$g_{\varphi\tau}=4GJr\Omega(\theta)^2\Lambda(\theta)^2$$ For the first part of the Lie derivative, since $g_{\tau\tau}$ depends only on $r$ and $\theta$, the only contribution is from $\xi^r$, since $\xi^\theta=0$. For the second part we have twice $(g_{\tau\tau}\partial_\tau\xi^\tau+g_{\varphi\tau}\partial_\tau\xi^\varphi)$. After some simplifying this becomes $$\mathcal{L}_\xi g_{\tau\tau}=\frac{8 G J r \Lambda (\theta )^2 \Omega (\theta )^2 \cos (\tau )}{\sqrt{r^2+1}}$$
We receive a stream of $n-1$ pairwise different numbers from the set $\left\{1,\dots,n\right\}$. How can I determine the missing number with an algorithm that reads the stream once and uses a memory of only $O(\log_2 n)$ bits?

You know $\sum_{i=1}^n i = \frac{n(n+1)}{2}$, and because $S = \frac{n(n+1)}{2}$ can be coded in $O(\log n)$ bits, this can be done in $O(\log n)$ memory and in one pass (just find $S - \mathrm{currentSum}$; this is the missing number). But this problem can also be solved in a more general case (for constant $k$): we have $k$ missing numbers, find all of them. In this case, instead of calculating just the sum of the $y_i$, calculate the sum of the $j$-th powers of the $x_i$ for all $1\le j \le k$ (here the $x_i$ are the missing numbers and the $y_i$ are the input numbers): $\qquad \displaystyle \begin{align} \sum_{i=1}^k x_i &= S_1,\\ \sum_{i=1}^k x_i^2 &= S_2,\\ &\vdots \\ \sum_{i=1}^k x_i^k &= S_k \end{align}$ $\qquad (1)$ Remember that you can calculate $S_1,\dots,S_k$ easily, because $S_1 = S - \sum y_i$, $S_2 = \sum i^2 - \sum y_i^2$, ... Now to find the missing numbers you solve $(1)$ for the $x_i$. You can compute: $P_1 = \sum x_i$, $P_2 = \sum x_i\cdot x_j$, ... , $P_k = \prod x_i$ $(2)$. For this, remember that $P_1 = S_1$, $P_2 = \frac{S_1^2 - S_2}{2}$, ... The $P_i$ are (up to sign) the coefficients of $P=(x-x_1)\cdot (x-x_2) \cdots (x-x_k)$, and $P$ factors uniquely, so you can find the missing numbers. These are not my thoughts; read this.

From the comment above: Before processing the stream, allocate $\lceil \log_2 n \rceil$ bits, in which you write $x:= \bigoplus_{i=1}^n \mathrm{bin}(i)$ ($\mathrm{bin}(i)$ is the binary representation of $i$ and $\oplus$ is pointwise exclusive-or). Naively, this takes $\mathcal{O}(n)$ time.
Upon processing the stream, whenever one reads a number $j$, compute $x := x \oplus \mathrm{bin}(j)$. Let $k$ be the single number from $\{ 1, ... n\}$ that is not included in the stream. After having read the whole stream, we have $$ x = \left(\bigoplus_{i=1}^n \mathrm{bin}(i)\right) \oplus \left(\bigoplus_{i \neq k } \mathrm{bin}(i)\right) = \mathrm{bin}(k) \oplus \bigoplus_{i \neq k } (\mathrm{bin}(i) \oplus \mathrm{bin}(i)) = \mathrm{bin}(k), $$ yielding the desired result. Hence, we used $\mathcal{O}(\log n)$ space, and have an overall runtime of $\mathcal{O}(n)$. HdM's solution works. I coded it in C++ to test it. I can't limit the value to $O(\log_2 n)$ bits, but I'm sure you can easily show how only that number of bits is actually set. For those that want pseudo code, using a simple $\text{fold}$ operation with exclusive or ($\oplus$): $$\text{Missing} = \text{fold}(\oplus, \{1,\ldots,N\} \cup \text{InputStream})$$ Hand-wavey proof: A $\oplus$ never requires more bits than its input, so it follows that no intermediate result in the above requires more than the maximum bits of the input (so $O(\log_2 n)$ bits). $\oplus$ is commutative, and $x \oplus x = 0$, thus if you expand the above and pair off all data present in the stream you'll be left only with a single un-matched value, the missing number. 
#include <iostream>
#include <vector>
#include <cstdlib>
#include <ctime>      // needed for time(0)
#include <algorithm>

using namespace std;

void find_missing( int const * stream, int len );

int main( int argc, char ** argv )
{
    if( argc < 2 )
    {
        cerr << "Syntax: " << argv[0] << " N" << endl;
        return 1;
    }
    int n = atoi( argv[1] );

    //construct sequence
    vector<int> seq;
    for( int i=1; i <= n; ++i )
        seq.push_back( i );

    //remove a number and remember it
    srand( unsigned(time(0)) );
    int remove = (rand() % n) + 1;
    seq.erase( seq.begin() + (remove - 1) );
    cout << "Removed: " << remove << endl;

    //give the stream a random order
    std::random_shuffle( seq.begin(), seq.end() );

    find_missing( &seq[0], int(seq.size()) );
}

//HdM's solution
void find_missing( int const * stream, int len )
{
    //create initial value of n sequence xor'ed (n == len+1)
    int value = 0;
    for( int i=0; i < (len+1); ++i )
        value = value ^ (i+1);

    //xor all items in stream
    for( int i=0; i < len; ++i, ++stream )
        value = value ^ *stream;

    //what's left is the missing number
    cout << "Found: " << value << endl;
}
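For comparison, a compact Python sketch of both one-pass approaches described above (the running-sum trick and the XOR fold); the function names are mine:

```python
def missing_by_sum(stream, n):
    # the running sum needs only O(log n) bits of state
    return n * (n + 1) // 2 - sum(stream)

def missing_by_xor(stream, n):
    # fold XOR over {1..n} and over the stream; matched pairs cancel,
    # leaving the binary representation of the missing number
    acc = 0
    for i in range(1, n + 1):
        acc ^= i
    for j in stream:
        acc ^= j
    return acc
```

Both read the stream once; the XOR variant never needs more bits than the largest input, while the sum variant needs at most about twice that.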
Also read: Circle Formulas, Area Formulas, Lines and Angles, Coordinate Geometry, Conic Section Formula

The Straight Line: According to Euclidean geometry, a straight line is simply a breadthless length; it is the collection of points which follow a straight path. Read on for the straight line formulas in this article.

Straight Line Formulas: Earlier we studied the basics of coordinate geometry. In this section we review all the formulas and results related to straight lines. To make them convenient to read, we have listed all the important formulas below.

1. Distance Formula: Suppose two points P(\(x_1, y_1\)) and Q(\(x_2, y_2\)); then: \(\left| {PQ} \right| = \sqrt {{{\left( {{x_2} – {x_1}} \right)}^2} + {{\left( {{y_2} – {y_1}} \right)}^2}}\)

2. Mid Point Formula: For two points P(\(x_1, y_1\)) and Q(\(x_2, y_2\)), the midpoint of the segment PQ is: \(\overline {PQ} = \left( {\frac{{{x_1} + {x_2}}}{2},\frac{{{y_1} + {y_2}}}{2}} \right)\)

3. Any point formula: Case 1. When the point A(x, y) divides the line PQ internally in the ratio \(k_1 : k_2\): x = \(\frac{{{k_1}{x_2} + {k_2}{x_1}}}{{{k_1} + {k_2}}}\), y = \(\frac{{{k_1}{y_2} + {k_2}{y_1}}}{{{k_1} + {k_2}}}\) Case 2. When the point A(x, y) divides the line PQ externally in the ratio \(k_1 : k_2\): x = \(\frac{{{k_1}{x_2} – {k_2}{x_1}}}{{{k_1} – {k_2}}}\), y = \(\frac{{{k_1}{y_2} – {k_2}{y_1}}}{{{k_1} – {k_2}}}\)

4. Slope of a line: for any line \(\overline {PQ}\), the slope is given by: m = \(\frac{{{y_2} – {y_1}}}{{{x_2} – {x_1}}}\) Notes: The slope of the x-axis, and of any line parallel to it, is zero. The slope of the y-axis, and of any line parallel to it, is undefined (often written as ∞).

5. Straight line results on equations: The equation of the x-axis is y = 0.
The equation of a line parallel to the x-axis at a distance 'a' from it is y = a. Similarly, the equation of a line parallel to the y-axis at a distance 'b' from it is x = b.

6. General equations involving the straight line formulas:

Slope-intercept form: The equation of the line having slope m and y-intercept c is y = mx + c.

Point-slope form: The equation of the line having slope m and passing through (\(x_1, y_1\)) is y – \(y_1\) = m(x – \(x_1\)).

Equation of the line passing through two points: \(\frac{{y – {y_1}}}{{{y_2} – {y_1}}} = \frac{{x – {x_1}}}{{{x_2} – {x_1}}}\)

Intercept form: The equation of the line having x-intercept 'a' and y-intercept 'b' is given as: \(\frac{x}{a} + \frac{y}{b}\) = 1

Normal form: The normal form of a straight line is given by the equation: \( x\cos \alpha + y\sin \alpha = p\), where p is the length of the perpendicular from O(0,0) to the line, and α is the inclination of the perpendicular.

General form: The general form of the straight line equation is: ax + by + c = 0.

Suppose two lines \(l_1\) having slope \(m_1\) and \(l_2\) having slope \(m_2\):

If the lines \(l_1\) and \(l_2\) are parallel then \(m_1 = m_2\).

If the lines \(l_1\) and \(l_2\) are perpendicular then \(m_1 m_2 = -1\).

If θ is the angle between \(l_1\) and \(l_2\) then tan θ = \(\frac{{{m_2} – {m_1}}}{{1 + {m_1}{m_2}}}\)

Distance of a point from a line: The distance of the point (\(x_1, y_1\)) from the line ax + by + c = 0 is: \(\frac{{\left| {a{x_1} + b{y_1} + c} \right|}}{{\sqrt {{a^2} + {b^2}} }}\)

Three concurrent lines: The three lines \(a_1x + b_1y + c_1 = 0\), \(a_2x + b_2y + c_2 = 0\), \(a_3x + b_3y + c_3 = 0\) are concurrent if \(\left| \begin{array}{ccc} {a_1} & {b_1} & {c_1} \\ {a_2} & {b_2} & {c_2} \\ {a_3} & {b_3} & {c_3} \end{array} \right| = 0\)
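The point-to-line distance formula above translates directly into code (a sketch; the function name is mine):

```python
import math

def point_line_distance(x1, y1, a, b, c):
    # distance of (x1, y1) from the line a*x + b*y + c = 0
    return abs(a * x1 + b * y1 + c) / math.hypot(a, b)
```

For example, the distance from the origin to the line x + y − 2 = 0 is |−2|/√2 = √2.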
I have this curve. It's definitely not a sine or cosine. It consists of half circles. What is it called, and how do you describe it mathematically?

Let's define a family of functions ($ r $ is the radius of a semicircle and $ n\in \mathbb {Z}$): $$f_n(x, r)=(-1)^n\sqrt {r^2-(x-2nr)^2}$$ Then the function you're looking for is (viewing functions as sets): $$F (x, r)=\bigcup_{n\in \mathbb {Z}} f_n (x, r)$$ I don't know the name for this function and I honestly don't think it has one.

To avoid convergence issues due to a Fourier series, you could realize the periodicity using functions like Mod and Floor: $$ \sqrt{1-(((x+1) \bmod 2)-1)^2} \left(1-2 \left\lfloor (\frac{x+1}{2} \bmod 2)\right\rfloor \right) $$

Note that a half-circle centered at $x=c$ with radius $r$ is of the form $y = \pm \sqrt{r^2-(x-c)^2}$, where the sign is based on the center of the circle. Now we can use a step function to jump discretely from one half-circle to the next. Let $n = \text{round}(\frac{x}{2r})$, where "round" rounds to the nearest integer (and rounds arbitrarily when its argument is halfway between two integers), and let $c = 2rn$, the center of the nearest semicircle. Let $s = 1 - 2(n \bmod 2)$, which gives 1 or $-1$, respectively, when the semicircle at $c$ goes above or below the $x$ axis. Then we get the formula for the entire curve $y = s\sqrt{r^2-(x-c)^2}$.

Bruteforced answer (don't take this too seriously) \begin{equation} y=\pm\sqrt{1-(x-2n)^2}, \qquad n\in\mathbb{Z} \\ y \ge 0 \quad \text{if} \quad 2\mid n \\ y \le 0 \quad \text{if} \quad 2\nmid n \end{equation} Replace $1$ with $r^2$ if you want the semi-circles of arbitrary radius.
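The step-function construction above can be sketched numerically (assuming semicircles of radius $r$ centered at even multiples of $r$, as in the brute-forced answer):

```python
import math

def semicircle_wave(x, r=1.0):
    # Semicircles of radius r centered at even multiples of r,
    # alternating above and below the x-axis.
    n = round(x / (2.0 * r))           # index of the nearest semicircle
    c = 2.0 * r * n                    # its center
    s = 1.0 if n % 2 == 0 else -1.0    # even-indexed arcs lie above the axis
    return s * math.sqrt(max(r * r - (x - c) ** 2, 0.0))
```

The `max(..., 0.0)` guards against tiny negative values from floating-point rounding at the joins.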
The Fourier series corresponding to the curve is of the form: $$y(x)=a_0+\sum_{n=1}^\infty \big(a_n \cos(\frac{n\pi x}{2r})+b_n \sin(\frac{n\pi x}{2r}) \big)$$ with $a_0=0$ and $b_n=0$. $$a_n=\frac{1}{L}\int_{-L}^L f(x) \cos(\frac{n\pi x}{L})dx=\frac{1}{2r}\int_{-2r}^{2r} f(x) \cos(\frac{n\pi x}{2r})dx$$ where $L=2r$ and $f(x)=\sqrt{r^2-x^2}$ in $-r<x<r$, while $f(x)=-\sqrt{r^2-x^2}$ in $-2r<x<-r$ and in $r<x<2r$. The functions are even, which allows the simplification: $$a_n=\frac{4}{2r}\int_{0}^{r} \sqrt{r^2-x^2} \cos(\frac{n\pi x}{2r})dx=\frac{2r}{n}J_1(\frac{\pi}{2}n)$$ $J_1$ is the Bessel function of the first kind and of order 1. $$y(x)=2r\sum_{n=1}^\infty \frac{1}{n}J_1(\frac{\pi}{2}n) \cos(\frac{n\pi x}{2 r})$$ This is only of theoretical interest. From a practical viewpoint, drawing the curve from the Fourier series is of no interest at all: the series is far from quickly convergent, and the numerical computation of the Bessel functions would be too time-consuming. 30 terms of the series given by Plot[Pi (BesselJ[1, Pi/2] Sin[x] - (BesselJ[1, (3 Pi)/2] Sin[3 x])/ 3 + (BesselJ[1, (5 Pi)/2] Sin[5 x])/ 5 - (BesselJ[1, (7 Pi)/2] Sin[7 x])/ 7 + (BesselJ[1, (9 Pi)/2] Sin[9 x])/9 ), {x, -50, 50}] has blocks that look like semi-circles of the form: Plot [Sqrt[ (Pi/2)^2 - (x - Pi/2)^2], {x, 0,Pi } ]

It is not "a" single smooth curve. It is a waveform, an infinite series of arcs with a second-order discontinuity at $ x = (2 k-1) \pi/2 $; point and slope continuity are fine. At every point on the x-axis the curvature jumps sign, flipping between its negative and positive value again and again. We can evaluate the coefficients in a Fourier series as for any periodic wave, since smoothness is not required. In electrical engineering applications such waves would create infinite current at every jump point, so they cannot be adopted.
If you want a non-infinite slope, then they are not full semi-circles. The wave can be defined as a sloping straight line and then as a circle from the tangent point. Integrating the ODE $$ \dfrac {y'' y}{1+ y'^2 }= - 2 $$ you get a curve (the elastica) having proportion 1.2:1 width/height, with better curvature continuity.
The Kelvin–Helmholtz mechanism is an astronomical process that occurs when the surface of a star or a planet cools. The cooling causes the pressure to drop, and the star or planet shrinks as a result. This compression, in turn, heats up the core of the star/planet. The mechanism is evident on Jupiter and Saturn and on brown dwarfs, whose central temperatures are not high enough to undergo nuclear fusion. It is estimated that Jupiter radiates more energy through this mechanism than it receives from the Sun, but Saturn might not. This process causes Jupiter to shrink at a rate of two centimetres each year. [1] The mechanism was originally proposed by Kelvin and Helmholtz in the late 19th century to explain the source of energy of the Sun. By the mid-19th century, conservation of energy had been accepted, and one consequence of this law of physics is that the Sun must have some energy source to continue to shine. Because nuclear reactions were unknown, the main candidate for the source of solar energy was gravitational contraction. However, it was soon recognized by Sir Arthur Eddington and others that the total amount of energy available through this mechanism would allow the Sun to shine for only millions of years, rather than the billions of years that the geological and biological evidence suggested for the age of the Earth. (Kelvin himself had argued that the Earth was millions, not billions, of years old.) The true source of the Sun's energy remained uncertain until the 1930s, when it was shown by Hans Bethe to be nuclear fusion. It was theorised that the gravitational potential energy from the contraction of the Sun could be its source of power. To calculate the total amount of energy that would be released by the Sun in such a mechanism (assuming uniform density), it was approximated as a perfect sphere made up of concentric shells.
The gravitational potential energy could then be found as the integral over all the shells from the centre to the outer radius. Gravitational potential energy from Newtonian mechanics is defined as: $$U = -\frac{Gm_1m_2}{r},$$ where $G$ is the gravitational constant, and the two masses in this case are that of the thin shell of width $dr$ and the mass contained within radius $r$, as one integrates between zero and the radius of the total sphere. This gives: $$U = -G\int_0^R \frac{m(r)\, 4 \pi r^2 \rho}{r}\, dr,$$ where $R$ is the outer radius of the sphere, and $m(r)$ is the mass contained within radius $r$. Changing $m(r)$ into a product of volume and density to evaluate the integral: $$U = -G\int_0^R \frac{4 \pi r^3 \rho \cdot 4 \pi r^2 \rho}{3r}\, dr = -\frac{16}{15}G \pi^2 \rho^2 R^5.$$ Recasting in terms of the mass of the sphere gives the total gravitational potential energy as $$U = -\frac{3M^2G}{5R}.$$ Then, applying the virial theorem, half of this energy is radiated during the collapse, giving the total radiated energy: $$U_\text{r} = \frac{3M^2G}{10R}.$$ While uniform density is not correct, one can get a rough order-of-magnitude estimate of the expected age of our star by inserting known values for the mass and radius of the Sun, and then dividing by the known luminosity of the Sun (note that this involves another approximation, as the power output of the Sun has not always been constant): $$\frac{U_\text{r}}{L_\odot} \approx \frac{1.1 \times 10^{41}~\text{J}}{3.9 \times 10^{26}~\text{W}} \approx 8\,900\,000~\text{years},$$ where $L_\odot$ is the luminosity of the Sun. While giving enough power for considerably longer than many other physical mechanisms, such as chemical energy, this value was clearly still not long enough due to geological and biological evidence that the Earth was billions of years old. It was eventually discovered that thermonuclear energy was responsible for the power output and long lifetimes of stars. [3] References ^ Patrick G. J. Irwin (2003).
Giant Planets of Our Solar System: Atmospheres, Composition, and Structure. Springer. ^ BW Carroll & DA Ostlie (2007). An Introduction to Modern Astrophysics (2nd Ed.). Pearson Addison Wesley. pp. 296–298. ^ R. Pogge (2006-01-15). "The Kelvin-Helmholtz Mechanism". Lecture 12: As Long as the Sun Shines. Ohio State University. Retrieved 2009-11-05.
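The back-of-the-envelope estimate in the article can be reproduced numerically (a sketch using standard solar values; small variations in the constants shift the result by a few percent):

```python
# Standard solar constants (approximate)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.957e8        # solar radius, m
L_SUN = 3.828e26       # solar luminosity, W
SECONDS_PER_YEAR = 3.156e7

# half of the uniform-density potential energy is radiated (virial theorem)
U_r = 3 * G * M_SUN**2 / (10 * R_SUN)

# Kelvin-Helmholtz timescale: radiated energy divided by luminosity
t_kh_years = U_r / L_SUN / SECONDS_PER_YEAR
```

This lands in the vicinity of ten million years, the same order of magnitude quoted in the article.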
Let us say that you are taking AP Statistics. The prerequisite is a passing grade of D or above in Algebra II. The kids that you are working with struggle with algebra and do not retain information very well. Even though you spent a month talking about z-scores and how to find them using the invNorm() function, a lot of them are still confused about what to do and need someone to spoonfeed them simplified information that does not contain too many technical terms. Your challenge now is to teach them confidence intervals, which involves dealing with a) the concept, b) the math, c) the interpretation, and finally d) the misconceptions. As you can see, this is an uphill battle, and what worsens it is the student apathy. You review how to find the z-scores bounding the middle 95% using the invNorm() function. You explain that the first entry must be the area to the left; however, when they see $z_{\alpha}$ they think of the area to the right. This has been an ongoing confusion for a month despite you repeating the same thing over weeks. Now you try giving them a motivating example: "Let's say you wanted to figure out the population mean length of all the world's bald eagles' wingspans. This is our parameter. Remember, a parameter is a numerical value assigned to a whole population. You take a sample of 100 bald eagles and measure their wingspans. We call this sample mean a point estimate. Do you guys think that this point estimate accurately reflects the population mean?" (Introducing vocabulary.) You transition into talking about 95% confidence intervals conceptually. They are lost. Then you present them this formula. They are now completely and hopelessly lost: $ \bar{x} - z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}} < \mu < \bar{x} + z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}} $ You draw a picture showing that the level of confidence is $1-\alpha$ and that the remaining areas we don't want are $\frac{\alpha}{2}$ and $\frac{\alpha}{2}$. They don't get what's going on.
Twenty minutes in, some students are no longer paying attention and are doing another class's work.
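For reference, the invNorm()-style computation the class keeps tripping over can be reproduced with Python's standard library (a sketch):

```python
from statistics import NormalDist

# Middle 95% means alpha = 0.05. invNorm (and inv_cdf here) takes the
# area to the LEFT, so the upper critical value uses 1 - alpha/2 = 0.975.
alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96
```

This makes the left-area convention explicit: feeding in 0.975, not 0.95, is exactly the point the students keep missing.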
This question already has an answer here: I have to prove that $$\lim_{n\to \infty} \frac {n^n} {n!}=\infty$$ I've tried to look for a lower bound that also diverges to $\infty$ (I don't know if I'm explaining myself correctly), but I haven't found one yet. Applying L'Hôpital is way too complicated with $n!$, and the epsilon proof does not work as I have no way whatsoever of finding $N$. Any ideas?
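One lower bound of the kind asked for (a sketch of a standard argument):

```latex
\frac{n^n}{n!}
  = \frac{n}{1}\cdot\frac{n}{2}\cdots\frac{n}{n}
  = \prod_{k=1}^{n}\frac{n}{k}
  \;\ge\; \frac{n}{1} = n \longrightarrow \infty,
```

since every factor $n/k$ is at least $1$, and the $k=1$ factor alone equals $n$.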
As we know, if there is a time-derivative interaction in $\mathcal L_\mathrm{int}$, then $\mathcal{H}_\mathrm{int}\neq -\mathcal{L}_\mathrm{int}$. For example, in scalar QED, $$ \begin{aligned} \mathcal{L}_\mathrm{int}&= -ie \phi^\dagger(\partial_\mu \phi) A^\mu+ie(\partial_\mu \phi^\dagger) \phi A^\mu +e^2\phi^\dagger \phi A_\mu A^\mu \\ \mathcal{H}_\mathrm{int}&=-\mathcal{L}_\mathrm{int} -e^2 \phi^\dagger\phi (A^0)^2 \end{aligned} $$ The last term breaks Lorentz invariance. Derivation: \begin{eqnarray} \mathcal{L}&=&(\partial_\mu+i e A_\mu)\phi (\partial^\mu-i e A^\mu)\phi^\dagger-m^2\phi^\dagger \phi\\ &=&\mathcal{L}_0^\mathrm{KG}+\mathcal{L}_\mathrm{int} \end{eqnarray} where $$\mathcal{L}_0^\mathrm{KG}=\partial_\mu\phi \partial^\mu\phi^\dagger-m^2\phi^\dagger \phi$$ $$\mathcal{L}_\mathrm{int}= -ie \phi^\dagger(\partial_\mu \phi) A^\mu+ie(\partial_\mu \phi^\dagger) \phi A^\mu +e^2\phi^\dagger \phi A_\mu A^\mu $$ $$\pi=\frac{\partial \mathcal{ L}}{\partial(\partial_0 \phi)}=\partial^0\phi^\dagger-i e A^0 \phi^\dagger$$ $$\pi^\dagger=\frac{\partial \mathcal{ L}}{\partial(\partial_0 \phi^\dagger)}=\partial^0\phi+i e A^0 \phi $$ \begin{eqnarray} \mathcal{H}&=&\pi \dot\phi+\pi^\dagger \dot\phi^\dagger-\mathcal{L} \\ &=&\pi \dot\phi+\pi^\dagger \dot\phi^\dagger-(\dot\phi^\dagger\dot\phi-\nabla\phi^\dagger \cdot\nabla\phi-m^2 \phi^\dagger\phi)-\mathcal{L}_\mathrm{int} \\ &=&\pi(\pi^\dagger-ieA^0\phi)+\pi^\dagger (\pi +ieA^0\phi^\dagger)-((\pi^\dagger-ieA^0\phi)(\pi +ieA^0\phi^\dagger)-\nabla\phi^\dagger \cdot\nabla\phi-m^2 \phi^\dagger\phi)-\mathcal{L}_\mathrm{int}\\ &=&(\pi^\dagger \pi + \nabla\phi^\dagger \cdot\nabla\phi+m^2 \phi^\dagger\phi)-\mathcal{L}_\mathrm{int} -e^2 \phi^\dagger\phi (A^0)^2 \\ &=&\mathcal{H}_0^\mathrm{KG}+\mathcal{H}_\mathrm{int} \end{eqnarray} My questions: The Feynman rules for scalar QED are given here.
But we see there is an extra term in the interaction Hamiltonian, $-e^2 \phi^\dagger\phi (A^0)^2$; according to Wick's theorem, it should contribute to the Feynman rules, yet no such contribution appears in this textbook. I've computed this vertex and I find it's nonzero. Why are there no Feynman rules for such a Lorentz-breaking term? As we know, for path integral quantization, the coordinate space path integral is: $$Z_1= \int D q\ \exp\left(\int dt\ L(q,\dot q) \right)$$ and the phase space path integral is: $$Z_2= \int D p\, D q\ \exp\left(\int dt\ p\dot q -H(p,q) \right)$$ Only for a Lagrangian of the type $L=\dot q^2-V(q)$ do we have $Z_1=Z_2$. (The Feynman rules for scalar QED in the textbook are the same as those derived from the coordinate space path integral.) I consider the second type of path integral quantization to be always equivalent to canonical quantization. So for scalar QED, are these two kinds of path integral quantization the same? How does one prove it? For non-abelian gauge theory, there are derivative interactions even among the gauge fields themselves. It seems that all textbooks use $Z_1$ to get the Feynman rules. Are these two kinds of path integral quantization the same for non-abelian gauge fields? If they are not the same, why do we choose the coordinate space path integral? Is it an axiom, justified because it coincides with experiment?
The $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z}_5, +, \cdot)$

1. Background

This is a formal introduction to the genetic code $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z}_5, +, \cdot)$. This mathematical model is defined based on the physicochemical properties of DNA bases (see previous post). This introduction can be complemented with a Wolfram Computable Document Format (CDF) file named IntroductionToZ5GeneticCodeVectorSpace.cdf, available in GitHub. This is a graphical user interface with an interactive didactic introduction to the mathematical biology background that is explained here. To interact with a CDF, users require the Wolfram CDF Player or Mathematica. The Wolfram CDF Player is freely available (easy installation on Windows OS and on Linux OS).

2. Biological mathematical model

If the Watson-Crick base pairings are symbolically expressed by means of the sum "+" operation, in such a way that G + C = C + G = D and U + A = A + U = D hold, then this requirement leads us to define an additive group $(\mathfrak{B}, +)$ on the set of five DNA bases $\mathfrak{B}$. Explicitly, it was required that bases with the same number of hydrogen bonds in the DNA molecule and of different chemical types be algebraically inverse to each other in the additive group defined on the set of DNA bases $\mathfrak{B}$. In fact, eight sum tables (like the one shown below) satisfying these constraints can be defined on eight ordered sets: {D, A, C, G, U}, {D, U, C, G, A}, {D, A, G, C, U}, {D, U, G, C, A}, {G, A, U, C}, {G, U, A, C}, {C, A, U, G} and {C, U, A, G} [1,2]. The sets originated by these base orders are called the strong-weak ordered sets of bases [1,2] since, for each one of them, the algebraic-complementary bases are DNA complementary bases as well, pairing with three hydrogen bonds (strong, G:::C) and two hydrogen bonds (weak, A::U). We shall denote this set SW.
The set of extended base triplets is defined as $\mathfrak{B}^3$ = { XYZ | X, Y, Z $\in\mathfrak{B}$ }, where, to keep the usual biological notation for codons, the triplet of letters $XYZ\in\mathfrak{B}^3$ denotes the vector $(X,Y,Z)\in\mathfrak{B}^3$ and $\mathfrak{B} =$ {D, A, C, G, U}. An Abelian group on the set of extended triplets can be defined as the direct third power of the group: $(\mathfrak{B}^3,+) = (\mathfrak{B},+)\times(\mathfrak{B},+)\times(\mathfrak{B},+)$ where X, Y, Z $\in\mathfrak{B}$, and the operation "+" is as shown in the table below [2]. Next, for all elements $\alpha\in\mathbb{Z}_{(+)}$ (the set of positive integers) and for all codons $XYZ\in(\mathfrak{B}^3,+)$, the element $\alpha \bullet XYZ = \overbrace{XYZ+XYZ+\dots+XYZ}^{\hbox{$\alpha$ times}}\in(\mathfrak{B}^3,+)$ is well defined. In particular, $0 \bullet XYZ =$ DDD for all $XYZ\in(\mathfrak{B}^3,+)$. As a result, $(\mathfrak{B}^3,+)$ is a three-dimensional (3D) $\mathbb{Z}_5$-vector space over the field $(\mathbb{Z}_5, +, \cdot)$ of the integers modulo 5, which is isomorphic to the Galois field GF(5). Notice that the Abelian groups $(\mathbb{Z}_5, +)$ and $(\mathfrak{B},+)$ are isomorphic. For the sake of brevity, the same notation $\mathfrak{B}^3$ will be used to denote the group $(\mathfrak{B}^3,+)$ and the vector space defined on it.

$$\begin{array}{c|ccccc} + & D & A & C & G & U \\ \hline D & D & A & C & G & U \\ A & A & C & G & U & D \\ C & C & G & U & D & A \\ G & G & U & D & A & C \\ U & U & D & A & C & G \end{array}$$

This operation is only one of the eight sum operations that can be defined on the ordered sets of bases from SW.

3. The canonical base of the $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$

Next, in the vector space $\mathfrak{B}^3$, the vectors (extended codons) $e_1 =$ ADD, $e_2 =$ DAD and $e_3 =$ DDA are linearly independent, i.e., $\sum\limits_{i=1}^3 c_i e_i =$ DDD implies $c_1 = c_2 = c_3 = 0$ for $c_1, c_2, c_3 \in\mathbb{Z}_5$.
Moreover, the representation of every extended triplet $XYZ\in\mathfrak{B}^3$ over the field $\mathbb{Z}_5$ as $XYZ=xe_1+ye_2+ze_3$ is unique, and the generating set $e_1, e_2, e_3$ is a canonical base for the $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$. It is said that the elements $x, y, z \in\mathbb{Z}_5$ are the coordinates of the extended triplet $XYZ\in\mathfrak{B}^3$ in the canonical base ($e_1, e_2, e_3$) [3].

References

1. José M V, Morgado ER, Sánchez R, Govezensky T. The 24 Possible Algebraic Representations of the Standard Genetic Code in Six or in Three Dimensions. Adv Stud Biol, 2012, 4:119-52.
2. Sánchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527-60.
3. Sánchez R, Grau R. An algebraic hypothesis about the primeval genetic code architecture. Math Biosci, 2009, 221:60-76.
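The construction above can be checked mechanically. The sketch below (my own, not from the post) uses the assignment D→0, A→1, C→2, G→3, U→4, which is one choice consistent with the sum table shown above; with it, base sums reduce to addition mod 5 and the coordinates of a triplet in the canonical base $e_1 =$ ADD, $e_2 =$ DAD, $e_3 =$ DDA are just the componentwise $\mathbb{Z}_5$ images.

```python
# Minimal sketch of the Z5 encoding of the extended base set {D, A, C, G, U}.
# The assignment below matches the sum table in the post (one of eight choices).
BASE_TO_Z5 = {'D': 0, 'A': 1, 'C': 2, 'G': 3, 'U': 4}
Z5_TO_BASE = {v: k for k, v in BASE_TO_Z5.items()}

def base_sum(x, y):
    """Sum of two bases, computed as addition in Z5."""
    return Z5_TO_BASE[(BASE_TO_Z5[x] + BASE_TO_Z5[y]) % 5]

def triplet_coords(xyz):
    """Coordinates (x, y, z) of an extended triplet in the canonical base
    e1 = ADD, e2 = DAD, e3 = DDA: componentwise Z5 images."""
    return tuple(BASE_TO_Z5[b] for b in xyz)

# Watson-Crick pairs are additive inverses: G + C = D and A + U = D.
print(base_sum('G', 'C'), base_sum('A', 'U'))  # -> D D
print(triplet_coords('ACG'))                   # -> (1, 2, 3)
```

The pairing requirement G + C = D and A + U = D holds automatically because 3 + 2 ≡ 0 and 1 + 4 ≡ 0 (mod 5).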
The article provides an asymptotic probabilistic analysis of the variance of the number of pivot steps required by phase II of the "shadow vertex algorithm" - a parametric variant of the simplex algorithm which has been proposed by Borgwardt [1]. The analysis is done for data which satisfy a rotationally invariant distribution law in the \(n\)-dimensional unit ball.

Let \(a_i\), \(i = 1,\dots,m\), be an i.i.d. sequence taking values in \(\mathbb{R}^n\), whose convex hull is interpreted as a stochastic polyhedron \(P\). For a special class of random variables which decompose additively relative to their boundary simplices, e.g. the volume of \(P\), integral representations of their first two moments are given which lead to asymptotic estimates of variances for special "additive variables" known from stochastic approximation theory, in the case of rotationally symmetric distributions.

Let \(a_1,\dots,a_m\) be independent and identically distributed, spherically symmetric random points in \(\mathbb{R}^n\). Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\), and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well the shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).

Let \((a_i)_{i\in \mathbf{N}}\) be a sequence of identically and independently distributed random vectors drawn from the \(d\)-dimensional unit ball \(B^d\), and let \(X_n := \) convhull\((a_1,\dots,a_n)\) be the random polytope generated by \((a_1,\dots,a_n)\).
Furthermore, let \(\Delta(X_n) := \) Vol\((B^d \setminus X_n)\) be the deviation of the polytope's volume from the volume of the ball. For uniformly distributed \(a_i\) and \(d\ge2\), we prove that the limiting distribution of \(\frac{\Delta (X_n)}{E(\Delta (X_n))}\) for \(n\to\infty\) satisfies a 0-1-law. In particular, we provide precise information about the asymptotic behaviour of the variance of \(\Delta(X_n)\). We deliver analogous results for spherically symmetric distributions in \(B^d\) with regularly varying tail.

An improved asymptotic analysis of the expected number of pivot steps required by the simplex algorithm (1995)

Let \(a_1,\dots,a_m\) be i.i.d. vectors uniform on the unit sphere in \(\mathbb{R}^n\), \(m\ge n\ge3\), and let \(X := \{x \in \mathbb{R}^n \mid a^T_i x\leq 1,\ i=1,\dots,m\}\) be the random polyhedron generated by them. Furthermore, for linearly independent vectors \(u\), \(\bar u\) in \(\mathbb{R}^n\), let \(S_{u, \bar u}(X)\) be the number of shadow vertices of \(X\) in \(\mathrm{span}(u, \bar u)\). The paper provides an asymptotic expansion of the expectation \(E (S_{u, \bar u})\) for fixed \(n\) and \(m\to\infty\). The first terms of the expansion are given explicitly. Our investigation of \(E (S_{u, \bar u})\) is closely connected to Borgwardt's probabilistic analysis of the shadow vertex algorithm - a parametric variant of the simplex algorithm. We obtain an improved asymptotic upper bound for the number of pivot steps required by the shadow vertex algorithm for data distributed uniformly on the sphere.

Let \(A := \{a_i\mid i= 1,\dots,m\}\) be an i.i.d. random sample in \(\mathbb{R}^n\), which we consider as a random polyhedron, either as the convex hull of the \(a_i\) or as the intersection of the halfspaces \(\{x \mid a^T_i x\leq 1\}\).
We introduce a class of polyhedral functionals we will call "additive-type functionals", which covers a number of polyhedral functionals discussed in different mathematical fields; the emphasis in our contribution is on those which arise in linear optimization theory. The class of additive-type functionals is a suitable setting in which to unify and to simplify the asymptotic probabilistic analysis of first and second moments of polyhedral functionals. We provide examples of asymptotic results on expectations and on variances.

Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\), and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) := \lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.

A Simple Integral Representation for the Second Moments of Additive Random Variables on Stochastic Polyhedra (1992)

Let \(a_i\), \(i=1,\dots,m\), be an i.i.d. sequence taking values in \(\mathbb{R}^n\), whose convex hull is interpreted as a stochastic polyhedron \(P\). For a special class of random variables which decompose additively relative to their boundary simplices, e.g.
the volume of \(P\), simple integral representations of their first two moments are given in the case of rotationally symmetric distributions, in order to facilitate estimates of variances or to quantify large deviations from the mean.

Despite their very good empirical performance, most of the simplex algorithm's variants require exponentially many pivot steps in terms of the problem dimensions of the given linear programming problem (LPP) in worst-case situations. The first to explain the large gap between practical experience and the disappointing worst case was Borgwardt (1982a,b), who could prove polynomiality on the average for a certain variant of the algorithm, the "Schatteneckenalgorithmus" (shadow vertex algorithm), using a stochastic problem simulation.
Given a compact oriented submanifold $N \subset M$, one says that $N$ represents a homology class in $M$ by taking $i_*(\tau_N)$, where $i_*$ is induced by inclusion and $\tau_N$ is the fundamental class of $N$ chosen according to the orientation. There are some cases where this is completely clear. For example, $S^1$ represents a generator of $H_1(\mathbb C \setminus \{0\})$, and $\mathbb CP^1$ represents a homology class in $\mathbb CP^2$ by the CW-structure. However, there are some more mysterious cases for me. For example, a degree $3$ complex projective curve should represent $3 \cdot [\mathbb CP^1] \in H_2(\mathbb CP^2)$. But such curves are tori (when they are elliptic curves), so up to homology $[T^2] = 3 \cdot [\mathbb CP^1]$. Probably a satisfactory answer (assuming that I'm thinking about this correctly) would include something like: Can one prove geometrically that a torus represents $3 \cdot [\mathbb CP^1] \in H_2(\mathbb CP^2)$? Or a reference pointing to how one can begin to do these types of geometric calculations.
In an examination, I was asked to calculate $\int_0^{1}\frac{x^{300}}{1+x^2+x^3}\,dx$. Options were given as:

a) 0.00
b) 0.02
c) 0.10
d) 0.33
e) 1.00

Just looking at the question, I felt that the integral is at most $\frac{1}{1+1+1} = 0.33$, the maximum value of the integrand on $[0,1]$. I also felt that, since the numerator is very small compared to the denominator, the value is some $\epsilon<1$, so $1.00$ isn't possible. So I chose option d. But I am not sure whether it's correct or not, as I didn't follow a standard procedure. What is the correct answer, and how can it be solved using a standard procedure?
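A standard bound settles this: on $[0,1]$ the denominator is at least $1$, so $I \le \int_0^1 x^{300}\,dx = \frac{1}{301} \approx 0.0033$, which points to option (a). A quick numerical check (my own sketch, composite Simpson's rule in pure Python) is consistent with this:

```python
# Numerical sanity check of I = integral of x^300 / (1 + x^2 + x^3) over [0, 1].
def f(x):
    return x**300 / (1 + x**2 + x**3)

def simpson(g, a, b, n=3000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

I = simpson(f, 0.0, 1.0)
print(I)  # about 0.0011, well below 0.02, so the closest option is 0.00
```

The integrand is essentially $x^{300}/3$ near $x=1$, which is why the value lands near $\frac{1}{3 \cdot 301} \approx 0.0011$.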
Basically 2 strings, $a>b$, which go into the first box, which does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise inputs $b,r$ into the division box.

There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university.

Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite-dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where each $\rho_i$ is a finite-dimensional unitary representation?

Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.

Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P

Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?

Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
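The division-box loop described at the top of this exchange is just the Euclidean algorithm; a minimal sketch (my own illustration, not from the chat):

```python
def euclid(a, b):
    """gcd via repeated division: a = b*q + r, then recurse on (b, r)."""
    assert a > b > 0
    while True:
        q, r = divmod(a, b)   # the "division box": a = b*q + r, 0 <= r < b
        if r == 0:            # the "check box": r = 0, so b is the gcd
            return b
        a, b = b, r           # otherwise feed (b, r) back into the division box

print(euclid(252, 198))  # -> 18
```

Each pass strictly shrinks the second argument, so the loop terminates.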
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real-world application' affect whether people would be interested in the contents of the paper?"

@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect?

It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.

It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.

You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check that this is independent of the coordinate system.

@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at the endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.

@Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow; I don't think I've done that in 30+ years. Crazy ridiculous.

@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$).

@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.

@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, for which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator.
First note that if $k\neq 2,3$, then all eigenvalues are distinct, and hence the matrix is diagonalizable. In the case $k = 2$ or $k=3$, i.e. when you have an eigenvalue with multiplicity greater than $1$, you need to calculate the dimension of the corresponding eigenspace. If the dimension of each eigenspace equals the multiplicity of the corresponding eigenvalue, then the matrix is diagonalizable; otherwise it is not. Can you continue from here?

Edit: Let $\lambda$ be an eigenvalue of $A$ and $E_\lambda$ its corresponding eigenspace. Let $p_A$ be the characteristic polynomial of $A$. Since $\lambda$ is an eigenvalue, $p_A(\lambda) = 0$ and thus there exists a positive integer $n$ such that $p_A(x) = (x-\lambda)^n q(x)$ with $q(\lambda)\neq 0$. Define $a_\lambda$ to be this $n$ in the corresponding factorization of $p_A$. We call it the algebraic multiplicity of the eigenvalue $\lambda$. On the other hand, we are also interested in the number $g_\lambda = \dim E_\lambda = \dim\ker(A-\lambda I)$, which is called the geometric multiplicity of the eigenvalue $\lambda$. In general, $1\leq g_\lambda \leq a_\lambda$, and $A$ is diagonalizable if and only if $a_\lambda = g_\lambda$ for all eigenvalues $\lambda$ of $A$.

In your case, the characteristic polynomial is $p_A(x) = (x-2)(x-3)(x-k)$. If $k\neq 2,3$, then $a_2 = a_3 = a_k = 1$ and, since $1\leq g_\lambda \leq a_\lambda$, we have $g_2=g_3 = g_k = 1$ as well. That is, all geometric and algebraic multiplicities are equal and the matrix is diagonalizable.

In the case $k = 3$, we have $a_2 = 1$, $a_3 = 2$, so again $a_2 = g_2 = 1$, but we need to check whether $g_3 = 1$ or $g_3 = 2$. To find $g_3$, solve the homogeneous linear system $(A-3I)v = 0$. You will find that the solution space is of dimension $2$ (it's obvious as soon as you write it down), and hence $a_3 = g_3 = 2$, so $A$ is diagonalizable.
In the case $k = 2$, similarly to the previous case, we have $a_3 = g_3 = 1$, but $a_2 = 2$, so we need to solve the homogeneous linear system $(A-2I)v = 0$. In this case you will find that $g_2 = 1 \neq a_2$, and thus $A$ is not diagonalizable.
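As a concrete illustration, the check $g_\lambda = a_\lambda$ can be automated by computing $\dim\ker(A-\lambda I) = n - \operatorname{rank}(A-\lambda I)$ exactly over the rationals. The matrix below is my own hypothetical example with characteristic polynomial $(x-2)(x-3)(x-k)$, not the one from the question:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over the rationals (exact arithmetic)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]       # bring pivot row up
        for i in range(rows):
            if i != r and M[i][c] != 0:   # eliminate the column elsewhere
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def geometric_multiplicity(A, lam):
    """dim ker(A - lam*I) = n - rank(A - lam*I)."""
    n = len(A)
    B = [[A[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]
    return n - rank(B)

# Hypothetical upper-triangular example with eigenvalues 2, 3, k.
def A(k):
    return [[2, 0, 1], [0, 3, 0], [0, 0, k]]

print(geometric_multiplicity(A(3), 3))  # 2: equals a_3, so diagonalizable
print(geometric_multiplicity(A(2), 2))  # 1: less than a_2 = 2, not diagonalizable
```

For this particular example the $k=3$ case is diagonalizable and the $k=2$ case is not, mirroring the structure of the answer above.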
I have mentioned this elsewhere, but it bears repeating because it is such an important concept: Sufficiency pertains to data reduction, not parameter estimation per se. Sufficiency only requires that one does not "lose information" about the parameter(s) that was present in the original sample. Students of mathematical statistics have a tendency to conflate sufficient statistics with estimators, because "good" estimators in general need to be sufficient statistics: after all, if an estimator discards information about the parameter(s) it estimates, it should not perform as well as an estimator that does not do so. So the concept of sufficiency is one way in which we characterize estimators, but that clearly does not mean that sufficiency is about estimation. It is vitally important to understand and remember this. That said, the Factorization theorem is easily applied to solve (a); e.g., for a sample $\boldsymbol x = (x_1, \ldots, x_n)$, the joint density is $$f(\boldsymbol x \mid \theta) = \prod_{i=1}^n \frac{\theta}{x_i^2} \mathbb 1 (x_i \ge \theta) \mathbb 1 (\theta > 0) = \mathbb 1 (x_{(1)} \ge \theta > 0) \, \theta^n \prod_{i=1}^n x_i^{-2},$$ where $x_{(1)} = \min_i x_i$ is the minimum order statistic. This is because the product of the indicator functions $\mathbb 1 (x_i \ge \theta)$ is $1$ if and only if all of the $x_i$ are at least as large as $\theta$, which occurs if and only if the smallest observation in the sample, $x_{(1)}$, is at least $\theta$. We see that we cannot separate $x_{(1)}$ from $\theta$, so this factor must be part of $g(\boldsymbol T(\boldsymbol x) \mid \theta)$, where $\boldsymbol T(\boldsymbol x) = T(\boldsymbol x) = x_{(1)}$. Note that in this case, our sufficient statistic is a function of the sample that reduces a vector of dimension $n$ to a scalar $x_{(1)}$, so we may write $T$ instead of $\boldsymbol T$. 
The rest is easy: $$f(\boldsymbol x \mid \theta) = h(\boldsymbol x) g(T(\boldsymbol x) \mid \theta),$$ where $$h(\boldsymbol x) = \prod_{i=1}^n x_i^{-2}, \quad g(T \mid \theta) = \mathbb 1 (T \ge \theta > 0) \theta^n,$$ and $T$, defined as above, is our sufficient statistic. You may think that $T$ estimates $\theta$--and in this case, it happens to--but just because we found a sufficient statistic via the Factorization theorem, this doesn't mean it estimates anything. This is because any one-to-one function of a sufficient statistic is also sufficient (you can simply invert the mapping). $T^2 = x_{(1)}^2$ is also sufficient (note while $m : \mathbb R \to \mathbb R$, $m(x) = x^2$ is not one-to-one in general, in this case it is because the support of $X$ is $X \ge \theta > 0$). Regarding (b), MLE estimation, we express the joint likelihood as proportional to $$\mathcal L(\theta \mid \boldsymbol x) \propto \theta^n \mathbb 1(0 < \theta \le x_{(1)}).$$ We simply discard any factors of the joint density that are constant with respect to $\theta$. Since this likelihood is nonzero if and only if $\theta$ is positive but not exceeding the smallest observation in the sample, we seek to maximize $\theta^n$ subject to this constraint. Since $n > 0$, $\theta^n$ is a monotonically increasing function on $\theta > 0$, hence $\mathcal L$ is greatest when $\theta = x_{(1)}$; i.e., $$\hat \theta = x_{(1)}$$ is the MLE. It is trivially biased because the random variable $X_{(1)}$ is never smaller than $\theta$ and is almost surely strictly greater than $\theta$; hence its expectation is almost surely greater than $\theta$. Finally, we can explicitly compute the density of the order statistic as requested in (c): $$\Pr[X_{(1)} > x] = \prod_{i=1}^n \Pr[X_i > x],$$ because the least observation is greater than $x$ if and only if all of the observations are greater than $x$, and the observations are IID. 
Then $$1 - F_{X_{(1)}}(x) = \left(1 - F_X(x)\right)^n,$$ and the rest of the computation is left to you as a straightforward exercise. We can then take this and compute the expectation $\operatorname{E}[X_{(1)}]$ to ascertain the precise amount of bias of the MLE, which is necessary to answer whether there is a scalar value $c$ (which may depend on the sample size $n$ but not on $\theta$ or the sample $\boldsymbol x$) such that $c\hat \theta$ is unbiased.
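As a quick numerical illustration (my own sketch, not part of the original answer): for this density, $F_X(x) = 1 - \theta/x$ for $x \ge \theta$, so samples can be drawn by inversion, and the computation outlined above gives $\operatorname{E}[X_{(1)}] = \frac{n\theta}{n-1}$ for $n > 1$, so $c = (n-1)/n$ debiases the MLE. A Monte Carlo check:

```python
import random

def sample_min(n, theta, rng):
    """Minimum of n i.i.d. draws from f(x) = theta/x^2, x >= theta,
    sampled by inverting F(x) = 1 - theta/x."""
    return min(theta / (1.0 - rng.random()) for _ in range(n))

rng = random.Random(0)
n, theta, trials = 5, 2.0, 50000
mean_mle = sum(sample_min(n, theta, rng) for _ in range(trials)) / trials

print(mean_mle)                 # close to n*theta/(n-1) = 2.5: biased upward
print((n - 1) / n * mean_mle)   # close to theta = 2.0: debiased
```

The upward bias is visible directly: the simulated mean of $\hat\theta = X_{(1)}$ sits above $\theta$, exactly as the almost-sure inequality $X_{(1)} > \theta$ predicts.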
The essence of the matter is both strikingly simple and extremely general. Namely, there are infinitely many solutions to your congruence simply because the solutions lie in the orbit of an invertible map on a finite set. But such permutations decompose into finite cycles - so they repeat infinitely often upon iteration. Below I discuss your simple linear example and then I show how the technique generalizes to nonlinear recursions.

First we convert $\rm\,f_n = 10^n\!-1 \,$ to shift form. Note $\rm\,f_{n+2}-f_{n+1}\! = 9\cdot 10^{n+1}\! = 10\ (f_{n+1}\!-f_n) \,$ which yields the recurrence relation $\rm\,\, f_{n+2} =\, 11\ f_{n+1}-10\ f_n. \,$ This is simpler as a shift map on pairs: $\rm\,S\, :\, (f_n,f_{n+1})\to (f_{n+1},f_{n+2}) =\, (f_{n+1},\,11\,f_{n+1}\! - 10\ f_n). \,$ Extended to all pairs it yields the map $\rm\,S\, :\, (a,b)\to (b,\,11\,b-10\,a = c).\,$ Computing the inverse of this map we obtain $\rm\,(b,c)\to (a = (11\,b-c)/10,\,b),\,$ which exists in any ring $\rm\,R\,$ where $10\,$ has an inverse. In particular, let $\rm\,R =\mathbb Z/m\,$ be the ring of integers modulo $\rm\,m>1\,$, for any $\rm\,m\,$ coprime to $10.\,$ Since $\rm\,R \,$ is finite, so too is its set of pairs $\rm R^2.\,$ On such pairs the shift map $\rm\,S \,$ is an invertible map on a finite set, so it is a permutation. Therefore its orbit decomposition consists of finite cycles. Thus the values $\rm f_n$ all occur in the orbit of the initial condition pair $\rm (f_0,f_1) = (0,9)$, and each value necessarily occurs infinitely many times since the shift map successively steps along the finite circular orbit.

This also works for nonlinear recurrences with an invertible shift map, e.g. the following exercise from an old post.

Exercise $\,\,$ A sequence $\rm f_n$ satisfies the recurrence $\rm\, f_{n+2} = f_{n+1}^{\,2}\! - f_n,\,$ with $\rm\, f_1 = 39,\, f_2 = 45.$ Prove that $\, 1986 \,$ divides infinitely many terms of the sequence.
Hint $\,\,$ The shift map is $\rm\, (a,b)\to (b,\,b^2-a = c)\,$ with inverse $\rm\, (b,c)\to (a = b^2-c,\, b)\,$ and it maps $\ (39,45)\to (45,0) \ \pmod {1986}$.

Note $\,\,$ When the recurrence is linear, the shift map has a matrix representation, which may be used to quickly compute the terms of the recurrence by fast exponentiation algorithms (e.g. repeated squaring). However, neither the matrix nor any other consequence of linearity is required to deduce the cyclicity properties above. Many students (and even some professors) don't realize this, which can lead to obfuscated solutions (compared to the simple approach above). Some solutions completely overlook the innate symmetry and, as a result, end up reinventing the wheel, i.e. discovering the innate cycle structure completely from scratch, not realizing that it is simply a special case of the cyclic structure of permutations, i.e. orbit decomposition. The moral of the story is: always look for innate symmetries of a problem before diving in head-first with more brute-force techniques. For another nice application of orbit decompositions see this answer, which illustrates how they prove key for finding polynomial and rational function solutions of recurrences.
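A quick computational check of the exercise (my own sketch): iterating the recurrence modulo 1986 immediately exhibits the first divisible term, $f_3 = 45^2 - 39 = 1986$, and the cycle argument above guarantees the zero residue recurs infinitely often along the finite orbit.

```python
def terms_divisible(m, f1, f2, count):
    """1-based indices n with f_n = 0 (mod m) among the first `count` terms
    of the recurrence f_{n+2} = f_{n+1}^2 - f_n."""
    hits, a, b = [], f1 % m, f2 % m
    for n in range(1, count + 1):
        if a == 0:
            hits.append(n)
        a, b = b, (b * b - a) % m   # apply the shift map (a, b) -> (b, b^2 - a)
    return hits

hits = terms_divisible(1986, 39, 45, 10000)
print(hits[0])  # -> 3, since f_3 = 45^2 - 39 = 1986
```

Working modulo $m$ throughout keeps the numbers bounded; the actual $f_n$ grow doubly exponentially and would be hopeless to compute directly.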
Abbreviation: APoGrp

An abelian partially ordered group is a partially ordered group $\mathbf{A}=\langle A,+,-,0,\le\rangle$ such that $+$ is commutative: $x+y=y+x$.

Remark: This is a template. If you know something about this class, click on the "Edit text of this page" link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.

Let $\mathbf{A}$ and $\mathbf{B}$ be abelian partially ordered groups. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is an order-preserving homomorphism: $h(x+y)=h(x)+h(y)$ and $x\le y\Longrightarrow h(x)\le h(y)$.

Example 1:

Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$

[[Abelian lattice-ordered groups]] (expanded type)

[[Partially ordered groups]], [[Abelian groups]] (reduced type)
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions $f$ and $g$, the cross-correlation is defined as $(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau$, where $f^*$ denotes the complex conjugate of $f$.

That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time?

Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?

@tpg2114 To reduce the data points needed to calculate the time correlation, you can run two copies of the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.

@DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array, so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is $\frac{dT}{dt}$ and doesn't have an axis; I don't care what its values are. The solid blue line is abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis.

So what I don't understand:

1) Why is the correlation value negative when they look pretty positively correlated to me?
2) Why is the result from the correlation function 400 time steps long?
3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time.

Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...

So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month, (2) do we reword the homework policy, and how, (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
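On the correlation-indexing questions above (2 and 3): a "full" cross-correlation of two length-$N$ signals has one entry per possible lag, $2N-1$ of them, which is why a 200-sample series gives a result roughly 400 long; `scipy.signal.correlate` with `mode='full'` likewise returns `len(x) + len(y) - 1` values. Here's a pure-Python sketch (my own toy signals, not the asker's data) making the lag indexing explicit:

```python
import math

def xcorr_full(x, y):
    """Full cross-correlation: c[k] = sum_n y[n + lag] * x[n],
    where lag = k - (len(x) - 1) runs from -(N-1) to N-1."""
    N = len(x)
    out = []
    for lag in range(-(N - 1), N):
        out.append(sum(y[n + lag] * x[n]
                       for n in range(N) if 0 <= n + lag < N))
    return out

# A pulse, and the same pulse delayed by 3 samples.
x = [math.exp(-((n - 30) ** 2) / 20.0) for n in range(100)]
y = [0.0] * 3 + x[:-3]

c = xcorr_full(x, y)
print(len(c))                                # 199 = 2*100 - 1 lags
best = max(range(len(c)), key=c.__getitem__)
print(best - (len(x) - 1))                   # 3: y lags x by 3 samples
```

The key point is that entry $k$ corresponds to lag $k - (N-1)$, so the argmax of the result minus $N-1$ recovers the lead/lag; the sign convention (which signal lags which) depends on the argument order, so it's worth checking with a known shift like this before trusting it on real data.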
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quickly enough that no answers get posted until the question is clarified to satisfy the current HW policy.

For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar}\, p, \qquad x\rightarrow \sqrt{\frac{\hbar}{m\omega}}\, x$$ and then define the following: $$K_1=\frac 14 (p^2-q^2), \qquad K_2=\frac 14 (pq+qp), \qquad J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $Q \dots$

Okay. I guess we'll have to see what people say, but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics.

Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay.

@jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual". I.e.
is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's approximation is? Some have argued that is on topic even though there's nothing really physical about it, just because it's "graduate level". Others would argue it's not on topic because it's not conceptual.

How can one prove that $$ \operatorname{Tr} \log \mathcal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$ for a sufficiently well-behaved operator $\mathcal{A}$? How (mathematically) rigorous is the expression? I'm looking at the $d=2$ Euclidean case, as discuss...

I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.)

@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote" -- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce the visibility of HW, then the tag becomes less of a bone of contention.

@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from QFT?" it certainly shouldn't be marked as homework.

@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main-page closeable homework clutter.

@Dilaton also, have a look at the top-voted answers on both.

Afternoon folks.
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway) @DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
We've been looking at feasibility relations, as our first example of enriched profunctors. Now let's look at another example. This combines many ideas we've discussed - but don't worry, I'll review them, and if you forget some definitions just click on the links to earlier lectures! Remember, \(\mathbf{Bool} = \lbrace \text{true}, \text{false} \rbrace \) is the preorder that we use to answer true-or-false questions like while \(\mathbf{Cost} = [0,\infty] \) is the preorder that we use to answer quantitative questions like or In \(\textbf{Cost}\) we use \(\infty\) to mean it's impossible to get from here to there: it plays the same role that \(\text{false}\) does in \(\textbf{Bool}\). And remember, the ordering in \(\textbf{Cost}\) is the opposite of the usual order of numbers! This is good, because it means we have $$ \infty \le x \text{ for all } x \in \mathbf{Cost} $$ just as we have $$ \text{false} \le x \text{ for all } x \in \mathbf{Bool} .$$ Now, \(\mathbf{Bool}\) and \(\mathbf{Cost}\) are monoidal preorders, which are just what we've been using to define enriched categories! This let us define and We can draw preorders using graphs, like these: An edge from \(x\) to \(y\) means \(x \le y\), and we can derive other inequalities from these. Similarly, we can draw Lawvere metric spaces using \(\mathbf{Cost}\)-weighted graphs, like these: The distance from \(x\) to \(y\) is the length of the shortest directed path from \(x\) to \(y\), or \(\infty\) if no path exists. All this is old stuff; now we're thinking about enriched profunctors between enriched categories. A \(\mathbf{Bool}\)-enriched profunctor between \(\mathbf{Bool}\)-enriched categories is also called a feasibility relation between preorders, and we can draw one like this: What's a \(\mathbf{Cost}\)-enriched profunctor between \(\mathbf{Cost}\)-enriched categories?
It should be no surprise that we can draw one like this: You can think of \(C\) and \(D\) as countries with toll roads between the different cities; then an enriched profunctor \(\Phi : C \nrightarrow D\) gives us the cost of getting from any city \(c \in C\) to any city \(d \in D\). This cost is \(\Phi(c,d) \in \mathbf{Cost}\). But to specify \(\Phi\), it's enough to specify costs of flights from some cities in \(C\) to some cities in \(D\). That's why we just need to draw a few blue dashed edges labelled with costs. We can use this to work out the cost of going from any city \(c \in C\) to any city \(d \in D\). I hope you can guess how! Puzzle 182. What's \(\Phi(E,a)\)? Puzzle 183. What's \(\Phi(W,c)\)? Puzzle 184. What's \(\Phi(E,c)\)? Here's a much more challenging puzzle: Puzzle 185. In general, a \(\mathbf{Cost}\)-enriched profunctor \(\Phi : C \nrightarrow D\) is defined to be a \(\mathbf{Cost}\)-enriched functor $$ \Phi : C^{\text{op}} \times D \to \mathbf{Cost} $$ This is a function that assigns to any \(c \in C\) and \(d \in D\) a cost \(\Phi(c,d)\). However, for this to be a \(\mathbf{Cost}\)-enriched functor we need to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category! We do this by saying that \(\mathbf{Cost}(x,y)\) equals \( y - x\) if \(y \ge x \), and \(0\) otherwise. We must also make \(C^{\text{op}} \times D\) into a \(\mathbf{Cost}\)-enriched category, which I'll let you figure out how to do. Then \(\Phi\) must obey some rules to be a \(\mathbf{Cost}\)-enriched functor. What are these rules? What do they mean concretely in terms of trips between cities? And here are some easier ones: Puzzle 186. Are the graphs we used above to describe the preorders \(A\) and \(B\) Hasse diagrams? Why or why not? Puzzle 187. I said that \(\infty\) plays the same role in \(\textbf{Cost}\) that \(\text{false}\) does in \(\textbf{Bool}\). What exactly is this role?
By the way, people often say \(\mathcal{V}\)-category to mean \(\mathcal{V}\)-enriched category, and \(\mathcal{V}\)-functor to mean \(\mathcal{V}\)-enriched functor, and \(\mathcal{V}\)-profunctor to mean \(\mathcal{V}\)-enriched profunctor. This helps you talk faster and do more math per hour.
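The puzzles above amount to shortest-path computations in the min-plus ("tropical") semiring: take the shortest-path closure of each country's road graph, compose with the dashed bridge edges, and close again. The lecture's actual graphs live in figures not reproduced here, so this sketch uses a made-up two-city example in each country:

```python
import numpy as np

INF = np.inf

def min_plus_mul(A, B):
    # Matrix product in the min-plus semiring: (A ; B)[i, j] = min_k A[i, k] + B[k, j].
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def min_plus_closure(M):
    # All-pairs cheapest trips inside one Cost-weighted graph (Floyd-Warshall).
    D = M.copy()
    np.fill_diagonal(D, 0.0)  # identity law: staying in a city costs nothing
    for k in range(len(D)):
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    return D

# Made-up example: country C has a road c0 -> c1 costing 3, country D has
# a road d0 -> d1 costing 2, and there is one dashed bridge c0 -> d0 costing 5.
C = np.array([[INF, 3.0], [INF, INF]])
D = np.array([[INF, 2.0], [INF, INF]])
bridge = np.array([[5.0, INF], [INF, INF]])

# Phi(c, d) = cheapest trip: roads inside C, then a bridge, then roads inside D.
Phi = min_plus_mul(min_plus_mul(min_plus_closure(C), bridge), min_plus_closure(D))
print(Phi)  # Phi[0, 1] = 5 + 2 = 7; unreachable pairs stay at infinity
```

Note how \(\infty\) behaves exactly like \(\text{false}\): it is absorbing for the bridge composition, so cities with no route between them get cost \(\infty\).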
Advances in Mathematics of Communications, May 2007, Volume 1, Issue 2 (ISSN: 1930-5346, eISSN: 1930-5338) Abstract: Let $A(n, d)$ denote the maximum number of codewords in a binary code of length $n$ and minimum Hamming distance $d$. Upper and lower bounds on $A(n, d)$ have been a subject for extensive research. In this paper we examine upper bounds on $A(n, d)$ as a special case of bounds on the size of subsets in a metric association scheme. We will first obtain general bounds on the size of such subsets, apply these bounds to the binary Hamming scheme, and use linear programming to further improve the bounds. We show that the sphere packing bound and the Johnson bound as well as other bounds are special cases of one of the bounds obtained from association schemes. Specific bounds on $A(n, d)$ as well as on the sizes of constant weight codes are also discussed. Abstract: We present public-key cryptographic protocols for key exchange, digital signatures, and encryption whose security is based on the presumed intractability of solving the principal ideal problem, or equivalently, the distance problem, in the real model of a hyperelliptic curve. Our protocols represent a significant improvement over existing protocols using real hyperelliptic curves. Theoretical analysis and numerical experiments indicate that they are comparable to the imaginary model in terms of efficiency, and hold much more promise for practical applications than previously believed. Abstract: In this paper, we consider double circulant and quasi-twisted self-dual codes over $\mathbb F_5$ and $\mathbb F_7$. We determine the highest minimum weights for such codes of lengths up to 34 for $\mathbb F_5$ and up to 28 for $\mathbb F_7$, and classify the codes with these minimum weights.
In particular, we give a double circulant self-dual [32, 16] code over $\mathbb F_5$ which has a higher minimum weight than the previously best known linear code with these parameters. In addition, a self-dual code over $\mathbb F_7$ is presented which has a higher minimum weight than the previously best known self-dual code for length 28. Abstract: Recently Terence Tao approached Szemerédi's Regularity Lemma from the perspectives of Probability Theory and of Information Theory instead of Graph Theory and found a stronger variant of this lemma, which involves a new parameter. To pass from an entropy formulation to an expectation formulation he found the following: Let $Y$ and $X, X'$ be discrete random variables taking values in $\mathcal{Y}$ and $\mathcal{X}$, respectively, where $\mathcal{Y} \subset [-1, 1]$, and with $X' = f(X)$ for a (deterministic) function $f$. Then we have $\mathbb{E}(|\mathbb{E}(Y|X')-\mathbb{E}(Y|X)|)\leq 2\, I(X\wedge Y|X')^{\frac{1}{2}}$. We show that the constant $2$ can be improved to $(2 \ln 2)^{\frac{1}{2}}$ and that this is the best possible constant. Abstract: We use elementary facts about quadratic forms in characteristic 2 to evaluate the sign of some Walsh transforms in terms of a Jacobi symbol. These results are applied to the Walsh transforms of the Gold and Kasami-Welch functions. We prove that the Gold functions yield bent functions when restricted to certain hyperplanes. We also use the sign information to determine the dual bent function. Abstract: In this note, we study the covering radii of extremal doubly even self-dual codes. We give slightly improved lower bounds on the covering radii of extremal doubly even self-dual codes of lengths 64, 80 and 96. The covering radii of some known extremal doubly even self-dual [64, 32, 12] codes are determined. Abstract: The mean-centered cuboidal (or m.c.c.) lattice is known to be the optimal packing and covering among all isodual three-dimensional lattices. In this note we show that it is also the best quantizer.
It thus joins the isodual lattices $\mathbb Z$, $A_2$ and (presumably) $D_4, E_8$ and the Leech lattice in being simultaneously optimal with respect to all three criteria. Abstract: An extremal singly even self-dual [88, 44, 16] code is constructed for the first time. Some optimal (extremal) singly even self-dual codes with weight enumerators which were not known to be attainable are also found for lengths 68 and 92. Abstract: Rivest proposed the idea of a chaffing-and-winnowing scheme, in which confidentiality is achieved through the use of an authentication code. Thus it would still be possible to have confidential communications even if conventional encryption schemes were outlawed. Hanaoka et al. constructed unconditionally secure chaffing-and-winnowing schemes which achieve perfect secrecy in the sense of Shannon. Their schemes are constructed from unconditionally secure authentication codes. In this paper, we construct unconditionally secure chaffing-and-winnowing schemes from unconditionally secure authentication codes in which the authentication tags are very short. This could be a desirable feature, because certain types of unconditionally secure authentication codes can provide perfect secrecy if the length of an authentication tag is at least as long as the length of the plaintext. The use of such a code might be prohibited if encryption schemes are made illegal, so it is of interest to construct chaffing-and-winnowing schemes based on ''short'' authentication tags.
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Abbreviation: CRPoSgrp A commutative residuated partially ordered semigroup is a residuated partially ordered semigroup $\mathbf{A}=\langle A, \cdot, \to, \le\rangle$ such that $\cdot$ is commutative: $xy=yx$ Remark: This is a template. If you know something about this class, click on the ``Edit text of this page'' link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes. Let $\mathbf{A}$ and $\mathbf{B}$ be commutative residuated partially ordered semigroups. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is an order-preserving homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$, $h(x \to y)=h(x) \to h(y)$, and $x\le y\Longrightarrow h(x)\le h(y)$. A is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that … $...$ is …: $axiom$ $...$ is …: $axiom$ Example 1: Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described. $\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$ [[Commutative residuated lattice-ordered semigroups]] expanded type [[Residuated partially ordered semigroups]] same type [[Commutative partially ordered semigroups]] reduced type
General intuition: The classical Fisher information (CFI) is a measure of how quickly a probability distribution changes with respect to some parameter, while the quantum Fisher information (QFI) is a measure of how quickly a quantum state (represented by a density matrix) changes with respect to some parameter. To define such a measure, one needs to define a distance on the manifold of probability distributions or quantum states (projective Hilbert space). For a probability distribution such a metric can be fixed by a set of subtle mathematical assumptions, but in general the direct expression for the Fisher information is more illuminating: $F_c(\theta)=\sum_x P(x,\theta)\left[\frac{1}{P(x,\theta)}\frac{d}{d\theta}P(x,\theta)\right]^2$ This is the average of the squared percent change of each $P(x,\theta)$ for a small change in $\theta$. The percent change, $\frac{1}{P(x,\theta)}\frac{d}{d\theta}P(x,\theta)$, is squared because otherwise the average would be 0. The QFI can be constructed from this expression. First, notice that to obtain a probability distribution from a quantum state, you must measure in a particular basis, say $R$. Then by optimizing the CFI over that observable $R$ we obtain the quantum Fisher information: $F_q(\theta)=\max_R F_c(\rho(\theta),R)$ By thinking about the Bures metric and specifying how $\rho$ depends on $\theta$, one can relate this expression to one that is easy for calculations. If we define $\rho(\theta)=e^{iQ\theta}\rho e^{-iQ\theta}$, it's possible to derive: $F_q(Q,\theta=0)= 2\sum_{l,l'}\frac{(p_{l}-p_{l'})^{2}}{p_{l}+p_{l'}}\left|\left<l\right|Q\left|l'\right>\right|^2$ where $\left| l\right>$ are the eigenvectors of the density matrix and $p_{l}$ are the eigenvalues. I don't know this proof, so you'll have to look for articles if you're interested. But it's not necessary for interpretation. Simply from the optimization of the CFI, you can see it is some quantification of how quickly a quantum state changes with respect to the parameter $\theta$.
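As a sanity check on the CFI formula above, here is a minimal numerical sketch. The Bernoulli model and the finite-difference derivative are illustrative choices of mine, not from the linked article; the closed form for a Bernoulli distribution is $F_c(\theta)=1/(\theta(1-\theta))$:

```python
import numpy as np

def fisher_info(p_fn, theta, eps=1e-6):
    # F_c(theta) = sum_x P(x,theta) * [ (d/dtheta P(x,theta)) / P(x,theta) ]^2
    p = p_fn(theta)
    dp = (p_fn(theta + eps) - p_fn(theta - eps)) / (2 * eps)  # central difference
    return np.sum(p * (dp / p) ** 2)

# Bernoulli model: P(x=1) = theta, P(x=0) = 1 - theta.
bernoulli = lambda th: np.array([1 - th, th])
print(fisher_info(bernoulli, 0.3))  # ~ 4.7619 = 1 / (0.3 * 0.7)
```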
Now the article you linked makes really only a few substitutions to connect the QFI to the linear response of a thermal state $\rho=e^{-\beta H}/Z$ to a perturbation $Q$ driven at various frequencies. Thinking about the QFI as quantifying how quickly a state changes under the operation generated by $Q$ gives some intuition about why it might be related to response functions. Entanglement: Relating the QFI to entanglement is done in just a few steps. 1st) one uses the fact that the QFI of a pure state is 4 times the variance of $Q$: $F_q(Q,\theta=0)=4(\left<Q^2\right>-\left<Q\right>^2)$ (easily proved from the above expression). 2nd) one uses the fact that the quantum Fisher information is convex in the space of density matrices. That is, if I have two pure states $\rho_a=\left|a\right>\left<a\right|$ and $\rho_b=\left|b\right>\left<b\right|$, the convex sum of them has a lower QFI: $F_q(p\rho_a+(1-p)\rho_b)<pF_q(\rho_a)+(1-p)F_q(\rho_b)$ for $0<p<1$ and any $Q$. Thus for a mixed state, the QFI will be limited by the QFI of the eigenstate with maximum variance in $Q$. The final step is to bound the variance of an unentangled state for a particular $Q$. Suppose I break my system into $N$ parts between which I want to quantify the entanglement. For example, I could take the parts as the $N$ qubits in a quantum computer or the $N$ particles in a system. I will now choose $Q$ to be a sum of observables of the individual parts: $Q=\sum_{i=1}^N Q_i$. I will also assume these operators to be bounded and have norm $|Q_i|=1$ (magnitude of the maximum eigenvalue of $Q_i$ is 1). For an unentangled pure state, the statistics of the parts are independent, so the maximum variance is $N$. Due to convexity, the maximum Fisher information for an unentangled mixed state is then $4N$. Therefore a Fisher information $F_q>4N$ is a witness of entanglement.
There has also been work arguing that $\max_Q F_q(Q)/4N$ is an approximation of the number of parts which must be mutually entangled. I can't find the work that does this, but they do it by working with generalizations of the GHZ state. Is there a prototypical model Hamiltonian/system that has nontrivial information encoded in its QFI? The QFI has general applicability to all quantum states (such as those produced by a sequence of unitaries in a quantum computer) rather than equilibrium states of physical systems. Therefore I would claim that the prototypical states for discussing quantum Fisher information are the GHZ state (the characteristic entangled state) and coherent states (the characteristic unentangled states). Since they are pure states, it's straightforward to calculate their variance and show that the GHZ state has maximum QFI $F_q=4N^2$ for the "maximum" observable $Q$, and the coherent state has QFI $F_q(Q)=4N$. Again assuming $Q=\sum_iQ_i$ and $|Q_i|=1$. I believe the review suggested above ("Introduction to quantum Fisher information" (14 Aug 2010), by Petz and Ghinea) has a more careful/precise analysis of these states.
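The eigendecomposition formula for the QFI, together with the two benchmark values $4N^2$ (GHZ) and $4N$ (unentangled), can be checked numerically. A minimal sketch, using $Q=\sum_i \sigma_z^{(i)}$ and the product state $|{+}\rangle^{\otimes N}$ as a stand-in for a characteristic unentangled state:

```python
import numpy as np

def qfi(rho, Q, tol=1e-12):
    # F_q = 2 * sum_{l,l'} (p_l - p_{l'})^2 / (p_l + p_{l'}) * |<l|Q|l'>|^2
    p, V = np.linalg.eigh(rho)
    Qm = V.conj().T @ Q @ V                 # matrix elements <l|Q|l'> in the eigenbasis
    num = (p[:, None] - p[None, :]) ** 2
    den = p[:, None] + p[None, :]
    mask = den > tol                        # drop terms where both eigenvalues vanish
    return 2.0 * np.sum(num[mask] / den[mask] * np.abs(Qm[mask]) ** 2)

N = 3
dim = 2 ** N
# Q = sum_i sigma_z^(i): diagonal in the computational basis, each |Q_i| = 1.
qdiag = np.array([sum(1 - 2 * ((b >> i) & 1) for i in range(N)) for b in range(dim)], float)
Q = np.diag(qdiag)

# GHZ state: maximally entangled, F_q = 4 N^2.
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(qfi(np.outer(ghz, ghz), Q))    # ~ 36.0 = 4 * 3**2

# Product state |+>^N: unentangled, F_q = 4 N.
plus = np.full(dim, 1 / np.sqrt(dim))
print(qfi(np.outer(plus, plus), Q))  # ~ 12.0 = 4 * 3
```

For pure states the function reduces to $4(\langle Q^2\rangle - \langle Q\rangle^2)$, which is why the GHZ value saturates the $4N^2$ ceiling while the product state sits exactly at the entanglement-witness threshold $4N$.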
I am trying to calculate the Shannon entropy of a CWT. I am not sure if I am doing it right. Assume that $W(a_i,t), i=1,2,...,M$ is a set of wavelet coefficients. The Shannon wavelet entropy is calculated by: $E=-\sum_{i=1}^{M}d_i \log(d_i)$ $\rightarrow$ where $d_i=\frac{|W(a_i,t)|}{\sum_{j=1}^{M}|W(a_j,t)|}$ I am confused about how to calculate $E$. For example, I have a coefficient matrix of size $M\times N$, where $M$ is the number of scales and $N$ is the number of time segments. First I have to calculate $d_i$; this is my main problem. This is the wavelet coefficient matrix: $W_{M\times N} = \begin{pmatrix} w_{a_1,1} & w_{a_1,2} & \cdots & w_{a_1,N} \\ w_{a_2,1} & w_{a_2,2} & \cdots & w_{a_2,N} \\ \vdots & \vdots & \ddots & \vdots \\ w_{a_M,1} & w_{a_M,2} & \cdots & w_{a_M,N} \end{pmatrix}$ Hmm, I am pretty sure I am wrong; can anyone help? For example, tell me how I can calculate $d_4$? Here I have written a little Matlab script to calculate the Shannon entropy of the CWT. Is it right or wrong, and what should I do?

[M,N] = size(coeffs);
for js = 1:M
    Ej(js) = sum(abs(coeffs(js,:)));
end
Etot = sum(Ej);
Pj = Ej./Etot;
% Shannon entropy
shan_entr = -sum(Pj.*log(Pj));
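The Matlab script above collapses the time axis first (one $|W|$ sum per scale) and returns a single global entropy. If the intended reading of the formula is instead one entropy value per time segment — each column of the matrix normalized into a distribution over the $M$ scales — a NumPy sketch of that variant looks like this; the random matrix is only a stand-in for real CWT coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 100))   # stand-in for a CWT matrix: M = 8 scales, N = 100 segments

mag = np.abs(W)
d = mag / mag.sum(axis=0, keepdims=True)   # d_i(n) = |W(a_i, n)| / sum_j |W(a_j, n)|
# Shannon entropy of each column, treating 0 * log(0) as 0.
logd = np.log(d, out=np.zeros_like(d), where=d > 0)
E = -(d * logd).sum(axis=0)
print(E.shape)   # one entropy value per time segment: (100,)
```

Each $E[n]$ lies between $0$ (all energy in one scale) and $\log M$ (energy spread evenly over scales), which is a quick consistency check on the normalization.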
This question is (in my opinion) the most important question to ask when trying to understand the mathematics of "quantum superposition." Quantum superposition is the essence of how quantum computations are made. If I have a coin and I flip it, 50% of the time I'll get heads and 50% of the time I'll get tails: P(Heads) = 50% P(Tails) = 50% But if I make a quantum coin and write it in our fancy ket notation: $|\psi\rangle = \frac{1}{\sqrt{2}}(|H\rangle + |T\rangle)$ I can see that this fancy notation gives me the same results as my coin flip! P(H) = $| \langle H |\psi\rangle |^2 = \frac{1}{2}$ P(T) = $| \langle T |\psi\rangle |^2 = \frac{1}{2}$ So what's even the point? We say there's something special about quantum events, but the math is just the same as flipping coins? What we need to do is investigate a little deeper to see how a coin is different from a quantum coin: In the case where we simply check if our quantum coin is heads-or-tails, we don't see how it's different from a normal coin. Instead we're going to do a different procedure (with a silly analogy for intuition): Without checking if our coin is heads-or-tails, we insert our quantum coin into a special slot machine. This special slot machine (meant for cheaters) has a trick: if we insert the coin in one orientation (heads-side pointing to the left) it gives luckier odds than when it's inserted in the other orientation (heads-side pointing to the right). This means that if we flip a coin and (without looking) insert it into the machine, our odds look like this: $$ \text{P(win)} = \frac{1}{2}P(\text{win|lucky-odds}) + \frac{1}{2}P(\text{win|unlucky-odds}) $$ Half the time we get the lucky odds and half the time we get the unlucky odds. (And everyone who plays this slot machine without knowing the trick will get this average between the two odds!) But what about the quantum coin? The quantum coin will not follow the average computed above.
Let's work out the mathematical shapes of quantum mechanics, and define winning the slot machine as a quantum mechanical operator: $P(\text{win|lucky-odds}) = |\langle W|H \rangle|^2$ and $P(\text{win|unlucky-odds}) = |\langle W|T \rangle|^2$ If I insert the coin in the heads-to-the-left orientation, I get the probability of winning with the lucky odds (same as before), and similarly the heads-to-the-right orientation gives the unlucky odds. The difference now is that when I apply my fancy ket state from before, $| \psi \rangle = \frac{1}{\sqrt{2}}(|H\rangle + |T\rangle)$, I am working with a quantum state, so to find the probabilities I have to square everything: \begin{align}P(Win) &= |\langle W|\psi\rangle|^2 \\&= \frac{1}{2}|\langle W | (|H\rangle+|T\rangle)|^2 \\&= \frac{1}{2}|\langle W|H\rangle + \langle W|T\rangle|^2 \\&= \frac{1}{2}\left(|\langle W|H\rangle|^2 + |\langle W|T\rangle|^2\right) \\& + \frac{1}{2}\left(\langle T|W\rangle \langle W|H\rangle + \langle H|W\rangle \langle W|T\rangle\right) \end{align} So now putting the "normal coin" together with our "quantum coin": \begin{align}P_{normal}(Win) &= \frac{1}{2}P(\text{win|lucky-odds}) + \frac{1}{2}P(\text{win|unlucky-odds}) \\P_{quantum}(Win) &= \frac{1}{2}P(\text{win|lucky-odds}) + \frac{1}{2}P(\text{win|unlucky-odds}) \\& + \frac{1}{2}\left(\langle T|W\rangle \langle W|H\rangle + \langle H|W\rangle \langle W|T\rangle\right) \end{align} We see that there are extra terms in the quantum case! These "interference terms" are fundamental to what a quantum superposition is! They change depending on the sign of the quantum superposition.
So consider the case when $|\psi\rangle = \frac{1}{\sqrt{2}}(|H\rangle - |T\rangle) $ instead of $ \frac{1}{\sqrt{2}}(|H\rangle + |T\rangle) $: \begin{align}P_{normal}(Win) &= \frac{1}{2}P(\text{win|lucky-odds}) + \frac{1}{2}P(\text{win|unlucky-odds}) \\P_{quantum}(Win) &= \frac{1}{2}P(\text{win|lucky-odds}) + \frac{1}{2}P(\text{win|unlucky-odds}) \\& - \frac{1}{2}\left(\langle T|W\rangle \langle W|H\rangle + \langle H|W\rangle \langle W|T\rangle\right) \end{align} The sign actually carries through, and this affects the probability of winning at our slot machine. These weird interference terms are the essence of quantum mechanics, and while the notation of bras and kets is convenient, it's easy to get lost in the mathematical shapes and not realize the essence or intuition of what's going on! So finally, to answer your question, what is the difference between $ |H\rangle + |T\rangle $ and $ |H\rangle \langle H | + |T\rangle \langle T| $? The difference is that $ |H\rangle + |T\rangle $ is a quantum coin that has the extra interference terms shown above. The state $ |H\rangle \langle H | + |T\rangle \langle T| $ is a normal coin without any properties of quantum superposition. It has the probabilities of $P_{normal}$. In the ordinary unitary quantum mechanics typically taught in undergraduate classes, it's actually not possible to construct a state that acts like a normal coin without quantum superposition! To get this "normal coin" you need to add extra rules to quantum mechanics (called working in the "density matrix" framework).
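The interference terms can be seen numerically: the normal coin always sits exactly halfway between the two superpositions, while the sign of the superposition splits the quantum probabilities apart. A small sketch, where the "win" direction $|W\rangle$ is an arbitrary illustrative choice:

```python
import numpy as np

H = np.array([1.0, 0.0])
T = np.array([0.0, 1.0])
W = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])  # arbitrary "win" direction

plus = (H + T) / np.sqrt(2)    # the |H> + |T> quantum coin
minus = (H - T) / np.sqrt(2)   # the |H> - |T> quantum coin

p_plus = abs(W @ plus) ** 2
p_minus = abs(W @ minus) ** 2
p_mixture = 0.5 * abs(W @ H) ** 2 + 0.5 * abs(W @ T) ** 2  # the normal coin

print(p_plus, p_minus, p_mixture)
# p_mixture = (p_plus + p_minus) / 2: the interference terms cancel in the
# average but shift each individual superposition away from it.
```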
Preprints (rote Reihe) of the Department of Mathematics, year of publication 1996 293 Tangent measure distributions were introduced by Bandt and Graf as a means to describe the local geometry of self-similar sets generated by iteration of contractive similitudes. In this paper we study the tangent measure distributions of hyperbolic Cantor sets generated by contractive mappings which are not similitudes. We show that the tangent measure distributions of these sets, equipped with either Hausdorff or Gibbs measure, are unique almost everywhere and give an explicit formula describing them as probability distributions on the set of limit models of Bedford and Fisher. 301 We extend the methods of geometric invariant theory to actions of non-reductive groups in the case of homomorphisms between decomposable sheaves whose automorphism groups are non-reductive. Given a linearization of the natural action of the group Aut(E)xAut(F) on Hom(E,F), a homomorphism is called stable if its orbit with respect to the unipotent radical is contained in the stable locus with respect to the natural reductive subgroup of the automorphism group. We encounter effective numerical conditions for a linearization such that the corresponding open set of semi-stable homomorphisms admits a good and projective quotient in the sense of geometric invariant theory, and that this quotient is in addition a geometric quotient on the set of stable homomorphisms. 271 The paper deals with parallel-machine and open-shop scheduling problems with preemptions and arbitrary nondecreasing objective function. An approach to describe the solution region for these problems and to reduce them to minimization problems on polytopes is proposed. Properties of the solution regions for certain problems are investigated. It is proved that open-shop problems with unit processing times are equivalent to certain parallel-machine problems, where preemption is allowed at arbitrary time.
A polynomial algorithm is presented transforming a schedule of one type into a schedule of the other type. 283 A regularization Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems (1996) The first part of this paper studies a Levenberg-Marquardt scheme for nonlinear inverse problems where the corresponding Lagrange (or regularization) parameter is chosen from an inexact Newton strategy. While the convergence analysis of standard implementations based on trust region strategies always requires the invertibility of the Fréchet derivative of the nonlinear operator at the exact solution, the new Levenberg-Marquardt scheme is suitable for ill-posed problems as long as the Taylor remainder is of second order in the interpolating metric between the range and domain topologies. Estimates of this type are established in the second part of the paper for ill-posed parameter identification problems arising in inverse groundwater hydrology. Both transient and steady state data are investigated. Finally, the numerical performance of the new Levenberg-Marquardt scheme is studied and compared to a usual implementation on a realistic but synthetic 2D model problem from the engineering literature. 277 A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number. 280 This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration.
The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration. These assumptions are fulfilled, e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data. 274 This paper investigates the convergence of the Lanczos method for computing the smallest eigenpair of a selfadjoint elliptic differential operator via inverse iteration (without shifts). Superlinear convergence rates are established, and their sharpness is investigated for a simple model problem. These results are illustrated numerically for a more difficult problem. 276 Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\) and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) :=\lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.
282 Let \(a_1,\dots,a_m\) be independent and identically distributed random points in \(\mathbb{R}^n\), spherically symmetrically distributed. Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\). 279 It is shown that Tikhonov regularization for an ill-posed operator equation \(Kx = y\) using a possibly unbounded regularizing operator \(L\) yields an order-optimal algorithm with respect to a certain stability set when the regularization parameter is chosen according to Morozov's discrepancy principle. A more realistic error estimate is derived when the operators \(K\) and \(L\) are related to a Hilbert scale in a suitable manner. The result includes known error estimates for ordinary Tikhonov regularization and also the estimates available under the Hilbert scale approach. 285 On derived varieties (1996) Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. Especially the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties.
A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart for solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation. 284 A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L} = (L; \land, \lor)\) is generated by the identity function \(id(x)=x\) and the constant functions \(c_a(x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or also in several variables is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\) is a polynomial function then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.
This is the problem: A particle is constrained to move in 1D between two rigid walls located at x=0 and x=a. At t=0, it is described by: $$\psi(x,0) = \left[\cos^{2}\left(\frac{\pi}{a}x\right)-\cos\left(\frac{\pi}{a}x\right)\right]\sin\left(\frac{\pi}{a}x\right)+B $$ For $t>0$, determine the probability of finding the particle between 0 and $\frac{a}{4}$. So, using some trigonometry and the orthonormal basis $\phi_{n}(x)=\sqrt{\frac{2}{a}}\sin(\frac{n\pi}{a}x)$, I can write the wave function as: $$\psi(x,0)=\sqrt{\frac{a}{32}}\phi_{1}(x)-\sqrt{\frac{a}{8}}\phi_{2}(x)+\sqrt{\frac{a}{32}}\phi_{3}(x)+B$$ Before I can use the evolution operator, I must find an expression for $B$ in terms of the basis. I use $B=\sum_{n} C_{n}\phi_{n}(x)$ where, after computing the coefficients $C_{n}$ and noticing that only odd values of $n$ contribute to the expansion: $$B\rightarrow \frac{B}{\pi}\sqrt{8a}\sum_{n=0}^{\infty}\frac{1}{2n+1}\phi_{2n+1}(x)$$ So now, how should I choose the constant so that the wave function is normalized? Is it just a matter of finding the value of $B$ from $\langle\psi|\psi\rangle=1$? Or is there another way? Because using $\langle\psi|\psi\rangle=1$ I get a quadratic equation in $B$, and I am not sure that is the way.
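For what it is worth, a quick numerical check (taking $a=1$ and a plain midpoint rule; this is my own sketch, not part of the original problem) confirms that imposing $\langle\psi|\psi\rangle=1$ really does reduce to a quadratic in $B$, since $\langle\psi|\psi\rangle = \int\psi_0^2\,dx + 2B\int\psi_0\,dx + B^2$ with $\psi_0$ the $B$-free part:

```python
import math

# B-independent part of psi(x, 0), with a = 1 (assumed for the check)
def psi0(x):
    c = math.cos(math.pi * x)
    return (c * c - c) * math.sin(math.pi * x)

def integrate(f, lo, hi, n=100_000):
    # plain midpoint rule; accurate enough for this sanity check
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

I0 = integrate(lambda x: psi0(x) ** 2, 0.0, 1.0)  # equals 3/16 analytically
I1 = integrate(psi0, 0.0, 1.0)                    # equals 2/(3*pi) analytically

# <psi|psi> = I0 + 2*B*I1 + B**2 = 1, a quadratic in B; take the positive root
B = -I1 + math.sqrt(I1 * I1 + 1.0 - I0)

norm = integrate(lambda x: (psi0(x) + B) ** 2, 0.0, 1.0)
print(B, norm)  # norm comes out ~1
```

(This ignores the boundary subtlety that adding a constant makes $\psi$ nonzero at the walls; the sine expansion of the constant converges to it only in the interior.)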
Left Inverse for All is Right Inverse Theorem Let $(S, \circ)$ be a semigroup with a left identity $e_L$ in which every element has a left inverse with respect to $e_L$: $\forall x \in S: \exists x_L: x_L \circ x = e_L$ Then every such left inverse is also a right inverse: $\forall x \in S: x \circ x_L = e_L$ Proof Let $y = x \circ x_L$. Then: $$e_L \circ y = \paren {y_L \circ y} \circ y \quad \text{(Definition of Left Inverse Element)}$$ $$= y_L \circ \paren {y \circ y} \quad \text{(as $\circ$ is associative)}$$ $$= y_L \circ y \quad \text{(Product of Semigroup Element with Left Inverse is Idempotent: $y \circ y = y$)}$$ $$= e_L \quad \text{(Definition of Left Inverse Element)}$$ Since $e_L$ is a left identity, $e_L \circ y = y$; hence $x \circ x_L = y = e_L$. $\blacksquare$ Also see Left Identity while exists Left Inverse for All is Identity Right Identity while exists Right Inverse for All is Identity
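Not part of the ProofWiki page, but a finite sanity check is easy: enumerate every binary operation on a 3-element set, keep those that are associative with a left identity and a left inverse for every element, and verify that each left inverse is also a right inverse. A brute-force sketch:

```python
from itertools import product

S = (0, 1, 2)

def count_witnesses():
    found = 0
    # enumerate all 3**9 operation tables (x, y) -> x o y
    for values in product(S, repeat=9):
        op = [values[3 * x : 3 * x + 3] for x in S]
        # associativity: (x o y) o z == x o (y o z)
        if any(op[op[x][y]][z] != op[x][op[y][z]]
               for x in S for y in S for z in S):
            continue
        # a left identity e_L with e_L o x == x for all x
        ids = [e for e in S if all(op[e][x] == x for x in S)]
        if not ids:
            continue
        e = ids[0]
        # every x needs a left inverse x_L with x_L o x == e_L
        left_inv = [[xl for xl in S if op[xl][x] == e] for x in S]
        if any(not inv for inv in left_inv):
            continue
        found += 1
        # the theorem: every left inverse is also a right inverse
        for x in S:
            for xl in left_inv[x]:
                assert op[x][xl] == e
    return found

print(count_witnesses(), "qualifying tables, no counterexample found")
```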
Brauer height-zero conjecture For notation and definitions, see also Brauer first main theorem. Let $\chi$ be an irreducible character in a block $B$ of a group $G$ with defect group $D$ (cf. also Defect group of a block). Let $\nu$ be the discrete valuation defined on the integers with $\def\a{\alpha}\nu(np^\a)=\a$ whenever $n$ is prime to $p$. By a theorem of Brauer, $\nu(\chi(1))\ge \nu(|G:D|)$. The height of $\chi$ is defined to be $$\nu(\chi(1))-\nu(|G:D|).$$ Every block contains an irreducible character of height zero. Brauer's height-zero conjecture is the assertion that every irreducible character in $B$ has height zero if and only if $D$ is Abelian (cf. also Abelian group). That every irreducible character in $B$ has height zero when $D$ is Abelian was proved for $p$-solvable groups (cf. also $\pi$-solvable group) by P. Fong (see [Fe], X.4). The converse for $p$-solvable groups was proved by D. Gluck and T. Wolf [GlWo], using the classification of finite simple groups. The "if" direction has been reduced to the consideration of quasi-simple groups by T.R. Berger and R. Knörr [BeKn]. The task of checking this half of the conjecture for the quasi-simple groups was completed in 2011 by R. Kessar and G. Malle [KeMa], hence completing the proof of the "if" direction. The evidence for the "only if" direction is more slender. References [BeKn] T.R. Berger, R. Knörr, "On Brauer's height $0$ conjecture" Nagoya Math. J., 109 (1988) pp. 109–116 MR0931954 Zbl 0637.20006 [Fe] W. Feit, "The representation theory of finite groups", North-Holland (1982) MR0661045 Zbl 0493.20007 [GlWo] D. Gluck, T.R. Wolf, "Brauer's height conjecture for $p$-solvable groups" Trans. Amer. Math. Soc., 282 : 1 (1984) pp. 137–152 MR0728707 Zbl 0543.20007 [KeMa] R. Kessar, G. Malle, "Quasi-isolated blocks and Brauer's height conjecture" arXiv:1112.2642
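As a toy illustration of the valuation (the numbers below are invented, not taken from an actual character table), $\nu$ and the height can be computed mechanically:

```python
def nu(m, p):
    """The valuation nu: nu(n * p**alpha) = alpha, with p not dividing n."""
    alpha = 0
    while m % p == 0:
        m //= p
        alpha += 1
    return alpha

def height(chi_degree, index_G_D, p):
    """Height of chi: nu(chi(1)) - nu(|G : D|); by Brauer's theorem it is >= 0."""
    return nu(chi_degree, p) - nu(index_G_D, p)

print(nu(48, 2), height(12, 4, 2))  # 4 and 0
```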
In a previous mathstack question I posted a formal series solution for finding the fixed points of $z \mapsto \exp(z)-1+k$. This analytic Taylor series in $\sqrt{-2k}$ can generate both fixed points for bases between $1 < a < \exp(1/e)$ as well as the complex conjugate pair of fixed points for bases $>\exp(1/e)$, and works for complex bases as well as real bases $<1$. Since that post, I have found a better way to formally calculate that series. For the OP's question, it seemed there might be a series that could work in the neighborhood of $\exp(-e)$ to generate the two periodic solutions. I was able to generate just such a series, which works well for roughly $0<a<0.6$. For calculating Gottfried's fixed point pair solution for $a^{a^x}$ for $0<a<e^{-e}$, we need this new Taylor series, which is centered on the parabolic fixed point with multiplier $-1$. We expect that the solution would be analytic in $\sqrt{x-\exp(-e)}$. But just as with the earlier problem, it is simplest to find such a Taylor series using a mathematically "congruent" problem. For the OP's problem, $a^{a^x}$ for $0<a<1$, I use this congruence equation, where we want the period-2 fixed points of $f(y)$ with $$k=\ln(-\ln(a))-1$$ and then we can instead iterate: $$y \mapsto f(y,k);\;\; \text{where} \;\;\;f(y,k)=-\exp(y)+1+k;\;\;\;\;\text{and}\;\;\;\;y=z\ln(a)+\ln(-\ln(a));$$ $$z \mapsto a^z;\;\;\;\text{is congruent to}\;\;\; y \mapsto f(y,k);\;\;\;\; k=\ln(-\ln(a))-1; $$$$ f\big(z\ln(a)+\ln(-\ln(a))\big)=(a^z)\ln(a)+\ln(-\ln(a));\;\;\; f \; \text{is congruent to}\; a^z$$ $k=0$ corresponds to $a=\exp(-e)$, which has a multiplier of $-1$. The two periodic fixed points of $f(y,k)$ have a series which I shall call $g$, whose definition is below.
We then find a formal series solution for the two periodic fixed point pairs of $f(f(x,k))=x$ $$f(x,k) = -\exp(x)+1+k$$ $$f(g(\sqrt{6k}),k) = g(-\sqrt{6k})\;\;\;\;\text{$g(-x)$ is the other fixed point}$$$$-\exp(g(\sqrt{6k}))+1+k = g(-\sqrt{6k})\;\;\;\;\text{definition of $f$}$$$$-\exp(g(x))+1+\frac{x^2}{6}=g(-x)\;\;\;\;\text{by substituting $k=x^2/6$}$$ That is the formal series definition for $g$, where the $x^2/6$ term was chosen so that the $x^1$ coefficient of the Taylor series for $g$ below is $1$. In this equation, $g(x)$ corresponds to one fixed point, and $g(-x)$ corresponds to the other fixed point. And the two-cycle fixed points of $y\mapsto -\exp(y)+1+k$ may be found by $y=g(\pm\sqrt{6k})$. And then the fixed point pair of $z$ for $z\mapsto a^{a^z}$ may be found by using this equation: $$z=\frac{g(\pm\sqrt{6k})-\ln(-\ln(a))}{\ln(a)};\;\;\;\; k=\ln(-\ln(a))-1 $$ The first 16 terms of the series for $g$ are as follows. I wrote a Pari/GP program to calculate the formal series for $g$, which requires iteratively solving 2x2 simultaneous equations for pairs of consecutive terms, but it is not too bad. With enough terms, the series can be used on its own, or the series can be used as an input to Newton's method to get a more accurate answer.
g = + x^ 1 * 1
    + x^ 2 * -1/6
    + x^ 3 * 1/20
    + x^ 4 * -1/90
    + x^ 5 * 523/151200
    + x^ 6 * -23/28350
    + x^ 7 * 239/1008000
    + x^ 8 * -19/340200
    + x^ 9 * 1471949/100590336000
    + x^10 * -6583/1964655000
    + x^11 * 94891697/130767436800000
    + x^12 * -49909/328378050000
    + x^13 * 18670028801/988601822208000000
    + x^14 * -520019/241357866750000
    + x^15 * -88448773393/67224923910144000000
    + x^16 * 254033333/492370048170000000
    ....
So let's say we want the two real fixed points of $a^{a^z}$ for $a=0.04$. Then $k=\ln(-\ln(0.04))-1\approx 0.1690321758870$, and we want $g(\pm\sqrt{6k})$, so we wind up with $z=0.0896008408659,\; 0.749451269718$ as the two fixed points. For this case, the 16-term series is accurate to about 10-11 decimal digits, and a 48-term series is accurate to 26 decimal digits.
Surprisingly, with a 16-term series for $a=0.01$, we still get ~5 decimal digits of precision. One can also get the complex conjugate pair of fixed points for $a>\exp(-e)$; for example, for $a=0.1$ we get the following pair of complex conjugate fixed points, also accurate to about 10 decimal digits using the 16-term series above. z=0.294596558514 +/- 0.413195411460*I Here is a graph of the fixed points from 0.001 to 0.3. You can compare this graph to Gottfried's graph, where I have added the complex conjugate fixed points for $x>\exp(-e)$. The fixed points meet at $\exp(-e)$. We see the complex conjugate pair of fixed points to the right, and the real pair of fixed points to the left, as well as the primary real fixed point, which is calculated by another method.
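As a cross-check on the $a=0.04$ example above (my own sketch, independent of the series), one can find the same two fixed points by bisection on $F(z)=a^{a^z}-z$, and confirm that they form a 2-cycle under $z\mapsto a^z$:

```python
def bisect(f, lo, hi, iters=200):
    # plain bisection; f(lo) and f(hi) must have opposite signs
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if (flo > 0) == (fmid > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a = 0.04

def F(z):
    # roots of F are the period-2 fixed points of z -> a**z
    return a ** (a ** z) - z

z1 = bisect(F, 0.0, 0.3)  # ~0.0896008408659
z2 = bisect(F, 0.5, 0.9)  # ~0.749451269718
print(z1, z2, a ** z1)    # a**z1 reproduces z2: the pair is a 2-cycle
```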
Reading the definition of partition of unity: Let $A\subset \Bbb R^n$ and let $\mathcal{O}$ be an open cover of $A$. Then there is a collection $\Phi$ of $C^\infty$ functions $\varphi$ defined in an open set containing $A$, with the following properties: For each $x \in A$ we have $0 \leq \varphi(x) \leq 1$. For each $x \in A$ there is an open set $V$ containing $x$ such that all but finitely many $\varphi \in \Phi$ are $0$ on $V$. For each $x \in A$ we have $\sum_{\varphi \in \Phi}\varphi(x)=1$ (by 2, for each $x$ the sum is finite on some open set containing $x$). For each $\varphi \in \Phi$ there is an open set $U$ in $\mathcal{O}$ such that $\varphi = 0$ outside of some closed set contained in $U$. This makes me feel that condition (2) implies second countability, but I am not quite sure whether this holds.
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful? closed as primarily opinion-based by Daniel W. Farlow, Najib Idrissi, user91500, LutzL, Jonas Meyer Apr 7 '15 at 3:40 Here's a cute and lovely theorem. There exist two irrational numbers $x,y$ such that $x^y$ is rational. Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$ (Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.) How about the proof that $$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$ I remember being impressed by this identity and the proof can be given in a picture: Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments. Cantor's diagonalization argument, the proof that there are infinite sets that can't be put in one-to-one correspondence with the set of natural numbers, is frequently cited as a beautifully simple but powerful proof. Essentially, given a list of infinite sequences, a sequence formed from the diagonal entries will not be in the list. I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction! Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$.
Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then there exist some $x,y\in\mathbb Z$ such that $$x+iy = (a+ib)(c+id)$$ Taking the magnitudes of both sides and squaring gives $$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$ I would go for the proof by contradiction of an infinite number of primes, which is fairly simple: Assume that there is a finite number of primes. Let $G$ be the set of all primes $P_1,P_2,...,P_n$. Compute $K = P_1 \times P_2 \times ... \times P_n + 1$. If $K$ is prime, then it is obviously not in $G$. Otherwise, none of its prime factors are in $G$. Conclusion: $G$ is not the set of all primes. I think I learned that both in high school and at 1st year, so it might be a little too simple... By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$ The first player in Hex has a winning strategy. There are no draws in Hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) and then using player two's strategy. You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$. For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$." Proof: Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ be the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$. Let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$.
Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{(x^{11}-1)}{11(x-1)}, $$ which clearly is nonzero when $x\neq 1$. Therefore $r$ does not have any real zeros. But because $p$ and $q$ are $5$th degree polynomials, they must have real zeros (every odd-degree polynomial with real coefficients does). Therefore, $r(x)=p(x)q(x)$ has a real zero. A contradiction. Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine you remove two tiles, from two opposite corners of the original square. Prove that it is now no longer possible to cover the remaining area with domino bricks. Proof: Imagine that the square is a checkerboard. Each domino brick will cover two tiles of different colors. When you remove tiles from two opposite corners, you will remove two tiles with the same color. Thus, it is no longer possible to cover the remaining area. (Well, it may be too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...) One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $\,(x,y,z)\,$ with $\,x^2 + y^2 = z^2.\,$ Dividing by $\,z^2$ yields $\,(x/z)^2+(y/z)^2 = 1,\,$ so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $\,P = (1,1).\,$ It intersects the circle in the rational point $\,A = (4/5,3/5)\,$ yielding the triple $\,(3,4,5).\,$ Next reflect the point $\,A\,$ into the other quadrants by taking all possible signs of each component, i.e. $\,(\pm4/5,\pm3/5),\,$ yielding the inscribed rectangle below.
As before, the line through $\,A_B = (-4/5,-3/5)\,$ and $P$ intersects the circle in $\,B = (12/13, 5/13),\,$ yielding the triple $\,(12,5,13).\,$ Similarly the points $\,A_C,\, A_D\,$ yield the triples $\,(20,21,29)\,$ and $\,(8,15,17).\,$ We can iterate this process with the new points $\,B,C,D,\,$ doing the same as we did for $\,A,\,$ obtaining further triples. Iterating this process generates the primitive triples as a ternary tree $\qquad\qquad$ Descent in the tree is given by the formula $$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$ e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (4,-3,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant. Ascent in the tree by inverting this map, combined with trivial sign-changing reflections: $\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$ $\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$ $\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$ See my MathOverflow post for further discussion, including generalizations and references. I like the proof that there are infinitely many Pythagorean triples. Theorem: There are infinitely many integers $ x, y, z$ such that $$ x^2+y^2=z^2 $$ Proof: $$ (2ab)^2 + ( a^2-b^2)^2= ( a^2+b^2)^2 $$ One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1. Proof: project the disk and the strips onto a hemisphere sitting on top of the disk. The projection of each strip has area at most 1/100th of the area of the hemisphere. If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other. Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$.
There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can’t all have different largest odd divisors — there are only $50$ possibilities. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$’s)$\cdot d$. This is because if $d$ is the largest odd divisor of a number, the rest of its factorization can’t include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$’s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.) In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements in $S$, the set must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it’s easier to follow if you see it for a specific $n$ first. The proof that an isosceles triangle ABC (with AC and AB having equal length) has equal angles ABC and BCA is quite nice: Triangles ABC and ACB are (mirrored) congruent (since AB = AC, BC = CB, and CA = BA), so the corresponding angles ABC and (mirrored) ACB are equal. This congruency argument is nicer than that of cutting the triangle up into two right-angled triangles. Parity of sine and cosine functions using Euler's formula: $e^{-i\theta} = \cos(-\theta) + i\sin(-\theta)$ $e^{-i\theta} = \frac 1 {e^{i\theta}} = \frac 1 {\cos\theta + i\sin\theta} = \frac {\cos\theta - i\sin\theta} {\cos^2\theta + \sin^2\theta} = \cos\theta - i\sin\theta$ $\cos(-\theta) + i\sin(-\theta) = \cos\theta + i(-\sin\theta)$ Thus $\cos(-\theta) = \cos\theta$ $\sin(-\theta) = -\sin\theta$ $\blacksquare$ The proof is actually just the first two lines. I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his very early schooling years.
He tackled it quicker than his peers or his teacher could: $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100 \space times}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}$$ $$=5050.$$ Hence he showed that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$ If $H$ is a subgroup of $(\mathbb{R},+)$ and $H\cap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic. Fermat's little theorem from noting that modulo a prime $p$ we have for $a\neq 0$: $$1\times2\times3\times\cdots\times (p-1) = (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right)$$ Proposition (No universal set): There does not exist a set which contains all sets (including itself). Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then one can construct, by the axiom schema of specification, the set $$C=\{A\in X: A \notin A\}$$ of all sets in the universe which do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction. Edit: Assuming that one is working in ZF (as almost everywhere :P) (In particular this proof really impressed me very much the first time, and it is also very simple.) Most proofs concerning the Cantor set are simple but amazing. The total length of the intervals remaining in the set is zero. It is uncountable. Every number in the set can be represented in ternary using just 0s and 2s. No number that requires a 1 in its ternary representation appears in the set. The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers, which are dense in every interval.
The Menger sponge, which is a 3D extension of the Cantor set, simultaneously exhibits an infinite surface area and encloses zero volume. The derivation of differentiation from first principles is amazing, easy, useful and simply outstanding in all aspects. I put it here: Suppose we have a quantity $y$ whose value depends upon a single variable $x$, and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as: $y=f(x)$ This relationship can be visualized by drawing a graph of the function $y = f(x)$, regarding $y$ and $x$ as Cartesian coordinates, as shown in Figure (a). Consider the point $P$ on the curve $y = f(x)$ whose coordinates are $(x, y)$ and another point $Q$ whose coordinates are $(x + \Delta x, y + \Delta y)$. The slope of the line joining $P$ and $Q$ is given by: $\tan\theta = \frac{\Delta y}{\Delta x} = \frac{(y + \Delta y) - y}{\Delta x}$ Suppose now that the point $Q$ moves along the curve towards $P$. In this process, $\Delta y$ and $\Delta x$ decrease and approach zero; though their ratio $\frac{\Delta y}{\Delta x}$ will not necessarily vanish. What happens to the line $PQ$ as $\Delta y\to0$, $\Delta x\to0$? You can see that this line becomes a tangent to the curve at point $P$, as shown in Figure (b). This means that $\tan\theta$ approaches the slope of the tangent at $P$, denoted by $m$: $m=\lim_{\Delta x\to0} \frac{\Delta y}{\Delta x} = \lim_{\Delta x\to0} \frac{(y+\Delta y)-y}{\Delta x}$ The limit of the ratio $\Delta y/\Delta x$ as $\Delta x$ approaches zero is called the derivative of $y$ with respect to $x$ and is written as $dy/dx$. It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$. Since $y = f(x)$ and $y + \Delta y = f(x + \Delta x)$, we can write the definition of the derivative as: $\frac{dy}{dx}=\frac{d{f(x)}}{dx} = \lim_{\Delta x\to0} \left[\frac{f(x+\Delta x)-f(x)}{\Delta x}\right]$, which is the required formula. This proof that $n^{1/n} \to 1$ as integral $n \to \infty$: By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2} $.
Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2} $. Can a chess knight, starting at one corner, move so as to touch every space on the board exactly once, ending in the opposite corner? The solution turns out to be childishly simple. Every time the knight moves (up two, over one), it will hop from a black space to a white space, or vice versa. Assuming the knight starts on a black corner of the board, it will need to touch 63 other squares, 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making it impossible. The eigenvalues of a skew-Hermitian matrix are purely imaginary. The eigenvalue equation is $A\vec x = \lambda\vec x$, and taking the inner product with $\vec x$ gives $$\lambda \|\vec x\|^2 = \lambda\left<\vec x, \vec x\right> = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2$$ and since $\|\vec x\|^2 > 0$, we can divide it from the left and right side, leaving $\lambda = -\lambda^*$, i.e. $\lambda$ is purely imaginary. The second to last step uses the definition of skew-Hermitian. Using the definition for Hermitian or unitary matrices instead yields corresponding statements about the eigenvalues of those matrices. I like the proof that not every real number can be written in the form $a e + b \pi$ for some integers $a$ and $b$. I know it's almost trivial in one way; but in another way it is kind of deep.
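Several of these can be checked by machine. For instance the skew-Hermitian claim, verified on random 2×2 matrices via the characteristic polynomial (a quick sketch of mine, not from the original answers):

```python
import cmath
import random

random.seed(0)

def eig2(m):
    # eigenvalues of a 2x2 complex matrix from its characteristic polynomial
    (a, b), (c, d) = m
    tr = a + d
    det = a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

for _ in range(100):
    p, q, r, s = (random.uniform(-5, 5) for _ in range(4))
    # skew-Hermitian: the conjugate transpose of A equals -A
    A = [[1j * p, q + 1j * r],
         [-q + 1j * r, 1j * s]]
    for lam in eig2(A):
        assert abs(lam.real) < 1e-9  # purely imaginary, as the proof claims
print("ok")
```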
I think there are two legitimate sources of complaint. For the first, I will give you the anti-poem that I wrote in complaint against both economists and poets. A poem, of course, packs meaning and emotion into pregnant words and phrases. An anti-poem removes all feeling and sterilizes the words so that they are clear. The fact that most English speaking humans cannot read this assures economists of continued employment. You cannot say that economists are not bright. Live Long and Prosper-An Anti-Poem May you be denoted as $k\in{I},I\in\mathbb{N}$, such that $I=1\dots{i}\dots{k}\dots{Z}$ where $Z$ denotes the most recently born human. $\exists$ a fuzzy set $Y=\{y^i:\text{Human Mortality Expectations}\mapsto{y^i},\forall{i\in{I}}\},$ may $y^k\in\Omega,\Omega\in{Y}$ and $\Omega$ is denoted as "long" and may $U(c)$, where c is the matrix of goods and services across your lifetime $U$ is a function of $c$, where preferences are well-defined and $U$ is qualitative satisfaction, be maximized $\forall{t}$, $t$ denoting time, subject to $w^k=f'_t(L_t),$ where $f$ is your production function across time and $L$ is the time vector of your amount of work, and further subject to $w^i_tL^i_t+s^i_{t-1}=P_t^{'}c_t^i+s^i_t,\forall{i}$ where $P$ is the vector of prices and $s$ is a measure of personal savings across time. May $\dot{f}\gg{0}.$ Let $W$ be the set $W=\{w^i_t:\forall{i,t}\text{ ranked ordinally}\}$ Let $Q$ be the fuzzy subset of $W$ such that $Q$ is denoted "high". Let $w_t^k\in{Q},\forall{t}$ The second is mentioned above, which is the misuse of math and statistical methods. I would both agree and disagree with the critics on this. I believe that most economists are not aware of how fragile some statistical methods can be. To provide an example, I did a seminar for the students in the math club as to how your probability axioms can completely determine the interpretation of an experiment. 
I proved using real data that newborn babies will float out of their cribs unless nurses swaddle them. Indeed, using two different axiomatizations of probability, I had babies clearly floating away and obviously sleeping soundly and securely in their cribs. It wasn't the data that determined the result; it was the axioms in use. Now any statistician would clearly point out that I was abusing the method, except that I was abusing the method in a manner that is normal in the sciences. I didn't actually break any rules; I just followed a set of rules to their logical conclusion in a way that people do not consider, because babies don't float. You can get significance under one set of rules and no effect at all under another. Economics is especially sensitive to this type of problem. I do believe that there is an error of thought in the Austrian school, and maybe the Marxist school, about the use of statistics in economics, and I believe it is based on a statistical illusion. I am hoping to publish a paper on a serious math problem in econometrics that nobody has seemed to notice before, and I think it is related to the illusion. This image is the sampling distribution of Edgeworth's maximum likelihood estimator under Fisher's interpretation (blue) versus the sampling distribution of the Bayesian maximum a posteriori estimator (red) with a flat prior. It comes from a simulation of 1000 trials, each with 10,000 observations, so they should converge. The true value is approximately .99986. Since the MLE is also the OLS estimator in this case, it is also Pearson and Neyman's MVUE. Note how relatively inaccurate the frequency-based estimator is compared to the Bayesian. Indeed, the relative efficiency of $\hat{\beta}$ under the two methods is 20:1. Although Leonard Jimmie Savage was certainly alive when the Austrian school left statistical methods behind, the computational ability to use them didn't exist. The first element of the illusion is inaccuracy.
The second part can better be seen with a kernel density estimate of the same graph. In the region of the true value, there are almost no examples of the maximum likelihood estimator being observed, while the Bayesian maximum a posteriori estimator closely covers .999863. In fact, the average of the Bayesian estimators is .99987, whereas the frequency-based solution is .9990. Remember, this is with 10,000,000 data points overall. Frequency-based estimators are averaged over the sample space. The implication often missed is that such an estimator is unbiased on average over the entire space, but possibly biased for any specific value of $\theta$. You also see this with the binomial distribution. The effect is even greater on the intercept. The red is the histogram of frequentist estimates of the intercept, whose true value is zero, while the Bayesian is the spike in blue. The impact of these effects is worsened with small sample sizes, because large samples pull the estimator toward the true value. I think the Austrians were seeing results that were inaccurate and didn't always make logical sense. When you add data mining into the mix, I think they were rejecting the practice. The reason I believe the Austrians are incorrect is that their most serious objections are solved by Leonard Jimmie Savage's personalistic statistics. Savage's Foundations of Statistics fully covers their objections, but I think the split had effectively already happened, and so the two have never really met up. Bayesian methods are generative methods, while frequency methods are sampling-based methods. While there are circumstances where it may be inefficient or less powerful, if a second moment exists in the data, then the t-test is always a valid test for hypotheses regarding the location of the population mean. You do not need to know how the data was created in the first place. You need not care. You only need to know that the central limit theorem holds.
Conversely, Bayesian methods depend entirely on how the data came into existence in the first place. For example, imagine you were watching English-style auctions for a particular type of furniture. The high bids would follow a Gumbel distribution. The Bayesian solution for inference regarding the center of location would not use a t-test, but rather the joint posterior density of those observations with the Gumbel distribution as the likelihood function. The Bayesian idea of a parameter is broader than the Frequentist one and can accommodate completely subjective constructions. As an example, Ben Roethlisberger of the Pittsburgh Steelers could be considered a parameter. He would also have parameters associated with him, such as pass completion rates, but he could have a unique configuration, and he would be a parameter in a sense similar to Frequentist model comparison methods. He might be thought of as a model. The complexity rejection isn't valid under Savage's methodology, and indeed cannot be. If there were no regularities in human behavior, it would be impossible to cross a street or take a test. Food would never be delivered. It may be the case, however, that "orthodox" statistical methods can give pathological results that have pushed some groups of economists away.
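The auction example lends itself to a small illustration. This is a hypothetical sketch of mine (not the author's code), assuming the Gumbel scale is known and putting a flat prior on the location; it approximates the posterior on a grid, which is the "joint posterior density with the Gumbel likelihood" idea in its simplest form.

```python
import math
import random

random.seed(1)
beta = 1.0       # assumed known Gumbel scale, for simplicity
true_mu = 10.0   # true location of the winning-bid distribution

# Simulated winning bids: Gumbel(mu, beta) draws via the inverse CDF.
bids = [true_mu - beta * math.log(-math.log(random.random())) for _ in range(50)]

def log_lik(mu):
    # Gumbel log-density, summed over the observed high bids
    total = 0.0
    for x in bids:
        z = (x - mu) / beta
        total += -math.log(beta) - z - math.exp(-z)
    return total

# Flat prior on a grid => posterior is proportional to the likelihood,
# so the grid maximizer is the maximum a posteriori estimate.
grid = [8.0 + 0.01 * k for k in range(400)]  # mu in [8, 12)
posterior_mode = max(grid, key=log_lik)
print(posterior_mode)
```

Note that nothing here resembles a t-test: the inference is driven entirely by the assumed data-generating mechanism, which is exactly the contrast being drawn above.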
The expression can be succinctly written as ${\left({\mathbf{x}}^TA\mathbf{x} \right)}^n$, where $\mathbf{x}=(x_1,x_2,\ldots,x_N)$. The inner product ${\mathbf{x}}^TA\mathbf{x}$ is some polynomial $p_A(\mathbf{x})$ that is a sum of monomials of degree exactly $2$. Hence, each nonzero monomial in ${\left({\mathbf{x}}^TA\mathbf{x}\right)}^n ={\big[p_A(\mathbf{x})\big]}^n$ is of the form $x_1^{p_1}x_2^{p_2}\cdots x_N^{p_N}$ with $p_i\geq 0$ and $\sum_ip_i=2n$. Let $\rho$ be one such $N$-tuple $\rho=(p_1,p_2,\ldots,p_N)$, and let $\lVert\rho\rVert = \sum_ip_i$. We will use combinatorics to calculate the coefficient of ${\mathbf{x}}^{\rho} = x_1^{p_1}x_2^{p_2}\cdots x_N^{p_N}$ in ${\left({\mathbf{x}}^TA\mathbf{x}\right)}^n$. We will consider each different way in which we can choose a term from each copy of $p_A$ in the (ordered) product ${\big[p_A(\mathbf{x})\big]}^n$ so that the end result is an ${\mathbf{x}}^{\rho}$ monomial. Let $\mho$ be an alphabet $\mho=\{l_1,l_2,\dots,l_N\}$. We associate to each (ordered) choice of monomial $x_iA_{ij}x_j$ in $p_A(\mathbf{x})$ the string $l_il_j$ in $\mho$. This induces a correspondence between choices in the ordered product ${\big[p_A(\mathbf{x})\big]}^n$ and strings of length $\lVert\rho\rVert$ in $\mho$. Example with ${\big[p_A(\mathbf{x})\big]}^3$. Below, the $\longleftrightarrow$ arrow indicates the correspondence between choices and strings, while the $\Rightarrow$ arrow indicates the associated coefficient of the end monomial.
$$\left[\underbrace{(x_1A_{13}x_3)}_{\text{$1^{\text{st}}$ choice}}\underbrace{(x_4A_{41}x_1)}_{\text{$2^{\text{nd}}$ choice}}\underbrace{(x_2A_{22}x_2)}_{{\text{$3^{\text{rd}}$ choice}}} \longleftrightarrow l_1l_3l_4l_1l_2l_2\right] \Rightarrow A_{13}A_{41}A_{22}$$ Notice that two different strings may give rise to the same coefficient. For instance, $l_2l_2l_4l_1l_1l_3$ is also associated to the coefficient $A_{22}A_{41}A_{13}=A_{13}A_{41}A_{22}$, but this is because the product of the numbers $A_{ij}$ does not depend on the order of the terms. This is not a contradiction with our correspondence, because different strings correspond to different choices (even if they are associated to the same coefficient). Let $A(s)$ be the coefficient associated to a string $s$. We note that $A(s)$ contributes to the coefficient of ${\mathbf{x}}^{\rho}$ if and only if $s$ contains $p_i$ $l_i$'s for each $i=1,2,\ldots,N$. Hence, with \begin{equation}S(\rho)=\left\{s\,\middle|\,\begin{array}{l}\text{$s$ is a string of length $\lVert \rho \rVert$ on $\mho$ and}\\\text{for each $i=1,\dots,N$ the letter $l_i$ appears $p_i$ times in $s$}\end{array}\right\},\end{equation} the coefficient $K(\rho)$ of ${\mathbf{x}}^{\rho}$ is given by \begin{equation}\tag{1}\label{ksr}K(\rho) = \sum_{s \in S(\rho)}A(s)\end{equation} We pause briefly to note a rather obvious lemma that will nonetheless be useful to us. $\textbf{Lemma}$. Let $\rho$ be such that each $p_i=0$, except $p_k=2n$ for some positive integer $n$. Then $$K(\rho)={\left(A_{kk}\right)}^n$$ Proof. In ${\left(\sum_{i,j=1}^N \,x_i A_{ij} x_j\right)}^n$, the only choice that yields $x_k^{2n}$ is taking $i=j=k$ in each term of the $n$ products. $\square$ In the statement of the theorem below, we highlight the following piece of notation.
The hat above $c_j$ in $c_1+\ldots+\hat{c_j}+\ldots+c_N$ means $c_j$ does not feature in the sum, and $e_i$ is the vector $$e_i=\underbrace{(0,0,\ldots,0,1,0,\ldots,0,0)}_{\text{$1$ at position $i$}}$$ With that out of the way: $\textbf{Theorem}$ $\text{$\big(K(\rho)$ expansion$\big)$.}$ Let $\rho=(p_1,\ldots,p_N)$, where each $p_i$ is a nonnegative integer and $\sum_ip_i=2n$. For each $j \in \{1,2,\ldots, N\}$ we have that $$K(\rho)=n!\cdot\sum_{a=0}^{\left\lfloor p_j/2\right\rfloor}\frac{{\left(A_{jj}\right)}^a}{a!} \cdot F(\rho,j,a),$$ where $F(\rho,j,a)$ equals $$\sum_{ \substack{ c_1+\ldots+\hat{c_j}+\ldots+c_N=p_j-2a\\ 0\leq c_i \leq p_i }} \left( \prod_{ \substack{1\leq k\leq N\\ k\neq j }} \frac{ {\left( A_{jk} \right)}^{c_k} } {c_k!} \right) \cdot \frac{1}{ \big(n-(p_j-a)\big)! } \cdot K \left( \sum\limits_{ \substack{ 1\leq k\leq N\\ k\neq j }} (p_k-c_k)e_k \right)$$ Notice that the expansion calculates $K(p_1,\ldots,p_N)$ in terms of coefficients $K(\overset{\sim}{p_1},\ldots,\overset{\sim}{p_N})$, where each $\overset{\sim}{p_i}\leq p_i$ and, most importantly, $\overset{\sim}{p_j}=0$. Together with the lemma, this means $K(p_1,\ldots,p_N)$ can be calculated in at most $(N-1)$ applications of the theorem. Proof. We consider $A$ to be a 'free' symmetric matrix, meaning we assume no relations between the $A_{ij}$ beyond $A_{ij}=A_{ji}$. That said, the theorem can be adapted to general 'free' matrices.
The idea behind the expansion is to rewrite $\eqref{ksr}$ as a sum of the type \begin{equation}\tag{2}\label{krp}K(\rho)=\sum_{\alpha}\big|\{s \in S(\rho)\,|\,A(s)=\alpha\}\big|\cdot \alpha\end{equation} where $|X|$ is the cardinality of the set $X$. Now, $\big|\{s \in S(\rho)\,|\,A(s)=\alpha\}\big|$ is given by a multinomial coefficient. If $\alpha=\prod_{1\leq i\leq k\leq N}{\left(A_{ik}\right)}^{n_{ik}}$, and remember $\sum n_{ik}=n$ as per the hypotheses, then $$\big|\{s \in S(\rho)\,|\,A(s)=\alpha\}\big|=\frac{n!}{\prod_{1\leq i\leq k\leq N}n_{ik}!}$$ Next, we consider how terms $A_{ji}$ (where $j$ is the fixed index choice of the theorem) show up in the values $\alpha$. Afterwards, the substrings that remain have no $l_j$, so the problem is reduced to an easier one, with a smaller alphabet. Each $A_{jj}$ corresponds to two $l_j$'s in a string $s$, while each $A_{ji}$ with $j\neq i$ corresponds to a single $l_j$. Hence, if $a$ is the number of $A_{jj}$'s in $A(s)$ and $b$ is the number of $A_{ji}$'s with $j\neq i$, we have that $2a+b=p_j$ and that the total number of terms in $A(s)$ which feature an index $j$ is $$a+b=p_j-a.$$ The number of terms in $A(s)$ that don't feature an index $j$ is thus $n-(p_j-a)$. Notice that $0\leq a \leq \left\lfloor p_j/2\right\rfloor$. Now, a choice of $a$ 'well-defines' the terms $A_{jj}$'s, but, even though it defines $b$, there's still room for choice with the $A_{ji}$'s (namely, the indices $i$). To know the terms $A_{ji}$, it suffices to choose a submultiset $M$ from $$\mathcal{N}=\bigcup\limits_{\substack{1\leq k\leq N\\k\neq j}}\left\{l_k^{p_k}\right\}$$ with $|M|=b=p_j-2a$, where above we're using the multiset exponential notation for multiplicities. $M$ represents the indices $i$ (with their multiplicities).
Hence, each such submultiset has the form $M=\bigcup\limits_{\substack{1\leq k\leq N\\k\neq j}}\left\{l_k^{c_k}\right\}$ and corresponds to a solution of \begin{equation}\tag{3}\label{msys}\left\{\begin{array}{ll}c_1+\ldots+\hat{c_j}+\ldots+c_N=p_j-2a&\\0\leq c_i \leq p_i & (\forall i\in\{1,\ldots,N\}\setminus\{j\})\end{array}\right.\end{equation} Besides not having any $l_j$'s, the substring that remains will have $c_i$ fewer $l_i$'s for each $i\in\{1,\ldots,N\}\setminus\{j\}$. This answer is getting long enough as is, but I'll try to summarize the points made and hope it makes for a satisfactory proof:

- We begin with the expression in $\eqref{krp}$
- We factorize the terms in $\alpha$ that feature an index $j$
- There are $a$ terms with a double index $j$, and $p_j-2a$ terms with a single index $j$
- $0\leq a \leq \left\lfloor p_j/2\right\rfloor$
- Choices for the other index on terms with a single index $j$ correspond to solutions of $\eqref{msys}$
- Factorials come from the multinomial coefficients that count cardinalities in $\eqref{krp}$
- The substring that remains will have no $l_j$'s and $(p_i-c_i)$ $l_i$'s for each $i\in\{1,\ldots,N\}\setminus\{j\}$

If you have any questions about this proof I'll be happy to answer, but I hope this is clear enough. $\square$ $\textbf{Corollary}$ $\text{$\big($Minimal $K(\rho)$ expansion$\big)$.}$ Let $\rho$ be as in the Theorem. Let $P^+=\{p_i\,|\,p_i>0\}$, $I^+=\{i\,|\,p_i>0\}$ and $j \in I^+$ be such that $p_j =\min P^+$. Then $$K(\rho)=n!\cdot\sum_{a=0}^{\left\lfloor p_j/2\right\rfloor}\frac{{\left(A_{jj}\right)}^a}{a!} \cdot F(\rho,j,a),$$ where $F(\rho,j,a)$ equals $$\sum_{ \substack{ \sum\limits_{i\in I^+\setminus\{j\}}c_i\, =\,p_j-2a\\ c_i\geq 0 }} \left( \prod_{ k\in I^+\setminus\{j\}} \frac{ {\left( A_{jk} \right)}^{c_k} } {c_k!} \right) \cdot \frac{1}{ \big(n-(p_j-a)\big)!
} \cdot K \left( \sum\limits_{k\in I^+\setminus\{j\}} (p_k-c_k)e_k \right)$$ Notice the subscript in the summation of $F(\rho,j,a)$. In this form, the number of terms in the summation of $F(\rho,j,a)$ can be found easily via stars and bars. The number is $$\binom{(p_j-2a)+\left(\left|I^+\right|-1\right)-1}{p_j-2a}= \binom{p_j+\left|I^+\right|-2a-2}{p_j-2a}$$ Proof. For $i \notin I^+$, $p_i=0$. Hence, $0\leq c_i\leq p_i$ implies $c_i=0$, and it's easy to check that there's no loss in removing them from the summation. On the other hand, $\sum_{i\in I^+\setminus{\{j\}}}c_i=p_j-2a$ and $c_i\geq 0$ imply $c_i \leq p_j-2a\leq p_j\leq p_i$, so the sum is as in the theorem. $\square$
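As a sanity check on the notation, $K(\rho)$ can be computed by brute force for small $N$ and $n$ by enumerating every ordered choice of a term $x_iA_{ij}x_j$ from each of the $n$ factors, exactly as in the string correspondence above. This sketch is my own addition, not part of the answer:

```python
import itertools
import random
from collections import Counter

random.seed(2)
N, n = 3, 3

# A random symmetric matrix A (playing the role of the 'free' symmetric matrix)
A = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        A[i][j] = A[j][i] = random.randint(1, 9)

def K(rho):
    """Coefficient of x^rho in (x^T A x)^n, by enumerating every ordered
    choice of one term x_i A_ij x_j from each of the n factors."""
    assert sum(rho) == 2 * n
    pairs = list(itertools.product(range(N), repeat=2))
    total = 0
    for choice in itertools.product(pairs, repeat=n):
        exps = Counter()   # exponent of each x_k accumulated over the choice
        coeff = 1          # product of the chosen A_ij entries, i.e. A(s)
        for i, j in choice:
            exps[i] += 1
            exps[j] += 1
            coeff *= A[i][j]
        if all(exps[k] == rho[k] for k in range(N)):
            total += coeff
    return total

# Lemma check: rho concentrated on a single index k gives (A_kk)^n
assert K((2 * n, 0, 0)) == A[0][0] ** n
print(K((2, 3, 1)))
```

Setting every $x_i=1$ gives an independent consistency check: the sum of $K(\rho)$ over all $\rho$ with $\lVert\rho\rVert=2n$ must equal ${\left(\sum_{i,j}A_{ij}\right)}^n$.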
I was looking for why optical quantum computers don't need "extremely low temperatures" unlike superconducting quantum computers. Superconducting qubits usually work in the frequency range 4 GHz to 10 GHz. The energy associated with a transition frequency $f_{10}$ in quantum mechanics is $E_{10} = h f_{10}$ where $h$ is Planck's constant. Comparing the ...

Firstly, a classical computer does basic maths at the hardware level in the arithmetic and logic unit (ALU). The logic gates take low and high input voltages and use CMOS to implement logic gates, allowing individual gates to be performed and built up to perform larger, more complicated operations. In this sense, typing on a keyboard is sending ...

Is a dilution refrigerator the only way to cool superconducting qubits down to 10 millikelvin? There's another type of refrigerator that can get to 10 mK: the adiabatic demagnetization refrigerator (ADR).$^{[a]}$ Why is dilution refrigeration the primary method? To understand that, let's talk about one of the main limitations of the ADR. How an ADR ...

Here is my process for doing arithmetic on a quantum computer. Step 1: Find a classical circuit that does the thing you're interested in. In this example, a full adder. Step 2: Convert each classical gate into a reversible gate. Have your output bits present from the start, and initialize them with CNOTs, CCNOTs, etc. Step 3: Use temporary outputs. If ...

The transmon is a Josephson junction and capacitor in parallel. Originally, transmons were differential circuits, i.e. two transmons on the same chip were not galvanically connected in any way. In other words, transmons didn't share a ground reference. Furthermore, in the early days, transmons were almost always embedded into the middle of a harmonic ...
Because light, at the right frequencies, interacts weakly with matter. In the quantum regime, this translates to single photons being largely free of the noise and decoherence that is the main obstacle with other QC architectures. The surrounding temperature doesn't disturb the quantum state of a photon as much as it does when the quantum information is ...

Disclosure: while I am not an experimental physicist, I am part of the NQIT project, which is aiming to develop quantum hardware which is suitable to realise scalable quantum computers. The architecture that we're investing most heavily in is optically linked ion traps. Ions represent some of the physically best understood systems to experimental and ...

Getting enough capacitance and maintaining coherence essentially set the size limit. A superconducting qubit, for the purposes of answering this question, can be imagined as an oscillator consisting of an inductor and a capacitor. The frequency of the oscillator can't be too high, otherwise controlling the qubit becomes difficult. At Google, we typically work ...

I think the (very) short answer is that there is not a preferred platform yet. This is why there are very active research communities around each of these technologies. Often if someone says otherwise they are probably working on one of the platforms :)

Kirchhoff to Lagrangian: Let's approximate the transmon as a parallel LC resonant circuit. Suppose we connect a voltage source through a coupling capacitor $C_d$ (d for "drive") to a transmon qubit. If the voltage of the source is $V_d(t)$, then Kirchhoff's equations for the circuit are $$\frac{1}{C/C_d} \dot{V}_d(t) = \ddot{\Phi}(t) + \frac{\omega_0^2}{1 + ...
What follows turned out to be a rather technical explanation, so I'll start with the main point: the qubit state can change the resonator's state, and the resonator's state can be easily measured only if there is a large difference in frequencies between the qubit and the resonator. Let's model a qubit as a two-level system and a resonator as a harmonic ...

What does "obtaining samples" mean in this context? The same thing it means in a more classical context. Consider the probability distribution of the possible outcomes of a (possibly biased) coin flip. Sampling from this probability distribution means to flip the coin once and record the result (head or tail). If you sample many times, you can retrieve ...

The resonance frequencies of TLS fluctuate due to their interaction with neighboring TLS, which occurs through electric dipole interaction or the local mechanical strain in the material. If a TLS at low energy (below $k_B T$) is involved, this one may change its state randomly due to thermal activation. The resulting change in local electric field or strain can ...

In one sense, the Xmon qubit is a transmon qubit, in that they both operate in the $E_J \gg E_c$ regime of the CPB Hamiltonian and take advantage of the exponentially suppressed charge noise vs. polynomial decrease in anharmonicity effect discussed in (Koch, 2007). You could work out the dynamics of a superconducting qubit-resonator system without ever ...

Yes, they use $\require{\mhchem}\ce{^3He}$ and $\ce{^4He}$. No, they do not use compounds of these but instead a solution of these two (at the operating temperature) liquid noble gases. The details can be found in the Wikipedia article on dilution refrigerators.

Here's a paper comparing Trapped Ion and Superconducting (the main competitors right now) from the group at UMD which compares their trapped ion system with IBM's transmon (superconducting) system.
If you want to look at a more algorithm-focused line of thought. If you are looking for a more general summary of the strengths and weaknesses, this paper seems ...

TL/DR: The two-qubit gates are going by the moniker "Sycamore gates" in the paper, and it appears that they would ideally want to explore more of the $(\phi, \theta)$ phase-space, but for their purposes (of quantum supremacy) their current Sycamore gate is sufficient. The pattern of gates $\mathrm{ABCDCDAB}$ was chosen to avoid "wedges" and maximize/optimize ...

There's a superconducting circuit element called the Josephson junction, which is roughly a nonlinear inductor. The inductance of a Josephson junction depends on current via the relation $$L(I) = \frac{L_0}{\sqrt{1 - (I/I_c)^2}}$$ where $L_0$ is the inductance of the junction with no bias current and $I_c$ is the so-called "critical current", which is the maximum ...

For superconducting qubits, x and y rotations are usually both done with microwave pulses, and as you said the phase of the pulse determines the rotation axis. See mathematical details in this Physics Stack Exchange post: How do we perform transverse measurements in a two level system? Rotations about the z axis are quite different; they are done by ...

Each of the two spins, $q\in\{L,R\}$, has a bunch of energy levels $\{|n\rangle_q\}$, each at energy $\omega_{n}^q$. In other words, the basic Hamiltonian of the spins is: $$H=\sum_{n=0}^{N}\omega_{n}^L|n\rangle\langle n|_L+\omega_{n}^R|n\rangle\langle n|_R$$ Written like this, the two spins are not interacting, so we won't get a two-qubit gate without ...

DanielSank is correct, but I think the answer is actually even more subtle. If there was no loss there would also be no way for background radiation to leak into your quantum device. Even if it was initially thermally excited, one could actively reset the state of the qubits. Thus, in addition to thermal excitations of microwave qubits, the fundamental ...
Whilst we normally talk about $\left|0\right>$ and $\left|1\right>$ as unchanging states in quantum computing, this is not usually the case in a physical realization, where there tends to be an energy difference $\Delta E$ between these states such that $\left|1\right>_\mathrm{logical} = e^{-it \Delta E / \hbar} \left|1\right>_\mathrm{physical}$. ...

I think the subject matter of superconducting qubits is rather broad and diverse, making it challenging to accurately capture it in a 'brief explanation'. With that said, this recent review (Krantz et al., Applied Physics Reviews 6, 021318 (2019)) - "A Quantum Engineer's Guide to Superconducting Qubits" (arXiv:1904.06560) from the MIT group may be a good ...

While a follow-up question asks for the motivation behind the two-qubit gates used in Sycamore, this question focuses on the random nature of the single qubit operations used in Sycamore, that is, the gates $\{\sqrt{X},\sqrt{Y},\sqrt{W}=(X+Y)/\sqrt{2}\}$ applied to each of the $53$ qubits between each of the two-qubit gates. Although I agree with @Marsl ...

This answer only addresses the part about the necessity of the randomness of the circuit, because I am by no means familiar with the physical implementation of the qubits at Google and what kind of constraints these impose on the implementation of certain gates. Now, for the randomness: consider the problem of sampling from the output distribution of a ...

According to [1]: Readout error is the error in measuring qubits. You read the figure correctly (44 out of 1000 measurements fail on reading). Note there is yet another, though minor, error there: gate error. It is about errors in quantum gate operation, and is about one tenth the error of measurement. So, actually there may occur more erroneous ...

One advantage of the transmon design is the additional loop you gain from what you called the two-island design. The yellow flux bias line changes the Josephson energy, and thus the resonance of the qubit.
You can imagine this as changing the (Josephson) inductance of the SQUID loop, which acts as a non-linear LC resonator. This helps, for example, in two-qubit gate ...
A while back I bought a couple of PIC16F57 (DIP) chips because they were dirt cheap. I figured someday I could use these in something. Yes, I know, this is a horrible way to actually build something and a great way to accumulate junk. However, this time the bet paid off! Only about a year or two too late; but that's beside the point. The problem I now had was that I didn't have a PIC programmer. When I bought these chips I figured I could easily rig a board up to the chip via USB. Lo and behold, I didn't read the docs properly; this chipset doesn't have a USB to serial interface. Instead, it only supports Microchip's In-Circuit Serial Programming (ICSP) protocol via direct serial communication. Rather than spend the $40 to buy a PIC programmer (thus accumulating even more junk I don't need), I decided to think about how I could make this happen. Glancing at some of my extra devices lying around, I noticed an unused Arduino. This is how the idea for this project came to life. Believe me, the irony of programming a PIC chip with an ATMega is not lost on me. So for all of you asking, "why would anyone do this?" the answer is two-fold. First, I didn't want to accumulate even more electronics I would not use often. Second, these exercises are just fun from time to time!

Hardware Design

My prototype's hardware design is targeted at an Arduino Uno (rev 3) and a PIC16F57. Assuming the protocol looks the same for other ICSP devices, a more reusable platform could emerge from a common connector interface. Likewise, for other one-offs it could easily be adapted for different pinouts. Today, however, I just have the direct design for interfacing these two devices: Overall, the design can't get much simpler. For power I have two voltage sources. The Arduino is USB-powered and its 5V output powers the PIC chip. Similarly, I have a separate +12V source for entering/exiting PIC programming mode.
For communication, I have tied the serial communication pins from the Arduino directly to the PIC device. The most complicated portion of this design is the transistor configuration, though even this is straightforward. I use the transistor to switch the 12V supply to the PIC chip. If I drive the Arduino pin 13 high, the 12V source shunts to ground. Otherwise, 12V is supplied to the MCLR pin on the PIC chip. I make no claims that this is the most efficient design (either via layout or power consumption), but it's my first working prototype.

Serial Communication with an Arduino

Arduino has made serial communication pretty trivial. The only problem is that the Arduino's serial communication ports are UART. That is to say, the serial communication is asynchronous. The specification for programming a PIC chip with ICSP clearly states a need for a highly controlled clock for synchronous serial communication. This means that the Arduino's Serial interface won't work for us. As a result, we will use the Arduino to generate our own serial clock and signal the data bits accordingly.

Setting the Clock Speed

The first task in managing our own serial communication with the Arduino is to select an appropriate clock speed. The key to choosing this speed was finding a suitable trade-off between programming speed (i.e. a fast baud rate) and computation time on the Arduino (i.e. cycles of computation between each clock tick). Remember, the Arduino is ultimately running an infinite loop and isn't actually doing any parallel computation. This means that the amount of time it takes to perform all of your logic for switching data bits must be negligible between clock ticks. If your computation time is longer than or close to the clock period, the computation will actually impact the clock's ability to tick steadily. As a rule of thumb, you can always set your clock rate to have a period that is roughly 1 to 2 orders of magnitude longer than your total computation time.
Taking these factors into account, I chose 9600 baud (or a clock at 9.6 kHz). To perform all the logic required for sending the appropriate programming data bits, I estimated somewhere in the 100s of nanoseconds to single microseconds of computation. Giving myself some headroom, I selected a standard baud rate that was roughly two orders of magnitude larger than my computation estimate. Namely, a period of 104 microseconds corresponds to a 9.6 kHz clock. After completing the project I could have optimized my clock speed. However, that was unnecessary for this project. The clock rate I had selected worked well. The 9600 baud rate is fast enough to program the device in a timely manner because we don't have much data to transmit. Similarly, it provides us a lot of headroom to experiment with different types of computation.

Generating the Clock Signal

While this discussion has primarily focused on the design decisions involved in choosing a clock signal rate, how did we generate it? The process really comes down to toggling a GPIO pin on the Arduino. In our specific implementation, I chose pin 2 on the Arduino. While you can refer to the code for more specific details, an outline of this process follows:

    inline bool clock_tick() {
        if (PORTD & _BV(SERIAL_CLOCK_PORT)) {
            // If the clock is currently high, toggle it low
            PORTD &= ~_BV(SERIAL_CLOCK_PORT);
            return false;
        }
        // Otherwise toggle the clock high and report the rising edge
        PORTD |= _BV(SERIAL_CLOCK_PORT);
        return true;
    }

    void loop() {
        if (clock_tick()) {
            // ... compute and control data signals
        }
        // delay for 52us (half clock period)
        waitForHalfClockPeriod();
    }

As you can see, "ticking" the clock basically consists of toggling it and then making sure each loop iteration waits for half the clock period. The omitted section for data control is where most of the logic for the controller goes. However, it runs in a time that is far less than 52 microseconds.
As a result, the duration of each loop iteration can be considered as: $$52\,\mu s \gg \delta \quad\Longrightarrow\quad 52\,\mu s + \delta \simeq 52\,\mu s$$ where \(\delta\) is the time required to perform the computation for data control. Consequently, the clock ticks at an appropriate rate. I have included an image taken from my oscilloscope below. This image provides some empirical evidence that what we're doing should work. While there is no data being sent in this image (we'll show more of that below), we can generate a nice clock signal (notice the 1/|dX| and BX-AX lines on the image) at 9.6 kHz by toggling the pin and waiting.

Controlling the Data Line

Now that we have a steady clock, we need to control the data line. Writing this section of code felt like I was back in my VHDL/Verilog days. The basic principle, from a signal generation perspective, was to only change the data lines on a positive clock edge. There were minor complications for the read data command (since the pin has to go from output to input), but this was an isolated case with a straightforward solution. To actually control the signal, we manually turn the serial data pin (in our case, pin 4) high or low depending on the command and data each clock cycle. The ICSP programming protocol starts with a 6-bit command sequence. If the command requires data, then a framed 14-bit word (a total of 16 bits with the start and stop bits) is sent or received. Command and data bits are sent least significant bit first. In the case of my PIC16F57, the commands are only 4 bits, where the upper 2 bits are ignored by the PIC. Likewise, since the PIC16F57 has a 12-bit word, the upper 2 bits of the data word are also ignored while sending and receiving data.

The Load Data Command

Let's first investigate the load data command. This command queues up data to write to the chip. A series of additional commands and delays are executed to flush this data to the chip.
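The LSB-first ordering just described can be sketched in a few lines. This is my own illustration, not part of the programmer code; the command value below is the load-data command covered next, with its two "don't care" bits taken as zero.

```python
# Why an LSB-first command appears "reversed" on a scope: the bit at
# position 0 goes onto the wire first, so reading the trace left to
# right shows the bits in the opposite order from the written literal.
LOAD_DATA_CMD = 0b000010  # load data command; upper two bits are don't-cares

def lsb_first_bits(value, width):
    """Order in which the bits of `value` go onto the wire (LSB first)."""
    return [(value >> i) & 1 for i in range(width)]

wire_order = lsb_first_bits(LOAD_DATA_CMD, 6)
print(wire_order)  # [0, 1, 0, 0, 0, 0] -> reads as 0100XX on the scope
```

The same helper covers the 14-bit framed data words by calling it with a different width.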
The bits for the load command are 0bXX0010 (where X is a "don't care" value). However, let's take a look at it under the oscilloscope: The yellow curve is the clock and the blue curve is our data line. Starting from the left (and reading the blue curve under the yellow "high" marks) we can read our command exactly as intended: 0b0100XX. Notice that it is reversed, since our least significant bits are sent first. If you follow along a little bit further on the top, you'll notice a clock-low delay. This delay allows the PIC chip to prepare for data. The data for the command immediately follows the delay.

Implementation Overview

Without going too deeply into the details (again, I refer to the code), the command sequences are modeled as a state machine. Generally, when executing a command, we keep track of the number of steps already taken for a particular command. Since each command consists of sending a finite number of bits, we know precisely what to do at each step. The other detail I mentioned earlier was about the read command. This command is sent over pin 4 in output mode, but during the delay this pin must switch to input mode. When in input mode, the PIC chip will proceed to send data at the given memory address. To accommodate this, each command starts by setting the pin to output mode. In the case of the read command, it sets the pin to input mode when appropriate.

Conclusion

I've enjoyed building out this project. When initially building, I really wanted to discover whether or not I could build a PIC programmer with an Arduino. This post reviews my initial prototype and gives a high-level description of the Arduino code. Unfortunately, the story doesn't end here. Due to a variety of limitations, I had to introduce a PC-based controller to stream data to the Arduino. My finished product also removes extra elements (i.e. a second 12.5V power supply) and moves from a breadboard to a more permanent fixture. Even so, I leave these details to part 2 of this post.
In any case, you can check out my code from this repo and run it today. While I work on the second part of the write-up, you can always read through what I've done. For now though, I will leave you with a picture of some messy breadboarding.
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: $(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau$, whe...

That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time?

Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?

@tpg2114 For reducing the data points needed to calculate a time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.

@DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is $\frac{dT}{dt}$ and doesn't have an axis -- I don't care what the values are. The solid blue line is the abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time.

Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...

So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
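For what it's worth, here is a stdlib-only sketch (my addition, not from the chat) of puzzles 2) and 3): a "full" cross-correlation of two length-N series has 2N-1 points, and the lead/lag is the argmax of the result re-centered by N-1.

```python
def xcorr_full(a, b):
    """Sliding dot product of a against b at every lag ('full' mode)."""
    n = len(a)
    out = []
    for shift in range(-(n - 1), n):
        s = 0.0
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                s += a[i] * b[j]
        out.append(s)
    return out

n = 200
sig = [1.0 if 80 <= i < 120 else 0.0 for i in range(n)]  # a square pulse
lagged = sig[5:] + [0.0] * 5                             # same pulse, 5 steps earlier

c = xcorr_full(sig, lagged)
assert len(c) == 2 * n - 1          # a 200-sample pair gives 399 lags
lag = c.index(max(c)) - (n - 1)     # index of the peak, re-centered
print(lag)  # -5: the second signal leads the first by 5 samples
```

Library conventions differ slightly from this hand-rolled version, so check the mode documentation before reading off lags, but the re-centering idea is the same.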
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \... Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. 
is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level' Others would argue it's not on topic because it's not conceptual How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks. 
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because it's really a math question, but I figured I'd ask anyway) @DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper)
My question is about mass and inertia: what is the difference between mass and inertia? Are they the same or different? How? I am really confused; some say mass is the measure of inertia, and if so, is the unit of inertia the kilogram?

Mass is one type of inertia. Inertia is a general term for an object's resistance against acceleration (or against change in its velocity). The definition of inertia in both cases arrives from Newton's 2nd law (and its equivalent rotational version): $$\sum \vec F=m\vec a$$ $$\sum \vec \tau=I\vec \alpha$$

Newton's 1st law defines an inertial (Newtonian) frame of reference. The quantitative measure of inertia within that frame is called mass. Suppose two objects A and B, connected by a spring, interact. How is the relative measure of inertia between A and B determined? Given that the objects are in the same frame of reference, their accelerations are in opposite directions and in a constant ratio: $$\frac{dv_A}{dt} = -k\,\frac{dv_B}{dt}\,, \qquad k = \frac{m_B}{m_A}\,,$$ where $k$ is the relative measure of their inertias and is independent of units. From here we find that $$m_A \frac{dv_A}{dt} = -m_B \frac{dv_B}{dt}\,,$$ which is the expression of Newton's 2nd law insofar as the change of motion is proportional to the force, $F = ma = m_A\, \frac{dv_A}{dt}$, in the direction of the motion. And since $F_A = -F_B$, this embodies Newton's 3rd law. So much for inertia.

What might be confusing are the terms 'moments of inertia' and how they are used. As the name implies there are many moments, 1st moments and 2nd moments (see statistics and the use of moments to find the mean, the standard deviation, and other properties). 1st moments are used to find centres of mass.
The first moments give the centre of mass: $$X = \frac{\sum_i m_i x_i}{M}\,, \qquad Y = \frac{\sum_i m_i y_i}{M}\,.$$ Watch the positions $x_i$ and $y_i$, the levers: they are linear, that is, raised to the power of one ($x^1$, $y^1$); then $(X, Y)$ is the centre of mass. The 2nd moments are called moments of inertia; a moment of inertia is given the symbol $I$ and is used to find kinetic energies. Here the levers, the distances, are squared: $$I = \sum_i m_i r_i^2\,,$$ which for a single mass at radius $R$ reduces to $I = MR^2$. To calculate the KE of a rotating body given the moment of inertia $I$: $T = \frac12 M V^2$, but $V = \omega R$, so $V^2 = \omega^2 R^2$ and $$T = \tfrac12 (MR^2)\,\omega^2 = \tfrac12 I \omega^2\,.$$ As can be seen, there is a link between the moment of inertia $I$ and the mass $m$, and between the angular velocity $\omega$ and the linear velocity $v$, in the two equations $T = \frac12 m v^2$ and $T = \frac12 I \omega^2$.

See, inertia is just (1) a property of every body because of which the body stays in the same state it is in: if it is in motion it will remain in motion, and if it is at rest it will remain at rest, until a force is applied to it. (2) It is also the property because of which the body resists a change in its state. Now, you asked about mass being the measure of inertia: note the second point; it says the body resists change in its state, and that resistance is actually due to the mass of the body. Thus, if the mass is greater the inertia is greater, and vice versa.
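The link between the two kinetic-energy expressions at the end of the answer can be checked with a quick numerical sketch; the mass, radius, and angular speed below are arbitrary illustration values:

```python
import math

# check that T = 1/2 m v^2 equals T = 1/2 I w^2 for a point mass on a circle
m, R, w = 2.0, 0.5, 3.0       # mass (kg), radius (m), angular speed (rad/s)
I = m * R**2                  # moment of inertia of a single point mass
v = w * R                     # linear speed of the mass
T_linear = 0.5 * m * v**2
T_rot = 0.5 * I * w**2
assert math.isclose(T_linear, T_rot)
```

The assertion passes for any choice of `m`, `R`, `w`, since the two formulas are algebraically identical for a point mass.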
I know about tables containing information about descent in symmetry, i.e. from larger point groups to their subgroups. My question is: does any similar table exist for the $C_{\infty v}$ and $D_{\infty h}$ point groups? Here they are; I finally found them in a random presentation for a lecture: $$\begin{array}{c|c} \hline D_{\infty \mathrm h} & D_\mathrm{2h} \\ \hline \mathrm{\Sigma_g^+} & \mathrm{A_g} \\ \mathrm{\Sigma_g^-} & \mathrm{B_{1g}} \\ \mathrm{\Pi_g} & \mathrm{B_{2g} + B_{3g}} \\ \mathrm{\Delta_g} & \mathrm{A_{g} + B_{1g}} \\ \mathrm{\Sigma_u^+} & \mathrm{B_{1u}} \\ \mathrm{\Sigma_u^-} & \mathrm{A_u} \\ \mathrm{\Pi_u} & \mathrm{B_{2u} + B_{3u}} \\ \mathrm{\Delta_u} & \mathrm{A_{u} + B_{1u}} \\ \hline \end{array}$$ $$\begin{array}{c|c} \hline C_{\infty \mathrm v} & C_\mathrm{2v} \\ \hline \mathrm{A_1 = \Sigma^+} & \mathrm{A_1} \\ \mathrm{A_2 = \Sigma^-} & \mathrm{A_2} \\ \mathrm{E_1 = \Pi} & \mathrm{B_1 + B_2}\\ \mathrm{E_2 = \Delta} & \mathrm{A_1 + A_2} \\ \hline \end{array}$$
For discussion of specific patterns or specific families of patterns, both newly-discovered and well-known. gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Kazyan wrote: Component found in a CatForce result: Code: Select all x = 42, y = 67, rule = LifeHistory A$.2A$2A5$7.A$8.2A$7.2A19$24.2A$24.2A2$39.A$37.A3.A$36.A$36.A4.A$36. 5A$14.2A.2D$13.A.AD.D$13.A$12.2A25$5.3A$7.A$6.A! That can be done with 4 gliders, although it's still interesting that it was found accidentally: Code: Select all x = 21, y = 30, rule = B3/S23 10b2o$11bo$11bobo$12b2o14$10bo4bo$10b2ob2o$9bobo2b2o8$2o17bo$b2o15b2o$ o17bobo! What were you looking for, exactly? A MWSS-to-herschel converter? Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm gmc_nxtman wrote:What were you looking for, exactly? A MWSS-to-herschel converter? I'd settle for any signal, but yes. The current Orthogonoids have geometry challenges that pad their size, and the limiting factor in their repeat time is the syringe. Repeat time is more important for single-channel operations than probably any other constructor design, so I'm trying to give that fire some better fuel. Tanner Jacobi mniemiec Posts: 1055 Joined: June 1st, 2013, 12:00 am gmc_nxtman wrote:4-glider trans-boat with tail edgeshoot: ... Even though was already buildable from 4 gliders, this method improves syntheses of one still-life and 18 pseudo-objects. Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm Potential component spotted in a failed eating reaction: Code: Select all x = 21, y = 17, rule = B3/S23 o$3o$3bo$2b2o2$6bo$5bobo2$5b3o$19bo$8bo9bo$18b3o3$15bo$14b2o$14bobo! Tanner Jacobi gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Unusual still life in 8 gliders: Code: Select all x = 18, y = 26, rule = B3/S23 11bo$10bobo$10b2o2$10bo$9b2o$9bobo5$obo$b2o$bo2$3b2o$4b2o$3bo$9bo$9b2o $8bobo2$15b3o$7b2o6bo$6bobo7bo$8bo! EDIT: This also gives 21.41458 in 9 gliders. 
Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm Potentially grow out a BTS into a structure like a snorkel loop: Code: Select all x = 15, y = 25, rule = B3/S23 2b2obo$3bob3o$bobo4bo$ob2ob2obo$o4bobo$b3obo$3bob2o3$10b3o2$8bo5bo$8bo 5bo$8bo5bo2$10b3o6$11b2o$10bo2bo$11b2o$11bo! I suspect that the drifter catalyst and its variants also have odd transformations, since both objects are robust. Tanner Jacobi gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Haven't seen a component quite like this before: Code: Select all x = 27, y = 18, rule = B3/S23 20bobo$20b2o$21bo7$15bo$15bobo$15b2o2$3o9bobo$b3o9b2o$13bo10b3o$24bo$ 25bo! EDIT: Better version: Code: Select all x = 15, y = 11, rule = B3/S23 13bo$12bo$12b3o3$7bo$6bobo$6bobo2b2o$7bo2b2o$3o9bo$b3o! Gamedziner Posts: 796 Joined: May 30th, 2016, 8:47 pm Location: Milky Way Galaxy: Planet Earth p8 c/2 derived from blinker puffer 1 : Code: Select all 2bo$o3bo$5bo$o4bo$b5o5$b2o2b2o$bob2ob2o$2b5o$3b3o$4bo$2bo3bo$7bo$2bo4bo$3b5o! Code: Select all x = 81, y = 96, rule = LifeHistory 58.2A$58.2A3$59.2A17.2A$59.2A17.2A3$79.2A$79.2A2$57.A$56.A$56.3A4$27. A$27.A.A$27.2A21$3.2A$3.2A2.2A$7.2A18$7.2A$7.2A2.2A$11.2A11$2A$2A2.2A $4.2A18$4.2A$4.2A2.2A$8.2A! mniemiec Posts: 1055 Joined: June 1st, 2013, 12:00 am This is known. It can be easily synthesized from 10 gliders: Code: Select all x = 88, y = 26, rule = B3/S23 34bobo$35boo$35bo3$45bo$bo44boo$bbo42boo$3o$20boo18boo6bo$bbo17bobo17b obo4bo$boo18bo19bo5b3o$bobo$$43boo$44boo7b3o22bo4b3o$31b3o9bo9bobbo20b 3o3bobbo$33bo19bo16b3o3boobo3bo$32bo20bo3bo12bobbobb3o4bo3bo$53bo16bo 6boo4bo$54bobo13bo3bo3bo5bobo$70bo$71bobo$77bo$78bo$77bo! gameoflifemaniac Posts: 774 Joined: January 22nd, 2017, 11:17 am Location: There too Code: Select all x = 17, y = 17, rule = B3/S23 8bo$7bobo$4bo2bobo2bo$3bobobobobobo$2bo2bobobobo2bo$3b2o2bobo2b2o$7bob o$b6o3b6o$o15bo$b6o3b6o$7bobo$3b2o2bobo2b2o$2bo2bobobobo2bo$3bobobobob obo$4bo2bobo2bo$7bobo$8bo! 
Code: Select all x = 17, y = 17, rule = B3/S23 8bo$7bobo$4bo2bobo2bo$3bobobobobobo$2bo2bobobobo2bo$3b2o2bobo2b2o$7bob o$b6o3b6o$o7bo7bo$b6o3b6o$7bobo$3b2o2bobo2b2o$2bo2bobobobo2bo$3bobobob obobo$4bo2bobo2bo$7bobo$8bo! dvgrn Moderator Posts: 5874 Joined: May 17th, 2009, 11:00 pm Location: Madison, WI Contact: While incompetently welding a tremi-Snark this evening... Code: Select all x = 23, y = 31, rule = LifeHistory $3.4B$4.4B$5.4B5.2A$6.4B4.2A$7.9B$8.6B$8.4BA3B$6.7BA2B$6.5B3A2B$6.11B $4.2AB.10B$4.2AB3.B2A4B$9.2B2A5B$10.8B$10.6B$11.5B$5.2A5.3B$5.A.A3.5B $7.A2.B2AB2A$7.2A2.2A.AB2.2A$10.B3.A.A2.A$5.10A.2A$5.A$6.12A$17.A$8. 7A$8.A5.A$11.A$10.A.A$11.A! ... I ended up with a p3 that I didn't really want. Doesn't seem worth keeping it around until people are synthesizing all the 58-bit p3's, but it seemed mildly entertaining anyway. A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: dvgrn wrote: Code: Select all x = 23, y = 31, rule = LifeHistory $3.4B$4.4B$5.4B5.2A$6.4B4.2A$7.9B$8.6B$8.4BA3B$6.7BA2B$6.5B3A2B$6.11B $4.2AB.10B$4.2AB3.B2A4B$9.2B2A5B$10.8B$10.6B$11.5B$5.2A5.3B$5.A.A3.5B $7.A2.B2AB2A$7.2A2.2A.AB2.2A$10.B3.A.A2.A$5.10A.2A$5.A$6.12A$17.A$8. 7A$8.A5.A$11.A$10.A.A$11.A! Pointless reduction: Code: Select all x = 17, y = 25, rule = LifeHistory 4B$.4B$2.4B5.2A$3.4B4.2A$4.9B$5.6B$5.4BA3B$3.7BA2B$3.5B3A2B$3.11B$.2A B.10B$.2AB3.B2A4B$6.2B2A5B$7.8B$7.6B$8.5B$9.3B$8.5B$7.B2AB2A$4.2A2.2A .AB2.2A$4.A2.B3.A.A2.A$5.7A.3A2$7.2A.4A$7.2A.A2.A! x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Can someone salvage this? (Look at T≈20) Code: Select all x = 16, y = 12, rule = B3/S23 bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo! 
BlinkerSpawn Posts: 1905 Joined: November 8th, 2014, 8:48 pm Location: Getting a snacker from R-Bee's gmc_nxtman wrote: Can someone salvage this? (Look at T≈20) Code: Select all x = 16, y = 12, rule = B3/S23 bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo! The red pattern inserted at gen 16 would do it: Code: Select all x = 17, y = 14, rule = LifeHistory 13.D$11.2D$.A14.D$2.A8.5D$3A7$3.2A5.2A2.2A$4.2A3.A.A.2A$3.A7.A3.A! AbhpzTa Posts: 475 Joined: April 13th, 2016, 9:40 am Location: Ishikawa Prefecture, Japan gmc_nxtman wrote: Can someone salvage this? (Look at T≈20) Code: Select all x = 16, y = 12, rule = B3/S23 bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo! Code: Select all x = 27, y = 19, rule = B3/S23 16bo$4bo9b2o$5bo9b2o$3b3o$22bo$20b2o$21b2o$bo$2bo$3o2$25b2o$24b2o$26bo 3$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo! Iteration of sigma(n)+tau(n)-n [sigma(n)+tau(n)-n : OEIS A163163] (e.g. 16,20,28,34,24,44,46,30,50,49,11,3,3, ...) : 965808 is period 336 (max = 207085118608). gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Reduced an old synthesis from eleven (I think) down to eight gliders: Code: Select all x = 34, y = 34, rule = B3/S23 10bo$bobo7bo19bobo$2b2o5b3o19b2o$2bo29bo3$32bo$30b2o$31b2o3$12bo$12bob o$12b2o11$14b2o$14bobo$14bo12b2o$27bobo$27bo3$b2o$obo$2bo! Extrementhusiast Posts: 1796 Joined: June 16th, 2009, 11:24 pm Location: USA Glider + two-glider loaf/tub/block/blinker constellation lasts for over 10K gens: Code: Select all x = 16, y = 13, rule = B3/S23 3bobo$3b2o$4bo9bo$13bobo$4b2o8bo$4b2o3$6bo$5bobo$4bo2bo$5b2o$3o! I Like My Heisenburps! 
(and others) Entity Valkyrie Posts: 247 Joined: November 30th, 2017, 3:30 am A glider synthesis of Sawtooth 311 x = 193, y = 140, rule = B3/S23 40bo$41bo$39b3o$72bo$70b2o$71b2o19$32bobo$33b2o$33bo30bo$63bo$63b3o2$ 75bo$74b2o$24bo49bobo$25b2o$24b2o$49bo$47b2o$48b2o2$2bo$obo$b2o7$34b2o $35b2o$34bo3$67b2o$67bobo$53b2o12bo$53bobo$53bo6$27b2o93b2o$26bobo93bo bo$28bo93bo2$53b3o$53bo74bobo$54bo73b2o$4bo124bo$4b2o172b2o$3bobo171b 2o$174b2o3bo$31b2o140bobo$30b2o98b2o43bo$32bo97bobo36bobo$130bo3bobo 10bo22b2o$135b2o11b2o20bo$135bo4bobo4b2o34bobo$115b2o21bobobobo6b2o20b o9b2o$116b2o21b2ob2o6b2o22bo9bo$115bo36bo19b3o2$120bo23b2ob2o23b3o5bo$ 120b2o21bobobobo24bo4bo$119bobo13b2ob2o5bobo25bo5b3o$134bobobobo$115bo 20bobo13b2o25b3o$116b2o12b2o19b2o22bo3bo$115b2o14b2o20bo22bo3bo$130bo 35bo4bo2b3o$164bobo2b2o$113b3o49b2o3b2o2b3o4bobo$115bo60bo4b2o$114bo 60bo6bo$179bo$126bo25bo18bo7b2o$125b2o23b2o19b2o5bobo$125bobo23b2o17bo bo$146b2o$147b2o$121b2o23bo$120b2o$122bo2$156b2o$142bobo10bobo$134bobo 5b2o13bo$134b2o7bo$135bo$132bo6bo$132b2o4bo$131bobo4b3o2$138b3o43bo$ 134bo3bo22bo20b2o$135bo3bo22b2o19b2o$133b3o25b2o13bobo$174bobobobo7bo$ 133b3o5bo25bobo5b2ob2o6bobo$135bo4bo24bobobobo15b2o3bo$63b3o68bo5b3o 23b2ob2o19b2o$63bo127b2o$64bo75b3o19bo$130bo9bo22b2o6b2ob2o$130b2o9bo 20b2o6bobobobo$129bobo34b2o4bobo$144bo20b2o$143b2o22bo12b2o$143bobo33b 2o$181bo2$139bo$138bo$138b3o2$137bo$136b2o$136bobo! Entity Valkyrie Posts: 247 Joined: November 30th, 2017, 3:30 am This Simkin-Glider=Gun=like object actually produces two MWSS: Code: Select all x = 53, y = 17, rule = B3/S23 44b2o5b2o$44b2o5b2o2$47b2o$47b2o$12bo$12b3o$12bobo$14bo4$4b2o$4b2o2$2o 5b2o$2o5b2o! mniemiec Posts: 1055 Joined: June 1st, 2013, 12:00 am Entity Valkyrie wrote:A glider synthesis of Sawtooth 311 ... It's nice to have syntheses like this. Unfortunately, in this case, there are several pairs of gliders that would have had to pass through each other earlier (i.e. they would have already collided before this phase). 
To make sure this doesn't happen, it is usually a good idea to backtrack all the gilders a certain amount (e.g. far enough away that they are in four distinct clouds, one coming from each direction) and then run them to see if any unwanted interactions occur first. Rhombic Posts: 1056 Joined: June 1st, 2013, 5:41 pm This component (the reverse component would have been more useful). Found accidentally though. Code: Select all x = 12, y = 14, rule = B3/S23 11bo$9b3o$8bo$9bo$6b4o$6bo$2b2o3b3o$2b2o5bo$9bobo$2bo7b2o$bobo$bob2o$o $2bo! Code: Select all x = 13, y = 15, rule = B3/S23 7bo$7b3o$10bo$2b2ob3o2bo$o2bobo2bob2o$2o4b3o3bo$9bobo$3b2o3b2ob2o$3b2o 2$3bo$2bobo$2bob2o$bo$3bo! Extrementhusiast Posts: 1796 Joined: June 16th, 2009, 11:24 pm Location: USA Switch engine turns two rows of beehives into two rows of table on tables: Code: Select all x = 88, y = 96, rule = B3/S23 13b2o$12bo2bo$13b2o6$21b2o$8b3o9bo2bo$9bo2bo8b2o$13bo$10bobo4$29b2o$ 28bo2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$ 45b2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo $24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o $68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bo bo$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bo bo$64bobo$65bo5$73bo$72bobo$72bobo$73bo! I Like My Heisenburps! 
(and others) KittyTac Posts: 533 Joined: December 21st, 2017, 9:58 am Extrementhusiast wrote: Switch engine turns two rows of beehives into two rows of table on tables: Code: Select all x = 88, y = 96, rule = B3/S23 13b2o$12bo2bo$13b2o6$21b2o$8b3o9bo2bo$9bo2bo8b2o$13bo$10bobo4$29b2o$ 28bo2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$ 45b2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo $24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o $68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bo bo$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bo bo$64bobo$65bo5$73bo$72bobo$72bobo$73bo! And then explodes. I wonder if there's a way to eat it at the end. dvgrn Moderator Posts: 5874 Joined: May 17th, 2009, 11:00 pm Location: Madison, WI Contact: KittyTac wrote: Extrementhusiast wrote:Switch engine turns two rows of beehives into two rows of table on tables... And then explodes. I wonder if there's a way to eat it at the end. Yeah, switch engine/swimmer eaters definitely aren't a problem: Code: Select all x = 96, y = 98, rule = B3/S23 13b2o$12bo2bo$13b2o6$8b3o10b2o$20bo2bo$8bo3bo8b2o$9b4o$12bo4$29b2o$28b o2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$45b 2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo$ 24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o$ 68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bob o$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bob o$64bobo$65bo5$73bo$72bobo$72bobo17b2o$73bo18bo$93b3o$95bo! #C [[ AUTOSTART STEP 9 THEME 2 ]] kiho park Posts: 50 Joined: September 24th, 2010, 12:16 am I found this c/3 diagonal fuse while searching c/3 long barge crawler. Code: Select all x = 10, y = 11, rule = B3/S23:T40,27 8b2o$7bo2$6bobo$5bo2bo$4bobo$3bobo$2bobo$bobo$obo$bo!
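All the RLE snippets traded above run under the standard Life rule B3/S23. As a minimal, hedged sketch of that rule itself (not of any specific pattern in the thread), here is a one-generation update on a toroidal numpy grid, checked against the block, the simplest still life:

```python
import numpy as np

def life_step(grid):
    # count the eight neighbours of every cell via toroidal rolls
    nbrs = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    # B3/S23: a dead cell is born with exactly 3 neighbours,
    # and a live cell survives with 2 or 3 neighbours
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(int)

world = np.zeros((6, 6), dtype=int)
world[2:4, 2:4] = 1   # a 2x2 block, the simplest still life
assert np.array_equal(life_step(world), world)
```

The toroidal wrap is a simplification for small grids; the forum patterns above assume an unbounded plane.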
1. Series, 28. Year

(2 points) 2. streaming streamlines
Draw streamlines into the picture. The same amount of water flows into both openings marked with an arrow, and all the water then flows out through the third opening. The flow is steady and slow enough that we can consider it not to be turbulent. When drawing, follow the rules that dictate the shapes of the streamlines, and write these rules down as comments to the picture. We don't expect this problem to be solved quantitatively. Comment: draw into the bigger picture available from the website. (kolar)

(3 points) 3. accelerating
Explain why and how the following situations occur: In a cistern of rectangular cuboid shape filled with water, a ball is floating on the surface of the water. Describe the motion of the ball if the cistern starts moving with a constant acceleration small enough that the water does not flow over the edge. In the same cistern, a ball filled with water is floating; describe its motion under the same constant acceleration. In a closed bus a balloon is floating near the ceiling; describe its motion if the bus starts accelerating constantly. (Dominika and Pikoš during a physics exam)

(4 points) 4. doom of the Titanic
Náry always wanted a boat, so one beautiful day he bought himself one in the shape of a cuboid without a top side (like a bath), with outer sides $a$, $b$, $c$ and wall thickness $d$, made from scented wood of density $\rho$ (greater than the density of water). The next day he took his boat out for a ride on the water and found out that it has a small hole in the bottom through which water flows in at a rate $Q_1$. That was unfortunate, but since he was a man of action he started calculating how long until water starts entering the boat from the top.
The same question is asked by this task. Consider also the situation where Náry, of mass $m$, sits in the boat and, while calculating, bails water out of the boat at a rate $Q_2$. The boat stays horizontal the whole time. (Kiki heard about the problem that nearly all tasks are thought up by Karel.)

(5 points) 5. a thousand year old bee
Calculate the power required by a bee to remain in the air, and estimate how long a bee that has just eaten can remain in the air (at a constant altitude). (Michael thought of it during a discussion about quadcopters.)

(6 points) S. Unsure
Write down the equations for a throw in a homogeneous gravitational field (you don't need to prove them, but you need to know how to use them). Design a machine that will throw an item, and determine the launch angle and velocity. You can throw the item with a spring: determine its spring constant and the mass of the object, then calculate the kinetic energy and thus the velocity of the item. What do you think is the precision of your values of the velocity and angle? Put the boundaries determined by this error into the equations and show within what boundaries we can expect the distance of the landing point from the origin to lie. Throw the item with your device at least five times, determine the distance of each landing, and state the boundaries within which you are certain of your distance. Show whether your results fit your predictions. (For a link to a video of a throw you get a bonus point!)

Take a pendulum with amplitude $x$ that effectively oscillates harmonically, but whose frequency depends on the maximum displacement $x_0$: $$x(t) = x_0 \cos\left[\omega(x_0) t\right]\,, \quad \omega(x_0) = 2\pi \left(1 - \frac{x_0^2}{l_0^2}\right)\,,$$ where $l_0$ is some length scale. We think we are releasing the pendulum from $x_0 = l_0/2$, but actually it is released from $x_0 = l_0(1+\varepsilon)/2$.
By how much does the argument of the cosine differ from $2\pi$ after one predicted period? How many periods will it take for the pendulum to be displaced to the opposite side from the one we expect? Tip: the argument of the cosine will at that moment differ from the expected one by more than $\pi/2$. Take a pen in your hand and let it stand on its tip on the table. Why does it fall? And what determines whether it falls to the right or to the left? Why can't you predict a die throw even though the laws of physics should predict it? When you play billiards, is the inability to finish the game only due to being incapable of doing all the necessary calculations? Write down your answers and try to enumerate physical phenomena that occur in daily life which are unpredictable even when we know the situation well.
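For the pendulum part of problem S, a hedged numerical sketch of the accumulating phase slip, assuming illustration values $l_0 = 1$ and $\varepsilon = 10^{-3}$ (these are not part of the assignment):

```python
import math

def omega(x0, l0=1.0):
    # the problem's amplitude-dependent angular frequency
    return 2 * math.pi * (1 - x0**2 / l0**2)

eps = 1e-3
w_pred = omega(0.5)               # we think we release from x0 = l0/2
w_act = omega(0.5 * (1 + eps))    # actually released from x0 = l0(1+eps)/2
T_pred = 2 * math.pi / w_pred     # one predicted period

# phase slip of the cosine argument accumulated per predicted period
slip = (w_act - w_pred) * T_pred

# periods until the accumulated slip exceeds pi/2 (pendulum on the wrong side)
n_periods = math.ceil((math.pi / 2) / abs(slip))
```

Working the algebra for small $\varepsilon$, the per-period slip is about $-4\pi\varepsilon/3$, so the period count scales like $3/(8\varepsilon)$: halving the release error doubles the wait.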
Displacement s, d (mileage), they would (elevation) meters – m Displacement is the total alternation in duration in different one track. Power it’s time kind of their time, P=dE/dt. Using Newton’s Secondly Regulation, Hooke’s Legislations, plus some differential Calculus, we had arrived capable to get the time and also consistency of the size rotaing with a early spring that any of us found over the last part! Note that the time in addition to frequency are completely independent of the plenitude. Wikipedia offers you a very good definition along with a substantial directory of locations. This manifestation is definitely with the way of Hooke’s Legislation: Changing form, sizing, and also bulk submitting changes the minute regarding inertia. Transverse surf may perhaps also System associated with testing interval can be moments (azines). Note about degree Celsius. The particular taken from product around Stand Three or more using the particular title degree Celsius in addition to unique symbol °C deserves thoughts. ) As a result, v is really a constant, as well as pace vector v furthermore moves along with regular value v, within the exact angular price ?. Rather then making this golf ball move on a bent sales channel, the same path is a lot more conveniently and what noticed by making the item the In that movie Chris Andersen describes just how the period it is time among wave as well as regularity is definitely the volume of surf for https://paperhelpers.org/research-paper-help each 2nd. In scenario we realize the second associated with inertia with the stringent human body, we can easily assess the above manifestation in the interval for the actual pendulum. Physics concise explaination get the job done: (pressure utilized ) multiplied by way of (long distance where your force functions). While out of place out of steadiness, the article carries out simple harmonic motions which includes an plenitude By and also a time T. 
Unit with gauging consistency is Hertz (Hertz), along with ‘F’ is considered the most widespread token utilized in science to indicate rate of recurrence. A replacement of this kind of phrase with regard to ?, we come across the posture x has by: You can establish the following by yourself simply by recalling the fact that circumference of a group is actually 2*pi*r, therefore, if the thing sailed round the overall group (one particular circumference) it’s going to have completed the angle of 2pi radians and traveled some sort of long distance regarding 2pi*r. Speed : Radians per second Practice remodeling amongst regularity and period The rate of your factor P throughout the eliptical is equal to |v max|. Пожаловаться The regularity is the amount of cycles designed in a strong period of your time. It is the shared on the interval and could be calculated using the picture f=1/T. For easy harmonic oscillators, a scenario of movement is obviously the second sequence differential picture that pertains a acceleration and speed as well as the displacement. The required parameters are generally y, this displacement, in addition to k, your planting season consistent. frequency: The quotient from the number of moments deborah any recurrent sensation arises within the time t that it develops: y Is equal to n And t. Отключить A easy pendulum works like a harmonic oscillator having a period reliant merely in L and also gary to get sufficiently little amplitudes. (n) Completing as a result of steadiness yet again many electrical power is usually kinetic. On the other hand, as soon as this leader receives there, this increases momentum as well as carries on go on to the ideal, generating the other deformation. In true connected with undamped, very simple harmonic action, the energy oscillates back and forth concerning kinetic plus possibilities, proceeding directly from one to the other as the system oscillates. 
Learn with this topic through these articles or blog posts: All physics works with electrical power plus topic. where ?=?t, ? is the constant angular speed, in addition to X would be the radius in the circle journey. The amount ? is called a angular regularity and is also depicted in For me personally, that simply implies getting S(X to 1/50 = 4.10 , without having importance made available to this devices of their time. The whole this adheres to straightforward harmonic motion can be described as simple harmonic oscillator. When this golf swings ( amplitudes ) are generally small, a lot less than pertaining to 15?, your pendulum gives basic harmonic oscillator along with period [latex]\text=2\pi \sqrt in which L may be the whole line and gary the gadget guy could be the development as a result of gravity. Uniform rounded movements is also sinusoidal because screening machine of the action behaves being a basic harmonic oscillator. simple pendulum: A theoretical pendulum consisting of a bodyweight halted by way of a weightless string. A straightforward pendulum is understood to be a physical object that includes a compact bulk, also known as this pendulum bob, that is hanging from the cable as well as string connected with negligible size. Отключить elastic likely energy: The force trapped in a deformable thing, such as a planting season. In a planting season procedure, the particular efficiency situation is presented as: [latex]\frac where by Y may be the highest possible displacement. Uniform Rigid Rod: Any inflexible rods using uniform muscle size submitting weighs from your rocker factor. While represented by way of the sinusoidal necessities, pressure variant in a sound say repeat by itself wide over the specific distance. Consider, by way of example, strumming some sort of vinyl leader revealed in the initial figure. 
The magnitude of the oscillating property (displacement from the equilibrium level for water surface waves, magnitude of the electric field for an electromagnetic wave, and so on) at a point is called the amplitude. For illustration, let us consider a uniform rigid rod, pivoted in a frame as shown (see figure). Angular acceleration: radians per second per second. Periodic motion is motion that is repeated over and over during a period of time. The quantity ω is called the angular frequency and is expressed in radians per second. Often periodic motion is best expressed in terms of angular frequency, represented by the Greek letter ω (omega). Review the factors responsible for the sinusoidal behavior of uniform circular motion. The ruler is then forced to the left, back through equilibrium, and the process is repeated until dissipative forces (e.g., friction) damp out the motion. The frequency is defined as the number of cycles per unit time. Here θ = ωt, ω is the constant angular velocity, and X is the radius of the circular path. It can be shown that the period of oscillation of a simple pendulum is proportional to the square root of its length and does not depend on its mass.
We can use differential calculus to find the velocity and acceleration as functions of time. Sinusoidal and non-sinusoidal vibrations: only the top trace is sinusoidal. Tension in the string exactly cancels the component mg cos θ parallel to the string. This is of the same form as the classic simple pendulum, and it has a period of: In trying to determine whether we have a simple harmonic oscillator, we should note that for small angles (less than about 15°), sin θ ≈ θ (sin θ and θ differ by about 1% or less at smaller angles). Uniform rigid rod: A rigid rod with uniform mass distribution hangs from a pivot point. When θ is expressed in radians, the arc length in a circle is related to its radius (L in this case) by s = Lθ. The only things that affect the period of a simple pendulum are its length and the acceleration due to gravity. Acceleration is defined as speeding up, slowing down, or changing direction; in circular motion objects continually change direction. Another way to think about this is that acceleration is the change in velocity.
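The relations above (f = 1/T, and T = 2π√(L/g) for a small-angle pendulum) can be checked with a short sketch; the 1 m pendulum length here is an arbitrary illustration, not a value from the text:

```python
import math

def pendulum_period(L, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(L / g)

def frequency(T):
    """Frequency is the reciprocal of the period: f = 1/T."""
    return 1.0 / T

T = pendulum_period(1.0)   # a 1 m pendulum
f = frequency(T)
print(T, f)                # T ≈ 2.006 s, f ≈ 0.498 Hz
```

Note that the mass of the bob never enters `pendulum_period`, matching the statement that only length and g affect the period.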
If I have computed correctly, logistic regression asymptotically has the same power as the t-test. To see this, write down its log likelihood and compute the expectation of its Hessian at its global maximum (its negative estimates the variance-covariance matrix of the ML solution). Don't bother with the usual logistic parameterization: it's simpler just to parameterize it with the two probabilities in question. The details will depend on exactly how you test the significance of a logistic regression coefficient (there are several methods). That these tests have similar powers should not be too surprising, because the chi-square theory for ML estimates is based on a normal approximation to the log likelihood, and the t-test is based on a normal approximation to the distributions of proportions. The crux of the matter is that both methods make the same estimates of the two proportions and both estimates have the same standard errors. An actual analysis might be more convincing. Let's adopt some general terminology for the values in a given group (A or B): $p$ is the probability of a 1. $n$ is the size of each set of draws. $m$ is the number of sets of draws. $N = m n$ is the amount of data. $k_{ij}$ (equal to $0$ or $1$) is the value of the $j^\text{th}$ result in the $i^\text{th}$ set of draws. $k_i$ is the total number of ones in the $i^\text{th}$ set of draws. $k$ is the total number of ones. Logistic regression is essentially the ML estimator of $p$. 
Its logarithm is given by $$\log(\mathbb{L}) = k \log(p) + (N-k) \log(1-p).$$ Its derivatives with respect to the parameter $p$ are $$\frac{\partial \log(\mathbb{L})}{ \partial p} = \frac{k}{p} - \frac{N-k}{1-p} \text{ and}$$ $$-\frac{\partial^2 \log(\mathbb{L})}{\partial p^2} = \frac{k}{p^2} + \frac{N-k}{(1-p)^2}.$$ Setting the first to zero yields the ML estimate ${\hat{p} = k/N}$ and plugging that into the reciprocal of the second expression yields the variance $\hat{p}(1 - \hat{p})/N$, which is the square of the standard error. The t statistic will be obtained from estimators based on the data grouped by sets of draws; namely, as the difference of the means (one from group A and the other from group B) divided by the standard error of that difference, which is obtained from the standard deviations of the means. Let's look at the mean and standard deviation for a given group, then. The mean equals $k/N$, which is identical to the ML estimator $\hat{p}$. The standard deviation in question is the standard deviation of the draw means; that is, it is the standard deviation of the set of $k_i/n$. Here is the crux of the matter, so let's explore some possibilities. Suppose the data aren't grouped into draws at all: that is, $n = 1$ and $m = N$. The $k_{i}$ are the draw means. Their sample variance equals $N/(N-1)$ times $\hat{p}(1 - \hat{p})$. From this it follows that the standard error is identical to the ML standard error apart from a factor of $\sqrt{N/(N-1)}$, which is essentially $1$ when $N = 1800$. Therefore--apart from this tiny difference--any tests based on logistic regression will be the same as a t-test and we will achieve essentially the same power. When the data are grouped, the (true) variance of the $k_i/n$ equals $p(1-p)/n$ because the statistics $k_i$ represent the sum of $n$ Bernoulli($p$) variables, each with variance $p(1-p)$. 
Therefore the expected standard error of the mean of $m$ of these values is the square root of $p(1-p)/n/m = p(1-p)/N$, just as before. Number 2 indicates the power of the test should not vary appreciably with how the draws are apportioned (that is, with how $m$ and $n$ are varied subject to $m n = N$), apart perhaps from a fairly small effect from the adjustment in the sample variance (unless you were so foolish as to use extremely few sets of draws within each group). Limited simulations to compare $p = 0.70$ to $p = 0.74$ (with 10,000 iterations apiece) involving $m = 900, n = 1$ (essentially logistic regression); $m = n = 30$; and $m = 2, n = 450$ (maximizing the sample variance adjustment) bear this out: the power (at $\alpha = 0.05$, one-sided) in the first two cases is 0.59 whereas in the third, where the adjustment factor makes a material change (there are now just two degrees of freedom instead of 1798 or 58), it drops to 0.36. Another test comparing $p = 0.50$ to $p = 0.52$ gives powers of 0.22, 0.21, and 0.15, respectively: again, we observe only a slight drop from no grouping into draws (=logistic regression) to grouping into 30 groups and a substantial drop down to just two groups. The morals of this analysis are: You don't lose much when you partition your $N$ data values into a large number $m$ of relatively small groups of "draws". You can lose appreciable power using small numbers of groups ($m$ is small, $n$--the amount of data per group--is large). You're best off not grouping your $N$ data values into "draws" at all. Just analyze them as-is (using any reasonable test, including logistic regression and t-testing).
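The ungrouped case of the simulation described above (m = N, n = 1, which the analysis says is essentially logistic regression) can be reproduced in outline with a short sketch. This is not the original code: the iteration count is reduced, the seed is arbitrary, and the one-sided p-value is formed manually from scipy's two-sided t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power_ungrouped(pA, pB, N=900, iters=2000, alpha=0.05):
    """Rejection rate of a one-sided two-sample t-test on raw 0/1 data."""
    rejections = 0
    for _ in range(iters):
        a = rng.binomial(1, pA, N)   # group A draws
        b = rng.binomial(1, pB, N)   # group B draws
        t, p_two = stats.ttest_ind(b, a)
        # convert the two-sided p-value to one-sided (H1: pB > pA)
        p_one = p_two / 2 if t > 0 else 1 - p_two / 2
        rejections += p_one < alpha
    return rejections / iters

print(power_ungrouped(0.70, 0.74))   # roughly 0.6, in line with the text's 0.59
```

Repeating the same loop with the data aggregated into draw means (e.g. m = 2, n = 450) should show the drop in power attributed to the loss of degrees of freedom.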
Signal-to-noise ratio
Don H. Johnson (2006), Scholarpedia, 1(12):2088. doi:10.4249/scholarpedia.2088 revision #126771 [link to/cite this article]
Signal-to-noise ratio generically means the dimensionless ratio of the signal power to the noise power contained in a recording. Abbreviated SNR by engineers and scientists, the signal-to-noise ratio parameterizes the performance of optimal signal processing systems when the noise is Gaussian.
Basics
The signal \(s(t)\) may or may not have a stochastic description; the noise \(N(t)\) always does. When the signal is deterministic, its power \(P_s\) is defined to be \[P_s = \frac{1}{T} \int_{0}^{T}\!\! s^2(t)\,dt\] where \(T\) is the duration of an observation interval, which could be infinite (in which case a limit needs to be evaluated). Special terminology is used for periodic signals. In this case, the interval \(T\) equals the signal's period and the signal's root mean squared (rms) value equals the square root of its power. For example, the sinusoid \(A \sin 2\pi f_0 t\) has an rms value equal to \(A/\sqrt{2}\) and power \(A^2/2\ .\) When the signal is a stationary stochastic process, its power is defined to be the value of its correlation function \(R_s(\tau)\) at the origin: \[R_s(\tau) \equiv \mathsf{E}[s(t)s(t+\tau)];\quad P_s = R_s(0)\] Here, \(\mathsf{E}[\cdot]\) denotes expected value. The noise power \(P_N\) is similarly related to its correlation function: \[P_N=R_N(0)\ .\] The signal-to-noise ratio is typically written as SNR and equals \[\mathrm{SNR}=\frac{P_s}{P_N}\ .\] Signal-to-noise ratio is also defined for random variables in one of two ways. \(X = s+N\ ,\) where \(s\ ,\) the signal, is a constant and \(N\) is a random variable having an expected value equal to zero. The SNR equals \(s^2/\sigma^2_N\ ,\) with \(\sigma^2_N\) the variance of \(N\ .\) \(X = S+N\ ,\) where both \(S\) and \(N\) are random variables.
A random variable's power equals its mean-squared value: the signal power thus equals \(\mathsf{E}[S^2]\ .\) Usually, the noise has zero mean, which makes its power equal to its variance. Thus, the SNR equals \(\mathsf{E}[S^2]/\sigma^2_N\ .\)
White Noise
When we have white noise, the noise correlation function equals \(N_0/2\cdot\delta(\tau)\ ,\) where \(\delta(\tau)\) is known both as Dirac's delta function and as an impulse. The quantity \(N_0/2\) is the spectral height of the white noise and corresponds to the (constant) value of the noise power spectrum at all frequencies. White noise power is infinite, and the SNR as defined above will be zero. White noise cannot physically exist because of its infinite power, but engineers frequently use it to describe noise that has a power spectrum extending well beyond the signal's bandwidth. When white noise is assumed present, optimal signal processing systems can sometimes take it into account, and their performance typically depends on a modified definition of signal-to-noise ratio. When the signal is deterministic, the SNR is taken to be \[\mathrm{SNR}=\frac{\int\!\! s^2(t)\,dt}{N_0/2}\ .\]
Peak Signal-to-Noise Ratio (PSNR)
In image processing, signal-to-noise ratio is defined differently. Here, the numerator is the square of the peak value the signal could have and the denominator equals the noise power (noise variance). For example, an 8-bit image has values ranging between 0 and 255. For PSNR calculations, the numerator is \(255^2\) in all cases.
Expressing Signal-to-Noise Ratios in Decibels
Engineers frequently express SNR in decibels as \[\mathrm{SNR} (\mathrm{dB}) = 10 \log_{10}\frac{P_s}{P_N}\ .\] Engineers consider an SNR of 2 (3 dB) to be the boundary between low and high SNRs. In image processing, PSNR must be greater than about 20 dB for a picture to be considered high quality.
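The sinusoid and decibel definitions combine as in this sketch (the amplitude, frequency, and noise power are values chosen here for illustration, not from the article):

```python
import numpy as np

# Signal: A*sin(2*pi*f0*t), sampled over an integer number of periods
A, f0, fs = 1.0, 5.0, 1000.0
t = np.arange(0, 1, 1 / fs)
s = A * np.sin(2 * np.pi * f0 * t)

P_s = np.mean(s ** 2)        # = A**2 / 2 = 0.5 (rms value A/sqrt(2))
P_N = 0.25                   # assumed noise power (noise variance)

snr = P_s / P_N              # = 2
snr_db = 10 * np.log10(snr)  # ≈ 3 dB, the stated low/high-SNR boundary
print(snr, snr_db)
```

Because the samples cover whole periods, the empirical power matches the analytic value \(A^2/2\) essentially exactly.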
Interference
These definitions implicitly assume that the signal and the noise are statistically unrelated and arise from different sources. In many applications, some part of what is not signal arises from man-made sources and can be statistically related to the signal. For example, a cellular telephone's signal can be corrupted by other telephone signals as well as noise. Such non-signals are termed interference, and a signal-to-interference ratio, abbreviated SIR, can be defined accordingly. However, when both interference and noise are present, neither the SIR nor the SNR characterizes the performance of signal processing systems.
References
J. Sijbers et al., "Quantification and improvement of the signal-to-noise ratio in a magnetic resonance image acquisition procedure", Magnetic Resonance Imaging, vol. 14, no. 10, pp. 1157-1163, 1996
Marginal cost MC is defined as $MC=\frac{dC}{dq}$. Taking into account that $C=wL+rK$, $$MC=\frac{dC}{dq}=w\frac{dL}{dq}+r\frac{dK}{dq}$$ Recall that the marginal product of labor is $MP_{L}=\frac{\partial q}{\partial L}$ and the marginal product of capital is $MP_{K}=\frac{\partial q}{\partial K}$. Question: is the following correct $$\frac{dL}{dq}=1/\frac{\partial q}{\partial L},\;\frac{dK}{dq}=1/\frac{\partial q}{\partial K}$$ which implies $$MC=w\frac{1}{MP_{L}}+r\frac{1}{MP_{K}}$$ If no, then there is no need to read further. If yes, then consider profit maximization of a firm. $$\max_{L,K}pq\left(L,K\right)-wL-rK$$ FOC: $$\begin{cases}pMP_{L}=w\\pMP_{K}=r\end{cases}\Rightarrow\begin{cases}MP_{L}=\frac{w}{p}\\MP_{K}=\frac{r}{p}\end{cases}$$ Therefore, $$MC=w\frac{1}{MP_{L}}+r\frac{1}{MP_{K}}=w\frac{1}{w/p}+r\frac{1}{r/p}=p+p=2p$$ The result is certainly wrong. I wonder at which step of the derivation I made a mistake?
Abbreviation: COMon A commutative ordered monoid is a (totally) ordered monoid $\mathbf{A}=\langle A,\cdot,1,\le\rangle$ such that $\cdot$ is commutative: $xy=yx$ Remark: This is a template. If you know something about this class, click on the ``Edit text of this page'' link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes. Let $\mathbf{A}$ and $\mathbf{B}$ be commutative ordered monoids. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is an order-preserving homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$, $h(1)=1$, and $x\le y\Longrightarrow h(x)\le h(y)$. A is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that … $...$ is …: $axiom$ $...$ is …: $axiom$ Example 1: Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described. $\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$ [[Abelian ordered groups]] expansion [[Ordered monoids]] supervariety [[Commutative monoids]] subreduct
Let \(X_1, \ldots, X_N \sim \text{Poisson}(\lambda = 1.84)\). The probability mass function for the Poisson random variable is \[ f(x | \lambda) = \lambda^x e^{-\lambda} / x! \] for \(x \in \{0, 1, 2, 3, \ldots\}\) and \(\lambda > 0\). By hand, calculate the simplified log-likelihood and then write the simplified log-likelihood as a Python function with signature \(\texttt{ll_poisson(l, x)}\). By hand, calculate the maximum likelihood estimator. Include a photo of your handwritten solution, or type out your solution using LaTeX. Create an array \(\texttt{X}\) by generating \(N = 1001\) Poisson random variables, with \(\lambda = 1.84\) from above. Use \(\texttt{np.random.poisson(}\lambda \texttt{, N)}\). Use \(\texttt{scipy.optimize.minimize(ll, (l0,), args=(X,), method='L-BFGS-B', bounds=[...])}\) to calculate the maximum likelihood estimate \(\hat{\lambda}\) of \(\lambda\). Let \(X_1, \ldots, X_N \sim \text{IG}(\mu = 12.41, \lambda = 3.9)\), named inverse Gaussian or Wald. The probability density function for the inverse Gaussian random variable is \[ f(x | \mu, \lambda) = \left( \frac{\lambda}{2\pi x^3} \right)^{1/2} e^{ -\frac{\lambda(x - \mu)^2}{2\mu^2 x} } \] for \(x > 0\), \(\mu > 0\), and \(\lambda > 0\). By hand, calculate the simplified log-likelihood and then write the simplified log-likelihood as a Python function with signature \(\texttt{ll_ig(theta, x)}\). Create an array \(\texttt{X}\) by generating \(N = 1001\) IG/Wald random variables, with \(\mu = 12.41\), \(\lambda = 3.9\) from above. Use \(\texttt{np.random.wald(}\mu, \lambda \texttt{, N)}\). Use \(\texttt{scipy.optimize.minimize(ll, theta0, args=(X,), method='L-BFGS-B', bounds=[...])}\) to calculate the maximum likelihood estimate \(\hat{\theta}\) of \(\theta = (\mu, \lambda)\).
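A minimal sketch of the Poisson part might look like the following. It assumes the "simplified" log-likelihood drops the constant \(\log x!\) term, and negates the result so that scipy's minimizer finds the maximum; the seed is an arbitrary choice for reproducibility, not part of the assignment:

```python
import numpy as np
from scipy.optimize import minimize

def ll_poisson(l, x):
    """Negative simplified Poisson log-likelihood (constant log(x!) dropped)."""
    lam = np.atleast_1d(l)[0]
    return -(np.sum(x) * np.log(lam) - len(x) * lam)

np.random.seed(0)
X = np.random.poisson(1.84, 1001)

res = minimize(ll_poisson, x0=[1.0], args=(X,),
               method='L-BFGS-B', bounds=[(1e-9, None)])
print(res.x[0], X.mean())   # the MLE is the sample mean: lambda_hat = x_bar
```

The by-hand calculation gives \(\hat{\lambda} = \bar{x}\), so the optimizer's answer should agree with `X.mean()`; the IG part follows the same pattern with a two-element `theta`.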
I have just begun my third-year intermediate course in electrodynamics. A standard problem in electrostatics that one may repeatedly encounter is that of finding the potential due to a uniformly charged ring of radius a and total charge q, at a point P (r, θ) as shown in the figure below. Solving Laplace's equation $\nabla^2 V(r, \theta) = 0$ for this problem yields $$V(r, θ) = \begin{cases} \frac{q}{4\pi\epsilon_{0}}\sum_{l=0}^{\infty}{\frac{a^l}{r^{l+1}}}P_{l}(0)P_l(\cos{θ}), & \text{if}\ r > a \\ \frac{q}{4\pi\epsilon_{0}}\sum_{l=0}^{\infty}{\frac{r^l}{a^{l+1}}}P_{l}(0)P_l(\cos{θ}), & \text{if}\ r < a \\ \end{cases}$$ where $P_l(x)$ are the Legendre polynomials of order $l$, given by $$P_l(x) = \frac{1}{2^ll!}\frac{d^l}{dx^l}(x^2-1)^l $$ Since the system has azimuthal symmetry, there is no dependence on $\phi$. Now, no problem arises with the continuity of $V(r, \theta)$ at $r = a$, but there is a mismatch when we look at the left-hand and right-hand radial derivatives of $V(r, \theta)$ at $r = a$. Hence the continuity of the radial component $E_r = -\frac{\partial V}{\partial r}$ of the electric field is lost on the sphere $r = a$. But a discontinuity in the electric field should in general arise only from a surface charge density, and there is no charge at $r = a$ when $\theta \neq \pi/2$. What would be a plausible physical explanation for this? Thanks in advance!
This paper is based on the material of Section 4 and Appendix C in arXiv:1503.05523v6, which was excluded from the subsequent versions of arXiv:1503.05523. We present the definition of a dedualizing complex of bicomodules over a pair of cocoherent coassociative coalgebras $\mathcal C$ and $\mathcal D$. Given such a complex $\mathcal B^\bullet$, we ... Let G be a connected reductive affine algebraic group. In this short note we define the "variety of G-characters" of a finitely generated group F and show that the quotient of the G-character variety of F by the action of the trace preserving outer automorphisms of G normalizes the variety of G-characters when F is a free group, free abelian group,... In this paper we study a class of algebras having $n$-dimensional pyramid shaped quiver with $n$-cubic cells, which we call $n$-cubic pyramid algebras. This class of algebras includes the quadratic dual of the basic $n$-Auslander absolutely $n$-complete algebras introduced by Iyama. We show that the projective resolution of the simples of $n$-cub... International audience We review the list of non-degenerate invariant (super)symmetric bilinear forms (briefly: NIS) on the following simple (relatives of) Lie (super)algebras: (a) with symmetrizable Cartan matrix of any growth, (b) with non-symmetrizable Cartan matrix of polynomial growth, (c) Lie (super)algebras of vector fields with polynomial coefficients, (d) string... Consider the restriction of an irreducible unitary representation $\pi$ of a Lie group $G$ to its subgroup $H$. Kirillov's revolutionary idea on the orbit method suggests that the multiplicity of an irreducible $H$-module $\nu$ occurring in the restriction $\pi|_H$ could be read from the coadjoint action of $H$ on $O^G \cap pr^{-1}(O^H)$ provided $... Published in Algebras and Representation Theory Let BunG be the moduli space of G-bundles on a smooth complex projective curve.
Motivated by a study of boundary conditions in mirror symmetry, Gaiotto (2016) associated to any symplectic representation of G a Lagrangian subvariety of T∗BunG. We give a simple interpretation of (a generalization of) Gaiotto's construction in terms of derived symplec... We consider deformations of quantum exterior algebras extended by finite groups. Among these deformations are a class of algebras which we call truncated quantum Drinfeld Hecke algebras in view of their relation to classical Drinfeld Hecke algebras. We give the necessary and sufficient conditions under which these algebras occur, using Bergman's Diam... Published in Algebras and Representation Theory We construct polynomial quantization (a variant of quantization in the spirit of Berezin) on para-Hermitian symmetric spaces. For that we use two approaches: (a) using a reproducing function, (b) using an "overgroup". Also we show that the multiplication of symbols is an action of an overalgebra. Published in Algebras and Representation Theory We study the structure of certain k-modules 𝕍 over linear spaces 𝕎 with restrictions neither on the dimensions of 𝕍 and 𝕎 nor on the base field 𝔽. A basis $\mathfrak{B}=\{v_i\}_{i\in I}$...
Most of what is covered in basic controls study is linear time invariant systems. If you're lucky, you may also get discrete sampling and z transforms at the end. Of course, switching mode power supplies (SMPS) are systems that evolve through topological states discontinuously in time, and also mostly have nonlinear responses. As a result, SMPS are not well analyzed by standard or basic linear control theory. Somehow, in order to continue to use all the familiar and well understood tools of control theory, like Bode plots, Nichols charts, etc., something must be done about the time variance and nonlinearity. Take a look at how the SMPS state evolves with time. Here are the topological states for the Boost SMPS: Each of these separate topologies is easy to analyze on its own as a time invariant system. But each of the analyses taken separately isn't of much use. What to do? While the topological states switch abruptly from one to the next, there are quantities or variables that are continuous across the switching boundary. These are usually called state variables. The most common examples are inductor current and capacitor voltage. Why not write equations based on the state variables for each topological state and take some kind of average of the state equations, combining them as a weighted sum, to get a time invariant model? This is not exactly a new idea. State-Space Averaging -- State averaging from the outside in In the 70's Middlebrook 1 at Caltech published the seminal paper about state-space averaging for SMPS. The paper details combining and averaging topological states to model low frequency response. Middlebrook's model averaged states over time, which for fixed frequency PWM control comes down to duty cycle (DC) weighting. Let's start with the basics, using the boost circuit operating in continuous conduction mode (CCM) as an example.
The on-state duty cycle of the active switch relates the output voltage to input voltage as: \$V_o\$ = \$\frac{V_{\text{in}}}{1-\text{DC}}\$ The equations for each of the two states and their averaged combinations are: \$\begin {array} {cccc}&\text {Active State} &\text {Passive State} &\text {Ave State} \\ \text {State Var $\backslash $ Weight} &\text {DC} &\text {(1 - DC)} & \\ \frac {\text {di} _L} {\text {dt}} &\frac {V_ {\text {in}}} {L} &\frac {-V_C + V_ {\text {in}}} {L} &\frac {(-1 + \text {DC}) V_C + V_ {\text {in}}} {L} \\ \frac {\text {dV} _C} {\text {dt}} & - \frac {V_C} {C R} &\frac {i_L} {C} - \frac {V_C} {C R} &\frac {(R - \text {DC} R) i_L - V_C} {C R}\end {array}\$ OK, that takes care of averaging the states, resulting in a time invariant model. Now for a useful linearized (ac) model, a perturbation term needs to be added to the control parameter DC and each state variable. That will result in a steady state term summed with a twiddle term. \$\text{DC}\rightarrow \text{DC}_o + d_{\text{ac}}\$ \$i_L\rightarrow I_{\text{Lo}} + i_L\$ \$V_c\rightarrow V_{\text{co}} + v_c\$ \$V_{\text{in}}\rightarrow V_{\text{ino}} + v_{\text{in}}\$ Substitute these into the averaged equations. Since this is a linear ac model, you just want the 1st order variable products, so discard any products of two steady state terms or two twiddle terms. \$\frac{d v_c}{\text{dt}}\$ = \$\frac{\left(1-\text{DC}_o\right) i_L-I_{\text{Lo}} d_{\text{ac}}}{C} -\frac{v_c}{C R}\$ \$\frac{d i_L}{\text{dt}}\$ = \$\frac{d_{\text{ac}} V_{\text{co}}+v_c \left(\text{DC}_o-1\right)+v_{\text{in}}}{L}\$ This is just the usual linear variation about an operating point. Also, since we are looking for an AC solution, \$\frac{d}{\text{dt}}\$ can always be replaced by s (or \$\text{j$\omega $}\$).
Solving to get output voltage \$v_c\$ as related to \$d_{\text{ac}}\$ yields: \$\frac{v_c}{d_{\text{ac}}}\$ = \$\frac{-V_{\text{co}} \text{DC}_o+V_{\text{co}}-L I_{\text{Lo}} s}{C L s^2+\text{DC}_o^2-2 \text{DC}_o+\frac{L s}{R}+1}\$ From this transfer function it is possible to see the location of the right half plane zero \$f_{\text{rhpz}}\$ and the complex pole pair location \$f_{\text{cp}}\$ . \$f_ {\text {rhpz}}\$ = \$\frac{V_{\text{co}} \left(1-\text{DC}_o\right){}^2}{2 \pi L i_o}\$ \$f_{\text{cp}}\$ = \$\frac{1-\text{DC}_o}{2 \pi \sqrt{L C}}\$ For the circuit values of L1=500uH, C2=500uF, Vin=400V, Vo=500V, and R1=25 Ohms; \$f_ {\text {rhpz}}\$ is 5093 Hz and \$f_{\text{cp}}\$ is 255 Hz. The gain and phase plots show the complex poles and the right half plane zero. Q of the poles is so high because ESR of L1 and C2 have not been included. To add extra model elements now would require going back and adding them into the starting differential equations. I could stop here. If I did, you would have the knowledge of a cutting edge technologist ... from 1973. The Vietnam war would be over, and you could stop sweating that ridiculous selective service lotto number you'd got. On the other hand, shiny nylon shirts and disco would be hot. Better keep moving. PWM Averaged Switch Model -- State averaging from the inside out In the late 80's, Vorperian (a former student of Middlebrook) had a huge insight regarding state averaging. He realized that what really changes over a cycle is the switch condition. It turns out that modeling converter dynamics is much more flexible and simple when averaging the switch than when averaging circuit states. Following Vorperian 2, we work up an averaged PWM switch model for the CCM boost. Starting from the point of view of a canonical switch pair (active and passive switch together) with input-output nodes for active switch (a), passive switch (p), and the common of the two (c). 
If you refer back to the figure of the 3 states of the boost regulator in the state space model, you will see a box is drawn around the switches that shows the connection of the PWM average model. You will want equations that relate the input and output voltages \$V_{\text{ap}}\$ and \$V_{\text{cp}}\$, and input and output currents \$i_a\$ and \$i_c\$ in an average sort of way. By inspection, and knowledge of what these simple voltages and currents look like, get: \$V_{\text{ap}}\$ = \$\frac{V_{\text{cp}}}{\text{DC}}\$ and \$i_a\$ = DC \$i_c\$ Then add the perturbation \$\text{DC}\rightarrow \text{DC}_o + d_{\text{ac}}\$ \$i_a\rightarrow I_a + i_a\$ \$i_c\rightarrow I_c + i_c\$ \$V_{\text{ap}}\rightarrow V_{\text{ap}} + v_{\text{ap}}\$ \$V_{\text{cp}}\rightarrow V_{\text{cp}} + v_{\text{cp}}\$ so, \$v_{\text{ap}}\$ = \$\frac{v_{\text{cp}}}{\text{DC}_o}\$ - \$\frac{d_{\text{ac}} V_{\text{ap}}}{\text{DC}_o}\$ and, \$ i_a\$ = \$i_c \text{DC}_o + i_c d_{\text{ac}}\$ These equations can be rolled into an equivalent circuit suitable for use with SPICE. The terms with the steady state DC combined with small signal ac voltages or currents are functionally equivalent to an ideal transformer. The other terms can be modeled as scaled dependent sources. Here is an AC model of the boost regulator with an averaged PWM switch: The Bode plots from the PWM switch model look very similar to the state space model, but not quite the same. The difference is due to addition of ESR for L1 (0.01 Ohms) and C2 (0.13 Ohms). That means loss of about 10 W in L1 and output ripple of about 5 Vpp. So, the Q of the complex pole pair is lower, and the rhpz is hard to see since its phase response is covered by the ESR zero of C2. The PWM switch model is a very powerful, intuitive concept: The PWM switch, as derived by Vorperian, is canonical. That means the model shown here can be used with boost, buck or boost-buck topologies as long as they are CCM.
You just have to change connections to match p with passive switch, a with active switch, and c with the connection between the two. If you want DCM you will need a different model ... and it's more complicated than the CCM model ... you can't have everything. If you need to add something to the circuit like ESR, there is no need to go back to the input equations and start over. It is easy to use with SPICE. PWM switch models are widely covered. There is an accessible write-up in "Understanding Boost Power Stages in Switchmode Power Supplies" by Everett Rogers (SLVA061). Limitations? The models here don't comprehend any of the resonance or switching frequency effects (like Nyquist sampling), so stay at least a decade lower than \$f_s\$ with loop bandwidths. A fundamental assumption is that time constants like L1/R1 and R1C2 are much larger than the switching period \$T_s\$ (if either is less than about 10x \$T_s\$, it's time to start worrying about accuracy). Now you are into the 1990's. Cell phones weigh less than a pound, there's a PC on every desk, SPICE is so ubiquitous that it is a verb, and computer viruses are a thing. The future starts here. 1 G. W. Wester and R. D. Middlebrook, "Low-Frequency Characterization of Switched Dc-Dc Converters," IEEE Transactions on Aerospace and Electronic Systems, Vol. AES-9, pp. 376-385, May 1973. 2 V. Vorperian, "Simplified Analysis of PWM Converters Using the Model of the PWM Switch: Parts I and II," IEEE Transactions on Aerospace and Electronic Systems, Vol. AES-26, pp. 490-505, May 1990.
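As a quick numeric check of the state-space section earlier, plugging the quoted circuit values into the two corner-frequency formulas reproduces the quoted numbers (the load current \$i_o = V_o/R_1\$ and the operating duty cycle \$\text{DC}_o = 1 - V_{in}/V_o\$ follow from the boost CCM relation given there):

```python
import math

# Circuit values from the text
L, C, R = 500e-6, 500e-6, 25.0     # L1, C2, R1
Vin, Vco = 400.0, 500.0            # input and output (steady-state) voltages

DCo = 1 - Vin / Vco                # 0.2 for this boost in CCM
i_o = Vco / R                      # 20 A load current

# Right-half-plane zero and complex-pole-pair frequencies
f_rhpz = Vco * (1 - DCo) ** 2 / (2 * math.pi * L * i_o)
f_cp = (1 - DCo) / (2 * math.pi * math.sqrt(L * C))

print(round(f_rhpz), round(f_cp))  # 5093 Hz and 255 Hz, as in the text
```

This kind of sanity script is handy when resizing L or C: the rhpz scales as \$1/L\$ while the pole pair scales as \$1/\sqrt{LC}\$, so the two move apart at different rates.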
6. Series 28. Year Post deadline: - Upload deadline: - (2 points) 1. ...au The turtle A'Tuin, on whose shell stand the four elephants that carry Discworld on their backs, isn't tiny. Let us assume that we got bored with the sphericity of our Earth and wanted to exchange it for a circular disc of the same mass and density, with thickness $h=1\;\mathrm{km}$, carried by the same turtle-and-elephant band. If the turtle hit the tip of its tail on a planetoid, how long would it take for it to notice the impulse of pain, given that its tail and brain are connected by a very long neuron (approximately as long as the diameter of the disc)? How much earlier or later would A'Tuin register this pain than in the original situation (where the neuron has a length of 18 000 km)? For a numerical estimate, assume that the speed of signal propagation in very large animals is the same as in ordinary land animals, $v\approx 120\;\mathrm{m}\cdot \mathrm{s}^{-1}$. Do you know of a better invented world than Discworld? Kiki doesn't! (2 points) 2. breathe deeply The mage Greyhald celebrated his 100th birthday a long time ago and has begun to fear that Death will pay him a long-delayed visit. He has therefore decided to encase himself in a magic chest, where Death can't reach him. Unfortunately he forgot to tell the craftsmen to add breathing holes. The air in the chest takes up a volume $V_{0}=400\;\mathrm{l}$, and the fraction of the volume that is oxygen is $\varphi_{0}=0.21$. With every breath of volume $V_{d}=0.5\;\mathrm{l}$ he uses up only $k=20\,\%$ of its oxygen. After the closing of the chest, the mage's breathing frequency rises according to the relation $$f(t)=f_0 \cdot \frac{\varphi_0}{\varphi (t)}\,,$$ where $f_{0}=15\;\text{breaths}\cdot \text{min}^{-1}$ is the initial breathing frequency and $\varphi(t)$ is the fraction of the volume that is oxygen at time $t$.
Determine how long until Death will come for Greyhald, if the minimum oxygen fraction in the air required for survival is $\varphi_{s}=0.06$. (DARK IN HERE, ISN'T IT? Mirek and his friend Death.) (4 points) 3. The Interview One of the offices of Lord Vetinari has a circular floor plan of radius $R$ and is placed on bearings, thanks to which it can rotate around its axis. The rotation is driven by an engine that can apply an arbitrary moment of force. During the turning, a friction torque $M_{0}$ acts on the room, independent of speed and equal to the static friction torque. The chair for visitors is positioned so that a person will feel the rotation only if the angular acceleration exceeds $\varepsilon_{0}$. Determine the smallest time in which the room can be turned by 180° without the visitor noticing, and the work required to achieve it. The mass of the room, which can be approximated by a homogeneous disc, is $m$. Bonus: Assume that the visitor will feel the rotation only if the change in angular acceleration is greater than $j_{0}$. (Mirek mixed up the door to his room again.) (5 points) 4. Unbearable weight Before the edge of Discworld was reached and overcome, and scientific expeditions were sent to confirm the existence of the four elephants and the turtle and to determine its gender, some primitive tribespeople thought that the force that kept them on the surface was due to a disc made from superdense Wasneverwasium. It was truly a very primitive idea, because as we know today the expedition that confirmed the turtle's existence infamously ended when its boat tore apart and fell, or rather did not fall… Nevertheless, we would like to know what surface density such a disc would have to have so that an object above its middle would be attracted, neglecting magical forces, with the same force as the gravitational force on Earth. Assume that, as the legends told, the disc is very thin and is placed $H=8^{4}\;\mathrm{m}=4096\;\mathrm{m}$ under the surface of Discworld.
The disc should be homogeneous and the masses of other bodies negligible. Neglect the motion of the turtle and the elephants. If you haven't read the works of the genius for whom Death came recently, then just replace Discworld with Earth. Discworld has a diameter of precisely $d=10\,000\;\mathrm{km}$ for our purposes. (Karel likes gravity.) (5 points) 5. pub fight During his visit to Ankh-Morpork, Twoflower also visited a pub. It wouldn't have been a good pub if a general brawl didn't break out, a brawl during which chairs, bottles and other things fly from one side of the pub to the other. Twoflower obviously documented everything with his camera. He is currently taking a picture of a ball of radius $R$ that is flying with a velocity $v$ (which is close to the speed of light $c$). Even in such establishments the theory of relativity is valid, and from it follows that Twoflower could have measured, in his rest frame, the Fitzgerald contraction of the ball in the direction of motion by a factor of $$\sqrt{1- \frac{v^2}{c^2}}\,.$$ What radius of the ball was documented by the camera with a negligibly small exposure time? The position of the camera is general. (Not only Jakub M. knows that you have to document everything properly.) (5 points) P. waters of Discworld We all know very well how well the water supply is arranged on Discworld. And none of us needs to know how. What if something serious happened and magic stopped working? How long would it take until Discworld would be without water? For simplicity you can assume a pessimistic situation in which nobody holds the water back in any way. You know very well that Discworld has a diameter of $d=10\,000\;\mathrm{km}$, that a homogeneous gravitational acceleration $g \approx 10\;\mathrm{m}\cdot \mathrm{s}^{-2}$ acts everywhere, and that it is perfectly circular.
The true total volume and distribution of water on Discworld is unknown to everybody, so we can consider the water to be homogeneously distributed over Discworld, which can be considered flat, with a water height of $H=5\;\mathrm{m}$ (that is very pessimistic, because everybody would have to stand on stilts to stay out of the water, or be completely submerged in it). The aim of the task is to find an approximate model that gives a good estimate of the time it would take to lose all the water. (Karel was curious about how water flows off Discworld.) (8 points) E. alchemical On Discworld it is not unusual to be an alchemist. So we have decided that you should try it. Imagine that you are taking an exam to enter the guild of alchemists. Together with the brochure of the series you got three wrapped pieces of metal. They are thin plates of metal, so be careful with them so that you won't destroy them, and ideally don't touch them. Your task is to find out which (precious?) metals we sent you. We don't require you to return the metals, so you can use whatever procedures you like to determine this, even destructive ones, but we shall acknowledge only the sufficiently scientific ones. Your solution should describe the procedure used, determine as precisely as possible the composition of the individual specimens, and mention the label that was on their packaging. Don't forget that it is also good to determine which metals they aren't. Note: If someone would like to become a new participant in this seminar and solve this task, they should write an email to alchymie@fykos.cz and they will receive the package within a week to 10 days. (Karel wanted to send out the bought gold, platinum and palladium.) (6 points) S. mixing Copy the function iterace_stanMap from the series and, using the following commands, choose ten very close initial conditions for some $K$.
K=…; X01=…; Y01=…; Iter1 = iterace_stanMap(X01,Y01,1000,K); … X10=…; Y10=…; Iter10 = iterace_stanMap(X10,Y10,1000,K); In Iter1 through Iter10 there are hidden a thousand iterations of the given initial conditions under the Standard map. To see how the ten points look after the $n$th iteration, you have to write n=…; plot(Iter1(n,1),Iter1(n,2),"o",…,Iter10(n,1),Iter10(n,2),"o"); xlabel("x"); ylabel("y"); axis([0,2*pi,-pi,pi],"square"); refresh; We write "o" into plot so that the points are drawn as circles. The rest of the commands are included so that the graph covers the whole square and has the correct labels. Set some strong kick, $K$ at least approximately 0.6, and place the 10 initial conditions very close to each other somewhere in the middle of the chaotic region (i.e., for example, "on the tip of a pen"). How do the mutual distances of the ten iterated points change? Document this with graphs. Where do the ten initially very close initial conditions end up after 1000 iterations? What can we learn from this about the "willingness to mix" of the given area? Take again a large kick and set your ten initial conditions along the horizontal equilibrium of the rotor, i.e. $x=0$, $y=0$. How will these ten initial conditions evolve in time with respect to each other? What can we say about their distance after a large number of kicks? Bonus: Try to code and plot the behaviour of some other map. (For inspiration you can look at the sample solution of the last series.)
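The experiment described above can also be sketched outside Octave. A minimal Python version (the map form below, the Chirikov standard map, is my assumption about what iterace_stanMap implements):

```python
import numpy as np

# Chirikov standard map; assumed form of the seminar's iterace_stanMap:
#   y_{n+1} = y_n + K*sin(x_n)   (the kick)
#   x_{n+1} = x_n + y_{n+1}      (free rotation)
# with x kept in [0, 2*pi) and y in [-pi, pi), matching the plot ranges above.
def standard_map(x0, y0, n_iter, K):
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n_iter):
        y = (y + K * np.sin(x) + np.pi) % (2 * np.pi) - np.pi
        x = (x + y) % (2 * np.pi)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# Two initial conditions "on the tip of a pen" inside the chaotic sea:
x_a, y_a = standard_map(0.5, 0.2, 1000, K=3.0)
x_b, y_b = standard_map(0.5 + 1e-9, 0.2, 1000, K=3.0)
final_dist = np.hypot(x_a[-1] - x_b[-1], y_a[-1] - y_b[-1])
print(final_dist)  # the 1e-9 separation is amplified by chaotic mixing
```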
Evaluate $$\large \dfrac{\displaystyle\int_{0}^{\pi/2} \sin^{100} x \, dx}{\displaystyle\int_{0}^{\pi/2} \sin^{102} x \, dx}$$ If the ratio above can be expressed as $\dfrac ab$, where $a$ and $b$ are coprime positive integers, find $a+b$.
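For integrals of this type the reduction formula $\int_0^{\pi/2}\sin^n x\,dx = \frac{n-1}{n}\int_0^{\pi/2}\sin^{n-2}x\,dx$ applies, so the ratio collapses to $\frac{102}{101}$. A quick numeric check in Python:

```python
import numpy as np

def integral_sin_pow(n, num=200_001):
    """Trapezoidal approximation of the integral of sin(x)**n on [0, pi/2]."""
    x = np.linspace(0.0, np.pi / 2, num)
    f = np.sin(x) ** n
    h = x[1] - x[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

# The reduction formula gives I_102 = (101/102) * I_100, so the ratio is 102/101.
ratio = integral_sin_pow(100) / integral_sin_pow(102)
print(ratio, 102 / 101)  # both ~1.00990; with a/b = 102/101, a + b = 203
```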
Note on the vocabulary: the word "hamiltonian" is used in this question to speak about hermitian matrices. The HHL algorithm seems to be an active subject of research in the field of quantum computing, mostly because it solves a very important problem, which is finding the solution of a linear system of equations. According to the original paper Quantum algorithm for solving linear systems of equations (Harrow, Hassidim & Lloyd, 2009) and some questions asked on this site Quantum phase estimation and HHL algorithm - knowledge on eigenvalues required? Quantum algorithm for linear systems of equations (HHL09): Step 2 - Preparation of the initial states $\vert \Psi_0 \rangle$ and $\vert b \rangle$ the HHL algorithm is limited to some specific cases. Here is a summary (that may be incomplete!) of the characteristics of the HHL algorithm: HHL algorithm The HHL algorithm solves a linear system of equations $$A \vert x \rangle = \vert b \rangle$$ with the following limitations: Limitations on $A$: $A$ needs to be Hermitian (and only Hermitian matrices work, see this discussion in the chat). $A$'s eigenvalues need to be in $[0,1)$ (see Quantum phase estimation and HHL algorithm - knowledge on eigenvalues required?) $e^{iAt}$ needs to be efficiently implementable. At the moment the only known matrices that satisfy this property are: local hamiltonians (see Universal Quantum Simulators (Lloyd, 1996)). $s$-sparse hamiltonians (see Adiabatic Quantum State Generation and Statistical Zero Knowledge (Aharonov & Ta-Shma, 2003)). Limitations on $\vert b \rangle$: $\vert b \rangle$ should be efficiently preparable. This is the case for: Specific expressions of $\vert b \rangle$. For example the state $$\vert b \rangle = \bigotimes_{i=0}^{n} \left( \frac{\vert 0 \rangle + \vert 1 \rangle}{\sqrt{2}} \right)$$ is efficiently preparable.
$\vert b \rangle$ representing the discretisation of an efficiently integrable probability distribution (see Creating superpositions that correspond to efficiently integrable probability distributions (Grover & Rudolph, 2002)). Limitations on $\vert x \rangle$ (output): $\vert x \rangle$ cannot be recovered fully by measurement. The only information we can recover from $\vert x \rangle$ is "general information" ("expectation value" is the term employed in the original HHL paper) such as $$\langle x\vert M\vert x \rangle$$ Question: Taking into account all of these limitations and imagining we are in 2050 (or maybe in 2025, who knows?) with fault-tolerant large-scale quantum chips (i.e. we are not limited by the hardware), what real-world problems could the HHL algorithm solve (including problems where HHL is only used as a subroutine)? I am aware of the paper Concrete resource analysis of the quantum linear system algorithm used to compute the electromagnetic scattering cross section of a 2D target (Scherer, Valiron, Mau, Alexander, van den Berg & Chapuran, 2016) and of the corresponding implementation in the Quipper programming language, and I am searching for other real-world examples where HHL would be applicable in practice. I do not require a published paper, or even an unpublished one; I just want to have some examples of real-world use-cases. EDIT: Even if I am interested in every use-case, I would prefer some examples where HHL is directly used, i.e. not used as a subroutine of another algorithm. I am even more interested in examples of linear systems resulting from the discretisation of a differential operator that could be solved with HHL. But let me emphasise one more time that I'm interested in every use-case (subroutine or not) you know about.
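On the Hermiticity limitation: the original HHL paper notes that a general $A$ can still be handled by embedding it in a Hermitian block matrix $C = \begin{pmatrix} 0 & A \\ A^\dagger & 0 \end{pmatrix}$ and solving $C \vert y \rangle = (\vert b \rangle, 0)^T$. A small numpy sketch of that reduction (classical linear algebra only, just to show the embedding):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # generic, non-Hermitian
b = rng.normal(size=n)

# Hermitian embedding C = [[0, A], [A^dagger, 0]]
C = np.block([[np.zeros((n, n)), A], [A.conj().T, np.zeros((n, n))]])
assert np.allclose(C, C.conj().T)  # C is Hermitian by construction

# Solving C [u; v] = [b; 0] forces A v = b and A^dagger u = 0, hence u = 0
y = np.linalg.solve(C, np.concatenate([b, np.zeros(n)]))
x = y[n:]                      # the lower block carries the solution of A x = b
print(np.allclose(A @ x, b))   # True
```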
Abbreviation: FL
A full Lambek algebra is a structure $\mathbf{A}=\langle A, \vee, \wedge, \cdot, 1, \backslash, /, 0\rangle$ of type $\langle 2,2,2,0,2,2,0\rangle$ such that $\langle A, \vee, \wedge, \cdot, 1, \backslash, /\rangle$ is a residuated lattice and $0$ is an additional constant (it can denote any element).
Morphisms: Let $\mathbf{A}$ and $\mathbf{B}$ be FL-algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x\vee y)=h(x)\vee h(y)$, $h(x\wedge y)=h(x)\wedge h(y)$, $h(x\cdot y)=h(x)\cdot h(y)$, $h(x\backslash y)=h(x)\backslash h(y)$, $h(x/y)=h(x)/h(y)$, $h(1)=1$, $h(0)=0$.
Example 1:
Properties:
Classtype: variety
Equational theory: decidable 1)
Quasiequational theory: undecidable
First-order theory: undecidable
Locally finite: no
Residual size: unbounded
Congruence distributive: yes
Congruence modular: yes
Congruence n-permutable: yes, n=2
Congruence regular: no
Congruence e-regular: yes
Congruence uniform: no
Congruence extension property: no
Definable principal congruences: no
Equationally def. pr. cong.: no
Amalgamation property:
Strong amalgamation property:
Epimorphisms are surjective:
Number of finite members: $\begin{array}{lr} f(1)= &1\\ f(2)= &2\\ f(3)= &9\\ f(4)= &\\ f(5)= &\\ f(6)= &\\ \end{array}$
Subclasses: Bounded residuated lattices (subvariety), FLe-algebras (subvariety), FLw-algebras (subvariety), FLc-algebras (subvariety), Distributive FL-algebras (subvariety)
Superclasses: Residuated lattices (reduct)
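To make the residuated-lattice requirement concrete, here is a toy check in Python: the two-element Boolean algebra regarded as an FL-algebra (my own illustrative example, not from the page), with fusion $x\cdot y = x\wedge y$ and both residuals equal to Boolean implication since fusion is commutative:

```python
from itertools import product

A = (0, 1)
mult = min                              # fusion = meet on {0, 1}
imp = lambda x, y: 1 if x <= y else 0   # x -> y; serves as both x\y and y/x here

# Monoid laws: fusion is associative and 1 is its unit.
assoc = all(mult(mult(x, y), z) == mult(x, mult(y, z)) for x, y, z in product(A, repeat=3))
unit = all(mult(1, x) == x == mult(x, 1) for x in A)

# Residuation law: x*y <= z  iff  y <= x\z, for all elements.
resid = all((mult(x, y) <= z) == (y <= imp(x, z)) for x, y, z in product(A, repeat=3))
print(assoc, unit, resid)  # True True True
```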
I am now analyzing the following Chua's circuit. The triangle at the bottom denotes zero electric potential (I struggle to find the right symbol). \$N_R\$ is the Chua's diode, so the circuit is non-linear. This version of the circuit can be modeled by the following system of ODEs, which can be found on Wikipedia: $$ \frac {dx}{dt}=\alpha [y-x-f(x)],$$ $$ \frac {dy}{dt}=x-y+z, $$ $$ \frac {dz}{dt}=-\beta y. $$ Here $x(t)$, $y(t)$, and $z(t)$ represent the voltages across the capacitors C1 and C2 and the electric current in the inductor L1, respectively. My question is: how can this circuit work if it does not have a power supply? The only components supplied by a battery are the op-amps inside the Chua's diode. Such power supplies are not part of the main circuit, so how can the main circuit gain power? The initial conditions of the differential equations are $$x=y=z=0 \quad \text{at} \quad t=0$$ (since all electric potentials are zero before we finish connecting the circuit). Those initial conditions will generate the solution \$x=y=z=0\$ for all time. If that is the case, then how can we observe the double scroll pattern? Everything should remain zero.
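The questioner's point can be reproduced numerically with the dimensionless Chua system. A sketch with standard textbook parameter values (alpha, beta, m0, m1 below are illustrative, not taken from the post): started exactly at the equilibrium the state stays at zero, while any tiny perturbation, such as noise in a real circuit, is amplified onto the attractor.

```python
import numpy as np

alpha, beta = 15.6, 28.0
m0, m1 = -8 / 7, -5 / 7   # slopes of the piecewise-linear Chua diode characteristic

def f(x):
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def rhs(s):
    x, y, z = s
    return np.array([alpha * (y - x - f(x)), x - y + z, -beta * y])

def rk4_step(s, dt):
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Started exactly at the equilibrium, the state stays at zero forever:
s0 = np.zeros(3)
for _ in range(1000):
    s0 = rk4_step(s0, 0.01)

# A tiny perturbation grows, because the origin is an unstable equilibrium:
s = np.array([1e-6, 0.0, 0.0])
max_norm = 0.0
for i in range(20000):
    s = rk4_step(s, 0.01)
    if i >= 19000:
        max_norm = max(max_norm, np.linalg.norm(s))
print(np.allclose(s0, 0), max_norm > 0.5)
```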
An example of methylation analysis with simulated datasets Part 2: Potential DMPs from the methylation signal Methylation analysis with Methyl-IT is illustrated on simulated datasets of methylated and unmethylated read counts with relatively high averages of methylation levels: 0.15 and 0.286 for the control and treatment groups, respectively. In this part, potential differentially methylated positions (DMPs) are estimated following different approaches. 1. Background Only a signal detection approach can detect real DMPs with high probability. Any statistical test (like e.g. Fisher's exact test) not based on signal detection requires further analysis to distinguish DMPs that can occur naturally in the control group from those induced by a treatment. The analysis here is a continuation of Part 1. 2. Potential DMPs from the methylation signal using the empirical distribution As suggested by the empirical density graphics (above), the critical values $H_{\alpha=0.05}$ and $TV_{d_{\alpha=0.05}}$ can be used as cutpoints to select potential DMPs. After setting dist.name = "ECDF" and tv.cut = 0.926 in the Methyl-IT function getPotentialDIMP, potential DMPs are estimated using the empirical cumulative distribution function (ECDF) and the critical value $TV_{d_{\alpha=0.05}}=0.926$. DMP.ecdf <- getPotentialDIMP(LR = divs, div.col = 9L, tv.cut = 0.926, tv.col = 7, alpha = 0.05, dist.name = "ECDF") 3. Potential DMPs detected with Fisher's exact test In Methyl-IT, Fisher's exact test (FT) is implemented in the function FisherTest. In the current case, a pairwise group application of FT to each cytosine site is performed. The differences between the group means of read counts of methylated and unmethylated cytosines at each site are used for testing (pooling.stat = "mean"). Notice that only cytosine sites with critical values $TV_d > 0.926$ are tested (tv.cut = 0.926).
ft = FisherTest(LR = divs, tv.cut = 0.926, pAdjustMethod = "BH", pooling.stat = "mean", pvalCutOff = 0.05, num.cores = 4L, verbose = FALSE, saveAll = FALSE)
ft.tv <- getPotentialDIMP(LR = ft, div.col = 9L, dist.name = "None", tv.cut = 0.926, tv.col = 7, alpha = 0.05)
There is no one-to-one mapping between $TV$ and $HD$. However, at each cytosine site $i$, these information divergences satisfy the inequality $TV(p^{tt}_i,p^{ct}_i)\leq \sqrt{2}H_d(p^{tt}_i,p^{ct}_i)$ [1], where $H_d(p^{tt}_i,p^{ct}_i) = \sqrt{H(p^{tt}_i,p^{ct}_i)/w}$ is the Hellinger distance and $H(p^{tt}_i,p^{ct}_i)$ is given by Eq. 1 in Part 1. So, potential DMPs detected with FT can additionally be constrained with the critical value $H_{\alpha=0.05}=114.5$, i.e., $H\geq114.5$. 4. Potential DMPs detected with Weibull 2-parameters model Potential DMPs can be estimated using the critical values derived from the fitted Weibull 2-parameters models, which are obtained after the non-linear fit of the theoretical model on the genome-wide $HD$ values for each individual sample using the Methyl-IT function nonlinearFitDist [2]. As before, only cytosine sites with critical values $TV>0.926$ are considered DMPs. Notice that it is always possible to use other values of $HD$ and $TV$ as critical values, but whatever the values are, they will affect the final accuracy of the classification of DMPs into two groups, DMPs from control and DMPs from treatment (see below). So, it is important to make a good choice of the critical values.
nlms.wb <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L)
# Potential DMPs from 'Weibull2P' model
DMPs.wb <- getPotentialDIMP(LR = divs, nlms = nlms.wb, div.col = 9L, tv.cut = 0.926, tv.col = 7, alpha = 0.05, dist.name = "Weibull2P")
nlms.wb$T1
##        Estimate    Std.Error     t value   Pr(>|t|)  Adj.R.Square
## shape  0.5413711   0.0003964435  1365.570  0         0.991666592250838
## scale  19.4097502  0.0155797315  1245.833  0
##        rho                R.Cross.val        DEV
## shape  0.991666258901194  0.996595712743823  34.7217494754823
## scale
##        AIC                BIC                COV.shape      COV.scale
## shape  -221720.747067975  -221694.287733122  1.571674e-07   -1.165129e-06
## scale                                        -1.165129e-06  2.427280e-04
##        COV.mu  n
## shape  NA      50000
## scale  NA      50000
5. Potential DMPs detected with Gamma 2-parameters model As in the case of the Weibull 2-parameters model, potential DMPs can be estimated using the critical values derived from the fitted Gamma 2-parameters models; only cytosine sites with critical values $TV_d > 0.926$ are considered DMPs.
nlms.g2p <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L, dist.name = "Gamma2P")
# Potential DMPs from 'Gamma2P' model
DMPs.g2p <- getPotentialDIMP(LR = divs, nlms = nlms.g2p, div.col = 9L, tv.cut = 0.926, tv.col = 7, alpha = 0.05, dist.name = "Gamma2P")
nlms.g2p$T1
##        Estimate    Std.Error     t value   Pr(>|t|)  Adj.R.Square
## shape  0.3866249   0.0001480347  2611.717  0         0.999998194156282
## scale  76.1580083  0.0642929555  1184.547  0
##        rho                R.Cross.val        DEV
## shape  0.999998194084045  0.998331895911125  0.00752417919133131
## scale
##        AIC               BIC                COV.alpha      COV.scale
## shape  -265404.29138371  -265369.012270572  2.191429e-08   -8.581717e-06
## scale                                       -8.581717e-06  4.133584e-03
##        COV.mu  df
## shape  NA      49998
## scale  NA      49998
Summary table:
data.frame(ft = unlist(lapply(ft, length)), ft.hd = unlist(lapply(ft.hd, length)), ecdf = unlist(lapply(DMPs.hd, length)), Weibull = unlist(lapply(DMPs.wb, length)), Gamma = unlist(lapply(DMPs.g2p, length)))
##    ft    ft.hd  ecdf  Weibull  Gamma
## C1 1253  773    63    756      935
## C2 1221  776    62    755      925
## C3 1280  786    64    768      947
## T1 2504  1554   126   924      1346
## T2 2464  1532   124   942      1379
## T3 2408  1477   121   979      1354
6.
Density graphic with a new critical value The graphics for the empirical (in black) and Gamma (in blue) density distributions of the Hellinger divergence of methylation levels for sample T1 are shown below. The 2-parameter gamma model is built using the parameters estimated in the non-linear fit of the $H$ values from sample T1. The critical value estimated from the 2-parameter gamma distribution, $H^{\Gamma}_{\alpha=0.05}=124$, is more 'conservative' than the critical value based on the empirical distribution, $H^{Emp}_{\alpha=0.05}=114.5$. That is, according to the empirical distribution, for a methylation change to be considered a signal its $H$ value must satisfy $H\geq114.5$, while according to the 2-parameter gamma model any cytosine carrying a signal must satisfy $H\geq124$.
suppressMessages(library(ggplot2))
# Some information for graphic
dt <- data[data$sample == "T1", ]
coef <- nlms.g2p$T1$Estimate # Coefficients from the non-linear fit
dgamma2p <- function(x) dgamma(x, shape = coef[1], scale = coef[2])
qgamma2p <- function(x) qgamma(x, shape = coef[1], scale = coef[2])
# 95% quantiles
q95 <- qgamma2p(0.95) # Gamma model based quantile
emp.q95 <- quantile(divs$T1$hdiv, 0.95) # Empirical quantile
# Density plot with ggplot
ggplot(dt, aes(x = HD)) +
  geom_density(alpha = 0.05, bw = 0.2, position = "identity", na.rm = TRUE, size = 0.4) +
  xlim(c(0, 150)) +
  stat_function(fun = dgamma2p, colour = "blue") +
  xlab(expression(bolditalic("Hellinger divergence (HD)"))) +
  ylab(expression(bolditalic("Density"))) +
  ggtitle("Empirical and Gamma density distributions of Hellinger divergence (T1)") +
  geom_vline(xintercept = emp.q95, color = "black", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = emp.q95 - 20, y = 0.16, size = 5, label = 'bolditalic(HD[alpha == 0.05]^Emp==114.5)', family = "serif", color = "black", parse = TRUE) +
  geom_vline(xintercept = q95, color = "blue", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = q95 + 9, y = 0.14, size = 5, label = 'bolditalic(HD[alpha == 0.05]^Gamma==124)', family = "serif", color = "blue", parse = TRUE) +
  theme(axis.text.x = element_text(face = "bold", size = 12, color = "black", margin = margin(1, 0, 1, 0, unit = "pt")),
        axis.text.y = element_text(face = "bold", size = 12, color = "black", margin = margin(0, 0.1, 0, 0, unit = "mm")),
        axis.title.x = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        axis.title.y = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        legend.title = element_blank(),
        legend.margin = margin(c(0.3, 0.3, 0.3, 0.3), unit = 'mm'),
        legend.box.spacing = unit(0.5, "lines"),
        legend.text = element_text(face = "bold", size = 12, family = "serif"))
References
1. Steerneman, Ton, K. Behnen, G. Neuhaus, Julius R. Blum, Pramod K. Pathak, Wassily Hoeffding, J. Wolfowitz, et al. 1983. "On the total variation and Hellinger distance between signed measures; an application to product measures." Proceedings of the American Mathematical Society 88 (4): 684-84. doi:10.1090/S0002-9939-1983-0702299-0.
2. Sanchez, Robersy, and Sally A. Mackenzie. 2016. "Information Thermodynamics of Cytosine DNA Methylation." Edited by Barbara Bardoni. PLOS ONE 11 (3): e0150427. doi:10.1371/journal.pone.0150427.
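The inequality $TV \leq \sqrt{2}\,H_d$ used above to constrain the FT-detected DMPs can be checked numerically. A minimal Python sketch (not Methyl-IT; random two-outcome distributions stand in for the methylated/unmethylated probability vectors $p^{tt}_i$, $p^{ct}_i$, and the textbook Hellinger normalization is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

def tv(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

def hellinger(p, q):
    """Hellinger distance, normalized so that 0 <= H_d <= 1."""
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# Check TV(p, q) <= sqrt(2) * H_d(p, q) on many random distribution pairs.
ok = True
for _ in range(1000):
    p = rng.dirichlet(np.ones(2))
    q = rng.dirichlet(np.ones(2))
    ok &= tv(p, q) <= np.sqrt(2) * hellinger(p, q) + 1e-12
print(ok)  # True
```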
Connecting the Dots Between Theory, Model, and App Simulation apps, as we’ve highlighted on the blog, are a powerful tool for hiding complex physics behind an easy-to-use, intuitive interface. While the app can be used by those with little simulation expertise, understanding the layers beneath its interface — the embedded model and underlying theory — does require a good understanding of COMSOL Multiphysics and the physics at hand. Let’s explore the connection between theory, model, and app using the example of analyzing buckling in a truss tower design. The Multiple Layers of a Simulation App To help extend simulation capabilities to a wider audience, numerical modeling apps are designed with simplicity and ease-of-use in mind. While the interface that users interact with appears this way, there are many other important layers to consider behind an app’s design. The underlying theory and the embedded model, for instance, are crucial elements, as they help to ensure accuracy in the simulation results obtained by users. So how do we connect the dots between these different elements — the theory, the model, and the app? Today, we’ll demonstrate this relationship by looking at the theory and model behind our Linear Buckling Analysis of a Truss Tower app. While an app sometimes merely embeds a model and places a simplified user interface (UI) on it, this case involves using an app to generate an advanced extension of the built-in functionality available in COMSOL Multiphysics. The Underlying Problem: Buckling in a Truss Tower To begin, let’s focus on the problem that the app is designed to study: buckling. If a tall vertical structure is subject to an increasing compressive load, deformations will be very small until the critical value of the load is reached. If the load is slightly increased after this point, the structure can suddenly collapse. My colleague Henrik Sönnerlind discussed this phenomenon, known as buckling, in an earlier blog post. 
Here, we will focus on buckling as it specifically relates to a truss tower design. Truss towers are slender structures that can face the risk of buckling. In this model, we will consider the effects of the weight of the truss structure itself, the tension effects of the optional guy wires, and a concentrated vertical force at the top. The latter is the “payload”, typically large antennas. From the viewpoint of buckling, a load can be considered live or dead. A dead load, like the self-weight, has a fixed value. The live load, the weight of the antenna in this case, is the load against which we want to compute the safety factor. COMSOL Multiphysics does not include a built-in setup for solving this problem that allows us to distinguish between the live load, gravity, and wire tension effects. But with some understanding of the theory behind buckling and how the software works, such a study can be set up. We will have to write some extra equations, weak contributions as they are more often called, which are simple to incorporate into the model. This represents an important strength of COMSOL Multiphysics: Users can adjust and extend the capabilities of available features by modifying the existing implementation or writing new mathematical terms. The tower that we will consider has a rectangular cross section with four vertical bars at the corners. Three types of members — longitudinal, transverse, and diagonal — form the tower structure. The guy wires, which are attached to the tower at two different levels, give the structure greater stiffness to protect it against, for instance, wind loads. Note that the wires are under pretension, otherwise they would not provide any stiffness. The bottom part of the truss is pinned to the ground. The screenshot below depicts the model tree structure for the buckling analysis. Model tree settings for the truss tower buckling analysis. 
The nodal labels in the above diagram are self-explanatory: Linear Elastic Material 1 specifies the material properties of the truss elements; Linear Elastic Material 2 and Linear Elastic Material 3 relate to the material properties of the guy wires; External Stress specifies pretension in the guy wires; Gravity considers the weight of the truss members and guy wires; Point Load allows us to apply the vertical load on the truss tower; Weak Contribution 1 and Weak Contribution 2 enable us to add in extra mathematical terms. Understanding Buckling Equations Study 1 in the model tree is a predefined buckling analysis study that is included in the Truss interface and consists of two individual study steps. The first is a stationary study step in which you compute the state of stress in a structure for a given load. The second study step allows you to solve an eigenvalue problem to determine the critical load as a multiple of the load you applied. In a typical analysis of a structure, we are interested in identifying the nodal displacements due to a load \mathbf f_0 acting on it. If we put all of the nodal displacements in a vector \mathbf{u}_0 and if the structure stiffness matrix is \mathbf K, then this amounts to solving a system of equations of the form \mathbf K \mathbf u_0 = \mathbf f_0. The stiffness matrix \mathbf K can be split into linear and nonlinear parts. Thus, \mathbf K = \mathbf K_{L} + \mathbf K_{NL} (\mathbf f_0), where \mathbf K_{L} is the linear part and \mathbf K_{NL} is the extra contribution caused by considering geometric nonlinearity. Note that the nonlinear part of the stiffness matrix depends on the applied load. In a linear buckling analysis, we assume that the nonlinear part of the stiffness matrix is a linear function of the load (i.e., \mathbf K_{NL}(\lambda \mathbf f_0) = \lambda\mathbf K_{NL}(\mathbf f_0), where \lambda is a scalar multiplier). When the structure buckles, the deformation is unbounded.
Numerically, this manifests itself in a singular stiffness matrix, so we must solve for the value of \lambda that renders the matrix singular. In other words, we solve the eigenvalue problem \left( \mathbf K_{L} + \lambda \mathbf K_{NL}(\mathbf f_0) \right) \mathbf u = \mathbf 0. The smallest eigenvalue \lambda_0 is the critical load factor, and the corresponding eigenvector \mathbf u defines the buckled shape. Now, let's try to understand the current problem with respect to the theory explained above. Remember our assumptions? We want to include the weight of the tower and cables, as well as the pretension in the wires, in the analysis. These loads, however, have fixed values, so their contribution to the nonlinear stiffness matrix should not be scaled by \lambda. Therefore, we are interested in solving a modified eigenvalue problem: \left( \mathbf K_{L} + \mathbf K_{NL,d}(\mathbf f_d) + \lambda \mathbf K_{NL}(\mathbf f_0) \right) \mathbf u = \mathbf 0, where \mathbf K_{NL,d}(\mathbf f_d) captures the effect of the dead loads. The eigenvalue step in the buckling study, however, only allows you to solve the standard buckling problem where all loads are considered as live loads. To change this in order to accomplish our goals, we need to look at additional mechanisms that COMSOL Multiphysics offers. Setting Up the Linear Buckling Analysis Problem in COMSOL Multiphysics The first step in solving our problem is to perform a stationary analysis to isolate the stiffness due to dead loads. Once this is done, we can manipulate the eigenvalue solver to include the effects of the dead loads in the problem. The plan is to solve three stationary problems in succession: Solve for gravity effects and pretension; Consider the effects of pretension in the guy wires only; Analyze the combined effects of the live load and wire pretension. We must include the wire pretension in all static load cases. Without it, the wires have no stiffness, so the problem would be singular. As such, it is not possible to directly create a load case containing only the live load.
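The structure of the buckling eigenvalue problem can be illustrated with a tiny classical analogue. A hedged numpy sketch (a pinned-pinned Euler column discretized by finite differences; my own toy example, not the truss tower model), where the smallest eigenvalue recovers the Euler critical load $\pi^2 EI/L^2$:

```python
import numpy as np

EI, L, n = 1.0, 1.0, 200   # bending stiffness, length, number of interior nodes
h = L / (n + 1)

# Negative second-difference matrix A (positive definite) for pinned ends:
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Column buckling EI*u'''' = -P*u'' discretizes to EI*A^2 u = P*A u; for
# pinned-pinned boundary conditions this reduces to EI*A u = P u, so the
# buckling loads are EI times the eigenvalues of A (eigvalsh sorts ascending).
P_cr = EI * np.linalg.eigvalsh(A)[0]
print(P_cr, np.pi**2 * EI / L**2)  # ~9.8694 vs 9.8696
```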
To solve three different load conditions within one single stationary study step, we can group them as load groups and define appropriate load cases within the solver. In the example below, we have created three load groups: Gravity Load, Point Load, and External Stress. These groups correspond to gravity, the point load, and the stresses in the wires, respectively. To create these load groups, simply right-click on the Global Definitions node in the model tree and select Load Group. This generates a new load group for you, the name of which you can adjust as needed. Say you want to include the gravity node under the load group Gravity Load. In that case, simply right-click on the node and choose the Gravity Load option under Load Group. Defining load groups and grouping load types. The reason for adding these load groups will be clear when we look at the Stationary node of Study 1. In the Study Extensions section, we can define a number of load cases to create several different stationary analyses within the same study step. In the following screenshot, three load cases are defined: Dead Load 1: Gravity (GR) and pretension in wires (ES) Dead Load 2: Pretension in wires (ES) Live Load: Point load acting on the tower (PL) and pretension in wires (ES) In the case of Dead Load 1, as may already be intuitively clear to you, the stationary solver will only solve for gravity and pretension effects, excluding the point load from the analysis. Similar considerations apply to the other cases as well. Defining load cases in the stationary solver. Writing Weak Form Equations for the Buckling Analysis So why are we interested in solving the stationary problem for the load cases that we just defined? Because we want to compute the nonlinear stiffness due to dead loads, which is generated by the first stationary analysis. In this case, the effects of the weight of the truss, the weight of the guy wires, and the pretension in the wires are all included.
The second stationary analysis isolates the pretension effects in the wires. Note that the weight of the wires is excluded in this step. These results are then used to eliminate the effects of guy wire pretension in the eigenvalue solver. In other words, we are solving the original problem in the form

$$\left(\mathbf K_L + \mathbf K_{NL}^{1} + \lambda\left(\mathbf K_{NL}^{3} - \mathbf K_{NL}^{2}\right)\right)\mathbf u = \mathbf 0,$$

where $\mathbf K_{NL}^{i}, \; i=1, 2, 3$ denotes the nonlinear stiffness matrix coming from the solution of the corresponding load case scenario. With the assumption of linearity in the nonlinear stiffness matrix contributions, this is exactly the problem that we want to solve. To get the extra stiffness contributions, we must manually enter two terms via the Weak Contribution 1 and Weak Contribution 2 nodes. It is important to note that these weak contributions should be active only during the eigenvalue solver stage of the problem, not during the stationary study. This can be controlled in the settings for the individual study steps.

The next step involves understanding what to write in the fields of these nodes. In the case of geometric nonlinearity, the basic truss equation is

$$\frac{\partial}{\partial \hat{x}}\!\left(Y A \, \frac{\partial \hat{u}}{\partial \hat{x}}\right) = 0,$$

where $Y$ is the Young’s modulus, $A$ is the cross-sectional area, and $\hat{u}$ is the displacement. As the formatting of the dependent and independent variables emphasizes, the equations are written in local directions. The equation above is the so-called strong form equation of the truss. The corresponding weak form is given by the expression

$$\int_L S \, \delta\varepsilon \, A \, d\hat{x} = 0,$$

where $S$ is the axial stress and $\varepsilon$ is the axial strain. The symbol $\delta$ denotes variation and is represented by the test() operator in COMSOL Multiphysics. This is the type of expression that COMSOL Multiphysics understands, and it uses it to generate the stiffness matrix. In fact, if you go to the Equation View of Linear Elastic 1, you will see the same expression written under Weak Expressions, as highlighted below.

Axial strain and weak expression for the truss tower model.
The expression for the strain, truss.en, can actually take two different forms:

0.5*((truss.tlex*(2*uTx+uTx^2+vTx^2+wTx^2)+truss.tley*(vTx+uTy+uTy*uTx+vTy*vTx+wTy*wTx)+
truss.tlez*(wTx+uTz+uTz*uTx+vTz*vTx+wTz*wTx))*truss.tlex+
(truss.tlex*(vTx+uTy+uTy*uTx+vTy*vTx+wTy*wTx)+truss.tley*(2*vTy+uTy^2+vTy^2+wTy^2)+
truss.tlez*(wTy+vTz+uTz*uTy+vTz*vTy+wTz*wTy))*truss.tley+
(truss.tlex*(wTx+uTz+uTz*uTx+vTz*vTx+wTz*wTx)+truss.tley*(wTy+vTz+uTz*uTy+vTz*vTy+wTz*wTy)+
truss.tlez*(2*wTz+uTz^2+vTz^2+wTz^2))*truss.tlez)

or

(uTx*truss.tlex+0.5*truss.tley*(vTx+uTy)+0.5*truss.tlez*(wTx+uTz))*truss.tlex+
(0.5*truss.tlex*(vTx+uTy)+truss.tley*vTy+0.5*truss.tlez*(wTy+vTz))*truss.tley+
(0.5*truss.tlex*(wTx+uTz)+0.5*truss.tley*(wTy+vTz)+truss.tlez*wTz)*truss.tlez

The first expression is the one you would see in the case of a geometrically nonlinear study. The second strain expression is much simpler, as it applies to the geometrically linear case.

Now the plan of action is simple. The linear stiffness terms $\mathbf K_L$ are independent of the loads, and they are automatically included in the analysis. To include the nonlinear stiffness matrix due to dead loads, we should extract the stresses from the first stationary analysis and multiply them by the test() of only the nonlinear part of the strain expression. The stress from the first stationary analysis can be obtained with the withsol() operator in COMSOL Multiphysics. This gives us the following expression, which goes in the Weak Contribution 1 node.
-withsol(sol2,truss.Sn,setval(loadcase,1))*test(0.5*
((truss.tlex*(uTx^2+vTx^2+wTx^2)+truss.tley*(uTy*uTx+vTy*vTx+wTy*wTx)+truss.tlez*(uTz*uTx+vTz*vTx+wTz*wTx))*truss.tlex
+(truss.tlex*(uTy*uTx+vTy*vTx+wTy*wTx)+truss.tley*(uTy^2+vTy^2+wTy^2)+truss.tlez*(uTz*uTy+vTz*vTy+wTz*wTy))*truss.tley
+(truss.tlex*(uTz*uTx+vTz*vTx+wTz*wTx)+truss.tley*(uTz*uTy+vTz*vTy+wTz*wTy)+truss.tlez*(uTz^2+vTz^2+wTz^2))*truss.tlez))*truss.area

A similar expression, entered in the Weak Contribution 2 node, is used to exclude the effects of the guy wire stresses from the eigenvalue analysis. Note that the expression there is multiplied by $\lambda$ at the end. Further note that the stresses from the second stationary analysis are accessed via the withsol() operator, with the argument of the setval() function changed to 2. Now, if you click the Compute button, you should get the correct critical load for the truss tower design.

The Underlying Theory and Physics Behind a Numerical Modeling App

Numerical modeling apps are composed of various layers. Behind an app’s simplified user interface are an embedded model and an underlying theory that help to ensure both accuracy and efficiency in simulation results. Here, we have highlighted this in the case of an app developed to analyze linear buckling in a truss tower design. As the example shows, the combined flexibility of COMSOL Multiphysics and the Application Builder is powerful in addressing complex problems, fostering efficiency at every step of the design process.

Learn More About How to Develop and Design Simulation Apps
We have had several questions about the relation between Cook and Karp reductions. It's clear that Cook reductions (polynomial-time Turing reductions) do not define the same notion of NP-completeness as Karp reductions (polynomial-time many-one reductions), which are usually used. In particular, Cook reductions cannot separate NP from co-NP even if P $\neq$ NP. So we should not use Cook reductions in typical reduction proofs.

Now, students found a peer-reviewed work [1] that uses a Cook reduction for showing that a problem is NP-hard. I did not give them full score for the reduction they took from there, but I wonder. Since Cook reductions do define a similar notion of hardness as Karp reductions, I feel they should be able to separate P from NPC resp. co-NPC, assuming P $\neq$ NP. In particular, (something like) the following should be true:

$\qquad\displaystyle L_1 \in \mathrm{NP}, L_2 \in \mathrm{NPC}_{\mathrm{Karp}}, L_2 \leq_{\mathrm{Cook}} L_1 \implies L_1 \in \mathrm{NPC}_{\mathrm{Karp}}$.

The important nugget is that $L_1 \in \mathrm{NP}$, so the above-noted insensitivity is circumvented. We now "know" -- by definition of NPC -- that $L_2 \leq_{\mathrm{Karp}} L_1$.

As has been noted by Vor, it's not that easy (notation adapted): Suppose that $L_1 \in \mathrm{NPC}_{\mathrm{Cook}}$; then, by definition, for all languages $L_2 \in \mathrm{NPC}_{\mathrm{Karp}} \subseteq \mathrm{NP}$ we have $L_2 \leq_{\mathrm{Cook}} L_1$. If the above implication were true, then $L_1 \in \mathrm{NPC}_{\mathrm{Karp}}$ and thus $\mathrm{NPC}_{\mathrm{Karp}} = \mathrm{NPC}_{\mathrm{Cook}}$, which is still an open question. There may also be other differences between the two notions of NPC besides the behavior with respect to co-NP.

Failing that, are there any known (non-trivial) criteria for when having a Cook reduction implies Karp-NP-hardness, i.e., do we know predicates $P$ with

$\qquad\displaystyle L_2 \in \mathrm{NPC}_{\mathrm{Karp}}, L_2 \leq_{\mathrm{Cook}} L_1, P(L_1,L_2) \implies L_1 \in \mathrm{NPC}_{\mathrm{Karp}}$?
[1] L. Wang and T. Jiang, "On the Complexity of Multiple Sequence Alignment" (1994)
Acceptance by empty stack for DPDAs is weaker than acceptance by final state, but any regular language can be accepted by a DPDA on empty stack. Acceptance by empty stack for a DPDA can be defined as follows:

$$\mathcal{L}(DPDA)_{EmptyStack}=\{w\in \Sigma^*\;|\;(q_0,w,\$)\rightarrow (q,\lambda,\lambda)\}$$

where all transition rules are uniquely determined. Suppose you have a regular language $L$. Because it is regular, you can build a DFA for it. You can extend this DFA to a DPDA by adding a stack (that initially contains one bottom symbol $\$$) and copying all the transition rules (leaving the stack untouched by popping and pushing $\lambda$). Only for those transitions of the DFA that put it in a final state do you have to consider the top of the stack and pop the last symbol, adding a few more states where necessary.

Edit: Note that, unlike a DFA, a DPDA can continue making moves after its input word has been read completely. You just have to make sure, when adding rules like $(q_i,\lambda,S)\rightarrow (q_j,\lambda,\lambda)$, that the transition relation stays deterministic. This is possible by adding more states.
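A minimal Python sketch of this construction (the DFA below is a made-up example, not from the answer): the DPDA copies the DFA's transitions while keeping a single bottom marker `$` on the stack, and a final λ-move pops the marker when the input ends in an accepting state, so a word is accepted exactly when the stack is empty.

```python
# Sketch: turning a DFA into a DPDA that accepts by empty stack.
# Hypothetical example DFA over {a, b}: accepts words ending in 'a'.
dfa_delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
             ("q1", "a"): "q1", ("q1", "b"): "q0"}
dfa_start, dfa_final = "q0", {"q1"}

def dpda_accepts_by_empty_stack(word):
    state, stack = dfa_start, ["$"]      # bottom marker pushed at the start
    for ch in word:                      # copied DFA transitions: stack untouched
        state = dfa_delta[(state, ch)]
    if state in dfa_final:               # lambda-move added only for final states:
        stack.pop()                      # pop the bottom marker
    return not stack                     # accept iff the stack is empty

print(dpda_accepts_by_empty_stack("aba"))   # ends in 'a'
print(dpda_accepts_by_empty_stack("ab"))    # ends in 'b'
```

The simulator sidesteps the determinism subtlety by applying the λ-move only once the input is exhausted; in the formal construction you would add intermediate states to keep the rules deterministic, as the answer notes.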
An example of methylation analysis with simulated datasets

Part 2: Potential DMPs from the methylation signal

Methylation analysis with Methyl-IT is illustrated on simulated datasets of methylated and unmethylated read counts with relatively high averages of methylation levels: 0.15 and 0.286 for the control and treatment groups, respectively. In this part, potential differentially methylated positions (DMPs) are estimated following different approaches.

1. Background

Only a signal detection approach can detect real DMPs with high probability. Any statistical test (e.g., Fisher's exact test) that is not based on signal detection requires further analysis to distinguish DMPs that naturally occur in the control group from those DMPs induced by a treatment. The analysis here is a continuation of Part 1.

2. Potential DMPs from the methylation signal using the empirical distribution

As suggested by the empirical density graphics (above), the critical values $H_{\alpha=0.05}$ and $TV_{d_{\alpha=0.05}}$ can be used as cutpoints to select potential DMPs. After setting dist.name = "ECDF" and tv.cut = 0.926 in the Methyl-IT function getPotentialDIMP, potential DMPs are estimated using the empirical cumulative distribution function (ECDF) and the critical value $TV_{d_{\alpha=0.05}}=0.926$.

DMP.ecdf <- getPotentialDIMP(LR = divs, div.col = 9L, tv.cut = 0.926,
                             tv.col = 7, alpha = 0.05, dist.name = "ECDF")

3. Potential DMPs detected with Fisher's exact test

In Methyl-IT, Fisher's exact test (FT) is implemented in the function FisherTest. In the current case, a pairwise group application of FT to each cytosine site is performed. The differences between the group means of read counts of methylated and unmethylated cytosines at each site are used for testing (pooling.stat = "mean"). Notice that only cytosine sites with critical values $TV_d > 0.926$ are tested (tv.cut = 0.926).
ft = FisherTest(LR = divs, tv.cut = 0.926, pAdjustMethod = "BH",
                pooling.stat = "mean", pvalCutOff = 0.05,
                num.cores = 4L, verbose = FALSE, saveAll = FALSE)

ft.hd <- getPotentialDIMP(LR = ft, div.col = 9L, dist.name = "None",
                          tv.cut = 0.926, tv.col = 7, alpha = 0.05)

There is not a one-to-one mapping between $TV$ and $HD$. However, at each cytosine site $i$, these information divergences satisfy the inequality $TV(p^{tt}_i,p^{ct}_i)\leq \sqrt{2}H_d(p^{tt}_i,p^{ct}_i)$ [1], where $H_d(p^{tt}_i,p^{ct}_i) = \sqrt{\frac{H(p^{tt}_i,p^{ct}_i)}{w}}$ is the Hellinger distance and $H(p^{tt}_i,p^{ct}_i)$ is given by Eq. 1 in Part 1. So, the potential DMPs detected with FT can be further constrained with the critical value $H^{TT}_{\alpha=0.05}\geq 114.5$.

4. Potential DMPs detected with Weibull 2-parameters model

Potential DMPs can be estimated using the critical values derived from the fitted Weibull 2-parameters models, which are obtained after the non-linear fit of the theoretical model on the genome-wide $HD$ values for each individual sample with the Methyl-IT function nonlinearFitDist [2]. As before, only cytosine sites with critical values $TV > 0.926$ are considered DMPs. Notice that it is always possible to use other values of $HD$ and $TV$ as critical values, but whatever the choice is, it will affect the final accuracy of the classification of DMPs into the two groups, DMPs from control and DMPs from treatment (see below). So, it is important to make a good choice of the critical values.

nlms.wb <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L)
# Potential DMPs from 'Weibull2P' model
DMPs.wb <- getPotentialDIMP(LR = divs, nlms = nlms.wb, div.col = 9L,
                            tv.cut = 0.926, tv.col = 7, alpha = 0.05,
                            dist.name = "Weibull2P")
nlms.wb$T1

##         Estimate    Std. Error    t value   Pr(>|t|)  Adj.R.Square
## shape   0.5413711   0.0003964435  1365.570  0         0.991666592250838
## scale   19.4097502  0.0155797315  1245.833  0
##         rho                R.Cross.val        DEV
## shape   0.991666258901194  0.996595712743823  34.7217494754823
## scale
##         AIC                BIC                COV.shape      COV.scale
## shape   -221720.747067975  -221694.287733122  1.571674e-07   -1.165129e-06
## scale                                         -1.165129e-06  2.427280e-04
##         COV.mu  n
## shape   NA      50000
## scale   NA      50000

5. Potential DMPs detected with Gamma 2-parameters model

As in the case of the Weibull 2-parameters model, potential DMPs can be estimated using the critical values derived from the fitted Gamma 2-parameters models, and only cytosine sites with critical values $TV_d > 0.926$ are considered DMPs.

nlms.g2p <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L,
                             dist.name = "Gamma2P")
# Potential DMPs from 'Gamma2P' model
DMPs.g2p <- getPotentialDIMP(LR = divs, nlms = nlms.g2p, div.col = 9L,
                             tv.cut = 0.926, tv.col = 7, alpha = 0.05,
                             dist.name = "Gamma2P")
nlms.g2p$T1

##         Estimate    Std. Error    t value   Pr(>|t|)  Adj.R.Square
## shape   0.3866249   0.0001480347  2611.717  0         0.999998194156282
## scale   76.1580083  0.0642929555  1184.547  0
##         rho                R.Cross.val        DEV
## shape   0.999998194084045  0.998331895911125  0.00752417919133131
## scale
##         AIC               BIC                COV.alpha      COV.scale
## shape   -265404.29138371  -265369.012270572  2.191429e-08   -8.581717e-06
## scale                                        -8.581717e-06  4.133584e-03
##         COV.mu  df
## shape   NA      49998
## scale   NA      49998

Summary table:

data.frame(ft = unlist(lapply(ft, length)),
           ft.hd = unlist(lapply(ft.hd, length)),
           ecdf = unlist(lapply(DMP.ecdf, length)),
           Weibull = unlist(lapply(DMPs.wb, length)),
           Gamma = unlist(lapply(DMPs.g2p, length)))

##      ft ft.hd ecdf Weibull Gamma
## C1 1253   773   63     756   935
## C2 1221   776   62     755   925
## C3 1280   786   64     768   947
## T1 2504  1554  126     924  1346
## T2 2464  1532  124     942  1379
## T3 2408  1477  121     979  1354
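Outside of Methyl-IT, the two kinds of critical values compared below can be sketched in a few lines of Python (SciPy assumed). Only the fitted shape and scale are taken from the Gamma2P output above; the divergence values here are simulated for illustration, whereas in the R workflow the empirical quantile would come from the observed hdiv column.

```python
import numpy as np
from scipy.stats import gamma

# Fitted Gamma2P parameters for sample T1 (from nlms.g2p$T1 above)
shape, scale = 0.3866249, 76.1580083

# Gamma-model-based critical value: 95% quantile of the fitted distribution
h_crit_gamma = gamma.ppf(0.95, a=shape, scale=scale)   # close to 124

# Empirical (ECDF-based) critical value: 95% quantile of H values.
# Simulated here; with real data this is the quantile of divs$T1$hdiv.
rng = np.random.default_rng(1)
hdiv = rng.gamma(shape=shape, scale=scale, size=50_000)
h_crit_emp = np.quantile(hdiv, 0.95)

# Sites above the cutoff are the potential DMPs;
# by construction, about alpha = 5% of sites exceed the empirical cutoff.
frac = np.mean(hdiv > h_crit_emp)
```

This mirrors the qgamma2p/quantile calls in the R code of the next section: the model-based cutoff comes from the fitted distribution, the empirical cutoff from the data themselves.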
6. Density graphic with a new critical value

The graphics for the empirical (in black) and Gamma (in blue) density distributions of the Hellinger divergence of methylation levels for sample T1 are shown below. The 2-parameter gamma model is built using the parameters estimated in the non-linear fit of the $H$ values from sample T1. The critical value estimated from the 2-parameter gamma distribution, $H^{\Gamma}_{\alpha=0.05}=124$, is more ‘conservative’ than the critical value based on the empirical distribution, $H^{Emp}_{\alpha=0.05}=114.5$. That is, according to the empirical distribution, for a methylation change to be considered a signal its $H$ value must satisfy $H\geq114.5$, while according to the 2-parameter gamma model any cytosine carrying a signal must satisfy $H\geq124$.

suppressMessages(library(ggplot2))

# Some information for graphic
dt <- data[data$sample == "T1", ]
coef <- nlms.g2p$T1$Estimate  # Coefficients from the non-linear fit
dgamma2p <- function(x) dgamma(x, shape = coef[1], scale = coef[2])
qgamma2p <- function(x) qgamma(x, shape = coef[1], scale = coef[2])

# 95% quantiles
q95 <- qgamma2p(0.95)  # Gamma model based quantile
emp.q95 = quantile(divs$T1$hdiv, 0.95)  # Empirical quantile

# Density plot with ggplot
ggplot(dt, aes(x = HD)) +
  geom_density(alpha = 0.05, bw = 0.2, position = "identity", na.rm = TRUE, size = 0.4) +
  xlim(c(0, 150)) +
  stat_function(fun = dgamma2p, colour = "blue") +
  xlab(expression(bolditalic("Hellinger divergence (HD)"))) +
  ylab(expression(bolditalic("Density"))) +
  ggtitle("Empirical and Gamma densities distributions of Hellinger divergence (T1)") +
  geom_vline(xintercept = emp.q95, color = "black", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = emp.q95 - 20, y = 0.16, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Emp==114.5)',
           family = "serif", color = "black", parse = TRUE) +
  geom_vline(xintercept = q95, color = "blue", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = q95 + 9, y = 0.14, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Gamma==124)',
           family = "serif", color = "blue", parse = TRUE) +
  theme(axis.text.x = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(1, 0, 1, 0, unit = "pt")),
        axis.text.y = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(0, 0.1, 0, 0, unit = "mm")),
        axis.title.x = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        axis.title.y = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        legend.title = element_blank(),
        legend.margin = margin(c(0.3, 0.3, 0.3, 0.3), unit = 'mm'),
        legend.box.spacing = unit(0.5, "lines"),
        legend.text = element_text(face = "bold", size = 12, family = "serif"))

References

Steerneman, Ton, K. Behnen, G. Neuhaus, Julius R. Blum, Pramod K. Pathak, Wassily Hoeffding, J. Wolfowitz, et al. 1983. “On the total variation and Hellinger distance between signed measures; an application to product measures.” Proceedings of the American Mathematical Society 88 (4): 684–84. doi:10.1090/S0002-9939-1983-0702299-0.

Sanchez, Robersy, and Sally A. Mackenzie. 2016. “Information Thermodynamics of Cytosine DNA Methylation.” Edited by Barbara Bardoni. PLOS ONE 11 (3): e0150427. doi:10.1371/journal.pone.0150427.
Given points $v_1,\dots,v_n\in\mathbb Z^n$ in the codimension-$1$ hyperplane $x_1+\dots+x_n=t$ with $0\leq x_{i}$, and a cyclic shift permutation $\sigma$, such that

1. $v_1,\dots,v_n$, when written as the columns of a matrix, have rank $n$, and every row and column sum is the same;
2. if $v_i=(x_{i1},\dots,x_{in})$ and $v_j=(x_{j1},\dots,x_{jn})$, then $v_j=(x_{i\sigma(1)},\dots,x_{i\sigma(n)})$;
3. if $v_i=(x_{i1},\dots,x_{in})$, then $\forall i,r,r'\in\{1,\dots,n\}$ with $r\neq r'$ we have $x_{ir}\neq x_{ir'}$;

is it possible to characterize the half-space representation of the convex hull of the points efficiently, and maybe explicitly?

For example, the standard simplex with $n$ points in the codimension-$1$ hyperplane $x_1+\dots+x_n=1$ gives a full rank matrix when its corner points are written as a matrix, and it satisfies properties 1. and 2. above.
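For small instances, the half-space representation can at least be computed numerically rather than characterized explicitly. A sketch with SciPy's qhull bindings for $n=3$, $t=3$ (the point set is a made-up instance of the conditions, not a general construction): since the points lie in the hyperplane $x_1+x_2+x_3=t$, one coordinate is projected out before calling qhull.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Cyclic shifts of v = (0, 1, 2): three points (rows) in x1+x2+x3 = 3,
# satisfying the properties above (full rank, equal row/column sums,
# pairwise-distinct entries within each point).
V = np.array([[0, 1, 2],
              [2, 0, 1],
              [1, 2, 0]])

# The points are affinely 2-dimensional, so drop the last coordinate
# (it is determined by x3 = t - x1 - x2) before computing the hull.
hull = ConvexHull(V[:, :2])

# hull.equations rows are [a1, a2, b] with a1*x1 + a2*x2 + b <= 0 on the hull
for eq in hull.equations:
    print(np.round(eq, 3))
```

Each printed row is one half-space of the projected polytope; lifting it back to the hyperplane just substitutes $x_3 = t - x_1 - x_2$.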