Expectations and conditional expectations are either random variables or fixed points. It depends upon your choice of axioms. Unfortunately, I have never found a good book that describes both sides fairly. On the Bayesian side, I would suggest the very polemical "Probability Theory: The Logic of Science" by E.T. Jaynes. On the null hypothesis side, I would suggest the undergraduate text "John E. Freund's Mathematical Statistics." The late John Freund wrote my introductory undergraduate text back when dinosaurs roamed the Earth. The undergraduate text for "Mathematical Statistics," as opposed to "Elementary Statistics," provides a good grounding in null hypothesis methods. Most graduate students are trained in null hypothesis methodologies. In that axiomatic framework, parameters are fixed points and data are random. Because the null hypothesis fixes the parameter space, all randomness is due to chance alone. The probability test is of a result as extreme as, or more extreme than, the observed result. Randomness is chance. Bayesian methods are orthogonal to null hypothesis methods. Parameters are random and data are fixed. After all, you saw the data; there is no uncertainty about it. It is fixed. Because the data fix the sample space, all randomness is due to uncertainty about the location of the parameter. The probability test is about the truth of a hypothesis given the observed data. Randomness is defined as uncertainty. You will need to be mentally careful when reading books on either side, as they often define the same words with fundamentally different meanings. The simple example is the definition of an expectation. The expectation under null hypothesis thinking is $$E(\tilde{x})=\int_{\tilde{x}\in\chi}\tilde{x}p(\tilde{x})\,\mathrm{d}\tilde{x},$$ while the expectation under Bayesian thinking is $$E(\theta)=\int_{\theta\in\Theta}\theta p(\theta)\,\mathrm{d}\theta.$$ Using Keynes's notation, a Bayesian test of a hypothesis is $\Pr(\theta|X)$, while a Frequentist test is $\Pr(X|\theta)$.
Some terms, such as conditional probability, don't resemble their meaning in the other framework. All Bayesian inference is conditional probability. A conditional expectation in the Bayesian framework is a posterior expectation $E(\theta|X)$ and is a random variable. An unconditional expectation would be a prior expectation $E(\theta)$. On the null hypothesis side, it is a bit more complicated. An unconditional expectation is just the expectation of the distribution involved, $E(P_\theta(X))$. Conditional expectation is more complex: it depends on whether you are conditioning on a stochastic or a non-stochastic variable. The added richness of the discussion comes from the differing role the sample space plays. On the Bayesian side, all data is fixed and the remainder of the sample space is discarded as irrelevant. As for links, on the null hypothesis side consider reading Deborah Mayo, whose area is the philosophy of science; her website is https://errorstatistics.com/ Alternatively, you could read Cosma Shalizi, a statistician, at http://www.stat.cmu.edu/~cshalizi/ On the Bayesian side, consider Andrew Gelman, a statistician, at http://www.stat.columbia.edu/~gelman/ or the psychologist Eric-Jan Wagenmakers at https://www.ejwagenmakers.com/ There is also a good post on an existing Stack Exchange built around the idea of an interval. The post constructs Frequentist confidence intervals for a data set about cookies and the corresponding Bayesian credible intervals (also called credible sets). It also gives a good idea of how the two groups think about conditioning. Since the intervals do not match and do not have the same properties, it gives a way to think about the consequences of considering one thing random versus another. It is at https://stats.stackexchange.com/questions/2272/whats-the-difference-between-a-confidence-interval-and-a-credible-interval
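To make the contrast concrete, here is a minimal sketch in Python (the 7-successes-in-10-trials data and the flat prior are invented for illustration, not from the answer above). The frequentist calculation fixes $\theta$ and asks about tail probabilities of the data; the Bayesian calculation fixes the data and asks about $\theta$:

```python
from math import comb

# Hypothetical data: 7 successes in 10 trials.
n, k = 10, 7

# Frequentist: fix theta = 0.5 under H0 and compute the probability of a
# result as extreme or more extreme than the observed one, Pr(X >= 7 | theta).
p_value = sum(comb(n, j) * 0.5**n for j in range(k, n + 1))

# Bayesian: fix the data. With a flat Beta(1,1) prior the posterior is
# Beta(k+1, n-k+1); compute Pr(theta > 0.5 | X) by midpoint integration.
def post_density(t):
    return t**k * (1 - t)**(n - k)

steps = 100_000
grid = [(i + 0.5) / steps for i in range(steps)]
norm = sum(post_density(t) for t in grid) / steps
tail = sum(post_density(t) for t in grid if t > 0.5) / steps
prob_theta_above_half = tail / norm

print(f"Pr(X >= {k} | theta=0.5) = {p_value:.4f}")          # 0.1719
print(f"Pr(theta > 0.5 | X)      = {prob_theta_above_half:.4f}")  # ~0.8867
```

The two numbers answer different questions, which is exactly the point: $\Pr(X|\theta)$ and $\Pr(\theta|X)$ are not interchangeable.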
In Chapter 3 we illustrated how intraindividual covariation is examined within the multilevel modeling framework. We now build on that foundation in various ways. In particular, this tutorial demonstrates how the generalized multilevel model is used when the outcome variable is binary (or a Poisson count). Using link functions, the generalized model provides the opportunity to articulate and test hypotheses about another set of dynamic characteristics using experience sampling data. Much of the push into "ecological momentary assessment", one type of experience sampling study design, came from the health sciences. Often, the variables of interest are behaviors or states that are categorical - often binary (e.g., sick vs. not-sick, adherence vs. non-adherence) or counts (e.g., number of drinks, number of cigarettes). When these variables are the outcome variables, we need to use a generalized linear multilevel model. After some explanation of the modeling framework, we will work through a few examples. The first example follows directly from Bolger & Laurenceau (2013) Chapter 6. The second example is drawn from the AMIB data. In this session we cover ...
A. Introduction to the Generalized Multilevel Model
B. Modeling a Binary Outcome - Example 1: Categorical Outcomes Dataset from Bolger & Laurenceau (2013) Chapter 6
C. Modeling a Count Outcome - Example 2: Poisson Outcome from the AMIB Dataset
*Note that we switch from the nlme package to the lme4 package. The functions work similarly, but the coding is slightly different.
library(psych)
library(ggplot2)
library(lme4)
library(gtools)
library(plyr)
library(nlme)
library(MASS)
library(glmmTMB)

The basic linear multilevel model is written as \[y_{it} = \beta_{0i} + \beta_{1i}x_{it} + e_{it}\] where \(\beta_{1i}\) is individual i's level of the dynamic characteristic of interest, and where \[\beta_{0i} = \gamma_{00} + \gamma_{01}z_{i} + u_{0i}\] \[\beta_{1i} = \gamma_{10} + \gamma_{11}z_{i} + u_{1i}\] with \[e_{it} \sim N(0,\mathbf{R})\] and \[\mathbf{U}_{i} \sim N(0,\mathbf{G})\] where \[\mathbf{R} = \mathbf{I}\sigma^{2}_{e},\] \(\mathbf{I}\) is the identity matrix (diagonal matrix of 1s), \(\sigma^{2}_{e}\) is the residual (within-person) variance, and \[\mathbf{G} = \left[\begin{array} {rr} \sigma^{2}_{u0} & \sigma_{u0u1} \\ \sigma_{u1u0} & \sigma^{2}_{u1} \end{array}\right]\] The generalized linear multilevel model is an extension of the linear multilevel model that allows response variables from distributions other than the Gaussian (see also http://www.ats.ucla.edu/stat/mult_pkg/glmm.htm). To do this, we introduce a link function. Let the linear predictor (the right-hand side of the equation) be called \(\mathbf{\eta}\). \(\mathbf{\eta}\) is the combination of the fixed and random effects, excluding the residuals. Writing the above model in a general way common in statistics, \[\mathbf{\eta = X\beta + Z\gamma}\] We introduce a generic link function, \(g(\cdot)\), that relates the outcome \(\mathbf{y}\) to the linear predictor \(\mathbf{\eta}\):
\(g(\cdot)\) = link function
\(h(\cdot)=g^{-1}(\cdot)\) = inverse link function
We use these link functions to formalize that the conditional expectation of \(\mathbf{y}\) (conditional because it is the expected value depending on the level of the predictors) is \[g(E(\mathbf{y}))=\mathbf{\eta}\] So, basically, the link function "transforms" the outcome variable \(\mathbf{y}\) into a normally distributed outcome.
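To make the link-function idea concrete, here is a minimal sketch (in Python, purely for illustration; the tutorial's own code is R) of the logit link \(g(p) = \log_e\left(\frac{p}{1-p}\right)\) and its inverse \(h(\eta) = \frac{1}{1+e^{-\eta}}\):

```python
import math

def logit(p):
    """Link function g: maps a probability in (0,1) to the whole real line."""
    return math.log(p / (1 - p))

def inv_logit(eta):
    """Inverse link h = g^{-1}: maps a linear predictor back to a probability."""
    return 1 / (1 + math.exp(-eta))

# The link "transforms" the bounded outcome scale onto the unbounded scale
# of the linear predictor eta; the inverse link maps back.
eta = logit(0.8)
print(eta)                 # the linear-predictor (log-odds) value for p = 0.8
print(inv_logit(eta))      # round trip h(g(p)) recovers p = 0.8
```

This round trip is exactly what lets the model be linear in \(\eta\) while the outcome stays a probability.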
We could also model the expectation of \(\mathbf{y}\) as \[E(\mathbf{y})=h(\mathbf{\eta})=\mathbf{\mu}\] with \(\mathbf{y}\) itself equal to \[\mathbf{y}=h(\mathbf{\eta}) + \mathbf{e}\] For the first example, we use data from the book. As explained in section 6.1 (pp. 106-107), the dataset is from a daily diary study of couples during what was for them a typical 4-week period. Each day both partners in each couple provided reports in the morning and in the evening. For this illustration we use a categorical outcome. In particular, following the example in the book, we examine evening reports of conflict from the view of the male partner, testing whether the anger/irritability of the female partner at the beginning of a given day increased the risk of conflict later in the day. The data can be downloaded from the book's website ... http://www.intensivelongitudinal.com/ch6/ch6R.zip and then needs to be unzipped. We then read in the data from the Downloads folder ...

#Setting the working directory
#setwd("~/Downloads/Ch6R") #Person 1 Computer
#setwd("~/Desktop/Fall_2017") #Person 2 Computer
#set filepath for data file
#filepath <- "https://quantdev.ssri.psu.edu/sites/qdev/files/AMIBbrief_raw_daily1.csv"
#read in the .csv file using the url() function
daily <- read.csv(file="categorical.csv", header=TRUE)

Let's have a quick look at the data file and the descriptives.
#data structure
head(daily,10)

## id time time7c pconf lpconf lpconfc amang amangc
## 1 1 2 -1.8756988 0 0 -0.1568773 0.4166667 -0.0697026
## 2 1 3 -1.7275506 0 0 -0.1568773 0.0000000 -0.4863693
## 3 1 4 -1.5794025 0 0 -0.1568773 0.0000000 -0.4863693
## 4 1 5 -1.4312543 0 0 -0.1568773 0.0000000 -0.4863693
## 5 1 6 -1.2831062 1 0 -0.1568773 0.0000000 -0.4863693
## 6 1 7 -1.1349580 0 1 0.8431227 0.0000000 -0.4863693
## 7 1 8 -0.9868099 0 0 -0.1568773 0.0000000 -0.4863693
## 8 1 9 -0.8386617 0 0 -0.1568773 0.0000000 -0.4863693
## 9 1 10 -0.6905136 0 0 -0.1568773 0.0000000 -0.4863693
## 10 1 11 -0.5423654 0 0 -0.1568773 0.0000000 -0.4863693
## amangcb amangcw
## 1 -0.4709372 0.4012346
## 2 -0.4709372 -0.0154321
## 3 -0.4709372 -0.0154321
## 4 -0.4709372 -0.0154321
## 5 -0.4709372 -0.0154321
## 6 -0.4709372 -0.0154321
## 7 -0.4709372 -0.0154321
## 8 -0.4709372 -0.0154321
## 9 -0.4709372 -0.0154321
## 10 -0.4709372 -0.0154321

#unique ids
unique(daily$id)

## [1] 1 3 5 6 9 10 11 12 13 14 15 16 17 18 19 20 21
## [18] 22 23 24 26 27 30 31 32 33 34 36 37 38 42 43 44 45
## [35] 46 48 55 57 59 60 65 69 70 71 72 74 75 76 78 79 80
## [52] 81 83 87 89 91 92 93 95 100 102

#unique occasions
unique(daily$time)

## [1] 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
## [24] 25 26 27 28

#descriptives
describe(daily)

## vars n mean sd median trimmed mad min max range
## id 1 1345 45.61 30.19 38.00 44.44 37.06 1.00 102.00 101.00
## time 2 1345 14.66 7.76 14.00 14.60 10.38 2.00 28.00 26.00
## time7c 3 1345 0.00 1.15 -0.10 -0.01 1.54 -1.88 1.98 3.85
## pconf 4 1345 0.14 0.35 0.00 0.06 0.00 0.00 1.00 1.00
## lpconf 5 1345 0.16 0.36 0.00 0.07 0.00 0.00 1.00 1.00
## lpconfc 6 1345 0.00 0.36 -0.16 -0.09 0.00 -0.16 0.84 1.00
## amang 7 1345 0.49 1.11 0.00 0.22 0.00 0.00 10.00 10.00
## amangc 8 1345 0.00 1.11 -0.49 -0.27 0.00 -0.49 9.51 10.00
## amangcb 9 1345 0.00 0.49 -0.18 -0.10 0.27 -0.47 1.61 2.08
## amangcw 10 1345 0.00 1.00 -0.17 -0.15 0.27 -2.10 9.40 11.50
## skew kurtosis se
## id 0.28 -1.29 0.82
## time 0.06 -1.18 0.21
## time7c 0.06 -1.18 0.03
## pconf 2.02 2.09 0.01
## lpconf 1.88 1.55 0.01
## lpconfc 1.88 1.55 0.01
## amang 3.96 21.05 0.03
## amangc 3.96 21.05 0.03
## amangcb 1.72 2.32 0.01
## amangcw 3.78 22.51 0.03

The variables of interest are:
id = couple ID number
time = diary day
pconf = male partner's evening report of the occurrence of conflict that day
amang = female partner's morning report of anger/irritability

Let's look at the data for each couple (a version of Figure 6.2).

#faceted plot
ggplot(data=daily, aes(x=amang, y=pconf)) +
  geom_point() + #(position=position_jitter(h=.025)) +
  xlab("Female Partner Morning Anger") +
  ylab("Male Partner Evening Report of Conflict") +
  scale_x_continuous(limits=c(0,10), breaks=c(0,5,10)) +
  scale_y_continuous(limits=c(0,1), breaks=c(0,1)) +
  facet_wrap( ~ id)

OK - let's briefly consider the substantive framework of our inquiry. We define a person-level dynamic characteristic, anger sensitivity, as the extent to which anger influences risk of conflict. Anger sensitivity is the intraindividual covariation of pconf and amang. Formally, \[\log\left(\frac{p(y_{it})}{1-p(y_{it})}\right) = \beta_{0i} + \beta_{1i}x_{it}\] where the outcome (left side of the equation) has been "transformed" using the logistic link function, \(g(\cdot) = \log_{e}\left(\frac{p}{1-p}\right)\), and where \(\beta_{1i}\) is individual i's level of anger sensitivity. As usual, we split the predictor into "trait" (between-person differences) and "state" (within-person deviations) components. In these data, this has already been done. Specifically, the variable amang has been split into two variables: amangcb is the sample-mean-centered between-person component, and amangcw is the person-centered within-person component. Covariates include lpconfc, a centered variable that indicates yesterday's conflict report, and time7c, a centered variable that indicates the passage of time in weeks.
The lme4 package contains tools for fitting generalized linear mixed effects models (GLMMs), in particular the glmer() function. Note that lme() (from the nlme package) and glmer() have different coding setups.

Usage:
glmer(formula, data = NULL, family = gaussian, control = glmerControl(), start = NULL, verbose = 0L, nAGQ = 1L, subset, weights, na.action, offset, contrasts = NULL, mustart, etastart, devFunOnly = FALSE, ...)

formula = a two-sided linear formula object describing both the fixed-effects and random-effects parts of the model, with the response on the left of a ~ operator and the terms, separated by + operators, on the right. Random-effects terms are distinguished by vertical bars ("|") separating expressions for design matrices from grouping factors.

For illustration of the code, let's fit the usual unconditional means model ...

um.fit <- glmer(formula = pconf ~ 1 + (1|id),
                data=daily,
                family="binomial",
                na.action=na.exclude)
summary(um.fit)

## Generalized linear mixed model fit by maximum likelihood (Laplace
## Approximation) [glmerMod]
## Family: binomial ( logit )
## Formula: pconf ~ 1 + (1 | id)
## Data: daily
##
## AIC BIC logLik deviance df.resid
## 1098.7 1109.1 -547.3 1094.7 1343
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -0.6792 -0.4056 -0.3746 -0.3175 3.1767
##
## Random effects:
## Groups Name Variance Std.Dev.
## id (Intercept) 0.3132 0.5596
## Number of obs: 1345, groups: id, 61
##
## Fixed effects:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.8625 0.1117 -16.67 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The output is similar to what we are used to - although, note that there is no residual term (as per the equation above). OK - let's add the predictor amangcw to examine anger sensitivity.
#simple model
model1.fit <- glmer(formula = pconf ~ 1 + amangcw + (1|id),
                    data=daily,
                    family="binomial",
                    na.action=na.exclude)
summary(model1.fit)

## Generalized linear mixed model fit by maximum likelihood (Laplace
## Approximation) [glmerMod]
## Family: binomial ( logit )
## Formula: pconf ~ 1 + amangcw + (1 | id)
## Data: daily
##
## AIC BIC logLik deviance df.resid
## 1090.6 1106.2 -542.3 1084.6 1342
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -0.8490 -0.4081 -0.3614 -0.3040 4.0245
##
## Random effects:
## Groups Name Variance Std.Dev.
## id (Intercept) 0.3244 0.5695
## Number of obs: 1345, groups: id, 61
##
## Fixed effects:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.88160 0.11345 -16.586 < 2e-16 ***
## amangcw 0.21986 0.06605 3.329 0.000872 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr)
## amangcw -0.099

#other useful outputs:
#confint(fit) # 95% CI for the coefficients
#exp(coef(fit)) # exponentiated coefficients
#exp(confint(fit, method="Wald")) # 95% CI for exponentiated coefficients
#predict(fit, type="response") # predicted values
#residuals(fit, type="deviance") # residuals

Interpretation occurs in the usual way for logistic models.

# Odds Ratio Estimates
exp(fixef(model1.fit))

## (Intercept) amangcw
## 0.1523457 1.2459069

#with confidence intervals
exp(cbind(OR = fixef(model1.fit), confint(model1.fit, method="Wald")[-1,]))

## OR 2.5 % 97.5 %
## (Intercept) 0.1523457 0.1219735 0.1902808
## amangcw 1.2459069 1.0946212 1.4181015

Let's look at predicted scores. This can be done in two ways:
1. We can predict on the link scale (for us, log odds), or
2. We can predict on the response scale (for us, probability).

daily$pred.m1logit <- predict(model1.fit, type="link")
daily$pred.m1prob <- predict(model1.fit, type="response")

Plotting the predicted scores.
#Logit predictions
ggplot(data=daily, aes(x=amangcw, y=pred.m1logit, group=id)) +
  geom_line() +
  xlab("Female Morning Anger (centered)") +
  ylab("Predicted Log Odds")
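As a sanity check on the interpretation, the fixed-effect estimates printed in the glmer() summary can be converted by hand. A quick Python sketch (the coefficient values are copied from the output above; everything else is illustrative):

```python
import math

b0 = -1.88160   # fixed-effect intercept: log odds of conflict at amangcw = 0
b1 = 0.21986    # fixed-effect slope for amangcw, in log-odds units

# Exponentiating a log-odds coefficient gives an odds ratio.
or_amangcw = math.exp(b1)
print(round(or_amangcw, 4))   # matches exp(fixef(model1.fit)): 1.2459

# The inverse logit of the intercept gives the model-implied probability of
# an evening conflict report at a couple's average within-person anger.
p0 = 1 / (1 + math.exp(-b0))
print(round(p0, 3))
```

So a one-unit within-person increase in morning anger multiplies the odds of an evening conflict report by about 1.25.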
Let's suppose we have two variables $(\alpha, \beta)$ and two functions $k_1, k_2$ that can be defined in terms of matrix relations: $$ k_1 = \left| \xi^T \mathcal{F}^T \mathcal{F} \xi \right| $$ $$ k_2 = \left| \xi^T \mathcal{G}^T \mathcal{G} \xi \right| $$ where: $$ \mathcal{F}= \left( \begin{matrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{matrix} \right) $$ $$ \mathcal{G}= \left( \begin{matrix} d_{11} & d_{12} \\ d_{21} & d_{22} \end{matrix} \right) $$ $$ \xi = \left( \begin{matrix} \alpha \\ \beta \end{matrix} \right) $$ Suppose that the determinants of $\mathcal{F}$ and $\mathcal{G}$ are non-zero. It's possible to invert the relation such that: $$ \alpha = f(k_1, k_2) $$ $$ \beta = g(k_1, k_2) $$ However, the multiple solutions returned by the CAS (Mathematica) are extremely complicated. Is there a compact way to express the functions $f$ and $g$, supposing we are only interested in positive values for $k_1$ and $k_2$?
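One route to a compact inversion (a sketch, not part of the question; it assumes $\alpha \neq 0$ and that both quadratic forms are positive so the absolute values can be dropped): substitute $t = \beta/\alpha$, so $k_1/k_2$ becomes a ratio of two quadratics in $t$ and reduces to a single quadratic equation; $\alpha$ is then recovered from either form. In Python, with hypothetical numeric matrices chosen purely for illustration:

```python
import math

def recover_alpha_beta(F, G, k1, k2):
    """Invert k1 = xi^T F^T F xi, k2 = xi^T G^T G xi for xi = (alpha, beta).
    Sets t = beta/alpha so k1/k2 becomes a ratio of quadratics in t, which
    reduces to one quadratic equation. Returns candidate (alpha, beta) pairs.
    Assumes alpha != 0 (the alpha = 0 branch would be handled separately)."""
    def gram(M):  # entries (a11, a12, a22) of the symmetric matrix M^T M
        (m11, m12), (m21, m22) = M
        return (m11*m11 + m21*m21, m11*m12 + m21*m22, m12*m12 + m22*m22)
    a11, a12, a22 = gram(F)
    b11, b12, b22 = gram(G)
    r = k1 / k2
    # (a22 - r b22) t^2 + 2 (a12 - r b12) t + (a11 - r b11) = 0
    A_, B_, C_ = a22 - r*b22, 2*(a12 - r*b12), a11 - r*b11
    disc = B_*B_ - 4*A_*C_
    if disc < 0:
        return []
    ts = [(-B_ + s*math.sqrt(disc)) / (2*A_) for s in (1, -1)] if A_ != 0 else [-C_/B_]
    sols = []
    for t in ts:
        q = a11 + 2*a12*t + a22*t*t      # equals k1 / alpha^2
        if q <= 0:
            continue
        for sign in (1, -1):
            alpha = sign * math.sqrt(k1 / q)
            sols.append((alpha, t * alpha))
    return sols

# Hypothetical example matrices:
F = ((1.0, 0.0), (0.0, 2.0))   # k1 = alpha^2 + 4 beta^2
G = ((1.0, 1.0), (0.0, 1.0))   # k2 = alpha^2 + 2 alpha beta + 2 beta^2
sols = recover_alpha_beta(F, G, k1=5.0, k2=5.0)
print(sols)   # includes (alpha, beta) = (1, 1)
```

The "compactness" comes from solving one quadratic in the ratio $t$ rather than the full coupled system; the sign ambiguities in $\alpha$ and the two roots in $t$ account for the multiple CAS solutions.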
Given an SDE for an underlying: $$dS(t) = \mu(S,t)dt+\sigma(S,t)dW(t)$$ the SDE for the value of the option $V=V(S,t)$ is given via Ito's lemma as: $$dV = V_tdt+V_S\mu(S,t)dt+\frac{1}{2}V_{SS}\sigma^2(S,t)dt+V_S\sigma(S,t)dW(t)$$ It seems that this would result in an SDE containing $S(t)$. How does one then obtain an SDE for the option value so that it can be simulated directly, without simulating the underlying, i.e. something like $$dV(t) = m(V,t)dt+s(V,t)dW(t)?$$
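In general the Ito SDE above does involve $S(t)$, but there are toy cases where the dynamics close in $V$ alone, which makes the question concrete. A Python sketch under hypothetical choices (not from the question): geometric Brownian motion $\mu(S,t)=\mu S$, $\sigma(S,t)=\sigma S$, and $V = S^2$, for which Ito's lemma gives $dV = (2\mu+\sigma^2)V\,dt + 2\sigma V\,dW$, an autonomous SDE in $V$:

```python
import math, random

# Hypothetical parameters for illustration.
mu, sigma = 0.05, 0.2
S0 = 1.0
T, n = 1.0, 20_000
dt = T / n

# Euler-Maruyama with the SAME Brownian increments for both equations:
#   dS = mu*S dt + sigma*S dW          (the underlying)
#   dV = (2*mu + sigma^2)*V dt + 2*sigma*V dW   (V = S^2, via Ito's lemma)
random.seed(0)
S, V = S0, S0**2
for _ in range(n):
    dW = random.gauss(0.0, math.sqrt(dt))
    S += mu * S * dt + sigma * S * dW
    V += (2*mu + sigma**2) * V * dt + 2*sigma * V * dW

print(S**2, V)   # the two paths agree up to Euler discretization error
```

For a general payoff, no such closed autonomous form exists, which is why one normally simulates $S$ and evaluates $V(S,t)$ along the path.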
How to type this expression, which has \min on top and \forall at the bottom?

The bottom part is a TeX subscript (like \sum):

\min_{\forall s \in S_j} q_k(s)

Compile the following code sequence to understand the difference between textstyle and displaystyle:

\documentclass{article}
\def\sample{\min\nolimits_{\forall s \in S_j} q_k(s)}
\begin{document}
\noindent
Text-Textstyle: \(\sample\)\\
Text-Displaystyle: \(\displaystyle\sample\)
\[ \textrm{Display-Textstyle: }\textstyle\sample \]
\[ \textrm{Display-Displaystyle: }\sample \]
\end{document}

You can also use \nolimits and \limits to force textstyle (side) limits and displaystyle (underneath) limits respectively:

\min\limits_{\forall s \in S_j} q_k(s)
Now showing items 1-10 of 26

Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...

Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...

Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...

Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...

Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...

Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...

$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...

Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...

Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...

Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Let $\Delta$ be a simplicial complex (or more generally, a regular CW complex). Let $\mathcal{M}$ be a Morse matching (or equivalently, a discrete Morse function) on $\Delta$. By Forman's theorems, $\Delta$ is homotopy equivalent to a CW-complex whose cells are $\mathcal{M}$-critical (=unmatched) simplices. Its homology is computed from the chain complex, in which an entry of the $k$-th boundary matrix equals the sum of signs of all zig-zag paths (=directed paths in the Hasse diagram of $\Delta$ with the edges from $\mathcal{M}$ reversed) between a critical $k$-cell and critical $k\!-\!1$-cell. Do zig-zag paths determine how the critical cells are glued onto each other? A CW-complex $X$ is determined, up to homotopy, by the number of cells in each dimension, and the homotopy class of each gluing map $S^k \!\longrightarrow\! X^{(k)}$. Q1: If all critical cells are in dimensions $0,k,k\!+\!1$, then gluing maps are determined by equivalence classes in $\pi_k(S^k)\cong\mathbb{Z}$. Does the sum of the signs of all zig-zag paths from a $k\!+\!1$-cell to a $k$-cell equal the degree of the gluing map? I suspect this to be true. More intriguingly: Q2: If all critical cells are in dimensions $0,k,k\!+\!2$ with $k\!\geq\!3$, then gluing maps are determined by equivalence classes in $\pi_{k+1}(S^k)\cong\mathbb{Z}_2$. Does the sum of the signs of all paths (of a new type) from a $k\!+\!2$-cell to a $k$-cell determine the gluing map? In general, the homotopy groups of spheres are not all of the form $\mathbb{Z}_m$, so probably summing the signs of zig-zag paths is not sufficient, i.e. some information is lost when applying the matching. For instance, if critical cells are of dimension $1,4,8$, then gluing maps are determined by equivalence classes in $\pi_7(S^4)\cong\mathbb{Z}\!\oplus\!\mathbb{Z}_{12}$. 
Furthermore, the gluing maps of $k\!+\!1$-cells go into the $k$-skeleton, so they are determined by their representatives in $\pi_k(X^{(k)})$; but according to Mihai Damian, On the higher homotopy groups of a finite CW-complex, Topology Appl. 149 (2005), no. 1-3, 273--284, a homotopy group of a finite CW-complex may be infinitely generated! I have an example of $\Delta$ and $\mathcal{M}$ that produce $1$ critical $0$-simplex, $1$ critical $5$-simplex, and $10$ critical $7$-simplices. Can I conclude that $\Delta\simeq S^5\vee\bigvee_{\!10}S^7$ by inspecting certain (???) paths in the Hasse diagram?
Why is the work done on a charge calculated from infinity to a point? Why not from one particular point to another? Consider the form of the potential energy between two point charges in the case that I use a reference distance $r_0$ as the zero (written here in SI units). $$ U_{r_0} = \frac{q_1 \,q_2}{4 \pi \epsilon_0} \left( \frac{1}{r} - \frac{1}{r_0} \right) \;.$$ This is quite general, but it will get to be very messy to write down and manipulate very quickly indeed. It also means that the sign of the energy depends on the relative sign of the charges and the relative sizes of $r$ and $r_0$. Now, the special case of taking $r_0$ as arbitrarily distant gets us the familiar form \begin{align*} U_\infty &= \lim_{r_0 \to \infty} U_{r_0}\\ &= \frac{q_1 \,q_2}{(4 \pi \epsilon_0) r} \;, \end{align*} which is algebraically simpler and the sign of which can be known at any distance just from the relative sign of the charges. The conventional form is simply easier to use in the majority of cases. But it gets better, because the same kind of consideration applies to Newtonian gravitation, and the convention of zero energy at infinite remove means that the total energy of bound bodies is negative while that of free bodies is positive (with zero the parabolic edge case). It really is a natural choice after you've looked at the ways you're going to be using the quantity. See, infinity is the place that is considered to have no charges, and is at zero potential all the time. So, the potential at a point in an electric field is defined as the work done in bringing a unit positive charge from infinite distance to that point. Actually, we are measuring the potential difference between infinity and the required point. But we've named it the potential of the point, as it is with reference point infinity. Basically, infinity is considered a reference place that is fixed.
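The limit above is easy to check numerically. A small Python sketch (the charge values and separation are arbitrary, chosen only for illustration):

```python
# Numeric check that U_{r0} -> U_inf as the reference distance r0 grows.
k = 8.9875517923e9       # Coulomb constant 1/(4*pi*eps0), SI units
q1, q2 = 1e-6, -2e-6     # two hypothetical point charges (C)
r = 0.05                 # separation of interest (m)

def U_ref(r, r0):
    """Potential energy with the zero chosen at reference distance r0."""
    return k * q1 * q2 * (1/r - 1/r0)

U_inf = k * q1 * q2 / r  # conventional choice: zero at infinity

for r0 in (1.0, 1e3, 1e6):
    print(r0, U_ref(r, r0) - U_inf)   # difference shrinks as r0 grows
```

The difference between the two conventions is exactly the constant $-k q_1 q_2 / r_0$, which is why moving the reference changes no physics, only bookkeeping.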
If other points are considered, then one has to define the other point first, and the potential at that point, in order to find the potential at a new point. N.B.: In practical applications, we can measure potential differences, not strictly the potential at a point.
Closure (闭包) $\text{cl}(C)$ of a set is the collection of its limit points. Projection of a vector on a closed set is the set of vectors in the set closest to the vector: $P_C(x) = \arg\min_{y \in C} \| y - x \|$. Distance of a vector to a set is the smallest distance from it to points in the set: $d_C(x) = \min_{y \in C} \| y - x \|$.

Convex set in a vector space is a set that includes the line segment between any two of its elements [@Bertsekas2003, 1.2.1]: $A = \cup_{x, y \in A} \{x + \lambda (y - x) : \lambda \in [0,1]\}$. Dual definition of closed convex set: a set is closed and convex if it is the intersection of all closed half-spaces containing the set. An extreme point of a (nonempty) convex set is a vector that does not lie strictly between any two other vectors of the set. An extreme direction of a (nonempty) convex set is a direction of recession that cannot be written as a nonnegative combination of two other, non-parallel directions of recession. Convex hull (凸包) $\text{conv}(C)$ is the smallest (intersection of all) convex set containing a set. Convex combination is a nonnegative weighted average of vectors. In comparison, a nonnegative combination is a linear combination with nonnegative coefficients. Convexity is closely related to expectation: if $P(X \in C) = 1$ and $C \subseteq \mathbb{R}^n$ is convex, then $\mathbf{E}X \in C$.

Affine set is a shifted subspace. Affine hull (仿射包) $\text{aff}(X)$ is the smallest (intersection of all) affine set containing a set. The affine dimension of a set is the dimension of its affine hull. A set of vectors is affinely independent if, picking one as the origin, the rest are linearly independent. The number of affinely independent vectors in a set cannot be more than one plus the affine dimension of these vectors, if finite. The relative interior $\text{ri}(C)$ of a set is the interior of the set relative to its affine hull.

A set is a cone if it is scale invariant. A cone is pointed if it contains no line. A cone is solid if it has nonempty interior. A proper cone (真锥) is a pointed, closed, solid, convex cone.
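As a concrete instance of the projection and distance definitions above, here is a Python sketch projecting onto an axis-aligned box (the box is a hypothetical choice, picked because its Euclidean projection has the simple closed form of coordinatewise clipping):

```python
import math

def project_onto_box(x, lo, hi):
    """Euclidean projection P_C(x) onto the box C = [lo_1,hi_1] x ... x [lo_n,hi_n].
    For a box, the minimizer of ||y - x|| over C is coordinatewise clipping."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def distance_to_box(x, lo, hi):
    """d_C(x) = ||P_C(x) - x||: smallest distance from x to a point of C."""
    return math.dist(project_onto_box(x, lo, hi), x)

x = [2.0, -3.0, 0.5]
lo, hi = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]
print(project_onto_box(x, lo, hi))   # [1.0, 0.0, 0.5]
print(distance_to_box(x, lo, hi))    # sqrt(1^2 + 3^2 + 0^2) = sqrt(10)
```

Because the box is closed and convex, the projection is a single point rather than a set, matching the projection theorem cited later.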
Generated cone $\text{cone}(C)$, the cone generated by a set, is the nonnegative span of it. A cone is finitely generated if it is generated by a finite set of vectors. A vector is a direction of recession of a convex set if arbitrary shifts of the set along this vector are within the set. The recession cone $R_C$ of a set is the cone generated by its recession directions. The lineality space of a set is the set of directions of recession whose opposites are also directions of recession: $L_C = R_C \cap -R_C$.

A generalized inequality is the partial ordering on $\mathbb{R}^n$ associated with a proper cone $K$. Vector inequality is a special case of generalized inequality where the associated cone is the nonnegative orthant. $$\begin{align} x \preceq_K y,\ y \succeq_K x &\Leftrightarrow y - x \in K \\ x \prec_K y,\ y \succ_K x &\Leftrightarrow y - x \in \text{int}(K) \end{align}$$

The minimum element (最小元) of a multidimensional set w.r.t. a generalized inequality $\preceq_K$ is an element no greater than any other element of the set in this partial ordering: $S \subseteq \{x\} + K$. Similarly, the maximum element of a multidimensional set w.r.t. a generalized inequality $\succeq_K$ is an element no less than any other element of the set in this partial ordering: $S \subseteq \{x\} - K$.

A minimal element (极小元) of a multidimensional set w.r.t. a generalized inequality $\preceq_K$ is an element such that no other element of the set is less than it in this partial ordering: $(\{x\} - K) \cap S = \{x\}$. Similarly, a maximal element of a multidimensional set w.r.t. a generalized inequality $\succeq_K$ is an element such that no other element of the set is greater than it: $(\{x\} + K) \cap S = \{x\}$.

The polar cone of a set is its shade, which is a cone: $C^* = \{ x \mid \langle x, y \rangle \le 0, \forall y \in C \}$.
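The minimum-vs-minimal distinction can be seen in a small sketch where $K$ is the nonnegative orthant, so $\preceq_K$ is componentwise $\le$ (the three-point set is invented for illustration):

```python
def leq(x, y):
    """x <=_K y for K = the nonnegative orthant: componentwise comparison."""
    return all(a <= b for a, b in zip(x, y))

S = [(1.0, 3.0), (2.0, 1.0), (3.0, 2.0)]

# Minimum element: below every other element of S. None exists here,
# because (1,3) and (2,1) are incomparable in this partial ordering.
minimum = [x for x in S if all(leq(x, y) for y in S)]

# Minimal elements: no other element of S lies below them.
minimal = [x for x in S if not any(leq(y, x) and y != x for y in S)]

print(minimum)   # []
print(minimal)   # [(1.0, 3.0), (2.0, 1.0)]
```

A minimum element is unique when it exists; minimal elements generally are not, which is why the two dual characterizations below differ.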
Dual cone of a set is the negative polar cone: $\hat{C} = -C^*$. A set is polyhedral (多面的) if it is the intersection of finitely many closed half-spaces: $\{x \mid A x \le b\}$. A cone is polyhedral if it is the polar cone of some finite set of vectors.

The dual generalized inequality associated with a proper cone is the generalized inequality associated with its dual cone. This concept can be used for a dual characterization of the minimum/minimal elements. The minimum element of a multidimensional set w.r.t. a generalized inequality $\preceq_K$ is an element such that, for every $\lambda$ in the interior of the dual cone $K^*$, it is the unique minimizer of $\langle \lambda, \cdot \rangle$ over the set. An element that minimizes $\langle \lambda, \cdot \rangle$ over the set for some $\lambda$ in the interior of the dual cone is a minimal element w.r.t. $\preceq_K$ (a sufficient condition).

An extended real-valued function is a function whose range is the extended real line. The extended-value extension $\tilde{f}: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ of a convex function $f : C \to \mathbb{R}$ is the function defined on the containing (Euclidean) space that takes the value $+\infty$ outside the original domain.

The graph of a function is the collection of point-value pairs: $\{ (x, f(x)) \mid x \in \text{dom}f \}$. The epigraph (supergraph) of a function is the collection of points on and above its graph: $\text{epi}f = \{ (x, y) \mid x \in \text{dom}f, y \ge f(x) \}$.

Consistent definitions of convex functions: A function is concave if its negative is a convex function. While a "concave set" is simply any non-convex set, concave functions exactly mirror convex functions.
A level curve is the feasible set of an equality constraint. A level set is the feasible set of an inequality constraint. A sub-level set is the feasible set of a less-than-or-equal-to constraint. A function is quasi-convex if all its sub-level sets (and thus its domain) are convex. A function is quasi-concave if its negative is quasi-convex. A function is unimodal if it is quasi-convex or quasi-concave. A function is quasilinear if it is both quasi-convex and quasi-concave.

A positive function is log-concave (log-convex) if its logarithm is concave (convex).

Given a proper cone $K$ in its domain space, a real-valued function is $K$-nondecreasing if it is dominated by its values on the $K$-cone: $\forall x, y,\ x \preceq_K y : f(x) \le f(y)$. The function is $K$-increasing if it is strictly dominated by its values on the $K$-cone other than at the vertex: $\forall x \ne y,\ x \preceq_K y : f(x) < f(y)$. $K$-nonincreasing and $K$-decreasing are similarly defined. As a special case, a real-valued function on the space of symmetric matrices is matrix increasing (decreasing) if it is increasing (decreasing) with respect to the positive semidefinite cone.

Given a proper cone $K$ in its range space, a vector-valued function is $K$-convex if every chord of the function is $K$-dominated by its linear interpolation: $\mathbf{f}(\lambda \mathbf{x} + (1-\lambda) \mathbf{y}) \preceq_K \lambda \mathbf{f}(\mathbf{x}) + (1-\lambda) \mathbf{f}(\mathbf{y})$ for $\lambda \in [0, 1]$. The function is strictly $K$-convex if the inequality is strict except at the endpoints. As a special case, a symmetric-matrix-valued function is matrix convex if it is convex with respect to the positive semidefinite cone.
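Matrix convexity can be checked numerically. As a sketch (the choice $f(X) = X^2$ and the random test matrices are illustrative): for symmetric $A, B$ the convexity gap of $f(X) = X^2$ equals $\lambda(1-\lambda)(A-B)^2$, which is positive semidefinite, so the interpolation $K$-dominates the chord with respect to the PSD cone.

```python
import numpy as np

# Matrix convexity of f(X) = X^2 w.r.t. the PSD cone: for symmetric A, B,
#   lam*A^2 + (1-lam)*B^2 - (lam*A + (1-lam)*B)^2 = lam*(1-lam)*(A-B)^2,
# which is positive semidefinite.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4))
A, B = (M + M.T) / 2, (N + N.T) / 2      # random symmetric matrices
lam = 0.3

mix = lam * A + (1 - lam) * B
gap = lam * A @ A + (1 - lam) * B @ B - mix @ mix
assert np.min(np.linalg.eigvalsh(gap)) > -1e-10   # gap is PSD
```

The same test with, e.g., $f(X) = X^3$ would fail for some $A, B$, since $X^3$ is not matrix convex on all symmetric matrices.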
Given a dual pair of vector spaces $X, Y$ w.r.t. a bilinear form $\langle \cdot, \cdot \rangle$, functions $f: X \to \mathbb{R}$ and $f^\star: Y \to \mathbb{R}$ are conjugate if their sum is supported from below by the bilinear form, tightly, pointwise in the dual space: $f(x) + f^\star(y) \ge \langle y, x \rangle$ (Young's inequality), and $\forall y \in Y, \exists x \in X: f(x) + f^\star(y) = \langle y, x \rangle$. Equivalently, $f^\star(y) = \sup_{x \in X} \{ \langle y, x \rangle - f(x) \}$, also known as the Fenchel transformation [@Fenchel1949]. If a function is smooth, strictly convex, and increases at infinity faster than any linear function, its conjugate is equivalent to its Legendre transformation [@Legendre1789]: $f^\star(y) = \langle x^\star, \nabla f(x^\star) \rangle - f(x^\star)$, where $y = \nabla f(x^\star)$.

Common convex sets:

Common convex functions:

Common non-trivial log-concave and log-convex functions:

Common matrix monotone functions: (real-valued functions over the symmetric matrix space)

Common matrix convex/concave functions:

Set relations: $C \subseteq \text{cl}(C) \subseteq \text{conv}(C) \subseteq \text{cone}(C)$

Operations that preserve set convexity: [@Bertsekas2003, 1.2.1]

Properties of convex sets: [@Bertsekas2003, 1.4.1]

If a set is closed with nonempty interior and has a supporting hyperplane at every point of its boundary, it is convex.

The dual cone of a proper cone is also proper.

A generated cone is convex.

Recession cone theorem [@Bertsekas2003, 1.5.1]: For a (nonempty) closed convex set,

Carathéodory's theorem [@Bertsekas2003, 1.3.1]: (minimal representation of generated cones and convex hulls)

Projection theorem [@Bertsekas2003, 2.2.1]: For a (nonempty) closed convex set $C$,

Supporting hyperplane theorem: Any non-interior point of a (nonempty) convex set has some hyperplane that contains the set in one of its closed half-spaces.
[@Bertsekas2003, 2.4.1]

Separating hyperplane theorem: Any two disjoint (nonempty) convex sets can be separated by some hyperplane. [@Bertsekas2003, 2.4.2]

Strict separation theorem [@Bertsekas2003, 2.4.3]: Two disjoint (nonempty) convex sets $C_1, C_2$ can be strictly separated by some hyperplane if any of the following holds:

The polar cone is closed and convex. The polar cone of a set is also the polar cone of the cone generated by the set: $C^* = \text{cone}(C)^*$. [@Bertsekas2003, 3.1.1]

Polar cone theorem: The twice polar cone of a cone is the closure of its convex hull: $C^{**} = \text{cl}(\text{conv}(C))$. In particular, closed convex cones are fixed points of twice-polarity: $C^{**} = C$.

The cone generated by a finite set is closed. (Nontrivial.) The polyhedral cone and the cone generated by the same finite set of vectors are polar to each other: for a finite set $A$, $A^* = \text{cone}(A)^*$, and (Farkas' Lemma) $A^{**} = \text{cone}(A)$. [@Bertsekas2003, 3.2.1]

Minkowski-Weyl Theorem: Any polyhedral cone can be represented as a finitely generated cone, and the converse is also true: $A^* = \text{cone}(B)$.

Minkowski-Weyl representation: Given finite sets $A, B$, a polyhedral set can be represented as the Minkowski sum of the convex hull of a finite set and a finitely generated cone; the converse is also true: $P = \text{conv}(A) + \text{cone}(B)$. [@Bertsekas2003, 3.2.2]

[@Bertsekas2003, 3.3.1]

For a (nonempty) convex set, extreme points of its intersection with a supporting hyperplane are also extreme points of the set.

A closed convex set has at least one extreme point iff it does not have two opposing recession directions.

For a bounded convex set, the extreme points form the minimal set that reproduces the set by convex hull: $C = \text{conv}(E)$, where $E$ is the set of extreme points of $C$.

For a polyhedral set with Minkowski-Weyl representation $P = \text{conv}(A) + \text{cone}(B)$, its extreme points are in $A$.
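The Farkas alternative mentioned above (either $b \in \text{cone}(A)$, or a hyperplane separates $b$ from the cone) can be checked numerically with two linear programs. This is a sketch with a toy cone, not part of the original notes:

```python
import numpy as np
from scipy.optimize import linprog

# Farkas' lemma: exactly one of the following holds.
#   (1) b = A x for some x >= 0            (b lies in cone(columns of A))
#   (2) there is y with A^T y <= 0, b^T y > 0  (a hyperplane separates b)
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])          # columns generate the nonnegative orthant

def in_cone(A, b):
    res = linprog(np.zeros(A.shape[1]), A_eq=A, b_eq=b,
                  bounds=[(0, None)] * A.shape[1])
    return res.status == 0          # 0 = a feasible optimum was found

def separating_y(A, b):
    # Find y with A^T y <= 0 and b^T y >= 1 (">" can be rescaled to ">= 1").
    n = A.shape[0]
    res = linprog(np.zeros(n),
                  A_ub=np.vstack([A.T, -b.reshape(1, -1)]),
                  b_ub=np.concatenate([np.zeros(A.shape[1]), [-1.0]]),
                  bounds=[(None, None)] * n)
    return res.x if res.status == 0 else None

b_in = np.array([2.0, 3.0])         # inside the cone: system (1) is feasible
b_out = np.array([-1.0, 1.0])       # outside: system (2) is feasible instead
assert in_cone(A, b_in) and separating_y(A, b_in) is None
assert not in_cone(A, b_out)
y = separating_y(A, b_out)
assert np.all(A.T @ y <= 1e-9) and b_out @ y > 0
```

Exactly one of the two feasibility problems succeeds for each $b$, as the lemma requires.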
[@Bertsekas2003, 3.3.2]

[@Bertsekas2003, 3.3.3]

Polyhedral proper separation theorem [@Bertsekas2003, 3.5.1]

Properties of generalized inequalities:

For a generalized inequality, the minimum and maximum elements of a set are unique if they exist, while a set can have multiple minimal and maximal elements.

Nondecreasing in each argument is equivalent to $K$-nondecreasing where $K$ is the nonnegative cone.

First-order conditions for $K$-monotonicity: A differentiable function on a convex domain is $K$-nondecreasing iff its gradient is nonnegative in the $K$-dual inequality: $\nabla f(x) \succeq_{K^\star} 0$; it is $K$-increasing if its gradient is positive in the $K$-dual inequality: $\nabla f(x) \succ_{K^\star} 0$.

First-order condition for $K$-convexity: A differentiable function on a convex domain is $K$-convex iff the function globally $K$-dominates its linear expansion: $f(y) \succeq_K f(x) + Df(x)(y - x)$, where $f: \mathbb{R}^n \to \mathbb{R}^m$ and $Df(x) \in \mathbb{R}^{m \times n}$. The proposition also holds when both sides are strict.

Composition theorem: A $K$-nondecreasing convex function of a $K$-convex function is convex. Note that the monotonicity restriction is on the extended-value extension of the outer function.

A convex function can only be discontinuous on its relative boundary.

A function is affine iff it is simultaneously convex and concave.

Operations that preserve (vector-valued) function convexity (or concavity): [@Bertsekas2003, 1.2.4]

Characterization of differentiable convex functions:

Operations that preserve quasi-convexity:

A quasilinear function has convex level sets; the converse does not hold.

A quasi-convex function $f(x)$ can be represented by a family of convex functions $\phi_t(x)$, where the $t$-sublevel set of $f$ is the 0-sublevel set of $\phi_t$. Such a representation is not unique.
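For the scalar case ($K$ the nonnegative ray), the first-order condition for convexity reduces to the familiar statement that a convex function lies above all its tangent lines. A quick numeric check, with an illustrative convex function:

```python
import numpy as np

# First-order condition for (scalar) convexity: a differentiable convex f
# satisfies f(y) >= f(x) + f'(x) * (y - x) for all x, y.
f = lambda x: np.exp(x) + x**2       # convex: sum of convex functions
df = lambda x: np.exp(x) + 2 * x

rng = np.random.default_rng(0)
xs = rng.uniform(-3, 3, 1000)
ys = rng.uniform(-3, 3, 1000)
assert np.all(f(ys) >= f(xs) + df(xs) * (ys - xs) - 1e-12)
```

Replacing `f` with a non-convex function such as `np.sin` makes the check fail for some pairs, which is exactly the "iff" direction of the condition.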
Relations:

Properties:

Transformation properties of conjugation:

A function is closed if its epigraph is a closed set, or equivalently, if all its sub-level sets are closed. The conjugate of any function is closed and convex. Almost every convex function can be expressed as the conjugate of some function.

An extended real-valued function is lower semi-continuous at a point if the function values in the point's neighborhood are either close to or greater than the function value at that point: $\liminf_{x \to x_0} f(x) \ge f(x_0)$. The conjugate of a function is always lower semi-continuous.

The convex closure of a function is the function whose epigraph is the closure of the convex hull of the epigraph of the original function. The conjugate of the conjugate (the biconjugate) of a function is its convex closure: $\text{epi}f^{\star\star} = \text{cl}(\text{conv}(\text{epi}f))$; note that $f^{\star\star} \le f$.

Fenchel–Moreau theorem: If a function is convex and closed, its biconjugate is itself: $f^{\star\star} = f$.
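The conjugation machinery above can be made concrete on a grid. A sketch (the double-well function is an illustrative choice): computing $f^\star$ and then $f^{\star\star}$ by brute-force maximization shows $f^{\star\star} \le f$ everywhere, with equality where $f$ touches its convex closure and a strict gap inside the non-convex region.

```python
import numpy as np

# Biconjugation on a grid: f** is the convex closure of f, so f** <= f.
x = np.linspace(-1, 1, 2001)
f = (x**2 - 0.25)**2                 # a non-convex double-well function

y = np.linspace(-5, 5, 4001)
f_star = np.array([np.max(yi * x - f) for yi in y])            # f*
f_star_star = np.array([np.max(xi * y - f_star) for xi in x])  # f**

assert np.all(f_star_star <= f + 1e-9)       # f** <= f everywhere
# At the well x = 0.5 the function touches its convex closure:
i = np.argmin(np.abs(x - 0.5))
assert abs(f_star_star[i] - f[i]) < 1e-3
# At x = 0 the double well sits strictly above its convex closure:
j = np.argmin(np.abs(x - 0.0))
assert f[j] - f_star_star[j] > 0.05
```

Running the same script with a convex, closed `f` (say `x**2`) gives `f_star_star` equal to `f` up to grid error, illustrating the Fenchel–Moreau theorem.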
The fact that the period of the gravity train that passes through the center of the Earth is equal to the period of an orbit that skims the Earth's surface is not a coincidence.

Consider a polar orbit around Earth (i.e., one that passes directly over the coordinate north and south poles). Now, consider only the north-south motion of the satellite by projecting the motion onto a line parallel with the Earth's axis. What kind of motion is this? Circular orbits have a constant speed, so the projected 1-D motion must be sinusoidal: uniform circular motion projected onto a line is simple harmonic motion. The gravity train is just a circular orbit with the motion off the Earth's axis suppressed; perpendicular forces and motions can be treated independently. See the animation below for an illustration: the spinning arrow shows the path of a satellite, while the straight lines show the paths of gravity trains on perpendicular tracks.

Now, what about gravity trains that don't pass through the Earth's center? First, notice that your expression for the acceleration of the train does not depend on the distance from the Earth's center. As long as the track is symmetric about the Earth's radius, you will get the same train motion no matter the depth of the track. The ends of the track do not even have to connect to the surface.

You can also reason that the depth of the gravity train track does not matter by starting with a track that connects to the surface at both ends and then adding a shell around the entire planet to increase its radius. Inside a spherical shell, the gravitational force is zero, so burying the track does nothing to the motion.
To start to demonstrate this, let's prove a similar fact about circular orbits: the period of an orbit that skims a planet's surface depends only on the planet's density, not its size.
$$F = m\frac{v^2}{R} = \frac{GMm}{R^2}$$
where $F$ is the gravitational force, $m$ is the mass of the satellite, $M$ is the mass of the planet, $v$ is the speed of the orbit, $R$ is the radius of the planet, and $G$ is the gravitational constant.
$$v^2 = \frac{GM}{R}$$
$$\left(\frac{2\pi{}R}{T}\right)^2 = \frac{GM}{R}$$
where $T$ is the period of the orbit.
$$T = \sqrt{\frac{4\pi{}^2R^3}{GM}}$$
$$T = \sqrt{\frac{4\pi{}^2R^3}{G\rho\frac{4}{3}\pi{}R^3}}$$
$$T = \sqrt{\frac{3\pi}{G\rho}}$$
where $\rho$ is the density of the planet.

Now, starting from your expression for the train acceleration:
$$a = -\frac{4}{3}\pi\rho{}Gd$$
we can derive an equivalent mass-spring system(*) with a spring constant $k$ given by
$$k = \frac{F}{d} = \frac{ma}{d} = \frac{4}{3}\pi\rho{}Gm.$$
The period of this mass-spring system, and thus of the train, is
$$T = 2\pi\sqrt{\frac{m}{k}} = \sqrt{\frac{4\pi^2m}{\frac{4}{3}\pi\rho{}Gm}} = \sqrt{\frac{3\pi}{\rho{}G}}$$
Notice that this is the same period as the satellite's.

TL;DR: The gravity train is a 1D projection of the 2D circular orbit where the length of the train track is the same as the diameter of the orbit. The time to traverse the track is the same no matter the length or the depth, because the period of a surface-skimming orbit around a constant-density planet is independent of the size of the planet.

(*) Everything in physics is ultimately a mass on a spring.
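Plugging in numbers gives the classic result. A quick sketch (the value used for Earth's mean density is an assumption, roughly 5514 kg/m³):

```python
import math

# Period of the gravity train / surface-skimming orbit: T = sqrt(3*pi/(G*rho)).
# The depth and length of the track have dropped out entirely.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho = 5514.0           # assumed mean density of Earth, kg/m^3

T = math.sqrt(3 * math.pi / (G * rho))
print(T / 60)          # about 84 minutes, one-way trip about 42 minutes
```

Halving `rho` (a planet half as dense) lengthens the period by a factor of √2, while changing the radius does nothing, exactly as the derivation predicts.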
1. Example

Here is an example. Automatic machine learning greatly facilitates applying machine learning: there is no need to select among algorithms, no need to tune hyperparameters, etc.

2. Hyperparameter Optimization

Let $\lambda$ be the hyperparameters of an ML algorithm $A$ with domain $\Lambda.$ Let $\mathcal{L}(A_\lambda, D_\mathrm{train}, D_\mathrm{valid})$ denote the loss of $A$ using hyperparameters $\lambda$, trained on $D_\mathrm{train}$ and evaluated on $D_\mathrm{valid}.$ The hyperparameter optimization problem is to find a hyperparameter configuration $\lambda^*$ that minimizes the loss: $$\lambda^*=\arg\min_{\lambda\in\Lambda}\mathcal{L}(A_\lambda, D_\mathrm{train}, D_\mathrm{valid}).$$ When there is more than one algorithm $A$, the loss is defined also over the choice among the different algorithms; the problem is then called combined algorithm selection and hyperparameter optimization (CASH).

2.1 Types of Hyperparameters

Continuous: learning rate, etc.

Integer: number of units, etc.

Categorical: algorithm$\in\lbrace\mathrm{SVM,\ RF,\ NN}\rbrace$

2.2 Conditional Hyperparameters

Conditional hyperparameters $B$ are only active if other hyperparameters $A$ are set in a certain way.

2.3 Bayesian Optimization (BO)

We can use gradient descent to minimize over parameters, but $\mathcal{L}$ is not differentiable with respect to hyperparameters. Conventionally we use graduate student descent. Now, we try to solve $\min\limits_\lambda\mathcal{L}(\lambda)$ without $\frac{\partial\mathcal{L}}{\partial\lambda}$ (since we cannot obtain it, and besides, evaluating $\mathcal{L}(\lambda)$ is often expensive).

Randomly pick some $\lambda_i$'s and evaluate the corresponding $\mathcal{L}(\lambda_i).$ Repeat until converged: (1) fit a model $\mathcal{M}$ using the pairs $(\lambda_i, \mathcal{L}(\lambda_i))$; (2) sample the next $\lambda_i$'s based on $\mathcal{M}$ and evaluate $\mathcal{L}(\lambda_i).$ Conventionally $\mathcal{M}$ is a Gaussian process.
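The two-step loop above can be sketched with a tiny hand-rolled Gaussian process. Everything concrete here is an illustrative assumption, not from the notes: the 1-D "loss" stands in for $\mathcal{L}(\lambda)$ (a real AutoML system would train and validate a model at this point), the RBF kernel has fixed hyperparameters, and a simple lower-confidence-bound rule plays the role of the acquisition function.

```python
import numpy as np

# A minimal sketch of the BO loop: fit a GP surrogate, pick the next point.
loss = lambda lam: (lam - 0.3) ** 2 + 0.05 * np.sin(20 * lam)  # made-up L(lambda)

def gp_posterior(X, y, Xs, ell=0.1, noise=1e-6):
    """GP regression with an RBF kernel: posterior mean and std on grid Xs."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 0.0))

rng = np.random.default_rng(0)
X = rng.random(3)                          # step 0: random initial evaluations
y = loss(X)
grid = np.linspace(0, 1, 200)

for _ in range(15):
    mu, sigma = gp_posterior(X, y, grid)   # (1) fit the surrogate model M
    nxt = grid[np.argmin(mu - 2 * sigma)]  # (2) lower-confidence-bound pick
    X = np.append(X, nxt)
    y = np.append(y, loss(nxt))

print(X[np.argmin(y)])   # typically lands near the true minimum, lambda ~ 0.24
```

The $-2\sigma$ term is what trades exploration (large posterior uncertainty) against exploitation (low posterior mean); with $\sigma$ dropped the loop would just re-sample the current best guess.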
But there are some challenges: for example, sometimes the noise is not Gaussian, GP inference costs $O(N^3),$ etc. Here are other methods: population-based methods (using a population of workers which collaborate and/or compete somehow, including genetic algorithms and particle swarm optimization) and multi-fidelity optimization (using cheap approximations of the black box). However, they do not work well with high-dimensional or conditional hyperparameter spaces. GP, GA, and PSO have their own hyperparameters, and there is no guarantee or evidence that they work in all cases. Besides, so far AutoML cannot learn features or provide interpretation.
8.5. Implementation of Recurrent Neural Networks from Scratch¶

In this section we implement a language model introduced in Section 8 from scratch. It is based on a character-level recurrent neural network trained on H. G. Wells' The Time Machine. As before, we start by reading the data set first, which is introduced in Section 8.3.

%matplotlib inline
import d2l
import math
from mxnet import autograd, np, npx, gluon, init
npx.set_np()

batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)

8.5.1. One-hot Encoding¶

Remember that each token is represented as a numerical index in train_iter. Feeding these indices directly to the neural network might make it hard to learn. We often represent each token as a more expressive feature vector. The simplest representation is called one-hot encoding.

In a nutshell, we map each index to a different unit vector: assume that the number of different tokens in the vocabulary is \(N\) (len(vocab)) and the token indices range from 0 to \(N-1\). If the index of a token is the integer \(i\), then we create a vector \(\mathbf{e}_i\) of all 0s with a length of \(N\) and set the element at position \(i\) to 1. This vector is the one-hot vector of the original token. The one-hot vectors with indices 0 and 2 are shown below.

npx.one_hot(np.array([0, 2]), len(vocab))

array([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])

The shape of the mini-batch we sample each time is (batch size, time step). The one_hot function transforms such a mini-batch into a 3-D tensor whose last dimension equals the vocabulary size. We often transpose the input so that we obtain a (time step, batch size, vocabulary size) output that fits more easily into a sequence model.

X = np.arange(10).reshape(2, 5)
npx.one_hot(X.T, 28).shape

(5, 2, 28)

8.5.2.
Initializing the Model Parameters¶

Next, we initialize the model parameters for an RNN model. The number of hidden units num_hiddens is a tunable parameter.

def get_params(vocab_size, num_hiddens, ctx):
    num_inputs = num_outputs = vocab_size
    normal = lambda shape: np.random.normal(scale=0.01, size=shape, ctx=ctx)
    # Hidden layer parameters
    W_xh = normal((num_inputs, num_hiddens))
    W_hh = normal((num_hiddens, num_hiddens))
    b_h = np.zeros(num_hiddens, ctx=ctx)
    # Output layer parameters
    W_hq = normal((num_hiddens, num_outputs))
    b_q = np.zeros(num_outputs, ctx=ctx)
    # Attach gradients
    params = [W_xh, W_hh, b_h, W_hq, b_q]
    for param in params:
        param.attach_grad()
    return params

8.5.3. RNN Model¶

First, we need an init_rnn_state function to return the hidden state at initialization. It returns an ndarray filled with 0 and with a shape of (batch size, number of hidden units). Using a tuple makes it easier to handle situations where the hidden state contains multiple variables (e.g. when combining multiple layers in an RNN, where each layer requires initializing).

def init_rnn_state(batch_size, num_hiddens, ctx):
    return (np.zeros(shape=(batch_size, num_hiddens), ctx=ctx), )

The following rnn function defines how to compute the hidden state and output in a time step. The activation function here is the tanh function. As described in Section 4.1, the mean value of the \(\tanh\) function is 0 when the elements are evenly distributed over the real numbers.

def rnn(inputs, state, params):
    # inputs shape: (num_steps, batch_size, vocab_size)
    W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    for X in inputs:
        H = np.tanh(np.dot(X, W_xh) + np.dot(H, W_hh) + b_h)
        Y = np.dot(H, W_hq) + b_q
        outputs.append(Y)
    return np.concatenate(outputs, axis=0), (H,)

With all these functions defined, next we create a class to wrap them and store the parameters.
# Save to the d2l package.
class RNNModelScratch(object):
    """An RNN model based on scratch implementations."""
    def __init__(self, vocab_size, num_hiddens, ctx,
                 get_params, init_state, forward):
        self.vocab_size, self.num_hiddens = vocab_size, num_hiddens
        self.params = get_params(vocab_size, num_hiddens, ctx)
        self.init_state, self.forward_fn = init_state, forward

    def __call__(self, X, state):
        X = npx.one_hot(X.T, self.vocab_size)
        return self.forward_fn(X, state, self.params)

    def begin_state(self, batch_size, ctx):
        return self.init_state(batch_size, self.num_hiddens, ctx)

Let's do a sanity check that inputs and outputs have the correct dimensions, e.g. to ensure that the dimensionality of the hidden state hasn't changed.

vocab_size, num_hiddens, ctx = len(vocab), 512, d2l.try_gpu()
model = RNNModelScratch(len(vocab), num_hiddens, ctx, get_params,
                        init_rnn_state, rnn)
state = model.begin_state(X.shape[0], ctx)
Y, new_state = model(X.as_in_context(ctx), state)
Y.shape, len(new_state), new_state[0].shape

((10, 28), 1, (2, 512))

We can see that the output shape is (number of steps \(\times\) batch size, vocabulary size), while the state shape remains the same, i.e. (batch size, number of hidden units).

8.5.4. Prediction¶

We first introduce the prediction function, so that we can regularly check predictions during training. This function predicts the next num_predicts characters based on the prefix (a string containing several characters). For the beginning of the sequence we only update the hidden state; after that we begin generating new characters and emitting them.
# Save to the d2l package.
def predict_ch8(prefix, num_predicts, model, vocab, ctx):
    state = model.begin_state(batch_size=1, ctx=ctx)
    outputs = [vocab[prefix[0]]]
    get_input = lambda: np.array([outputs[-1]], ctx=ctx).reshape(1, 1)
    for y in prefix[1:]:  # Warm up the state with the prefix
        _, state = model(get_input(), state)
        outputs.append(vocab[y])
    for _ in range(num_predicts):  # Predict num_predicts steps
        Y, state = model(get_input(), state)
        outputs.append(int(Y.argmax(axis=1).reshape(1)))
    return ''.join([vocab.idx_to_token[i] for i in outputs])

We test the predict_ch8 function first. Given that we didn't train the network, it will generate nonsensical predictions. We initialize it with the sequence time traveller and have it generate 10 additional characters.

predict_ch8('time traveller ', 10, model, vocab, ctx)

'time traveller emmmmmmmm'

8.5.5. Gradient Clipping¶

For a sequence of length \(T\), we compute the gradients over these \(T\) time steps in an iteration, which results in a chain of matrix products of length \(O(T)\) during backpropagation. As mentioned in Section 4.8, this might result in numerical instability, e.g. the gradients may either explode or vanish, when \(T\) is large. Therefore RNN models often need extra help to stabilize the training.

Recall that when solving an optimization problem, we take update steps for the weights \(\mathbf{w}\) in the general direction of the negative gradient \(\mathbf{g}_t\) on a minibatch, say \(\mathbf{w} - \eta \cdot \mathbf{g}_t\). Let's further assume that the objective is well behaved, i.e. it is Lipschitz continuous with constant \(L\), i.e.

\[|f(\mathbf{w}) - f(\mathbf{w}')| \leq L \|\mathbf{w} - \mathbf{w}'\|.\]

In this case we can safely assume that if we update the weight vector by \(\eta \cdot \mathbf{g}_t\) we will not observe a change by more than \(L \eta \|\mathbf{g}_t\|\). This is both a curse and a blessing. A curse since it limits the speed with which we can make progress, a blessing since it limits the extent to which things can go wrong if we move in the wrong direction.
Sometimes the gradients can be quite large and the optimization algorithm may fail to converge. We could address this by reducing the learning rate \(\eta\) or by some other higher-order trick. But what if we only rarely get large gradients? In this case such an approach may appear entirely unwarranted. One alternative is to clip the gradients by projecting them back to a ball of a given radius, say \(\theta\), via

\[\mathbf{g} \leftarrow \min\left(1, \frac{\theta}{\|\mathbf{g}\|}\right) \mathbf{g}.\]

By doing so we know that the gradient norm never exceeds \(\theta\) and that the updated gradient is entirely aligned with the original direction \(\mathbf{g}\). It also has the desirable side-effect of limiting the influence any given minibatch (and within it any given sample) can exert on the weight vectors. This bestows a certain degree of robustness to the model. Gradient clipping provides a quick fix to exploding gradients. While it doesn't entirely solve the problem, it is one of the many techniques to alleviate it.

Below we define a function to clip the gradients of a model that is either an RNNModelScratch instance or a Gluon model. Also note that we compute the gradient norm over all parameters.

# Save to the d2l package.
def grad_clipping(model, theta):
    if isinstance(model, gluon.Block):
        params = [p.data() for p in model.collect_params().values()]
    else:
        params = model.params
    norm = math.sqrt(sum((p.grad ** 2).sum() for p in params))
    if norm > theta:
        for param in params:
            param.grad[:] *= theta / norm

8.5.6. Training¶

Similar to sec_linear_scratch, let's first define the function to train the model on one data epoch. It differs from the model training in previous chapters in three places:

Different sampling methods for sequential data (independent sampling and sequential partitioning) will result in differences in the initialization of hidden states.

We clip the gradient before updating the model parameters.
This ensures that the model doesn't diverge even when gradients blow up at some point during the training process (effectively it reduces the step size automatically).

We use perplexity to evaluate the model. This ensures that sequences of different length are comparable.

When consecutive sampling is used, we initialize the hidden state at the beginning of each epoch. Since the \(i^\mathrm{th}\) example in the next mini-batch is adjacent to the current \(i^\mathrm{th}\) example, the next mini-batch can use the current hidden state directly; we only detach the gradient so that we compute gradients within a mini-batch only. When using random sampling, we need to re-initialize the hidden state for each iteration, since each example is sampled at a random position. As with the train_epoch_ch3 function (sec_linear_scratch), we use a generalized updater, which could be either a Gluon trainer or a scratch implementation.

# Save to the d2l package.
def train_epoch_ch8(model, train_iter, loss, updater, ctx, use_random_iter):
    state, timer = None, d2l.Timer()
    metric = d2l.Accumulator(2)  # loss_sum, num_examples
    for X, Y in train_iter:
        if state is None or use_random_iter:
            # Initialize state when either it is the first iteration or
            # using random sampling.
            state = model.begin_state(batch_size=X.shape[0], ctx=ctx)
        else:
            for s in state:
                s.detach()
        y = Y.T.reshape(-1)
        X, y = X.as_in_context(ctx), y.as_in_context(ctx)
        with autograd.record():
            py, state = model(X, state)
            l = loss(py, y).mean()
        l.backward()
        grad_clipping(model, 1)
        updater(batch_size=1)  # Since we used mean already.
        metric.add(l * y.size, y.size)
    return math.exp(metric[0] / metric[1]), metric[1] / timer.stop()

The training function again supports training either a model implemented from scratch or a Gluon model.
# Save to the d2l package.
def train_ch8(model, train_iter, vocab, lr, num_epochs, ctx,
              use_random_iter=False):
    # Initialize
    loss = gluon.loss.SoftmaxCrossEntropyLoss()
    animator = d2l.Animator(xlabel='epoch', ylabel='perplexity',
                            legend=['train'], xlim=[1, num_epochs])
    if isinstance(model, gluon.Block):
        model.initialize(ctx=ctx, force_reinit=True, init=init.Normal(0.01))
        trainer = gluon.Trainer(model.collect_params(),
                                'sgd', {'learning_rate': lr})
        updater = lambda batch_size: trainer.step(batch_size)
    else:
        updater = lambda batch_size: d2l.sgd(model.params, lr, batch_size)
    predict = lambda prefix: predict_ch8(prefix, 50, model, vocab, ctx)
    # Train and check the progress.
    for epoch in range(num_epochs):
        ppl, speed = train_epoch_ch8(
            model, train_iter, loss, updater, ctx, use_random_iter)
        if epoch % 10 == 0:
            print(predict('time traveller'))
            animator.add(epoch + 1, [ppl])
    print('Perplexity %.1f, %d tokens/sec on %s' % (ppl, speed, ctx))
    print(predict('time traveller'))
    print(predict('traveller'))

Finally we can train a model. Since we only use 10,000 tokens in the dataset, we need more data epochs to converge.

num_epochs, lr = 500, 1
train_ch8(model, train_iter, vocab, lr, num_epochs, ctx)

Perplexity 1.1, 36914 tokens/sec on gpu(0)
time traveller it s against reason said filby what reason said
traveller it s against reason said filby what reason said

Then let's check the results when using the random sampling iterator.

train_ch8(model, train_iter, vocab, lr, num_epochs, ctx,
          use_random_iter=True)

Perplexity 1.2, 36512 tokens/sec on gpu(0)
time traveller you can show black is white by argument said fil
traveller smiled are you sure we can move freely inspace ri

In the following we will see how to improve significantly on the current model and how to make it faster and easier to implement.

8.5.7. Summary¶

Sequence models need state initialization for training.
When using sequential partitioning, we need to detach the hidden state between mini-batches, to ensure that automatic differentiation does not propagate effects beyond the current mini-batch.

A simple RNN language model consists of an encoder, an RNN model, and a decoder.

Gradient clipping prevents gradient explosion (but it cannot fix vanishing gradients).

Perplexity calibrates model performance across variable sequence lengths. It is the exponentiated average of the cross-entropy loss.

Sequential partitioning typically leads to better models.

8.5.8. Exercises¶

Show that one-hot encoding is equivalent to picking a different embedding for each object.

Adjust the hyperparameters to improve the perplexity. How low can you go? Adjust embeddings, hidden units, learning rate, etc.

How well will it work on other books by H. G. Wells, e.g. The War of the Worlds?

Modify the prediction function so as to use sampling rather than picking the most likely next character. What happens? Bias the model towards more likely outputs, e.g. by sampling from \(q(w_t|w_{t-1}, \ldots w_1) \propto p^\alpha(w_t|w_{t-1}, \ldots w_1)\) for \(\alpha > 1\).

Run the code in this section without clipping the gradient. What happens?

Change adjacent sampling so that it does not separate hidden states from the computational graph. Does the running time change? How about the accuracy?

Replace the activation function used in this section with ReLU and repeat the experiments in this section.

Prove that the perplexity is the inverse of the harmonic mean of the conditional word probabilities.
I have recently studied Fourier and Laplace transforms in maths. I wanted to understand their utility in physics, with some examples that require this change of domain, and the reason why.

closed as too broad by Brandon Enright, jinawee, Qmechanic♦ Apr 26 '14 at 9:47

This question is pretty broad, but I would summarise thus: there are three, related ways that these integral transforms are important:

Both serve as tools for "splitting up" a problem (forwards transform) into simpler problems, analysing the latter, and then using the principle of linear superposition to build (inverse or backwards transform) the whole analytical description up from the simpler solutions. In the case of the Fourier transform, you are analysing a system's response to harmonic excitation (sinusoidal waves) and then building up its response to a general pulse by summing up the responses with the inverse Fourier transform. The Laplace transform is like the Fourier transform with a half-infinite domain ($[0,\,\infty)$ instead of $(-\infty,\,\infty)$) and with a generalised, complex frequency. It is useful for "causal" systems where the excitation has a definite beginning ($t=0$);

The transforms transform differential equations in desirable ways: converting the differential operator $\psi\mapsto\mathrm{d}_t\psi$ into a multiplication operator $\psi\mapsto i\,\omega\,\psi$;

The Fourier transform is central to quantum mechanics: in particular to the canonical commutation relationship and the Heisenberg uncertainty principle. The FT is the unitary (norm and inner product preserving, i.e. probability-preserving) transformation between position co-ordinates and momentum co-ordinates.
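The second point, differentiation becoming multiplication by $i\omega$, is easy to see numerically. A sketch (the Gaussian pulse and grid parameters are illustrative choices): compute the derivative of a pulse by multiplying its FFT by $i\omega$ and compare with the exact derivative.

```python
import numpy as np

# "Differentiation becomes multiplication": d/dt via the Fourier transform.
n, L = 1024, 20.0
t = np.linspace(-L / 2, L / 2, n, endpoint=False)
psi = np.exp(-t**2)                        # a smooth, well-localized pulse

omega = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular frequencies
dpsi_fft = np.fft.ifft(1j * omega * np.fft.fft(psi)).real
dpsi_exact = -2 * t * np.exp(-t**2)              # analytic derivative

assert np.max(np.abs(dpsi_fft - dpsi_exact)) < 1e-8
```

This is why an ODE like $\mathrm{d}_t\psi + a\psi = g$ becomes the algebraic equation $(i\omega + a)\hat\psi = \hat g$ in the frequency domain: solve by division, then transform back.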
The mathematical uncertainty product relationships, the related Paley-Wiener theorem, and the special-case observation that a function and its FT cannot both have compact support (the domain wherein they are nonzero) are all manifestations of the kinds of "mathematical mechanics" that beget the uncertainty principle. Put simply: if you confine a wavefunction (i.e. quantum state) to a small range of positions, its Fourier transform is the same quantum state written in momentum co-ordinates, so the spread over momenta increases as you confine the positions more and more. I say more about the application of Fourier transforms to electromagnetic theory in this answer here, more about the relationship between the CCR and the FT in this answer here, and more about the mathematics of the uncertainty product in this answer here and here.

Just to give 3 simple examples:

Someone is playing piano. Every key he hits will produce not only the desired tone but also a full range of resonances and higher harmonics. Those will show up in Fourier space.

In image analysis, sometimes you have periodic patterns overlaying your image (e.g. Moiré fringes) that disturb image quality. In Fourier space, those patterns might show up in a very confined frequency domain, where they can be filtered out to enhance image quality.

When working in biomedical physics, you come into contact a lot with projection integrals when it comes to attenuation measurement. Solving those inverse problems is a lot easier in Fourier space. (See for example X-ray CT and the Fourier slice theorem.)
I have to compute the surface brightness as a function of the radius from the following set of data: {right ascension ($\alpha$), declination ($\delta$), magnitude (m)}. I also know the center of the set $\left(\alpha_c, \delta_c \right)$. The surface brightness in logarithmic units (mag/arcsec$^2$) can be computed by (1): \begin{equation} \mu = m + 2.5\log_{10}\Omega \end{equation} and the solid angle can be computed as $d\Omega = \frac{dS}{d^2} = \frac{d^2 \sin\theta d\theta d\phi}{d^2}$, where $d$ is the distance to the stars. What I did is to compute the integrated magnitude in a certain region given by $\alpha_2 > \alpha > \alpha_1$ and $\delta_2 > \delta > \delta_1$. In that case the solid angle should be given by $\Omega = (\alpha_2 - \alpha_1)(\sin\delta_2 - \sin\delta_1)$, because $\delta$ is the complementary angle of $\theta$. My question is: which radius should I consider if I want to plot $\mu(r)$? I have considered the angular separation between ($\alpha_c, \delta_c$) and ($\alpha_1, \delta_c$), and also the angular separation between ($\alpha_c, \delta_c$) and ($\alpha_c, \delta_1$), but neither of these options reproduces previous results. I also counted the stars whose angular separation is $r < r_{test}$ (edited: in rings of radius $r_{test}$), computed the integrated magnitude, used $\Omega = 2\pi(1 - \cos(r_{test}))$ and finally computed $\mu(r_{test})$. Both approaches give more or less the same results, so what am I doing wrong here? By wrong, I mean that I am getting larger values for the surface brightness than previously published results.
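For what it's worth, the solid-angle bookkeeping in the question can be collected into a few helper functions (the names are mine, angles in radians); both the RA/Dec-box formula and the cone formula recover the full sphere $4\pi$ as a sanity check.

```python
import numpy as np

# Solid angle of an RA/Dec box (alpha = right ascension, delta = declination):
# Omega = (alpha2 - alpha1) * (sin(delta2) - sin(delta1)), all in radians.
def box_solid_angle(alpha1, alpha2, delta1, delta2):
    return (alpha2 - alpha1) * (np.sin(delta2) - np.sin(delta1))

# Solid angle of a cone of angular radius r (the ring-counting approach):
def cone_solid_angle(r):
    return 2 * np.pi * (1 - np.cos(r))

# Surface brightness in mag/arcsec^2 from an integrated magnitude m
# and a solid angle Omega expressed in arcsec^2:
def surface_brightness(m, omega_arcsec2):
    return m + 2.5 * np.log10(omega_arcsec2)

# Sanity checks: both formulas give the full sky, 4*pi steradians.
print(box_solid_angle(0.0, 2*np.pi, -np.pi/2, np.pi/2) / np.pi)  # 4.0
print(cone_solid_angle(np.pi) / np.pi)                           # 4.0
```

Note that equation (1) expects $\Omega$ in arcsec$^2$, so the steradian values above must be converted before being fed into `surface_brightness`.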
I am given a graph $G$ with treewidth $k$ and arbitrary degree, and I would like to find a subgraph $H$ of $G$ (not necessarily an induced subgraph) such that $H$ has constant degree and its treewidth is as high as possible. Formally my problem is the following: having chosen a degree bound $d \in \mathbb{N}$, what is the "best" function $f : \mathbb{N} \to \mathbb{N}$ such that, in any graph $G$ with treewidth $k$, I can find (hopefully efficiently) a subgraph $H$ of $G$ with maximal degree $\leq d$ and treewidth $f(k)$. Obviously we should take $d \geq 3$ as there are no high treewidth graphs with maximal degree $<3$. For $d = 3$ I know that you can take $f$ such that $f(k) = \Omega(k^{1/100})$ or so, by appealing to Chekuri and Chuzhoy's grid minor extraction result (and using it to extract a high-treewidth degree-3 graph, e.g., a wall, as a topological minor), with the computation of the subgraph being feasible (in RP). However, this is a very powerful result with an elaborate proof, so it feels wrong to use it for what looks like a much simpler problem: I would just like to find any constant-degree, high-treewidth subgraph, not a specific one like in their result. Further, the bound on $f$ is not as good as I would have hoped. Sure, it is known that it can be made $\Omega(k^{1/20})$ (up to giving up efficiency of the computation), but I would hope for something like $\Omega(k)$. So, is it possible to show that, given a graph $G$ of treewidth $k$, there is a subgraph of $G$ with constant degree and linear treewidth in $k$? I'm also interested in the exact same question for pathwidth rather than treewidth. For pathwidth I don't know any analogue to grid minor extraction, so the problem seems even more mysterious...
How would you describe the set $\{1, 5, 9, 13, 17, 21,\dots\}$ in the style of $x:P(x)$? I know that the sequence is "the last number + 4" or $4n-3$.

$\{n \in \mathbb N\; :\; n \equiv 1 \mod 4\}$

$\{4k-3 \mid k \in \mathbb{N} \}$ or $\{4k+1 \mid k \in \mathbb{N} \}$, depending on whether you consider $0$ a natural number.

$$\{n \; \mid \; \exists k \in \mathbb N: n =4k-3\}$$

An arithmetic progression with first term 1 and common difference 4. (No, I haven't described it in the form of $P(x)$, but this is how I would describe this number sequence...)

$\{n\in\mathbb N:\frac{n+3}4\in\mathbb N\}$

It's not exactly the form you're looking for, but close: $$\{n\in\mathbb N \text{ [or }\mathbb R \text{ or whatever relevant]}: (n-1)/4 \in \mathbb N\}$$ If you really need an equality, then you might write $$\{n\in\mathbb N: \mathbf1 _{\mathbb{N}} ((n-1)/4) = 1\}$$ where $\mathbf1 _{\mathbb{N}}$ is the indicator function of the set $\mathbb N$.

You can be as simple as this, given you don't insist on the $\{:\}$ set declaration: $$4\mathbb{N}-3 = \{1,5,9,13,17,\dotsc\}$$ if $0\notin\mathbb{N}$ by your convention; and $$4\mathbb{N}+1 = \{1,5,9,13,17,\dotsc\}$$ if $0\in\mathbb{N}$ by your convention.

You can represent your set as the range of the function $f$ from $\Bbb N$ to $\Bbb N$ itself, defined as $$f(x)=4x+1.$$
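As a quick sanity check (my own illustration, not part of any answer above), a few lines of Python confirm that the congruence description and the $4k-3$ formula generate the same initial segment:

```python
# Compare two of the characterizations above on {1, ..., 199}.
N = 200
via_congruence = {n for n in range(1, N) if n % 4 == 1}
via_formula = {4 * k - 3 for k in range(1, N) if 4 * k - 3 < N}

assert via_congruence == via_formula
print(sorted(via_congruence)[:6])  # [1, 5, 9, 13, 17, 21]
```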
A triple of positive integers $(a,b,c)$ is an $abc$-triple if $a$ and $b$ are coprime and $c = a + b$. Define the quality or power of an $abc$-triple as $P(a,b,c) = \frac{\log c}{\log \text{rad}(abc)}$, where $\text{rad}(k)$ denotes the product of the distinct prime divisors of $k$. One version of the $abc$-conjecture is that for each $\varepsilon > 0$, there are finitely many $abc$-triples such that $P(a,b,c) > 1 + \varepsilon$. There are finitely many known triples satisfying $P > 1.4$, the so-called good triples, and the largest quality is $P(2,3^{10} \cdot 109, 23^{5}) = 1.629911684 \dots$ (discovered by E. Reyssat). Question: Are there any known upper bounds for $P(a,b,c)$ sharper than $\log_{p^{n}} c$, where $p$ is the minimum prime dividing $abc$ and $n$ is the number of distinct prime divisors of $abc$? Question: Is there an absolute upper bound for $P(a,b,c)$, so that no triple has higher quality? Best Answer: If there were such a bound, asymptotic FLT would be in hand. (Thanks Ace of Base!)
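Reyssat's record value quoted above is easy to reproduce; the sketch below (helper names are mine) computes $\text{rad}(abc)$ by trial division and evaluates the quality $P$.

```python
from math import log

def rad(n):
    """Product of the distinct prime divisors of n (plain trial division)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

def quality(a, b, c):
    """P(a, b, c) = log c / log rad(abc) for an abc-triple."""
    assert a + b == c
    return log(c) / log(rad(a * b * c))

# Reyssat's triple: 2 + 3^10 * 109 = 23^5, with rad(abc) = 2*3*23*109 = 15042.
print(quality(2, 3**10 * 109, 23**5))  # 1.6299116...
```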
In the Effective Field Theory video lectures found here, the professor explained power counting in effective field theories and the difficulties of power counting associated with loop diagrams. He then mentions that introducing a cutoff ($\Lambda_{UV}$) to regulate our divergences does not preserve power counting due to the new scale that we are introducing. To see this he uses four-fermi theory with the diagram, [diagram: four-fermi vertex] We do our power counting (i.e., Taylor expansions) in powers of $m^2/M^2$ and then go on to consider the mass correction through, [diagram: one-loop mass correction] Using a cutoff this gives a mass correction, \begin{align} a\frac{ m }{ M ^2 } \int _0^{\Lambda_{UV}}\frac{ \,d^4k _E }{ (2\pi)^4} \frac{1}{ k _E ^2 + m ^2 } & = a\frac{ m }{ ( 4 \pi ) ^2 } \left[ \frac{ \Lambda _{ UV } ^2 }{ M ^2 } + \frac{ m ^2 }{ M ^2 } \log \frac{ m ^2 }{ \Lambda _{ UV } ^2 } - \frac{ m ^4 }{ M ^2 \Lambda _{ UV } ^2 } + ... \right] \end{align} If I understand correctly this breaks the power counting because even if $\Lambda_{UV} \sim M$, the first term is an order-1 correction, since it's not proportional to $m^2/M^2$. So far so good. However, the professor then says that you can still use power counting with a cutoff if you fix the power counting order by order, and that this can be done by introducing an intermediate scale, $\Lambda$. But I don't see how this fixes anything... With an intermediate scale ($\Lambda$) we split the integral into, \begin{equation} a\frac{m}{M^2}\int _{ 0 } ^{ \Lambda _{ UV }} \frac{ \,d^4k _E }{ (2\pi)^4 } \frac{1}{ k _E ^2 + m ^2 } = \frac{a\,m}{ (4\pi)^2M^2 } \left\{ \left(\Lambda ^2 + m ^2 \log \frac{ m ^2 }{ \Lambda ^2 + m ^2 } + ... \right) + \left( \Lambda _{ UV } ^2 - \Lambda ^2 + m ^2 \log \frac{ \Lambda ^2 + m ^2 }{ \Lambda ^2 _{ UV } + m^2 } \right) \right\} \end{equation} but how does this fix anything? For more context see my lecture notes here under Effective Field Theory (starts around equation 4.6)
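As a numerical cross-check of the quoted expansion (my own sketch, not from the lectures): with $d^4k_E = 2\pi^2 k_E^3\,dk_E$, the cutoff integral has the closed form $\frac{1}{16\pi^2}\left[\Lambda^2 - m^2\ln\left(1+\Lambda^2/m^2\right)\right]$, whose large-$\Lambda$ expansion reproduces the $\Lambda_{UV}^2$ and $m^2\log$ terms above.

```python
import numpy as np
from scipy.integrate import quad

# Compare the 4D Euclidean cutoff integral against its closed form:
#   I(L) = \int_0^L d^4k_E / (2 pi)^4 * 1 / (k_E^2 + m^2)
#        = (1 / 16 pi^2) * [ L^2 - m^2 * ln(1 + L^2 / m^2) ]
m, L = 1.0, 50.0

# Angular integration already done: the surface of the unit 3-sphere is 2*pi^2.
numeric, _ = quad(lambda k: 2 * np.pi**2 * k**3
                  / ((2 * np.pi)**4 * (k**2 + m**2)), 0.0, L)
closed = (L**2 - m**2 * np.log(1 + L**2 / m**2)) / (16 * np.pi**2)

print(numeric, closed)  # the two agree to quadrature accuracy
```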
Answer Please see the work below. Work Step by Step We know that $H=kA\frac{\Delta T}{\Delta x}$. We plug in the known values to obtain: $H=(46)(0.90\times 0.40)\left(\frac{310-295}{0.0045}\right)$, so $H\approx 55000\ \mathrm{W}$.
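The arithmetic can be re-run in a couple of lines (SI units assumed; rounding to two significant figures gives the quoted 55000 W):

```python
# Heat conduction rate H = k * A * dT / dx (all quantities in SI units).
k = 46.0           # thermal conductivity, W/(m*K)
A = 0.90 * 0.40    # slab area, m^2
dT = 310 - 295     # temperature difference, K
dx = 0.0045        # slab thickness, m

H = k * A * dT / dx
print(round(H))    # 55200, i.e. about 5.5e4 W
```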
Let $a,b,c$ be positive integers such that $a + b + c \mid a^2 + b^2 + c^2$. Show that $a + b + c \mid a^n + b^n + c^n$ for infinitely many positive integers $n$. (Problem composed by Laurentiu Panaitopol.) So far no idea.

Claim. $a+b+c\mid a^{2^n}+b^{2^n}+c^{2^n}$ for all $n\geq0$.

Proof. By induction: true for $n=0,1$ $\checkmark$. Suppose it's true for $0,\ldots,n$. Note that $$a^{2^{n+1}}+b^{2^{n+1}}+c^{2^{n+1}}=(a^{2^n}+b^{2^n}+c^{2^n})^2-2(a^{2^{n-1}}b^{2^{n-1}}+b^{2^{n-1}}c^{2^{n-1}}+c^{2^{n-1}}a^{2^{n-1}})^2+4a^{2^{n-1}}b^{2^{n-1}}c^{2^{n-1}}(a^{2^{n-1}}+b^{2^{n-1}}+c^{2^{n-1}})$$ and that $$2(a^{2^{n-1}}b^{2^{n-1}}+b^{2^{n-1}}c^{2^{n-1}}+c^{2^{n-1}}a^{2^{n-1}})=(a^{2^{n-1}}+b^{2^{n-1}}+c^{2^{n-1}})^2-(a^{2^n}+b^{2^n}+c^{2^n})$$ is divisible by $a+b+c$ by the induction hypothesis.

It seems that there's a partial solution. Suppose that $\mathrm{gcd}(a,a+b+c)=\mathrm{gcd}(b,a+b+c)=\mathrm{gcd}(c,a+b+c)=1$. Then for $n=k\cdot \phi(a+b+c)+2 \, (k=1,2, \ldots )$, where $\phi$ is Euler's function, we have: $$ (a^n+b^n+c^n)-(a^2+b^2+c^2)=a^2 (a^{n-2}-1) + b^2 (b^{n-2}-1) + c^2 (c^{n-2}-1), $$ where all round brackets are divisible by $a+b+c$ according to Euler's theorem. Therefore $(a+b+c) \mid (a^n+b^n+c^n)$ for all these $n$.

There's one more solution (it isn't mine). One can even prove that $(a + b + c) \mid (a^n + b^n + c^n)$ for all $n=3k+1$ and $n=3k+2$. It's enough to prove that $a + b + c \mid a^n + b^n + c^n \implies a + b + c \mid a^{n+3} + b^{n+3} + c^{n+3}$.
The proof is here: https://vk.com/doc104505692_416031961?hash=3acf5149ebfb5338b5&dl=47a3df498ea4bf930e (unfortunately, it's in Russian, but it's enough to look at the formulae). One point which may need commenting: $(ab+bc+ca)(a^{n-2} + b^{n-2} + c^{n-2})$ is always divisible by $(a+b+c)$ (it's necessary to consider 2 cases: $(a+b+c)$ odd and $(a+b+c)$ even).

If $a,b,c,n\in\Bbb Z_{\ge 1}$ and $a+b+c\mid a^2+b^2+c^2$, then $$a+b+c\mid a^n+b^n+c^n$$ is true when $3\nmid n$, but not necessarily when $3\mid n$. $$x^2+y^2+z^2+2(xy+yz+zx)=(x+y+z)^2$$ $$\implies x+y+z\mid 2(xy+yz+zx)$$ $$\implies x+y+z\mid (x^k+y^k+z^k)(xy+yz+zx)$$ for all $k\ge 1$ (to see why, check the cases when $x+y+z$ is even and when it's odd). $$x^{n+3}+y^{n+3}+z^{n+3}=(x^{n+2}+y^{n+2}+z^{n+2})(x+y+z)$$ $$-(x^{n+1}+y^{n+1}+z^{n+1})(xy+yz+zx)+(x^n+y^n+z^n)xyz$$ for all $n\ge 1$. We know $$x+y+z\mid (x^{n+2}+y^{n+2}+z^{n+2})(x+y+z)$$ $$-(x^{n+1}+y^{n+1}+z^{n+1})(xy+yz+zx)$$ Now let $(x,y,z)=(x_1,y_1,z_1)=(1,3,9)$. $$x_1+y_1+z_1\nmid x_1^3+y_1^3+z_1^3$$ $$x_1+y_1+z_1\nmid \left(x_1^3+y_1^3+z_1^3\right)x_1y_1z_1$$ $$\implies x_1+y_1+z_1\nmid x_1^6+y_1^6+z_1^6$$ Since $x_1+y_1+z_1$ is coprime to $x_1,y_1,z_1$, we get $$x_1+y_1+z_1\nmid (x_1^6+y_1^6+z_1^6)x_1y_1z_1,$$ and so $x_1+y_1+z_1\nmid x_1^9+y_1^9+z_1^9$, etc. Therefore $x+y+z$ cannot in general (for all $x,y,z\in\mathbb Z_{\ge 1}$) divide $x^{3m}+y^{3m}+z^{3m}$ for any given $m\ge 1$. However, we easily get that $x+y+z$ always divides $x^n+y^n+z^n$ for $n$ not divisible by $3$, because $x+y+z\mid (x+y+z)xyz$ and $x+y+z\mid \left(x^2+y^2+z^2\right)xyz$ (since $x+y+z\mid x^2+y^2+z^2$ is given), so $x+y+z\mid x^4+y^4+z^4,\ x^5+y^5+z^5$, so $x+y+z\mid \left(x^4+y^4+z^4\right)xyz,\ \left(x^5+y^5+z^5\right)xyz$, so $x+y+z\mid x^7+y^7+z^7,\ x^8+y^8+z^8$, etc. Here's a more intuitive way to get the idea of considering powers of $2$. Added (below): in the same way we can prove that any $n=6k\pm1$ works. Note that $a+b+c\mid(a+b+c)^2-(a^2+b^2+c^2)=2(ab+bc+ca)$.
By the Fundamental Theorem of Symmetric Polynomials (FTSP), $a^n+b^n+c^n$ is an integer polynomial in $a+b+c$, $ab+bc+ca$ and $abc$. If $3\nmid n$, no term is a pure power of $abc$ (such a term would have degree divisible by $3$), so each term has at least one factor $a+b+c$ or $ab+bc+ca$. If we can find infinitely many $n$ such that the terms without a factor $a+b+c$ have a coefficient that is divisible by $2$, then we are done, because $a+b+c\mid2(ab+bc+ca)$. This suggests taking a look at the polynomial $a^n+b^n+c^n$ over $\Bbb F_2$. Note that over $\Bbb F_2$, $a^{2^n}+b^{2^n}+c^{2^n}=(a+b)^{2^n}+c^{2^n}=(a+b+c)^{2^n}$ is divisible by $(a+b+c)$. Because the polynomial given by the FTSP over $\Bbb F_2$ is the reduction modulo $2$ of that polynomial over $\mathbb Z$ (this is a consequence of the uniqueness given by the FTSP), this shows that the coefficients of those terms that have no factor $a+b+c$ are divisible by $2$, and we are done because $3\nmid2^n$. (In fact all coefficients except that of $(a+b+c)^{2^n}$ are divisible by $2$.)

Added later: Ievgen's answer inspired me to generalise the above approach to $n=6k\pm1$. Consider again $a^n+b^n+c^n$ as an integer polynomial in $abc$, $ab+bc+ca$, $a+b+c$ (which we can do by the FTSP). Because $3\nmid n$, no term has the form $(abc)^k$. It remains to handle the terms of the form $m\cdot(ab+bc+ca)^k(abc)^l$. If $a+b+c$ is odd, then $a+b+c\mid ab+bc+ca$ and we're done. If $a+b+c$ is even, at least one of $a,b,c$ is even, so $2\mid abc$, and hence $a+b+c\mid m\cdot(ab+bc+ca)^k(abc)^l$. (Note that $l>0$ because $n$ is odd.)

For any positive integer $x$ this is true: $x \leqslant x^2$ (from $1 \leqslant x$ for any positive integer $x$). So $a + b + c \leqslant a^2 + b^2 + c^2$. But for 2 positive integers $x$, $y$: $x$ is divisible by $y$ only if $x \geqslant y$. So $a + b + c \geqslant a^2 + b^2 + c^2$ if $a + b + c \mid a^2 + b^2 + c^2$.
From these 2 inequalities: $a + b + c = a^2 + b^2 + c^2$. So $a + b - a^2 - b^2 = c^2 - c$. Evaluation of the right part: $c^2 - c \geqslant 0$ (because $x \leqslant x^2$ for any positive integer $x$). Evaluation of the left part: $a + b - a^2 - b^2 \leqslant 0$ (adding the 2 inequalities $a - a^2 \leqslant 0$ and $b - b^2 \leqslant 0$). So the left part is $\leqslant 0$ and the right part is $\geqslant 0$. But they are equal, so $a + b - a^2 - b^2 = c^2 - c = 0$ and $c^2 = c$. $c$ is positive, so we can divide both parts of the last equation by $c$ and get $c = 1$. Similarly $b = 1$ and $a = 1$. So $a + b + c = 3$ and $a^n + b^n + c^n = 3$ for any positive $n$. $3$ is divisible by $3$, so $a + b + c \mid a^n + b^n + c^n$ for infinitely many positive integers $n$.
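The divisibility pattern claimed in the answers above can be checked numerically with the triple $(1,3,9)$ used as the counterexample (the script is my own illustration):

```python
# Check: for (a, b, c) = (1, 3, 9), s = a+b+c = 13 divides a^2+b^2+c^2 = 91,
# and s | a^n + b^n + c^n whenever 3 does not divide n, but not for n = 3.
a, b, c = 1, 3, 9
s = a + b + c

assert (a**2 + b**2 + c**2) % s == 0      # 91 = 7 * 13: hypothesis holds

for n in range(1, 60):
    if n % 3 != 0:
        assert (a**n + b**n + c**n) % s == 0

assert (a**3 + b**3 + c**3) % s != 0      # 757 is not a multiple of 13
print("all checks passed")
```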
There exists the following metric counterpart of the Harris result: Theorem (Banakh, Vovk, Wojcik): Each metric space $X$ can be (canonically) identified with the space $\pi_0(\circledast(X))$ of path-components of some complete metric space $\circledast(X)$. The space $\circledast(X)$ is a closed subspace of the complete oriented graph with vertices in the set $X$. The above theorem follows from Theorem 7.3 and Proposition 7.4 of this paper published in Fund. Math. 212 (2011), 145-173. More generally, the Theorem holds for weakly first-countable spaces containing no countable connected subspaces. To formulate the general and more precise version of the Theorem, I need to recall some definitions and results from this paper of Banakh, Vovk and Wojcik, which will be cited as [BVW]. So, I thank Jeremy Brazas for the opportunity to advertise my paper [BVW] :) Unfortunately, writing the paper [BVW] we did not know about the paper of Harris. After looking at Harris' paper, I discovered that his construction of the space $S(X)$ (with $\pi_0(S(X))\cong X$) is similar to our construction of $\circledast(X)$ with $\pi_0(\circledast(X))\cong X$. But our space $\circledast(X)$ is always a complete metric space. As a result, our construction works only for premetric (= weakly first-countable) spaces. Let us recall that a topological space $X$ is weakly first-countable if to each point $x\in X$ one can assign a decreasing sequence $(B_n(x))_{n\in\mathbb N}$ of subsets containing $x$ such that a subset $U\subset X$ is open if and only if for each point $x\in U$ there exists $n\in\mathbb N$ such that $B_n(x)\subset U$. By Proposition 3.7 in [BVW], a topological space $X$ is weakly first-countable if and only if the topology of $X$ is generated by a premetric $p$. A premetric on a set $X$ is any function $p:X\times X\to[0,\infty)$ such that $p(x,x)=0$ for all $x\in X$.
The topology generated by a premetric $p$ on $X$ consists of all sets $U\subset X$ such that for every $x\in U$ there exists $\varepsilon>0$ such that the $\varepsilon$-ball $B(x;\varepsilon):=\{y\in X:p(x,y)<\varepsilon\}$ is contained in $U$. A premetric space is a pair $(X,p)$ consisting of a set $X$ and a premetric $p$. To each premetric space $X$ we shall assign some special complete metric space $\circledast(X)$, called the cobweb of the premetric space $X$. The cobweb space is a closed subset of the complete oriented graph $\Gamma X$ over the set $X$. The complete oriented graph $\Gamma X$ is the set $X\cup\{(x,y,t)\in X\times X\times(0,1):x\ne y\}$ endowed with the path metric $d$ (in which every oriented edge $[x,y]=\{x,y\}\cup\{(x,y,t):0<t<1\}$ has length 1). For a premetric space $X$ endowed with a premetric $p$, the cobweb $\circledast(X)$ is defined as the closed subset $$\circledast(X):=X\cup\{(x,y,t)\in\Gamma(X):t\le 1-p(y,x)\}$$ of the graph $\Gamma X$. The compression map $\pi_X:\circledast(X)\to X$ assigns to each point $a\in \circledast(X)$ the point $a$ if $a\in X$, and the point $x$ if $a=(x,y,t)$ for some $x,y\in X$ and $t\in(0,1)$. The following theorem follows from Theorem 7.3 and Proposition 7.4 of [BVW]. General Theorem (Banakh, Vovk, Wojcik). Let $X$ be a premetric space. Then: The cobweb space $\circledast(X)$ is a complete metric space of dimension $\le 1$. The compression map $\pi_X:\circledast(X)\to X$ is a continuous quotient surjection with path-connected fibers $\pi_X^{-1}(x)$, $x\in X$. The space $\circledast(X)$ is connected if and only if $X$ is connected. If $X$ contains no countable connected subspaces, then the fibers of the compression map $\pi_X$ coincide with the path-components of $\circledast(X)$ and also with the separablewise components of $\circledast(X)$. We recall that the separablewise component of a point $x$ in a topological space $X$ is the union of all separable connected subspaces of $X$ that contain $x$.
It is clear that the path-component of $x$ is contained in the separablewise component of $x$. Since the topology of a weakly first-countable space is generated by a premetric, the General Theorem implies the following corollary, answering the question of Jeremy Brazas in the (strong) negative. Corollary 1. Each weakly first-countable space is homeomorphic to the space $\pi_0(M)$ of path-components of some complete metric space $M$. Taking for $X$ any countable connected Hausdorff space, we obtain the following surprising Example. There exists a connected Polish space $P$ such that $P$ has countably many path-connected components and each path-connected component is closed in $P$. It seems that for a separable space $X$ the space $\pi_0(X)$ can also be homeomorphic to $[0,1]$. A possible counterexample can look as follows: Let $$C=\big\{\sum_{i=1}^\infty\frac{x_i}{3^i}:(x_i)_{i\in\mathbb N}\in\{0,2\}^{\mathbb N}\big\}$$ be the standard Cantor set in the unit interval $I:=[0,1]$. Let $\mathcal J$ be the set of connected components of the complement $I\setminus C$. It is clear that each set $J\in\mathcal J$ is an open interval of length $\frac1{3^k}$ for some $k\ge 1$. For $n\in\{0,1\}$ let $\mathcal J_n$ be the subset of $\mathcal J$ consisting of intervals of length $\frac1{3^{2i+n}}$ for some $i\ge 0$. Let $J_n=\bigcup\mathcal J_n$. Now consider the following compact subset $$X:=(C\times[0,1])\cup(J_0\times\{0\})\cup(J_1\times\{1\}).$$ I hope that the space $\pi_0(X)$ of path-components of $X$ is homeomorphic to $[0,1]$.
Consider the situation in which one has to solve a specific instance of a parametrized family of polynomial systems $$ P = \{F(x,p) = (f_1(x,p), \ldots, f_n(x,p)) \mid p \in \mathbb{C}^m\}. $$ In order not to destroy the solution structure, it is desirable not to leave $P$ during the homotopy. This can be accomplished by using the homotopy $$H(x,t) := F(x, (1-t)p + tq)$$ where $p$ and $q$ are parameters in $\mathbb{C}^m$. Note that you have to provide the start solutions for this kind of homotopy. The syntax in HomotopyContinuation.jl to construct such a homotopy is as follows:

solve(F, startsolutions; parameters=params, start_parameters=q, target_parameters=p)

where p and q are vectors of parameter values for F, and params is a vector of variables that specify the parameters of F. Necessarily, length(params), length(p) and length(q) must all be equal. As an example, consider $$F(x,y,a,b) = \begin{bmatrix} x^2-a \\ xy-a+b \end{bmatrix}.$$ For tracking the solution $(x,y) = (1,1)$ from $(a,b) = (1,0)$ to $(a,b) = (2,5)$ we do the following:

julia> @polyvar x y a b
julia> F = [x^2 - a, x * y - a + b]
julia> startsolution = [[1, 1]]
julia> solve(F, startsolution; parameters=[a, b], start_parameters=[1, 0], target_parameters=[2, 5])
Result with 1 solutions
==================================
• 1 non-singular solution (1 real)
• 0 singular solutions (0 real)
• 1 paths tracked
• random seed: 772337
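To make the idea concrete without the package, here is a toy predictor-corrector tracker in Python for the same example. It is only a sketch of the straight-line parameter homotopy $H(x,t)=F(x,(1-t)p+tq)$ with a plain Newton corrector, not HomotopyContinuation.jl's actual algorithm; the step and iteration counts are my own choices.

```python
import numpy as np

# Toy tracker for the parameter homotopy applied to F = [x^2 - a, x*y - a + b]:
# interpolate the parameters linearly from start to target and correct with
# Newton's method at every step, re-using the previous solution as the guess.
def F(v, p):
    x, y = v
    a, b = p
    return np.array([x**2 - a, x * y - a + b])

def J(v, p):
    x, y = v
    return np.array([[2 * x, 0.0], [y, x]])   # Jacobian of F in (x, y)

def track(v, p_start, p_target, steps=200, newton_iters=5):
    p_start, p_target = np.asarray(p_start, float), np.asarray(p_target, float)
    for k in range(1, steps + 1):
        t = k / steps
        p = (1 - t) * p_start + t * p_target
        for _ in range(newton_iters):          # Newton corrector
            v = v - np.linalg.solve(J(v, p), F(v, p))
    return v

sol = track(np.array([1.0, 1.0]), [1, 0], [2, 5])
print(sol)  # ≈ [1.4142, -2.1213], i.e. (x, y) = (sqrt(2), -3/sqrt(2))
```

At the target $(a,b)=(2,5)$ the system forces $x^2=2$ and $xy=a-b=-3$; continuous tracking from $x=1$ picks the branch $x=\sqrt2$.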
Who should read this document

I recently learned about org mode for Emacs. Among other things, it allows you to mix text and R source code similar to Sweave, but the markup for text is very simple. Getting things working can be complicated, especially if one is not an experienced user. Therefore this document is intended to help novices with Emacs get up and running as fast as possible. Emacs is heavily keyboard based. In the following text, C- will indicate the Control key pressed together with another key. Likewise, M- is the Alt key (Meta key was the original name), and S- is the Shift key. So C-c indicates the Control key pressed simultaneously with the c key, and M-x indicates the Alt key pressed simultaneously with the x key.

A very minimalistic demo

The following example does not try to explain the options; I just want to help you to get up and running as fast as possible. Let us look at an example of org code:

#+TITLE: Using org mode with R
#+AUTHOR: Erich Neuwirth
* Minimalistic demo
Just a piece of R code producing output
#+BEGIN_SRC R :session *R* :exports both
123*456
#+END_SRC
It also allows including plots in output
#+BEGIN_SRC R :session *R* :results output graphics :file first.png :exports both
x <- (0:100)/100
y <- x^2
plot(x,y,type="l",xlab=expression(x),ylab=expression(f(x)==x^2),main="Quadratfunktion")
#+END_SRC
We can use the modern plotting package ~ggplot2~
#+BEGIN_SRC R :session *R* :results output graphics :file second.png :exports both
library(ggplot2)
x <- (0:100)/100
y <- x^2
qplot(x,y,geom="line",xlab=expression(x),ylab=expression(f(x)==x^2),main="Quadratfunktion")
#+END_SRC
And it can produce nice tables
#+BEGIN_SRC R :results output org :exports both
library(ascii)
a <- runif(100)
c <- "Quantiles of 100 random numbers"
b <- ascii(quantile(a),header=TRUE,include.colnames=TRUE)
print(b,type="org")
rm(a,b,c)
#+END_SRC
We even can embed source code to perform computations in-line: 1+2 = src_R[:session *R*]{1+2}
And we can embed mathematical formulas
$$\Phi(x\mid\mu,\sigma)=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{(t-\mu)^2}{2\sigma^2}}\,dt$$ Converting this org file into html produces this page. Converting this org file into TeX and pdf produces this page. If your Emacs is configured correctly, all you have to do is copy the above text into a file with extension .org and open this file in Emacs. In Emacs then just press the keys C-c C-e b. Putting the cursor in one of the code snippets in the file and pressing the keys C-c C-c will put the results of the computation (with additional markup) directly after the code snippet. All this will, however, not work if Emacs is not configured correctly.

Configuring Emacs

We describe how to set up Emacs on Windows and on OSX. To be able to run this in Vincent Goulet's Emacs (available from his site) for Windows and for OSX, Emacs has to be configured to make org mode with R work. Configuration of Emacs is usually done by statements in a file named .emacs in the directory Emacs considers your home directory. If your installation of Emacs is new, you might not yet have this file. You can open (or create if necessary) this file by starting Emacs and typing C-x C-f. Then the bottom line of the Emacs window will display ~/ and wait for you to enter a file name. Just type .emacs. Now we need to copy some commands into this file. On Windows, Emacs org mode needs some help to find the correct version of R. Since org mode and ESS (Emacs Speaks Statistics, used by org mode) are under very active development, this might change in the near future. On OSX, no special initialization is necessary for this. Therefore we have three slightly different versions of the initialization file for the different OSes.
On Windows with 32-bit R and a standard installation, the following code is needed in .emacs:

;; At first, we make sure that our modifications in .emacs
;; are applied _after_ default.el is read.
(setq inhibit-default-init t)
(load "default.el")
;; We ensure that Emacs can copy from and to the clipboard
(setq x-select-enable-clipboard t)
;; Now we set up Emacs to find R
;; The path to R might need to be changed
(setq-default inferior-R-program-name
              "C:/Program Files/R/R-2.15.3/bin/i386/Rterm.exe")
(setenv "PATH"
        (concat "C:\\Program Files\\R\\R-2.15.3\\bin\\i386" ";" (getenv "PATH")))
;; Configuring org mode to know about R and set some reasonable default behavior
(require 'ess-site)
(require 'org-install)
(org-babel-do-load-languages
 'org-babel-load-languages
 '((R . t)))
(add-hook 'org-babel-after-execute-hook 'org-display-inline-images)
(add-hook 'org-mode-hook 'org-display-inline-images)
(setq org-confirm-babel-evaluate nil)
(setq org-export-html-validation-link nil)
(setq org-export-allow-BIND t)
(setq org-support-shift-select t)
(setq org-src-fontify-natively t)

For 64-bit R in a standard installation you need:

;; At first, we make sure that our modifications in .emacs
;; are applied _after_ default.el is read.
(setq inhibit-default-init t)
(load "default.el")
;; We ensure that Emacs can copy from and to the clipboard
(setq x-select-enable-clipboard t)
;; Now we set up Emacs to find R
;; The path to R might need to be changed
(setq-default inferior-R-program-name
              "C:/Program Files/R/R-2.15.3/bin/x64/Rterm.exe")
(setenv "PATH"
        (concat "C:\\Program Files\\R\\R-2.15.3\\bin\\x64" ";" (getenv "PATH")))
;; Configuring org mode to know about R and set some reasonable default behavior
(require 'ess-site)
(require 'org-install)
(org-babel-do-load-languages
 'org-babel-load-languages
 '((R . t)))
(add-hook 'org-babel-after-execute-hook 'org-display-inline-images)
(add-hook 'org-mode-hook 'org-display-inline-images)
(setq org-confirm-babel-evaluate nil)
(setq org-export-html-validation-link nil)
(setq org-export-allow-BIND t)
(setq org-support-shift-select t)
(setq org-src-fontify-natively t)

For OSX on a Mac you need:

;; At first, we make sure that our modifications in .emacs
;; are applied _after_ default.el is read.
(setq inhibit-default-init t)
(load "default.el")
;; We ensure that Emacs can copy from and to the clipboard
(setq x-select-enable-clipboard t)
;; Configuring org mode to know about R and set some reasonable default behavior
(require 'ess-site)
(require 'org-install)
(org-babel-do-load-languages
 'org-babel-load-languages
 '((R . t)))
(add-hook 'org-babel-after-execute-hook 'org-display-inline-images)
(add-hook 'org-mode-hook 'org-display-inline-images)
(setq org-confirm-babel-evaluate nil)
(setq org-export-html-validation-link nil)
(setq org-export-allow-BIND t)
(setq org-support-shift-select t)
(setq org-src-fontify-natively t)

Copy the code for your OS into your Emacs window displaying the (initially possibly empty) .emacs file. Then save the file with C-x C-s and close Emacs with C-x C-c. You are now ready to run our demo.

Running the demo

Now start Emacs again, type C-x C-f and then type OrgWithR.org to open a new file. Copy the text from the first code window of this document into Emacs and type C-x C-s to save the file. Type C-c C-e and in the options window that appears type b. After a few seconds the html file will be displayed in a browser. Type C-c C-e and in the options window that appears type d. After a few seconds the pdf file will be displayed. Put the cursor into a code segment (somewhere between #+begin_src and #+end_src) and type C-c C-c. The results of running the code will appear just below the code segment. Close Emacs and open the file OrgWithR.org you just created.
You might be surprised that you see only the first few lines, like this:

#+TITLE: Using org mode with R
#+AUTHOR: Erich Neuwirth
* Minimalistic demo...

Putting the cursor (called point in Emacs speak) into the line "Minimalistic demo..." and pressing TAB will expand the file. Pressing TAB in this line again will collapse the text. This way, you can hide or show selected parts of your document while working with it.

Further tips

On Windows you are in for a surprise. Emacs by default does not use C-c/C-x/C-v for Copy/Cut/Paste. The Options menu, however, has an item to use the standard Windows keys for these operations. How do you learn Emacs and org mode? Emacs itself has a very nice tutorial accessible from the opening screen. Org mode tutorials can be found at http://orgmode.org/worg/org-tutorials/index.html. A tutorial on using org mode with R can be found at http://orgmode.org/worg/org-contrib/babel/languages/ob-doc-R.html. If you also want spell checking, just download and install Aspell. The Windows version is available at http://aspell.net/win32/. You need to install the dictionaries separately from this site. On Macs with OS X you can use cocoAspell, available from http://cocoaspell.leuski.net/. The installer from there does include the English dictionary, but it is not completely activated. To activate it, open a Terminal window and run the following commands:

cd /Library/Application\ Support/cocoAspell/aspell6-en-6.0-0
sudo ./configure
sudo make install

If you need to install other dictionaries, get the archive files from ftp://ftp.gnu.org/gnu/aspell/dict, unpack the archive(s) and run the ./configure and sudo make install commands in the unpacked directory.
3.6. Implementation of Softmax Regression from Scratch¶ Just as we implemented linear regression from scratch, we believe that multiclass logistic (softmax) regression is similarly fundamental and you ought to know the gory details of how to implement it from scratch. As with linear regression, after doing things by hand we will breeze through an implementation in Gluon for comparison. To begin, let's import our packages.

import d2l
from mxnet import autograd, np, npx, gluon
from IPython import display
npx.set_np()

We will work with the Fashion-MNIST dataset just introduced, cuing up an iterator with batch size 256.

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)

3.6.1. Initialize Model Parameters¶ Just as in linear regression, we represent each example as a vector. Since each example is a \(28 \times 28\) image, we can flatten each example, treating them as \(784\)-dimensional vectors. In the future, we'll talk about more sophisticated strategies for exploiting the spatial structure in images, but for now we treat each pixel location as just another feature. Recall that in softmax regression, we have as many outputs as there are categories. Because our dataset has \(10\) categories, our network will have an output dimension of \(10\). Consequently, our weights will constitute a \(784 \times 10\) matrix and the biases will constitute a \(1 \times 10\) vector. As with linear regression, we will initialize our weights \(W\) with Gaussian noise and our biases to take the initial value \(0\).

num_inputs = 784
num_outputs = 10

W = np.random.normal(0, 0.01, (num_inputs, num_outputs))
b = np.zeros(num_outputs)

Recall that we need to attach gradients to the model parameters. More literally, we are allocating memory for future gradients to be stored and notifying MXNet that we want gradients to be calculated with respect to these parameters in the first place.

W.attach_grad()
b.attach_grad()

3.6.2.
The Softmax¶ Before implementing the softmax regression model, let's briefly review how operators such as sum work along specific dimensions in an ndarray. Given a matrix X we can sum over all elements (the default) or only over elements along the same axis, i.e., down a column (axis=0) or along a row (axis=1). Note that if X is an array with shape (2, 3) and we sum over the columns (X.sum(axis=0)), the result will be a (1D) vector with shape (3,). If we want to keep the number of axes in the original array (resulting in a 2D array with shape (1, 3)), rather than collapsing out the dimension that we summed over we can specify keepdims=True when invoking sum.

X = np.array([[1, 2, 3], [4, 5, 6]])
print(X.sum(axis=0, keepdims=True), '\n', X.sum(axis=1, keepdims=True))

[[5. 7. 9.]]
[[ 6.]
 [15.]]

We are now ready to implement the softmax function. Recall that softmax consists of three steps: First, we exponentiate each term (using exp). Then, we sum over each row (we have one row per example in the batch) to get the normalization constant for each example. Finally, we divide each row by its normalization constant, ensuring that the result sums to \(1\). Before looking at the code, let's recall what this looks like expressed as an equation:

$$\mathrm{softmax}(\mathbf{X})_{ij} = \frac{\exp(X_{ij})}{\sum_k \exp(X_{ik})}$$

The denominator, or normalization constant, is also sometimes called the partition function (and its logarithm the log-partition function). The origins of that name are in statistical physics, where a related equation models the distribution over an ensemble of particles.

def softmax(X):
    X_exp = np.exp(X)
    partition = X_exp.sum(axis=1, keepdims=True)
    return X_exp / partition  # The broadcast mechanism is applied here

As you can see, for any random input, we turn each element into a non-negative number. Moreover, each row sums up to 1, as is required for a probability.
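The same computation can be written in dependency-free Python to check the two properties just mentioned; this is a sketch of my own for a single row, not the book's MXNet code:

```python
import math

def softmax_row(row):
    # Exponentiate each term, then divide by the row's
    # normalization constant so the entries sum to 1.
    exps = [math.exp(x) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax_row([1.0, 2.0, 3.0])
# Every entry is positive, the entries sum to 1, and larger
# inputs receive larger probabilities.
```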
Note that while this looks correct mathematically, we were a bit sloppy in our implementation, because we failed to take precautions against numerical overflow or underflow due to large (or very small) elements of the matrix, as we did in sec_naive_bayes.

X = np.random.normal(size=(2, 5))
X_prob = softmax(X)
X_prob, X_prob.sum(axis=1)

(array([[0.21324193, 0.33961776, 0.1239742 , 0.27106097, 0.05210521],
        [0.11462264, 0.3461234 , 0.19401033, 0.29583326, 0.04941036]]),
 array([1.0000001, 1.       ]))

3.6.3. The Model¶ Now that we have defined the softmax operation, we can implement the softmax regression model. The code below defines the forward pass through the network. Note that we flatten each original image in the batch into a vector of length num_inputs with the reshape function before passing the data through our model.

def net(X):
    return softmax(np.dot(X.reshape(-1, num_inputs), W) + b)

3.6.4. The Loss Function¶ Next, we need to implement the cross-entropy loss function, introduced in Section 3.4. This may be the most common loss function in all of deep learning because, at the moment, classification problems far outnumber regression problems. Recall that cross-entropy takes the negative log likelihood of the predicted probability assigned to the true label, \(-\log p(y|x)\). Rather than iterating over the predictions with a Python for loop (which tends to be inefficient), we can use integer array indexing, which allows us to select the appropriate terms from the matrix of softmax entries easily. Below, we illustrate this indexing on a toy example, with 3 categories and 2 examples.

y_hat = np.array([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y_hat[[0, 1], [0, 2]]

array([0.1, 0.5])

Now we can implement the cross-entropy loss function efficiently with just one line of code.

def cross_entropy(y_hat, y):
    return - np.log(y_hat[range(len(y_hat)), y])

3.6.5.
Classification Accuracy¶ Given the predicted probability distribution y_hat, we typically choose the class with the highest predicted probability whenever we must output a hard prediction. Indeed, many applications require that we make a choice. Gmail must categorize an email into Primary, Social, Updates, or Forums. It might estimate probabilities internally, but at the end of the day it has to choose one among the categories. When predictions are consistent with the actual category y, they are correct. The classification accuracy is the fraction of all predictions that are correct. Although we cannot optimize accuracy directly (it is not differentiable), it's often the performance metric that we care most about, and we will nearly always report it when training classifiers. To compute accuracy we do the following: First, we execute y_hat.argmax(axis=1) to gather the predicted classes (given by the indices of the largest entries in each row). The result has the same shape as the variable y. Now we just need to check how frequently the two match. Since the equality operator == is datatype-sensitive (e.g., an int and a float32 are never equal), we also need to convert both to the same type (we pick float32). The result is an ndarray containing entries of 0 (false) and 1 (true). Taking the mean yields the desired result.

# Save to the d2l package.
def accuracy(y_hat, y):
    return float((y_hat.argmax(axis=1).astype('float32') == y.astype('float32')).sum())

We will continue to use the variables y_hat and y defined earlier as the predicted probability distribution and label, respectively. We can see that the first example's prediction category is 2 (the largest element of the row is 0.6 with an index of 2), which is inconsistent with the actual label, 0. The second example's prediction category is 2 (the largest element of the row is 0.5 with an index of 2), which is consistent with the actual label, 2. Therefore, the classification accuracy rate for these two examples is 0.5.
y = np.array([0, 2])
accuracy(y_hat, y) / len(y)

0.5

Similarly, we can evaluate the accuracy of model net on the dataset (accessed via data_iter).

# Save to the d2l package.
def evaluate_accuracy(net, data_iter):
    metric = Accumulator(2)  # num_correct_examples, num_examples
    for X, y in data_iter:
        metric.add(accuracy(net(X), y), y.size)
    return metric[0] / metric[1]

Here Accumulator is a utility class that accumulates sums over multiple numbers.

# Save to the d2l package.
class Accumulator(object):
    """Sum a list of numbers over time"""
    def __init__(self, n):
        self.data = [0.0] * n
    def add(self, *args):
        self.data = [a + b for a, b in zip(self.data, args)]
    def reset(self):
        self.data = [0.0] * len(self.data)
    def __getitem__(self, i):
        return self.data[i]

Because we initialized the net model with random weights, the accuracy of this model should be close to random guessing, i.e., 0.1 for 10 classes.

evaluate_accuracy(net, test_iter)

0.0925

3.6.6. Model Training¶ The training loop for softmax regression should look strikingly familiar if you read through our implementation of linear regression in sec_linear_scratch. Here we refactor the implementation to make it reusable. First, we define a function to train for one data epoch. Note that updater is a general function to update the model parameters, which accepts the batch size as an argument. It can be either a wrapper of d2l.sgd or a Gluon trainer.
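The updater contract just described can be sketched in plain Python; `sgd_step` below is a hypothetical stand-in of my own (not `d2l.sgd` itself), operating on ordinary lists to show the scaling by batch size:

```python
def sgd_step(params, grads, lr, batch_size):
    # Minibatch SGD as described in the text: each parameter moves
    # against its gradient, scaled by lr / batch_size because the
    # loss was summed (not averaged) over the batch.
    return [p - lr * g / batch_size for p, g in zip(params, grads)]

new_params = sgd_step([1.0, 2.0], [10.0, -20.0], lr=0.1, batch_size=10)
```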
# Save to the d2l package.
def train_epoch_ch3(net, train_iter, loss, updater):
    metric = Accumulator(3)  # train_loss_sum, train_acc_sum, num_examples
    if isinstance(updater, gluon.Trainer):
        updater = updater.step
    for X, y in train_iter:
        # Compute gradients and update parameters
        with autograd.record():
            y_hat = net(X)
            l = loss(y_hat, y)
        l.backward()
        updater(X.shape[0])
        metric.add(float(l.sum()), accuracy(y_hat, y), y.size)
    # Return training loss and training accuracy
    return metric[0] / metric[2], metric[1] / metric[2]

Before showing the implementation of the training function, we define a utility class that draws data in an animation. Again, it aims to simplify the code in later chapters.

# Save to the d2l package.
class Animator(object):
    def __init__(self, xlabel=None, ylabel=None, legend=[], xlim=None,
                 ylim=None, xscale='linear', yscale='linear', fmts=None,
                 nrows=1, ncols=1, figsize=(3.5, 2.5)):
        """Incrementally plot multiple lines."""
        d2l.use_svg_display()
        self.fig, self.axes = d2l.plt.subplots(nrows, ncols, figsize=figsize)
        if nrows * ncols == 1:
            self.axes = [self.axes, ]
        # Use a lambda to capture arguments
        self.config_axes = lambda: d2l.set_axes(
            self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
        self.X, self.Y, self.fmts = None, None, fmts

    def add(self, x, y):
        """Add multiple data points into the figure."""
        if not hasattr(y, "__len__"):
            y = [y]
        n = len(y)
        if not hasattr(x, "__len__"):
            x = [x] * n
        if not self.X:
            self.X = [[] for _ in range(n)]
        if not self.Y:
            self.Y = [[] for _ in range(n)]
        if not self.fmts:
            self.fmts = ['-'] * n
        for i, (a, b) in enumerate(zip(x, y)):
            if a is not None and b is not None:
                self.X[i].append(a)
                self.Y[i].append(b)
        self.axes[0].cla()
        for x, y, fmt in zip(self.X, self.Y, self.fmts):
            self.axes[0].plot(x, y, fmt)
        self.config_axes()
        display.display(self.fig)
        display.clear_output(wait=True)

The training function then runs multiple epochs and visualizes the training progress.
# Save to the d2l package.
def train_ch3(net, train_iter, test_iter, loss, num_epochs, updater):
    animator = Animator(xlabel='epoch', xlim=[1, num_epochs], ylim=[0.3, 0.9],
                        legend=['train loss', 'train acc', 'test acc'])
    for epoch in range(num_epochs):
        train_metrics = train_epoch_ch3(net, train_iter, loss, updater)
        test_acc = evaluate_accuracy(net, test_iter)
        animator.add(epoch + 1, train_metrics + (test_acc,))

Again, we use mini-batch stochastic gradient descent to optimize the loss function of the model. Note that the number of epochs (num_epochs) and the learning rate (lr) are both adjustable hyper-parameters. By changing their values, we may be able to increase the classification accuracy of the model. In practice we'll want to split our data three ways into training, validation, and test data, using the validation data to choose the best values of our hyperparameters.

num_epochs, lr = 10, 0.1
updater = lambda batch_size: d2l.sgd([W, b], lr, batch_size)
train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, updater)

3.6.7. Prediction¶ Now that training is complete, our model is ready to classify some images. Given a series of images, we will compare their actual labels (first line of text output) and the model predictions (second line of text output).

# Save to the d2l package.
def predict_ch3(net, test_iter, n=6):
    for X, y in test_iter:
        break
    trues = d2l.get_fashion_mnist_labels(y)
    preds = d2l.get_fashion_mnist_labels(net(X).argmax(axis=1))
    titles = [true + '\n' + pred for true, pred in zip(trues, preds)]
    d2l.show_images(X[0:n].reshape(n, 28, 28), 1, n, titles=titles[0:n])

predict_ch3(net, test_iter)

3.6.8. Summary¶ With softmax regression, we can train models for multi-category classification. The training loop is very similar to that in linear regression: retrieve and read data, define models and loss functions, then train models using optimization algorithms.
As you'll soon find out, most common deep learning models have similar training procedures.

3.6.9. Exercises¶

1. In this section, we directly implemented the softmax function based on the mathematical definition of the softmax operation. What problems might this cause (hint: try to calculate the size of \(\exp(50)\))?
2. The function cross_entropy in this section is implemented according to the definition of the cross-entropy loss function. What could be the problem with this implementation (hint: consider the domain of the logarithm)?
3. What solutions can you think of to fix the two problems above?
4. Is it always a good idea to return the most likely label? E.g., would you do this for medical diagnosis?
5. Assume that we want to use softmax regression to predict the next word based on some features. What are some problems that might arise from a large vocabulary?
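The first three exercises above point toward a standard remedy; the sketch below (my own, in plain Python, not part of the chapter's code) uses the log-sum-exp trick so that neither exp nor log misbehaves when computing the cross-entropy of a softmax:

```python
import math

def cross_entropy_stable(logits, label):
    # log softmax via log-sum-exp:
    #   -log p(y|x) = logsumexp(x) - x_y,
    # where logsumexp(x) = m + log(sum_k exp(x_k - m)) with m = max(x).
    # Shifting by m keeps exp() in range, and we never take log of ~0.
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return lse - logits[label]

# A naive softmax would overflow on exp(1000); this stays finite:
loss = cross_entropy_stable([1000.0, 0.0], 0)
```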
I know that a band-limited signal is a signal which has a limited set of frequencies. What, then, is the relation between band limiting and $[-\pi, \pi]$, since $-\pi$ and $\pi$ are related to angles while frequencies are in Hertz?

$\pi$ is much more than something related to angles. Frequencies in Hertz are reciprocal to time in seconds. But for an abstract function $f: \mathbb{R}\to \mathbb{R}$, taken as a model of a generic continuous-variable signal, i.e., without a unit attached to the independent variable, it does make sense to be band-limited with a unitless constant bounding the interval in frequency. The characterization of band-limited signals is given by the Paley-Wiener theorem.

First of all, the concept of a band-limited signal exists in the context of analog signals, not digital ones. Digital signals are always (by their definition and existence) band-limited. Second, frequency in analog signals is expressed either as $f$ in Hertz (cycles per second, repetitions per second) or as $\Omega$ in radians per second, where the two are related by $$\Omega = 2 \pi f.$$ So, for example, a frequency of $f = 1$ kHz would equivalently make a radian frequency of $\Omega = 2\pi \cdot 1000 \approx 6283$ rad/s. When a band-limited analog signal $x_a(t)$ is sampled with a period of $T_s$ (and digitized afterwards in practice), the resulting discrete-time sequence $x[n]$ will have a discrete-time Fourier transform expression of $$ H(e^{j\omega}) = \frac{1}{T_s} \sum_k X_a\!\left( \frac{\omega + 2\pi k}{T_s}\right),$$ such that it will be frequency-normalized to the discrete-time frequency range $-\pi \leq \omega < \pi$, where $\omega$ is the discrete-time frequency in radians per sample and $X_a$ is the continuous-time Fourier transform (frequency content) of the analog signal $x_a(t)$.
For $k=0$ the base period of $H(e^{j \omega})$ becomes $$ H(e^{j \omega}) = \frac{1}{T_s} X_a\!\left( \frac{\omega}{T_s}\right), $$ where the frequency normalization becomes more apparent: setting $\Omega = \frac{\omega}{T_s}$ yields those analog radian frequencies of $X_a(\Omega)$ when $\omega$ of $H(e^{j \omega})$ is selected from $\omega \in [-\pi,\pi)$.
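The normalization $\omega = \Omega T_s$ is easy to check numerically; the short sketch below (my own illustration, plain Python) samples a 1 kHz sinusoid at $f_s = 8$ kHz and confirms the discrete-time frequency lands at $2\pi \cdot 1000 / 8000 = \pi/4$ radians per sample:

```python
import math

f = 1000.0        # analog frequency in Hz
fs = 8000.0       # sampling rate in Hz
Ts = 1.0 / fs     # sampling period in seconds

# Discrete-time frequency in radians per sample: omega = Omega * Ts
omega = 2 * math.pi * f * Ts

# Equivalently, one period of x[n] = cos(omega * n) spans fs / f samples.
samples_per_period = 2 * math.pi / omega
```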
Let $F$ be a totally real number field with integers $\mathcal{O}_F$ and $B$ a quaternion algebra over $F$ split at exactly one infinite place. Fix $n\geq 1$; as in the special case $F=\mathbb{Q}$, $B=M_2(\mathbb{Q})$, $\Gamma_1(p^n)$, one can associate to a finite place $\mathfrak{p}$ of $F$ where $B$ is split a geometric object: $M_{n,H}(B):=B^\times\backslash (B\otimes_\mathbb{Q} A)^\times/ U_1(\mathfrak{p}^n)\times H$, where $A$ is the ring of rational adeles, $U_1(\mathfrak{p}^n)$ is the standard compact open congruence subgroup of $(B\otimes F_\mathfrak{p})^\times$ (depending on a choice of orders) and $H$ is 'sufficiently small'. In the case $F\neq \mathbb{Q}$ this is called a Shimura curve. Carayol has shown that this curve has good reduction at $\mathfrak{p}$: it admits a smooth proper model over the $\mathfrak{p}$-integers of $F$. But it has no cusps, therefore no q-expansion principle is available. And the points of this curve have no direct moduli interpretation, so no universal family of 'abelian varieties + level structure + X' lives on it. One can still, though, by the usual adelic machinery, define good automorphic forms on these curves. By the lack of cusps, I guess we can call them all "cusp forms". In the "classical" (modular curve) case $Y_{\Gamma,\mathbb{Z}}$, there is another way to define modular forms: let $\tau\colon\mathcal{E}\to Y_{\Gamma,\mathbb{Z}}$ denote the universal family of elliptic curves over it and set $\omega:=\tau_*\Omega_{\mathcal{E}/Y}^1$. This line bundle extends to the cusps, and modular forms can be defined as sections of the higher tensor powers of this extended bundle. Question 1: Is there an analogue of this construction for Shimura curves? For example, from what I understand, Carayol proved the existence of good models for these curves also via a moduli interpretation (of unitary Shimura varieties).
Now for the title: if you have q-expansions at hand, you can prove that the classical (complex) pairing of cusp forms with the complex Hecke algebra gives rise to an integral p-adic perfect pairing $$T_{\Gamma,\mathbb{Z}_p}\times S_k(\Gamma,\mathbb{Z}_p(\zeta_N))\to \mathbb{Z}_p(\zeta_N).$$ For Shimura curves I only know of an adelic way to introduce the Hecke operators. Question 2: Is there a similar pairing as above?
( This article is based on a mathematics workshop for 12 year olds given by Warwick Evans and Alison Clark Jeavons in Cambridge in February 2000. The photos were taken at the ATE / NRICH Mathematics Superweek holiday for 10 to 13 year olds at Southam in August 2000. ) In this article we shall find out why there are only five regular polyhedra, that is solids where all the faces are regular polygons (triangles, squares, pentagons and hexagons). A shape is called 'regular' when all the sides are the same length, all the angles are the same, all the faces are the same and the pattern in which the faces meet at each vertex (the vertex form) is the same. We shall also explore the properties of the semi-regular polyhedra, the solids where the faces are two or more different regular polygons coming together in the same pattern at each vertex. In the pictures Ellie is holding a dodecahedron, which is one of the five regular polyhedra, and Vicky is holding a great rhombicosidodecahedron, which is a semi-regular polyhedron with square, hexagonal and octagonal faces. All you need to follow this article is very simple arithmetic, to know what an angle is, and to use simple logical thinking. Though it is not essential, you will be able to visualise the solids much more easily if you can use a construction kit (such as the plastic shapes which fit together made by Polydron, as shown in the photos) to make the solids while reading this article. The regular polyhedra are called the Platonic solids and the semi-regular ones the Archimedean solids, after two famous Greek mathematicians. Using the plastic polygons made by Polydron, we can discover the three regular tessellations made by triangles, squares and hexagons. Next we can introduce a notation to describe the vertex formed by each tessellation.
This notation, describing the number of edges of each polygon meeting at a vertex of a regular or semi-regular tessellation or solid, was devised by the Swiss mathematician Ludwig Schläfli (1814-1895). He was a schoolteacher who did mathematical research in his spare time. Each vertex in the hexagonal tessellation is surrounded by three 6-sided polygons, and we say that the vertex form is 666. The square tessellation has vertex form 4444. In a similar way, each of the vertices in the triangular tessellation is surrounded by six 3-sided polygons and has the vertex form 333333. The interior angles of the triangle, square and hexagon are $60^{\circ}$, $90^{\circ}$ and $120^{\circ}$ respectively, and it can be seen that the sum of the angles at a vertex in any of the three tessellations is $360^{\circ}$. We can see that these are the only possible regular plane tessellations because pentagons, with interior angles of $108^{\circ}$, cannot fit together around a vertex, and for polygons with more than six sides the interior angles are more than $120^{\circ}$, so it is impossible to fit three or more together around a vertex. In any solid, the number of faces at a vertex must be more than two. We begin with triangles. Using only triangles, each vertex must have fewer than 6 faces at a vertex (otherwise we end up with a plane tessellation). We can join three 3-gons (i.e. triangles) to make a vertex with vertex form 333. Carry on constructing a solid so that each vertex has form 333 and we arrive at the tetrahedron with 4 triangular faces. We can join four 3-gons to make a vertex with vertex form 3333, and this solid is the octahedron with 8 triangular faces. We can join five 3-gons (i.e. triangles) to make a vertex with form 33333, and this solid is the icosahedron with 20 triangular faces. Now move on to squares. To make a solid, we must have more than 2 faces at a vertex and fewer than 4 [Why fewer than 4?].
This leaves us with the vertex form 444 and we construct the cube (or hexahedron) with 6 square faces. Next comes the pentagon. Each vertex of any solid must have more than 2 faces. Once you convince yourself that we cannot have a vertex of form 5555, we are left with the regular solid with vertex form 555, called the dodecahedron, with 12 pentagonal faces. We cannot make a regular solid with any polygon with six or more sides [Why?]. We have shown that there are five and only five regular solids, and we can begin to complete the following table. In the photo you will see that Ross has constructed the five regular polyhedra (the Platonic Solids, named after Plato).

Name           Vertex Form   n(Faces) = F   n(Vertices) = V   n(Edges) = E   Angle Deficiency   Total Angle Deficiency
Tetrahedron    333           4
Octahedron     3333          8
Icosahedron    33333         20
Cube           444           6
Dodecahedron   555           12

After careful counting of vertices and edges - it isn't as easy as it sounds - we can complete the next two columns. Try to do this for yourself, then check your results. The conjecture that F + V - E = 2 should come from examining the numbers in the F, V and E columns. This is the famous (and so useful) Euler's Theorem. Here it is only a conjecture made from looking at our table, but it is in fact true - finding a proof is left to the reader. The next column - Angle Deficiency - supplies the core theme of this activity. Consider the tetrahedron. Its vertex form is 333, and so the sum of the angles at each of its vertices is $60^{\circ}+60^{\circ}+60^{\circ}=180^{\circ}$, and we say that the angle deficiency is $360^{\circ}-180^{\circ}$, that is $180^{\circ}$. It is what you get if you flatten the polyhedron at a vertex and measure the missing angle. Can you see that the angle sum at the vertex of any solid is bound to be less than $360^{\circ}$? [Why?] Definition: The angle deficiency at a vertex is $360^{\circ}$ minus (the angle sum at the vertex).
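The definition just given translates directly into code; the sketch below (my own, plain Python) computes the deficiency of a vertex form written as a list of polygon sizes:

```python
def interior_angle(n):
    # Interior angle of a regular n-gon, in degrees
    return 180.0 - 360.0 / n

def angle_deficiency(vertex_form):
    # 360 degrees minus the angle sum at the vertex
    return 360.0 - sum(interior_angle(n) for n in vertex_form)

d_tetra = angle_deficiency([3, 3, 3])  # 360 - 3*60  = 180
d_cube = angle_deficiency([4, 4, 4])   # 360 - 3*90  = 90
d_dodec = angle_deficiency([5, 5, 5])  # 360 - 3*108 = 36
```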
We can fill in the next column in the table with the angle deficiency for each Platonic solid provided we know the interior angle of each $n$-gon. A little diversion here takes us back to the plane; it links ideas of angle deficiency with curvature and it proves the formula for the interior angle of a regular $n$-gon, which is $(180-360/n)^{\circ}$. From this formula we see that the interior angles of 3-, 4-, and 5-gons are $60^{\circ}$, $90^{\circ}$ and $108^{\circ}$ respectively. If you walk all the way around a circle back to your starting point, you turn through a total angle of $360^{\circ}$ or (using another measure) $2\pi$ radians. Imagine walking around a regular $n$-gon starting from one of the vertices. Now the curvature, instead of being evenly spread around the edge, is concentrated at the vertices. At each vertex you make the same turn to walk along the next edge. When you get back to your starting point you turn through the same angle to face the direction you were facing at the start, and altogether you have turned through a total angle of $360^{\circ}$. Each of these turns is therefore $(360/n)^{\circ}$, the exterior angle at the vertex. The interior angle is therefore $(180-360/n)^{\circ}$. The final column in the table - total angle deficiency - is just the sum of the angle deficiencies at every vertex of a particular solid. Since, in each case, the vertex form is the same at each vertex, we can just multiply the number of vertices (column V) by the angle deficiency we have just calculated. We end up with the following table.

Name           Vertex Form   F    V    E    Angle Deficiency   Total Angle Deficiency
Tetrahedron    333           4    4    6    180                720
Octahedron     3333          8    6    12   120                720
Icosahedron    33333         20   12   30   60                 720
Cube           444           6    8    12   90                 720
Dodecahedron   555           12   20   30   36                 720

Now let us relax the condition of regularity. A semi-regular solid is one which is made up of more than one type of polygon but in which all vertices have the same vertex form.
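The completed table above can be verified programmatically; a sketch of my own in plain Python, checking both Euler's formula F + V - E = 2 and the constant 720° total angle deficiency for the five Platonic solids:

```python
# (name, F, V, E, angle deficiency per vertex in degrees),
# transcribed from the table above
platonic = [
    ("Tetrahedron", 4, 4, 6, 180),
    ("Octahedron", 8, 6, 12, 120),
    ("Icosahedron", 20, 12, 30, 60),
    ("Cube", 6, 8, 12, 90),
    ("Dodecahedron", 12, 20, 30, 36),
]

for name, F, V, E, deficiency in platonic:
    assert F + V - E == 2          # Euler's formula
    assert V * deficiency == 720   # total angle deficiency
```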
Suppose we choose the vertex form 366, so that each vertex is surrounded by one 3-gon and two 6-gons. Will this choice generate a semi-regular solid and, if so, how many hexagons and how many triangles do we need? We aim to use Euler's theorem and the Total Angle Deficiency theorem to help us fill in another row in our table. We already have the second and the last columns.

Name   Vertex Form   F   V   E   Angle Deficiency   Total Angle Deficiency
       366                                          720

Now we go through the following steps: the angle sum at a 366 vertex is $60^{\circ}+120^{\circ}+120^{\circ}=300^{\circ}$, so the angle deficiency is $60^{\circ}$. Since the total angle deficiency must be $720^{\circ}$, there are $720/60=12$ vertices. Three faces, and hence three edges, meet at each vertex, and each edge joins two vertices, so $E = 12\times 3/2 = 18$. Finally, Euler's theorem gives $F = E - V + 2 = 8$ faces.

Name   Vertex Form   F   V    E    Angle Deficiency   Total Angle Deficiency
       366           8   12   18   60                 720

But how many of the 8 faces are triangles and how many are hexagons? What is the shape called? Count the triangles first. Since each vertex has one 3-gon and there are 12 vertices, we can argue that the number of triangles must be $12\times 1= 12$, but in doing so we have counted each triangle 3 times over, once at each of its vertices, since each triangle has 3 vertices. So the number of triangles must be $12/3=4$. Count the hexagons next. We could just say $8-4=4$, but let's double check. Since each vertex has two 6-gons and there are 12 vertices, we can argue that the number of hexagons must be $12\times 2=24$, but in doing so we have counted each hexagon 6 times, since each hexagon has 6 vertices. So the number of hexagons must be $24/6=4$.

Name                    Vertex Form   F                      V    E    Angle Deficiency   Total Angle Deficiency
Truncated Tetrahedron   366           8 = 4 3-gons + 4 6-gons   12   18   60              720

In this photo Ross is holding one of these solids, and if you look at it carefully you might be able to see that it could be obtained by "chopping off" the vertices of a tetrahedron. It is called a truncated tetrahedron. Some vertex forms produce prisms, such as 344, the triangular prism, which has three square faces and two triangular faces. Other vertex forms produce anti-prisms.
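The step-by-step derivation above generalizes to any vertex form; a sketch of my own in plain Python, reproducing the truncated tetrahedron's numbers from the form 366 alone:

```python
def interior_angle(n):
    return 180.0 - 360.0 / n  # interior angle of a regular n-gon

def solid_from_vertex_form(form):
    # Angle deficiency at one vertex, then V from the 720-degree total,
    # E from counting edge ends (each vertex meets len(form) edges and
    # each edge joins two vertices), and F from Euler's theorem.
    deficiency = 360.0 - sum(interior_angle(n) for n in form)
    V = round(720.0 / deficiency)
    E = V * len(form) // 2
    F = E - V + 2
    # Each n-gon is counted once per vertex, i.e. n times in total.
    faces = {n: V * form.count(n) // n for n in set(form)}
    return V, E, F, faces

V, E, F, faces = solid_from_vertex_form([3, 6, 6])
# V = 12, E = 18, F = 8, with 4 triangles and 4 hexagons
```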
In addition to the prisms and anti-prisms you will produce all the Archimedean solids (named after Archimedes), whose details are on the completed table below. You may like to fill in your own table and then check your results with the table at the end of this article. Ben has made a truncated cube with vertex form 388. Can you see that it could be made by cutting off a tetrahedron from each vertex of a cube? Duncan and Suzanne have each made a rhombicuboctahedron with vertex form 3444. By using 'skeleton' pieces for the faces rather than solid plastic polygons you can look inside the polyhedra to study them. The first known mention of the thirteen "Archimedean solids" is in a manuscript from the fifth book of the "Collection" of the Greek mathematician Pappus of Alexandria, who lived at the beginning of the fourth century AD. You will find illustrations of the polyhedral solids and much interesting information about these solids and their geometrical and practical construction on the World Wide Web site maintained by Tom Gettys, and further information at http://mathworld.wolfram.com/ArchimedeanSolid.html

Name of Solid                  Vertex Form   Number of Faces   Number of Vertices   Number of Edges   Angle Deficiency   Total Angle Deficiency
Tetrahedron                    3 3 3         4                 4                    6                 180                720
Cube                           4 4 4         6                 8                    12                90                 720
Octahedron                     3 3 3 3       8                 6                    12                120                720
Dodecahedron                   5 5 5         12                20                   30                36                 720
Icosahedron                    3 3 3 3 3     20                12                   30                60                 720
Truncated Tetrahedron          3 6 6         8=4+4             12                   18                60                 720
Truncated Cube                 3 8 8         14=8+6            24                   36                30                 720
Truncated Octahedron           4 6 6         14=6+8            24                   36                30                 720
Truncated Dodecahedron         3 10 10       32=20+12          60                   90                12                 720
Truncated Icosahedron          5 6 6         32=12+20          60                   90                12                 720
Cuboctahedron                  3 4 3 4       14=8+6            12                   24                60                 720
Icosidodecahedron              3 5 3 5       32=20+12          30                   60                24                 720
Snub Dodecahedron              3 3 3 3 5     92=80+12          60                   150               12                 720
Rhombicuboctahedron            3 4 4 4       26=8+18           24                   48                30                 720
Great Rhombicosidodecahedron   4 6 10        62=30+20+12       120                  180               6                  720
Rhombicosidodecahedron         3 4 5 4       62=20+30+12       60                   120               12                 720
Great Rhombicuboctahedron      6 4 8         26=8+12+6         48                   72                15                 720
Snub Cube                      3 3 3 3 4     38=32+6           24                   60                30                 720

Sadly Warwick died in July 2000. He was much loved by his colleagues and students as an inspiring teacher with original ideas and a wonderful way of helping people to understand and to enjoy mathematics. He had a great zest for life.
The spectra of gamma and alpha decays are both discrete, i.e. the $\alpha$-particles and the $\gamma$-rays take on only discrete energy values when emitted from a decaying nucleus. Why is it, then, that the $\beta^{\pm}$ particles can take on continuous values? The main thing that distinguishes beta decay from the other two is that it is a three-body problem, i.e. the nucleus does not only decay into an electron/positron, but also into an electron-neutrino/antineutrino. I don't see yet how this immediately implies that the spectrum of the electrons is continuous, though. The way I understand the two-body decays is that the initial nucleus spontaneously decays into the two bodies, i.e. a smaller nucleus and a gamma or alpha particle. As the energy levels in both the nuclei are quantized, only certain values for the energy of the photon and the helium nucleus are allowed, since energy and momentum need to stay conserved. Does the third particle change things in the way that there basically are two things (i.e. the electron and the neutrino in the beta decay) that are not restricted by an inner energy level hierarchy in the way the nuclei are, thus allowing the energy given up by the nucleus to be split arbitrarily (and continuously) between the electron and the neutrino? If this is the wrong explanation, please correct me.
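The two-body part of this intuition can be made quantitative: with the parent at rest, momentum conservation fixes each fragment's kinetic energy uniquely for a given decay energy Q. A sketch of my own (plain Python, non-relativistic, illustrative numbers):

```python
def two_body_energies(Q, m1, m2):
    # Parent at rest: p1 = -p2 and E = p^2 / (2m), so
    # E1/E2 = m2/m1 and E1 + E2 = Q gives a single fixed split:
    E1 = Q * m2 / (m1 + m2)
    return E1, Q - E1

# Alpha-decay-like numbers (illustrative units: MeV, nucleon masses):
E_alpha, E_recoil = two_body_energies(Q=5.0, m1=4.0, m2=208.0)
# The light fragment carries almost all of Q; there is exactly one
# allowed value. With a third body (electron + antineutrino + nucleus)
# the same conservation laws instead permit any E1 in [0, E_max],
# because the remaining energy and momentum can be shared between
# the other two particles in a continuum of ways.
```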
I am trying to graphically simulate a series of springs in 2D. Now one of the forces I am stuck with calculating is the damping force. The given formula is $F = -k_d v$. I know that $v$ is the velocity, but I can't seem to find how to calculate $k_d$. For a viscous damper, the decay in the free oscillation amplitude is exponential (it is geometric for hysteretic damping and linear for Coulomb damping). So if you have the time history of the amplitude of your decay and you know it is a viscous damper (which is what the equation you gave describes), then you can measure the amplitude $A$ at two consecutive peaks and calculate the logarithmic decrement: $$\gamma = \ln \left(\frac{A_{t_n}}{A_{t_{n+1}}}\right) $$ You can then find the damping ratio that gives this decay as: $$\zeta = \frac{\gamma}{\sqrt{4 \pi^2 + \gamma^2}}$$ where then of course $\zeta = k_d/(2\sqrt{k m})$. So given a spring with unknown damping coefficient but known stiffness, you can attach a known mass to it, measure its response to a disturbance, and determine the damping coefficient from that. Since you are just going for aesthetics, you can pick your damping constants arbitrarily. I would actually recommend that you play with it and see how it influences the solution; it's actually pretty cool to visualize. All you do is pick values of $\zeta \in [0.0, 2.0]$ (the upper bound is really limitless, but not much will change when it is greater than $2.0$). Then you can compute your $k_d$ based on $k$ and $m$. Depending on your time integration, you may find that $\zeta = 0$ is unstable; you might need something nominal to stabilize the scheme. When $\zeta = 1$, the system is called critically damped and you should not see much oscillation at all (it will be driven to steady state without oscillating). I say you won't see much because, as a system of springs under numerical integration, it won't be exactly critically damped. "Damping" in a system or model is the physical means by which energy can be dissipated.
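To make the last step concrete, here is a small sketch (my addition; values are illustrative) that converts a chosen damping ratio $\zeta$ into $k_d$ via $\zeta = k_d/(2\sqrt{km})$ and integrates a single damped spring with semi-implicit Euler:

```python
import math

def damping_coefficient(zeta, k, m):
    """k_d from the damping ratio zeta, stiffness k and mass m,
    using zeta = k_d / (2*sqrt(k*m)) from the answer above."""
    return 2.0 * zeta * math.sqrt(k * m)

def simulate(k, m, k_d, x0=1.0, dt=1e-3, steps=20000):
    """Semi-implicit Euler for m*x'' = -k*x - k_d*x'.
    Returns the list of positions over time."""
    x, v = x0, 0.0
    xs = []
    for _ in range(steps):
        a = (-k * x - k_d * v) / m
        v += a * dt          # update velocity first (semi-implicit)
        x += v * dt
        xs.append(x)
    return xs
```

With $\zeta = 1$ (here $k_d = 2$ for $k = m = 1$) the mass settles without crossing zero; with $\zeta = 0.1$ it overshoots visibly, which is the behavior the answer describes.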
The model you cite, $F=-k_dv$, is an approximation known as 'viscous' damping, often used to model energy losses such as those of surfaces sliding against one another (friction). In viscous damping the force opposes the direction of motion and is linearly proportional to the velocity. But if you research further, you'll soon learn there are all kinds of models that scientists and engineers have proposed to more closely approximate energy loss in a system. But getting back to your original question: how to determine $k_d$. I don't believe there is any analytical way to derive it. The elastic stress and strain in a spring create heat, which is energy loss, but that is just too complex to begin trying to model. You either have to (1) guess and adjust it so that your damped oscillations in simulation match the data, or (2) use the data and the model together to fit the model parameters, for example by least squares, or (3) get it from the spring supplier, as ACuriousMind suggested. But you'll find that most spring manufacturers do not supply such a parameter.
The JKY ABMC model (taken from Jabbour, et al. 2001) parameterizes the binomial model (in a risk-neutral world) such that, $u = e^{r\Delta t} + e^{r\Delta t}\sqrt{e^{\sigma^2\Delta t} - 1}$ $d = e^{r\Delta t} - e^{r\Delta t}\sqrt{e^{\sigma^2\Delta t} - 1}$ JKY continue and say that this is equivalent to, $u = 1 + \sigma\sqrt{\Delta t} + R\Delta t + \mathcal O(\Delta t^\frac{3}{2})$ $d = 1 - \sigma\sqrt{\Delta t} + R\Delta t + \mathcal O(\Delta t^\frac{3}{2})$ I'm having trouble seeing this rigorously. Specifically, I can find that the first term $e^{r\Delta t} = 1 + R\Delta t + \mathcal O(\Delta t^2)$ from the Taylor expansion of $e^x$, but I'm having trouble seeing how the second term contributes the $\pm\sigma\sqrt{\Delta t}$ and how it leads to the restriction of the error to $\mathcal O(\Delta t^\frac{3}{2})$. Thank you
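Not part of the original question, but a quick numeric check (with arbitrary $r$ and $\sigma$) makes the claimed order plausible: if the error of the first-order expansion is $\mathcal O(\Delta t^{3/2})$, shrinking $\Delta t$ by a factor of 10 should shrink the error by about $10^{3/2} \approx 31.6$.

```python
import math

def u_exact(r, sigma, dt):
    """u from the JKY parameterization."""
    a = math.exp(r * dt)
    return a + a * math.sqrt(math.exp(sigma**2 * dt) - 1.0)

def u_approx(r, sigma, dt):
    """First-order expansion 1 + sigma*sqrt(dt) + r*dt."""
    return 1.0 + sigma * math.sqrt(dt) + r * dt

r, sigma = 0.05, 0.2
errors = [abs(u_exact(r, sigma, dt) - u_approx(r, sigma, dt))
          for dt in (1e-2, 1e-3, 1e-4)]
# If the error is O(dt^{3/2}), consecutive errors shrink by ~31.6x.
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```

The observed ratios cluster near $31.6$, consistent with a leading error term proportional to $\Delta t^{3/2}$ (it comes from expanding $\sqrt{e^{\sigma^2\Delta t}-1} = \sigma\sqrt{\Delta t}\,(1 + \sigma^2\Delta t/4 + \cdots)$ and multiplying by $e^{r\Delta t}$).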
I have a few aligned lists of coefficients. One set of coefficients has three columns and the other has four. At the moment I display them with two align environments:

\begin{align}
\gamma_1 &= 8/15 & \gamma_2 &= 5/12 & \gamma_3 &= 3/4 \\
\zeta_1 &= 0 & \zeta_2 &= -17/60 & \zeta_3 &= -5/12 \\
\beta_1 &= 4/15 & \beta_2 &= 1/15 & \beta_3 &= 1/6
\end{align}
\begin{align}
a_0 &= 0 & a_1 &= 8/15 & a_2 &= 2/3 & a_3 &= 1
\end{align}

The alignment and spacing between elements in each row is exactly as I want, but there is too much space between the three-column coefficients and the four-column coefficients. Essentially, I want the alignment to reset at a certain point so that the next line of the align environment has four columns and is centered. I learned about the aligned environment as I searched for an answer. The question asked there is essentially what I am asking, but the code provided there does not do what I want. This is the closest I can get:

\begin{align}
\begin{aligned}
\gamma_1 &= 8/15 & \gamma_2 &= 5/12 & \gamma_3 &= 3/4 \\
\zeta_1 &= 0 & \zeta_2 &= -17/60 & \zeta_3 &= -5/12 \\
\beta_1 &= 4/15 & \beta_2 &= 1/15 & \beta_3 &= 1/6 \\
\end{aligned} \\
\begin{aligned}
a_0 &= 0 & a_1 &= 8/15 & a_2 &= 2/3 & a_3 &= 1
\end{aligned}
\end{align}

This has two problems: the coefficients are compressed rather than expanded as they are with two aligns, and each aligned environment gets one equation number. I'd prefer to refer to each set of coefficients ($\gamma$, $\zeta$, $\beta$, and $a$) directly. Likely I could manipulate the spacing at the bottom and/or top of the align environment and use my original code, but that's sloppy, and I'm confident a simple way to do what I want exists that I'm just not aware of. Also, the $a_0$ and $a_3$ coefficients are required to be 0 and 1 respectively by definition, so I could leave one out, but I want to include both of them for clarity.
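One possible compromise (a sketch I am adding, not a canonical answer): put all four rows in a single `align`, which removes the extra vertical space and gives every row its own equation number, so each coefficient set can carry its own `\label`. The trade-off is that the four-column row shares the three-column alignment points instead of being centered.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
\gamma_1 &= 8/15 & \gamma_2 &= 5/12  & \gamma_3 &= 3/4   \label{eq:gamma} \\
\zeta_1  &= 0    & \zeta_2  &= -17/60 & \zeta_3 &= -5/12 \label{eq:zeta}  \\
\beta_1  &= 4/15 & \beta_2  &= 1/15  & \beta_3  &= 1/6   \label{eq:beta}  \\
a_0 &= 0 & a_1 &= 8/15 & a_2 &= 2/3 & a_3 &= 1           \label{eq:a}
\end{align}
\end{document}
```

`align` tolerates rows with different numbers of column pairs, so the $a$ row simply extends one pair further right; whether that reads acceptably in your document is a judgment call.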
Least Median of Squares regression problem: \[\begin{align}\min\>&z\\&\delta_i=1 \Rightarrow \> -z \le r_i \le z\\&\sum_i \delta_i = h\\&r_i = y_i - \sum_j X_{i,j} \beta_j\\&\delta_i \in \{0,1\} \end{align}\] Here \(X\) and \(y\) are data. Modern solvers like Cplex and Gurobi support indicator constraints. This allows us to pass the above model directly to a solver without resorting to other formulations (such as binary variables with big-M constraints or SOS1 structures). This particular model is interesting: it has an unbounded LP relaxation. E.g. the start of the Cplex log shows:

Nodes Cuts/
0 0 unbounded 0
. . .

Gurobi has some real problems with this model:

Optimize a model with 48 rows, 97 columns and 188 nonzeros
Root relaxation: unbounded, 77 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
0 0 postponed 0 - - - - 0s
...
139309129 6259738 0.02538 65 23 0.26206 - - 28.3 28775s

This model does not seem to finish. The model has only 47 discrete variables, so this log indicates something is seriously wrong: this model should solve in a matter of seconds. Gurobi was never able to establish a lower bound (column BestBd). Notice that Gurobi translates the implications to SOS1 variables (see [1] for the SOS1 version of this model). There is a simple workaround: add the bound \(z \ge 0\). Now things solve very fast. We have a normal bound and no more of these "postponed" nodes.

Root relaxation: objective 0.000000e+00, 52 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
0 0 0.00000 0 24 - 0.00000 - - 0s
. . .

The bound \(z\ge 0\) can be deduced from the constraints \(-z \le r_i \le z\) (we know \(h\) of them have to hold). Presolvers are not able to find this out automatically. See [2] for some other interesting observations on indicator constraints. Update: Gurobi has identified the problem and it has been fixed (available in the next version).
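To make the objective concrete, here is a small pure-Python evaluation (my sketch, not from the original post) of the LMS objective for a fixed \(\beta\): \(z(\beta)\) is the \(h\)-th smallest \(|r_i|\), which is exactly what the MIP minimizes over \(\beta\) and the choice of the \(h\) residuals.

```python
def lms_objective(X, y, beta, h):
    """z(beta): the smallest z such that at least h residuals lie in
    [-z, z]. This equals the h-th smallest |r_i|, the quantity the
    indicator-constraint MIP minimizes jointly over beta and delta."""
    residuals = [yi - sum(xij * bj for xij, bj in zip(xi, beta))
                 for xi, yi in zip(X, y)]
    return sorted(abs(r) for r in residuals)[h - 1]
```

For any feasible \((\beta, \delta)\) the MIP's \(z\) is at least this value, which also shows why \(z \ge 0\) is a valid bound: the \(h\)-th smallest absolute residual can never be negative.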
References

[1] Integer Programming and Least Median of Squares Regression, http://yetanothermathprogrammingconsultant.blogspot.com/2017/11/integer-programming-and-least-median-of.html
[2] Belotti, P., Bonami, P., Fischetti, M., et al., On handling indicator constraints in mixed integer programming, Comput Optim Appl (2016) 65:545.
Given a probability space $(\Omega, \mathscr{F}, P)$ and a Wiener process $(W_t)_{t \geq 0}$, define the filtration $\mathscr{F}_t = \sigma(W_u : u \leq t)$. Let $(B_t)_{t \geq 0}$ where $B_t = W_t^3 - 3tW_t$. Show that $E[B_t|\mathscr{F}_s] = B_s$ whenever $s < t$. I think this all comes down to manipulation since there are martingales somewhere. My attempt: Splitting up into $E[W_t^3|\mathscr{F}_s] - 3E[tW_t|\mathscr{F}_s]$ doesn't do anything since those guys aren't martingales? So, I tried splitting it up into: $E[W_t(W_t^2 - 3t)|\mathscr{F}_s]$ $= E[W_t(W_t^2 - t - 2t)|\mathscr{F}_s]$ $= E[W_t(W_t^2 - t) - 2tW_t|\mathscr{F}_s]$ $= E[W_t(W_t^2 - t)|\mathscr{F}_s] - 2E[tW_t|\mathscr{F}_s]$ $W_t$ is not $\mathscr{F}_s$-measurable, so we can't take that out... $tW_{1/t}$ is Brownian and thus a martingale, but I don't know about $tW_t$... $cW_{t/c^2}$ is Brownian and thus a martingale, but I don't think we can set $c = t$... Help please?
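A sanity check one can run (my addition, not part of the question): conditionally on $\mathscr{F}_s$, write $W_t = W_s + X$ with $X \sim N(0, t-s)$ independent of $\mathscr{F}_s$. Using only the moments $E[X] = E[X^3] = 0$ and $E[X^2] = t-s$, the conditional expectation collapses to $B_s$ exactly:

```python
def cond_exp_B(w, s, t):
    """E[W_t^3 - 3 t W_t | W_s = w], computed via W_t = w + X with
    X ~ N(0, t - s): E[(w+X)^3] = w^3 + 3 w (t-s), E[w+X] = w."""
    v = t - s
    e_wt3 = w**3 + 3.0 * w * v   # odd moments of X vanish
    e_wt = w
    return e_wt3 - 3.0 * t * e_wt

def B(w, s):
    """B_s = W_s^3 - 3 s W_s evaluated at W_s = w."""
    return w**3 - 3.0 * s * w
```

Algebraically, $w^3 + 3w(t-s) - 3tw = w^3 - 3sw$, which is the martingale identity the exercise asks for.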
Let $\{N(t), t\geq 0\}$ be a Poisson process with rate $\lambda$, $S_n$ the instant of the $n$-th arrival and $T_n$ the $n$-th interarrival time, that is, $T_n = S_n - S_{n-1}$, $n \geq 1$. Now consider the following result: Theorem. Given that $N(t) = n$, the $n$ arrival times $S_1, S_2, \dots, S_n$ have the same distribution as the order statistics corresponding to $n$ independent random variables uniformly distributed on the interval $(0,t)$. I would like to know how to calculate $\mathbb{E}[S_4 | N(1) = 2]$ using the theorem above. I have already solved it using the memorylessness property of the exponential distribution, since $T_i \sim Exponential(\lambda)$, and it went like: you can call $S_4 = 1 + T_3 + T_4$, then \begin{align*} \mathbb{E}[S_4 | N(1) = 2] &= \mathbb{E}[1 + T_3 + T_4] \\ &= 1 + \mathbb{E}[T_3 + T_4] \\ &= 1 + \frac{2}{\lambda}, \end{align*} since $(T_3 + T_4) \sim Gamma(2, \lambda)$, so I know the result I should get. My attempt: we can write $S_4 = (T_1 + T_2) + T_3 + T_4 = S_2 + T_3 + T_4$, so it follows \begin{align*} \mathbb{E}[S_4 | N(1) = 2] &= \mathbb{E}[S_2 + T_3 + T_4 | N(1) = 2] \\ &= \mathbb{E}[S_2 | N(1) = 2] + \mathbb{E}[T_3 + T_4] \\ &= \frac{1}{2} + \frac{2}{\lambda}, \end{align*} since increments are independent, $(S_2|N(1) = 2) = \max \{U_1, U_2\}$, $U_i \sim Uniform(0,1)$, and $(T_3 + T_4) \sim Gamma(2, \lambda)$. What have I done incorrectly? I'd get the correct result if I wrote instead $S_4' = S_1 + S_2 + T_3 + T_4$, but that's absurd since \begin{align*} S_4' &= S_1 + S_2 + T_3 + T_4 \\ &= (T_1) + (T_1 + T_2) + T_3 + T_4 \\ &= T_1 + (T_1 + T_2 + T_3 + T_4) \\ &= T_1 + S_4. \end{align*} In addition to that, in these MIT freely available online class notes from a "Discrete Stochastic Processes" course, on page $92$, we have equation $(2.46)$: \begin{align} \mathbb{E}[S_i|N(t) = n] = \frac{it}{n+1}, \end{align} which in my attempt would yield a completely different result: \begin{align} \mathbb{E}[S_2|N(1) = 2] = \frac{2}{3}. \end{align} How to proceed?
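A Monte Carlo check (my addition; $\lambda = 2$ chosen arbitrarily) supports both the memorylessness answer $\mathbb{E}[S_4 | N(1)=2] = 1 + 2/\lambda = 2$ and the order-statistics value $\mathbb{E}[S_2 | N(1)=2] = 2/3$:

```python
import random

random.seed(12345)
lam = 2.0
s2_samples, s4_samples = [], []
for _ in range(200_000):
    # simulate arrivals S_1..S_4 via i.i.d. Exp(lam) interarrival gaps
    t, arrivals = 0.0, []
    while len(arrivals) < 4:
        t += random.expovariate(lam)
        arrivals.append(t)
    # condition on N(1) = 2 by rejection: S_2 <= 1 < S_3
    if arrivals[1] <= 1.0 < arrivals[2]:
        s2_samples.append(arrivals[1])
        s4_samples.append(arrivals[3])

mean_s2 = sum(s2_samples) / len(s2_samples)
mean_s4 = sum(s4_samples) / len(s4_samples)
```

The sample means land near $2/3$ and $2$ respectively, which is consistent with the second attempt's error being the use of $\mathbb{E}[S_2|N(1)=2] = 1/2$ (and of unconditioned $T_3$) rather than with the theorem itself.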
An accurate plant model is the linchpin of control system development using Model-Based Design. With a well-constructed plant model, engineers can verify the functionality of their control system, conduct closed-loop model-in-the-loop tests, tune gains via simulation, optimize the design, and run what-if analyses that would be difficult or risky to do on the actual plant. Despite these advantages, engineers are sometimes reluctant to commit the time and resources required to create and validate a plant model. Concerns include how much time it will take to run a simulation, how much domain and tool knowledge will be required to build and validate the model, and what type of equipment will be needed to acquire hardware test data for building and validating the model. This article describes a workflow for creating a permanent magnet synchronous machine (PMSM) plant model using MATLAB ® and Simulink ® and commonly available lab equipment. The workflow involves three steps: Execute tests Identify model parameters from test data Verify parameters via simulation We used the plant model to build and tune a closed-loop PMSM control system model. We ran step response and coast-down tests using the controller model in simulation and on hardware using an xPC Target™ turnkey real-time testing system. We found close agreement between the simulation and hardware results, with normalized root mean square deviation (NRMSD) below 2% for key signals such as rotor velocity and motor phase currents (Figure 1). The Plant Model and Its Parameters The PMSM plant model, developed with SimPowerSystems™, includes the motor and a load—in this example, an acrylic disc. The model has nine parameters that define its behavior: one (disc inertia) associated with the load and eight associated with the motor (Figure 2). 
We conducted five tests to characterize these parameters: the bifilar pendulum test, the back EMF test, the friction test, the coast-down test, and the DC voltage step test (Table 1). In this article, we will focus on the coast-down test and the DC voltage step test. These tests demonstrate progressively more sophisticated methods of parameter identification, and illustrate extracting parameter values via curve fitting and parameter estimation, respectively.

| Test | Parameters Identified | Identification Method |
|---|---|---|
| Bifilar pendulum test | Disc inertia (\(H_d\)) | Calculation |
| Back EMF test | Number of poles (\(P\)), flux linkage constant (\(A_{pm}\)), torque constant (\(K_t\)) | Calculation |
| Friction test | Viscous damping coefficient (\(b\)), Coulomb friction (\(J_0\)) | Curve fitting |
| Coast-down test | Rotor inertia (\(H\)) | Curve fitting |
| DC voltage step test | Resistance (\(R\)), inductance (\(L\)) | Parameter estimation |

For each test, we describe the test setup and then explain how we conducted the test, acquired the data, extracted the parameter value, and verified it. Characterizing Rotor Inertia with the Coast-Down Test To characterize the rotor inertia (\(H\)) we spin the rotor up to an initial speed (\(\omega_{r0}\)) and measure the rotational speed (\(\omega_r\)) as the rotor coasts to a stop. Using this measured result, the rotor inertia can be identified by curve fitting the equation for \(\omega_r\) to the measured rotational speed during the period when the motor is coasting to a stop. The differential equation [1] describes the mechanical behavior of the motor. The coast-down test is set up so that the load torque (\(T_{load}\)) is always \(0\). Once the motor is up to an initial, steady-state speed, the motor is turned off, so that the electromagnetic driving torque (\(T_{em}\)) is also \(0\).
Under these conditions the solution to [1] is given by equation [2].

\[\begin{equation}\tag{1}\frac{d\omega_r}{dt}=\frac{1}{H}(T_{em}-b\omega_r-J_0-T_{load})\end{equation}\]

If \(T_{em}=0\) and \(T_{load} = 0\), then

\[\begin{equation}\tag{2}\omega_r=\left(\omega_{r0}+\frac{J_0}{b}\right)e^{-\frac{b}{H}t}-\frac{J_0}{b}\end{equation}\]

where:

\(\omega_r\) is the rotational speed of the rotor shaft
\(\omega_{r0}\) is the initial rotational speed of the rotor shaft
\(J_0\) and \(b\) are the Coulomb friction and viscous damping coefficient, respectively, characterized from a separate friction test
\(T_{em}\) is the electromagnetic driving torque (0 during this test)
\(T_{load}\) is the load torque (0 during this test)

Conducting the Test and Acquiring the Data In the lab we created an open-loop Simulink test model to drive the motor to an initial speed of 150 radians per second, at which time the motor drive was turned off and the rotor coasted to a stop. Throughout the test the model captured the output of the rotational speed sensor. Using Simulink Coder™ and xPC Target, we deployed this model to an xPC Target turnkey real-time system. We executed the model using xPC Target, and imported the rotor speed data into MATLAB for analysis. Extracting and Verifying the Parameter Values After running the tests, we plotted the measured speed data in MATLAB and used Curve Fitting Toolbox™ to fit equation [2] for the rotor angular velocity (\(\omega_r\)) to the measured speed data while the rotor was coasting to a halt. Using the value of \(H\) from the curve fit, we evaluated equation [2] from the point at which the motor started coasting and plotted the results with the original test data (Figure 3). As Figure 3 shows, equation [2] with the value of \(H\) from the curve fit closely predicts the motor speed during the coast-down test. We used a model to verify our parameter identification result.
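As a language-neutral sketch of this step (my addition; \(b\) and \(J_0\) are hypothetical stand-ins for the friction-test values, and a crude grid search stands in for Curve Fitting Toolbox), one can fit \(H\) by minimizing the squared error of equation [2] against the coast-down data:

```python
import math

# Hypothetical friction parameters and the article's initial speed.
b, J0, w0 = 2.0e-6, 1.0e-4, 150.0
H_true = 3.2177e-6   # rotor inertia used to generate synthetic data

def omega(t, H):
    """Equation [2]: coast-down speed for a trial rotor inertia H."""
    return (w0 + J0 / b) * math.exp(-b * t / H) - J0 / b

# Synthetic "measured" data; real data would come from the speed sensor.
ts = [i * 0.01 for i in range(200)]
measured = [omega(t, H_true) for t in ts]

def sse(H):
    """Sum of squared errors between the model and the data."""
    return sum((omega(t, H) - m) ** 2 for t, m in zip(ts, measured))

# Grid search over a band around a rough guess, as a curve-fit stand-in.
H_fit = min((H_true * (0.5 + i / 1000) for i in range(1001)), key=sse)
```

The fit window should end before \(\omega_r\) reaches zero, since equation [2] (with constant Coulomb friction) only describes the motion while the rotor is still turning.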
Using the rotor inertia value obtained from the coast-down test (3.2177e-06 kg m^2 in our PMSM model), we ran a simulation of the coast-down test in Simulink. We then plotted and compared the simulated results with the measured results (Figure 4). The results matched closely, with a normalized root mean square deviation (NRMSD) of about 2%. Characterizing Resistance and Inductance with the DC Voltage Step Test In the DC voltage step test a DC voltage is applied across the motor phase A and phase B connections and the resulting current is measured. Electrically, under these conditions, a three-phase PMSM behaves like a circuit with two series resistors and two series inductors (Figure 5). The measured current (\(i\)) is used to find the resistance and inductance parameter values. During the test the rotor is held motionless to avoid complicating the analysis with back EMF waveforms, which tend to oppose the current flow. To avoid burning out the motor with the rotor motionless, a current limiting resistor (\(R_{limit}\)) is added and a step pulse rather than a steady DC voltage is used. Conducting the Test and Acquiring the Data We again used xPC Target and an xPC Target turnkey real-time system to conduct the test. In Simulink we developed a model that produced a series of 24-volt pulses roughly 2.5 milliseconds in duration. We deployed this model to our xPC Target system using Simulink Coder, and applied the voltage pulse across the phase A and phase B terminals of the PMSM. We measured the applied voltage and the current flowing through the motor using an oscilloscope, and using Instrument Control Toolbox™ we read the measured data into MATLAB, where we plotted the results (Figure 6). Extracting and Verifying the Parameter Values Extracting the phase resistance from the measured data required only the application of Ohm’s law (\(R = V/I\)) using the steady-state values for voltage and current.
For the PMSM we calculated the resistance as 23.26 volts / 2.01 amps ≈ 11.6 ohms. By subtracting 10 ohms (the value of the current limiting resistor) and dividing the result by 2 to account for the two phase resistances in series, we calculated the motor phase resistance to be 0.8 ohms. Characterizing the inductance required a more sophisticated approach. At first glance, it looks as if we could have used curve fitting, as we did when characterizing the rotor inertia. However, due to the internal resistance of the DC supply, the measured DC voltage decays from an initial value of 24 volts at the start of the test, when the current into the circuit is 0, to a steady-state value of 23.26 volts after the current is flowing in the circuit. Because the input voltage is not a pure step signal, the results from curve fitting the solution to the series RL circuit equation would not be accurate. To overcome this difficulty we opted for a more robust approach using parameter estimation and Simulink Design Optimization™. The advantage of this approach is that it requires neither a pure step input nor curve fitting. We modeled the motor’s equivalent series RL circuit with Simulink and Simscape™ (Figure 7). Simulink Design Optimization applied the measured voltage as an input to the model, and, with the value of the limiting resistor (R_limit) and the motor phase resistance (R_hat) already known, estimated the value of the inductance (L_hat) to make the current predicted by the model match the measured current data as closely as possible. To verify the values that we had obtained for phase resistance (0.8 ohms) and inductance (1.15 millihenries), we plugged the values into our PMSM model and stimulated the model with the same input that we used to stimulate the actual motor. We compared the simulation results with our measured results (Figure 8). The results matched closely, with an NRMSD of about 3%.
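The estimation step can be sketched language-neutrally (my addition; an ideal step input and a simple parameter sweep stand in for the measured voltage trace and Simulink Design Optimization):

```python
import math

# Values mirroring the article's setup; the true inductance is what
# the sweep is supposed to recover from the synthetic data.
R_limit, R_phase, L_true = 10.0, 0.8, 1.15e-3
R_total = R_limit + 2 * R_phase   # two phase resistances in series
V = 23.26                          # steady-state applied voltage

def current(t, L):
    """Ideal step response of the series RL circuit: the two phase
    inductances in series give total inductance 2L, so
    i(t) = (V/R) * (1 - exp(-R t / (2 L)))."""
    return (V / R_total) * (1.0 - math.exp(-R_total * t / (2.0 * L)))

ts = [i * 1e-5 for i in range(250)]           # 2.5 ms window, 10 us steps
measured = [current(t, L_true) for t in ts]   # synthetic "scope" data

def sse(L):
    return sum((current(t, L) - m) ** 2 for t, m in zip(ts, measured))

# Parameter sweep as a stand-in for Simulink Design Optimization.
L_hat = min((L_true * (0.5 + i / 1000) for i in range(1001)), key=sse)
```

With an ideal step the steady-state current is \(V/R \approx 2.01\) amps, matching the article's Ohm's-law calculation; the article's approach feeds the sagging measured voltage into the model instead, precisely because the real input is not this ideal step.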
Using the Plant Model to Design the Controller After identifying and verifying all key parameters, our PMSM plant model was ready to use in the development of the motor controller. We used Simulink Design Optimization to tune the proportional and integral gains of the controller’s outer loop, the velocity regulator. We ran closed-loop simulations to verify the functionality of the controller model, and used Simulink Coder to generate code from the model, which we deployed to an xPC Target turnkey real-time target machine. As a final controller verification step, we ran step response and coast-down simulations in Simulink and hardware tests using the deployed controller code on an xPC Target turnkey real-time system. We compared simulation and hardware test results for rotor velocity and phase current, and once again found close agreement between the model and the hardware, with NRMSD below 2% in both cases (Figure 9). Summary Development of the PMSM plant model highlighted two parameter identification tests. Data was acquired via a sensor for the coast-down test, and with Instrument Control Toolbox via an oscilloscope for the DC voltage step test. We extracted data via curve fitting for the coast-down test and parameter estimation for the DC voltage step test. We verified all parameter values by comparing simulation results against measured test data, which enabled us to produce a plant model that we could trust as we developed and tuned the controller. All this work can be done early in the development process, well before embedded code is generated for the control system, enabling engineers to find and eliminate problems with the requirements and the design before hardware testing begins. These benefits typically far outweigh the costs associated with creating the plant model, particularly if the model can be reused on other projects. 
We would like to acknowledge the contribution of Professor Heath Hofmann of the University of Michigan, who recommended test procedures for characterizing a PMSM and allowed us to use his lab facilities for the initial phase of this project.
We usually call equations like $$\frac{d}{dt} \frac{\partial L}{\partial \dot{q_i}} - \frac{\partial L}{\partial q_i} = 0$$ "equations of motion," because they are equations that tell us how the variables of our system (here $q_i$) evolve in time. Indeed, in general, the solution to $n$ second order differential equations involves $2n$ integration constants (or initial conditions) in the solution. However, most people would not call these integration constants "conservation laws." In general usage, a "conserved quantity" $Q$ is a function of the configuration variables (here $q_i$ and $\dot q_i$) that does not change in time when the configuration variables evolve according to the equations of motion: $$\frac{d}{dt} Q(q_i, \dot q_i) = 0.$$ Note that $Q(q_i, \dot q_i)$ does not depend on $t$ explicitly; it only depends on $t$ insofar as $q_i$ and $\dot q_i$ do. However, an initial condition depends on $q_i$, $\dot q_i$, and $t$. You need to know $t$ in order to know "how far to turn back the clock" to find the initial position and velocity. A slick "proof" of Noether's theorem goes as follows. Say you have some differentiable group of transformations that leave your Lagrangian invariant. Imagine changing a path in configuration space by an infinitesimal group action, using a tiny number $\varepsilon$. For example, an infinitesimal translation in the $x$-direction in 3D space ($i = 1, 2, 3$) would be given by $$q_1 \to q_1 + \varepsilon$$$$q_2 \to q_2$$$$q_3 \to q_3$$$$\dot q_i \to \dot q_i$$ and an infinitesimal rotation in the $xy$-plane would be given by $$q_1 \to q_1 + \varepsilon q_2$$$$q_2 \to q_2 - \varepsilon q_1$$$$\dot q_1 \to \dot q_1 + \varepsilon \dot q_2$$$$\dot q_2 \to \dot q_2 - \varepsilon \dot q_1$$$$q_3 \to q_3$$$$\dot q_3 \to \dot q_3$$ Under these transformations, the Lagrangian $L(q_i, \dot q_i)$ will not change its value. 
In other words, the change in the Lagrangian can be expressed as $$\delta L(q_i, \dot q_i) = \varepsilon A(q_i, \dot q_i)$$ where $A = 0$ if the group action is a symmetry. Here is the slick part: now imagine that the parameter $\varepsilon$ is time-dependent, i.e. $\varepsilon(t)$. For our above two actions, the transformations would then become $$q_1 \to q_1 + \varepsilon$$$$\dot q_1 \to \dot q_1 + \dot \varepsilon$$$$q_{2} \to q_{2}$$$$q_{3} \to q_{3}$$$$\dot q_{2} \to \dot q_{2}$$$$\dot q_{3} \to \dot q_{3}$$ and $$q_1 \to q_1 + \varepsilon q_2$$$$q_2 \to q_2 - \varepsilon q_1$$$$\dot q_1 \to \dot q_1 + \varepsilon \dot q_2 + \dot \varepsilon q_2$$$$\dot q_2 \to \dot q_2 - \varepsilon \dot q_1 - \dot \varepsilon q_1$$$$q_3 \to q_3$$$$\dot q_3 \to \dot q_3$$ (where the extra term above comes from the product rule when differentiating by $t$). Now, $\varepsilon(t)$ and $\dot \varepsilon(t)$ are both tiny numbers that change paths in configuration space. That means that, just doing a first-order Taylor expansion, the change in $L$ under these transformations can be expressed as $$\delta L = \varepsilon A + \dot \varepsilon B$$ where the $A$ is the same $A$ as before, meaning $A = 0$ if the transformation is a symmetry. Now, on actual paths, $\delta S = 0$ for any tiny variation we make to our path. (That is just the principle of least action.) That includes our tiny group action variation. Therefore, on actual paths, $$0 = \delta S = \int \delta L dt = \int \dot \varepsilon B dt = - \int \varepsilon \dot B dt.$$ (In the last step we integrated by parts and imposed boundary conditions $\varepsilon = 0$ on the boundary of integration.) Therefore, if $\delta S$ is to be $0$ for any $\varepsilon$, we must have $$\dot B = 0$$ so $B$ is a conserved quantity. Note that if our transformation wasn't a symmetry, then $A \neq 0$ and $$\dot B = A$$ meaning that $B$ would change in time and not be a conserved quantity.
This concludes the proof that symmetries give conservation laws, and also instructs you how to find said conserved quantities. Now this is all nice and interesting. Symmetries imply conservation laws. In a sense, we have understood where "conserved quantities" come from (symmetries). Conserved quantities are very useful in physics because they usually make analyzing the system much easier. For example, even in intro physics, the conservation of momentum and energy are always used to make solving for the motion of a particle much easier. In more complicated examples, like for example a gas of many particles, the evolution of the system is far too complicated to ever hope to describe. However, if you know a few conserved quantities (like energy, for example) you can still get a pretty good idea of how the system behaves. In quantum field theory, quantum fields are also governed by Lagrangians. However, it is often difficult to figure out exactly what the Lagrangian of quantum fields should be based on experimental data. Something that is straightforward to ascertain from experimental data, however, are conserved quantities, like charge, lepton number, baryon number, weak hypercharge, and many others. Experimentalists can figure out what these conserved quantities are, and then theorists will cook up Lagrangians with symmetries that have the right conserved quantities. This greatly aids theorists in figuring out the fundamental laws of physics. Considerations of symmetries and conserved quantities historically played a large role in piecing together the standard model, and continue to play a crucial role in theorists trying to figure out what lies beyond it. EDIT: So, to answer your question proper, any system of differential equations will have integration constants (A.K.A. initial conditions).
However, from equations of motion derived from a Lagrangian (and all known physical laws can be written with Lagrangians) we have extra symmetries that carry important physical meaning. Furthermore, exact solutions to the differential equations are usually impossible to obtain for any moderately complex system. Therefore, finding initial conditions is usually a waste of time, while Noether's theorem is easy to use.
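As a toy illustration of the theorem (my own sketch, not part of the original answer): for two particles on a line with a potential depending only on $q_1 - q_2$, translation $q_i \to q_i + \varepsilon$ is a symmetry and the Noether charge $B$ is the total momentum $p_1 + p_2$, which even a simple integrator conserves to rounding error because the internal forces are equal and opposite.

```python
def simulate(steps=10000, dt=1e-3):
    """Symplectic-Euler evolution of two unit masses coupled by
    V = (k/2) (q1 - q2)^2; returns total momentum at each step."""
    q1, q2 = 0.0, 1.0
    p1, p2 = 0.3, -0.1
    k = 2.0
    momenta = []
    for _ in range(steps):
        f = -k * (q1 - q2)   # force on particle 1; particle 2 feels -f
        p1 += f * dt
        p2 += -f * dt        # equal and opposite: p1 + p2 is unchanged
        q1 += p1 * dt        # unit masses
        q2 += p2 * dt
        momenta.append(p1 + p2)
    return momenta

momenta = simulate()
```

The individual momenta oscillate, but their sum stays at its initial value $0.2$ throughout the run, which is exactly the conserved $B$ that the translation symmetry predicts.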
1. Perturbations As already mentioned, the Jahn–Teller effect has its roots in group theory. The essence of the argument is that the energy of the compound is stabilised upon distortion to a lower-symmetry point group. This distortion may be considered to be a normal mode of vibration, with the corresponding vibrational coordinate $q$ labelling the "extent of distortion". There is one condition on the vibrational mode: it cannot transform as the totally symmetric irreducible representation of the molecular point group, as such a vibrational mode cannot bring about any distortion in the molecular geometry (it may lead to a change in equilibrium bond length, but not in the shape of the molecule).$\require{begingroup} \begingroup\newcommand{\En}[1]{E_n^{(#1)}}\newcommand{\ket}[1]{| #1 \rangle}\newcommand{\n}[1]{n^{(#1)}}\newcommand{\md}[0]{\mathrm{d}}\newcommand{\odiff}[2]{\frac{\md #1}{\md #2}}$ In the undistorted geometry (i.e. $q = 0$), the electronic Hamiltonian is denoted $H_0$. The corresponding unperturbed electronic wavefunction is $\ket{\n{0}}$, and the electronic energy is $\En{0}$. We therefore have $$H_0 \ket{\n{0}} = \En{0}\ket{\n{0}} \tag{1}$$ Here, the Hamiltonian, wavefunction, and energy are all functions of $q$. We can expand them as Taylor series about $q = 0$: $$\begin{align}H &= H_0 + q \left(\odiff{H}{q}\right) + \frac{q^2}{2}\left(\frac{\md^2 H}{\md q^2}\right) + \cdots \tag{2} \\\ket{n} &= \ket{\n{0}} + q\ket{\n{1}} + \frac{q^2}{2}\ket{\n{2}} + \cdots \tag{3} \\E_n &= \En{0} + q\En{1} + \frac{q^2}{2}\En{2} + \cdots \tag{4}\end{align}$$ In the new geometry (i.e. 
$q \neq 0$), the Schrodinger equation must still be obeyed and therefore $$H\ket{n} = E_n \ket{n} \tag{5}$$ By substituting in equations $(2)$ through $(4)$ into equation $(5)$, one can compare coefficients of $q$ to reach the results: $$\begin{align}\En{1} &= \left< \n{0} \middle| \odiff{H}{q} \middle| \n{0} \right> \tag{6} \\\En{2} &= \left< \n{0} \middle| \frac{\md^2 H}{\md q^2} \middle| \n{0} \right> + 2\sum_{m \neq n}\frac{\left|\left<m^{(0)} \middle|(\md H/\md q)\middle|\n{0} \right>\right|^2}{\En{0} - E_m^{(0)}} \tag{7}\end{align}$$ The derivation of equations $(6)$ and $(7)$ will not be discussed further here. 1 Distortions that arise due to the $\En{1}$ term are called first-order Jahn–Teller distortions, and distortions that arise from the $\En{2}$ term are called second-order Jahn–Teller distortions. 2. The first-order Jahn–Teller effect Recall that $$E_n = \En{0} + \color{red}{q\En{1}} + \cdots \tag{8}$$ Therefore, if $\En{1} > 0$, then stabilisation may be attained with a negative value of $q$; if $\En{1} < 0$, then stabilisation may be attained with a positive value of $q$. These simply represent distortions in opposite directions along a vibrational coordinate. A well-known example is the distortion of octahedral $\ce{Cu^2+}$: there are two possible choices, one involving axial compression, and one involving axial elongation. These two distortions arise from movement along the same vibrational coordinate, except that one has $q > 0$ and the other has $q < 0$. In order for there to be a first-order Jahn–Teller distortion, we therefore require that $$\En{1} = \left<\n{0}|(\md H/\md q)| \n{0}\right> \neq 0 \tag{9}$$ Within group theory, the condition for the integral to be nonzero is that the integrand must contain a component that transforms as the totally symmetric irreducible representation (TSIR). 
Mathematically, $$\Gamma_{\text{TSIR}} \in \Gamma_n \otimes \Gamma_{(\md H/\md q)} \otimes \Gamma_n \tag{10}$$ We can simplify this slightly by noting that the Hamiltonian, $H$, itself transforms as the TSIR. Therefore, $\md H/\md q$ transforms as $\Gamma_q$, and the requirement is that $$\Gamma_{\text{TSIR}} \in \Gamma_n \otimes \Gamma_q \otimes \Gamma_n \tag{11}$$ In all point groups, for any non-degenerate irrep $\Gamma_n$, we find that $\Gamma_n \otimes \Gamma_n = \Gamma_{\text{TSIR}}$. Therefore, if $\Gamma_n$ is non-degenerate, then $$\Gamma_n \otimes \Gamma_q \otimes \Gamma_n = \Gamma_q \neq \Gamma_{\text{TSIR}} \tag{12}$$ and the molecule is stable against a first-order Jahn–Teller distortion. Therefore, all closed-shell molecules ($\Gamma_n = \Gamma_{\text{TSIR}}$) do not undergo first-order Jahn–Teller distortions. However, what will happen if $\Gamma_n$ is degenerate? Now, the product $\Gamma_n \otimes \Gamma_n$ contains other irreps apart from the TSIR. 2 If the molecule possesses a vibrational mode that transforms as one of these irreps, then the direct product $\Gamma_n \otimes \Gamma_q \otimes \Gamma_n$ will contain the TSIR. In a rather inelegant article, 3 Hermann Jahn and Edward Teller worked out the direct products for every important point group and found that: stability and degeneracy are not possible simultaneously unless the molecule is a linear one... In other words, if a non-linear molecule has a degenerate ground state, then it is susceptible towards a (first-order) Jahn–Teller distortion. Take, for example, octahedral $\ce{Cu^2+}$. This has a $\mathrm{^2E_g}$ term symbol (see this question) - which is doubly degenerate. The symmetric direct product $\mathrm{E_g \otimes E_g = A_{1g} \oplus E_g}$. Therefore, if we have a vibrational mode of symmetry $\mathrm{E_g}$, then distortion along this vibrational coordinate will occur to give a more stable compound. 
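The direct-product bookkeeping behind condition (11) can be checked mechanically with the standard character-table reduction formula. Here is a minimal Python sketch using the smaller $C_{3v}$ point group as a stand-in for $O_\mathrm{h}$ (the same procedure applies, the table is just larger); the characters are those of the standard $C_{3v}$ table.

```python
# Brute-force check of the symmetry condition: does Gamma_n (x) Gamma_q (x) Gamma_n
# contain the TSIR? Illustrated in C3v (classes: E, 2C3, 3sigma_v).
from fractions import Fraction

class_sizes = [1, 2, 3]
order = sum(class_sizes)  # |G| = 6
irreps = {
    "A1": [1, 1, 1],   # the totally symmetric irrep (TSIR)
    "A2": [1, 1, -1],
    "E":  [2, -1, 0],
}

def product(*char_lists):
    """Character of a direct product = elementwise product of characters."""
    out = [1] * len(class_sizes)
    for ch in char_lists:
        out = [a * b for a, b in zip(out, ch)]
    return out

def decompose(chars):
    """Reduction formula: n_i = (1/|G|) * sum_c g_c * chi(c) * chi_i(c)."""
    out = {}
    for name, ch in irreps.items():
        n = sum(Fraction(g * x * y) for g, x, y in zip(class_sizes, chars, ch)) / order
        if n:
            out[name] = int(n)
    return out

# Degenerate state Gamma_n = E: E (x) E = A1 + A2 + E, so an E (or A2) vibration
# makes the triple product contain the TSIR. (Strictly, by note (2), only the
# symmetric part A1 + E should be kept; the full product is shown for simplicity.)
print(decompose(product(irreps["E"], irreps["E"])))
# Non-degenerate state Gamma_n = A2 with an E-symmetry vibration: no TSIR,
# hence no first-order distortion.
print(decompose(product(irreps["A2"], irreps["E"], irreps["A2"])))
```

The same two-line check with the $O_\mathrm{h}$ character table reproduces the $\mathrm{E_g \otimes E_g}$ decomposition quoted above.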
Recall that the vibrational mode cannot transform as the TSIR, so we can neglect the $\mathrm{A_{1g}}$ term. What does an $\mathrm{e_g}$ vibrational mode look like? Here is a diagram: 4 It's an axial elongation, which happens to match what we know of Cu(II). However, there is a catch. The vibrational mode is doubly degenerate (the other $\mathrm{e_g}$ mode is not shown), and any linear combination of these two degenerate vibrational modes also transforms as $\mathrm{e_g}$. Therefore, the exact form of the distortion can be any linear combination of these two degenerate modes. It can also involve negative coefficients, i.e. it might feature axial compression instead of elongation; there is no way to find that out using arguments purely based on symmetry. Therefore, in this case, it is simply a coincidence that the $\mathrm{e_g}$ mode displayed is exactly the same as the form of the distortion seen in Cu(II). Nevertheless, it is encouraging to see that the axial elongation indeed transforms as $\mathrm{e_g}$ - which lends credence to the group theoretical analysis above. On top of that, there's also no indication of how much distortion there is. That depends on (amongst other things) the value of $\En{1}$, and all we have said is that it is nonzero - we have not said how large it is. This is what is meant by "impossible to predict the extent or the exact form of the distortion". 3. The second-order Jahn–Teller effect Pearson has written an article on second-order Jahn–Teller effects. 5 For the second-order term, the energy correction is of the form $$E_n = \En{0} + q\En{1} + \color{red}{\frac{q^2}{2}\En{2}} \cdots \tag{13}$$ Here, the $q^2$ term means that $\En{2}$ has to be negative if we want to see a second-order Jahn–Teller distortion. Unlike the first-order case, if $\En{2} > 0$, there will not be any distortion. The second-order correction to the energy comprises two terms. The first term, $\left<\n{0}|(\md^2 H/\md q^2)|\n{0}\right>$, is always positive. 
(The proof of this is left to the reader. Hint: use the fact that $\md H/\md q$ is Hermitian.) It may be interpreted as a restoring force that tries to bring the nuclei back to their original positions, and it is related to the fact that if the electronic state remains unperturbed (i.e. $\ket{n} = \ket{\n{0}}$), the unperturbed nuclear positions represent the most stable nuclear configuration. The second term has the form $$\sum_{m \neq n}\frac{\left|\left<m^{(0)} \middle|(\md H/\md q)\middle|\n{0} \right>\right|^2}{\En{0} - E_m^{(0)}} \tag{14}$$ which may seem like a slight monstrosity, but it is actually much easier to analyse than it looks. The summation over $m$ indicates that we are going to count every single electronic state $\ket{m}$ that is not the ground state $\ket{n}$. Since the numerator is a square modulus, it is either zero or positive, and since $\En{0} < E_m^{(0)}$, the denominator is always negative; therefore, this term is necessarily either zero or negative. If $\En{2}$ is to be negative, then we need the second term to dominate the first. For this to occur, there are two prerequisites: (i) the denominator must be small in magnitude, i.e. $\ket{m}$ must be a low-lying excited state such that the energy gap $\Delta E = E_m^{(0)} - \En{0}$ is small; (ii) the numerator must not be zero, i.e. there must be a low-lying excited state $\ket{m}$ of the appropriate symmetry such that $\Gamma_m \otimes \Gamma_q \otimes \Gamma_n$ contains $\Gamma_{\text{TSIR}}$. The first condition usually means that it will suffice to consider the first few excited states. In many of the examples I know of, the excited state that mixes with the ground state is the first excited state. In such cases one can even drop the sum entirely and set $\ket{m}$ to be the first excited state. These symmetry requirements are much less restrictive than previously, and second-order Jahn–Teller distortions tend to be much more widely seen than first-order distortions.
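Equations (6) and (7) are easy to verify numerically on a toy model. The sketch below uses arbitrary random symmetric matrices standing in for $H_0$ and $\mathrm{d}H/\mathrm{d}q$ (with the distortion entering linearly, so the $\mathrm{d}^2H/\mathrm{d}q^2$ restoring term vanishes), and compares the perturbative $E_n^{(1)}$, $E_n^{(2)}$ against finite-difference derivatives of the exact ground-state eigenvalue.

```python
import numpy as np

# Toy check of eqs. (6)-(7): H(q) = H0 + q*V with V = dH/dq fixed, so the
# <n|d^2H/dq^2|n> term of (7) is zero here. H0 and V are made-up data.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H0 = (A + A.T) / 2
B = rng.standard_normal((4, 4))
V = (B + B.T) / 2

E0s, U = np.linalg.eigh(H0)   # unperturbed energies/states |m^(0)>, ascending
n = 0                          # ground state
E1 = U[:, n] @ V @ U[:, n]     # eq. (6)
E2 = 2 * sum((U[:, m] @ V @ U[:, n]) ** 2 / (E0s[n] - E0s[m])
             for m in range(4) if m != n)   # eq. (7), sum-over-states term

def ground(q):
    """Exact ground-state energy of the distorted Hamiltonian."""
    return np.linalg.eigvalsh(H0 + q * V)[0]

h = 1e-4
E1_num = (ground(h) - ground(-h)) / (2 * h)                 # dE/dq at q=0
E2_num = (ground(h) - 2 * ground(0) + ground(-h)) / h ** 2  # d^2E/dq^2 at q=0
print(E1, E1_num)   # should agree
print(E2, E2_num)   # should agree; note E2 <= 0 for the ground state here
```

Note that $E_n^{(2)} \le 0$ comes out automatically for the ground state, in line with the sign analysis of the sum above.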
A small selection of compounds in which second-order Jahn–Teller distortions are important are: p-block hydrides, $\ce{PbO}$, $\ce{Hg^2+}$, $\ce{WMe6}$, $\ce{R2Sn=SnR2}$, and anti-aromatic compounds such as cyclobutadiene. Let us use octahedral $\ce{Hg^2+}$ as an illustration. In undistorted $O_\mathrm{h}$ symmetry, $\ce{Hg^2+}$ has a closed-shell $\mathrm{d^{10}}$ configuration and therefore its electronic ground state is $\Gamma_n = \mathrm{A_{1g}}$. However, upon excitation of one electron from the 5d orbitals (specifically the $\mathrm{e_g}$ set) to the 6s orbital (which transforms as $\mathrm{a_{1g}}$), the term symbol changes to $$\Gamma_m = \mathrm{E_g \otimes A_{1g} = E_g} \tag{15}$$ Therefore, a vibrational mode transforming as $\mathrm{E_g}$ will facilitate a second-order distortion, since $$\Gamma_m \otimes \Gamma_q \otimes \Gamma_n = \mathrm{E_g \otimes E_g \otimes A_{1g}} \tag{16}$$ contains the TSIR. Again, there is no way of knowing the exact form or the extent of the distortion; we only know that it transforms as $\mathrm{E_g}$. In the case of Hg(II), the distortion is manifested as an axial compression to give a "2 short, 4 long" coordination geometry, which is often described as "linear". A second factor that favours the distortion is the extremely small 5d–6s gap in mercury, due to relativistic 5d destabilisation and 6s stabilisation. To see the importance of the small $\Delta E$, consider $\ce{Zn^2+}$, which has a larger 3d–4s gap; linear Zn(II) compounds are rare, while linear Hg(II) compounds are the norm. Most of the time, the relevant excited state $\ket{m}$ arises from promotion of an electron from the HOMO to the LUMO. 
It is easy to show that if this is the case, $$\Gamma_m \otimes \Gamma_n = \Gamma_{\text{HOMO}} \otimes \Gamma_{\text{LUMO}} \tag{17}$$ The second-order Jahn–Teller distortion can then be viewed as a reduction in symmetry, such that the HOMO and the LUMO, which transformed as different irreps in the undistorted geometry, now transform as the same irrep and therefore mix with each other. In the case of $\ce{Hg^2+}$: This interpretation using the symmetry of individual orbitals, however, only works when the relevant excited state $\ket{m}$ is derived by excitation of an electron! In some (admittedly very rare) cases, it is possible that both $\ket{n}$ and $\ket{m}$ are derived from the same electronic configuration. This is the case for cyclobutadiene, and the Jahn–Teller effect in cyclobutadiene cannot be rationalised using orbital mixing. $\endgroup$ Notes and references (1) For more details, look up perturbation theory in your quantum mechanics book of choice. In such treatments, the perturbation is usually formulated slightly differently: e.g. $H$ is taken as $H_0 + \lambda V$, and the eigenstates and eigenvalues are expanded as a power series instead of a Taylor series. Notwithstanding that, the principles remain the same. (2) There is a subtlety in that the symmetric direct product must be taken. For example, in the $D_\mathrm{\infty h}$ point group, we have $\Pi \otimes \Pi = \Sigma^+ + [\Sigma^-] + \Delta$. The antisymmetric direct product $\Sigma^-$ has to be discarded. (3) Jahn, H. A.; Teller, E. Stability of Polyatomic Molecules in Degenerate Electronic States. I. Orbital Degeneracy. Proc. R. Soc. A 1937, 161 (905), 220–235. DOI: 10.1098/rspa.1937.0142. n.b. Considering that I don't have a more elegant proof for it, I don't have much of a right to call it inelegant. (4) Albright, T. A.; Burdett, J. K.; Whangbo, M.-H. Orbital Interactions in Chemistry, 2nd ed.; Wiley: Hoboken, NJ, 2013. (5) Pearson, R. G. The second-order Jahn–Teller effect. J. Mol. 
Struct.: THEOCHEM 1983, 103, 25–34. DOI: 10.1016/0166-1280(83)85006-4.
Is the universal covering of a connected open subset $U$ of $\mathbb{R}^n$ diffeomorphic to an open subset of $\mathbb{R}^n$ (standard differentiable structure)? If not true in general, is there any condition on $U$ which guarantees a positive answer? The answer is no, and there is a counter-example in dimension $4$. A theorem of Whitney and Massey states that the total space of a disc-bundle over a non-orientable surface $\Sigma$ embeds in $S^4$ if and only if the normal Euler class of the disc bundle is one of the integers: $$\{2\chi -4, 2\chi, \cdots, 4-2\chi\}$$ where $\chi$ is the Euler characteristic of the surface. So for example, if $\Sigma = \mathbb RP^2$, $\chi = 1$. So normal Euler classes $-2$ and $+2$ appear for embeddings $\mathbb RP^2 \to S^4$. These come from the standard embeddings of $\mathbb RP^2$ in $S^4$. The universal cover of this total space is the pull-back of that bundle along the covering map $S^2 \to \mathbb RP^2$. But the total space of this bundle is orientable, so it can't embed in $S^4$, as its Euler class is not zero -- the pull-back bundle must be isomorphic to the tangent bundle of $S^2$, and this does not embed in $S^4$. The open subset $U$ is parallelizable and hence so is its universal cover. A classical theorem of Morris Hirsch says that any open parallelizable $n$-manifold can be immersed into $\mathbb R^n$. Now one could ask whether any open parallelizable $n$-manifold embeds into $\mathbb R^n$. This is formally more general than the original question, so it might be easier to produce a counterexample in this case. Also this more general question strikes me as more natural.
I don't have an answer, only a heuristically inspired hunch. If we think of the figure eight, we can thicken it slightly to an open connected set in the plane. The universal cover is the universal TV antenna times an open interval. But this can be put into the plane by narrowing the branches of the thickened UTVA as one moves out from the center, and since one can do this arbitrarily fast, even tiny branches very far out can be prevented from colliding. Now there is more room in higher dimensions, so the above kind of argument should actually be easier to carry through than in the plane. Perhaps, if one at least excludes torsion in the fundamental group, countability of the fundamental group would be enough when the dimension is at least 3. Just visualize the countably many generating loops, and wiggle them very slightly (there is room enough) so they don't intersect. Then hopefully one can proceed as with the figure eight above. That there could be countably many branches at the forks does not seem to be an essential difficulty. The above argument does not work generally in the plane, but for the plane the desired statement follows from (a special case of) the uniformization theorem of complex analysis: every simply connected open Riemann surface is conformally equivalent (and thus diffeomorphic) to the whole plane or the open upper half plane. EDIT: I think it must be more complicated than this. Otherwise any open connected subset with torsion-free fundamental group, of a manifold of dimension at least three, would have a universal cover diffeomorphic to an open connected set in the same manifold. Surely this is wrong? (By the way, there are open connected sets in Euclidean space with torsion in their fundamental groups.) EDIT: I doubt there is room enough to make this work in dimension 3, maybe in dimension 4. Consider the standard embedding of the unit interval in $\mathbb R^2$, viz. $I=[0,1]\times \{0\} \subset \mathbb R^2$.
Let $C \subset I$ denote the Cantor set and define $U= \mathbb R^2 - C$, an open subset of $\mathbb R^2$. I seem to remember that $\pi_1(U)$ has cardinality at least the continuum, and so the fibers of the universal covering $\tilde{U} \to U$ are such big discrete sets that I would guess that $\tilde{U}$ can't be embedded in $\mathbb R^2$. EDIT Thanks to Petya and Ryan for explaining that $\pi_1(U)$ is actually countable and that what "I seem to remember" is false. Sincere apologies to all for my misleading answer. For the sake of atonement, here is another argument for the countability of $\pi_1(U)$. Since $U$ is locally connected, locally compact and second countable, any connected covering (or even étalé space) of $U$ is second countable by the theorem of Poincaré–Volterra. Hence the fibers of the covering, being discrete, are countable. But these fibers are equipotent to $\pi_1(U)$, which must thus be countable. This argument seems to be valid for any open subset of $\mathbb R^n$. An open subset of (standard) $\mathbb R^n$ has a flat metric, so its universal covering space is a simply connected Euclidean space form. The only such thing is $\mathbb R^n$.
In this great question by Nathaniel Johnston, and in its answers, we can learn the following remarkable inequality: For all $v,w \in \mathbb{R}^n$ we have \begin{align*} \|v^2\| \, \|w^2\| - \langle v^2, w^2 \rangle \le \|v\|^2 \|w\|^2 - \langle v,w \rangle^2; \quad (*) \end{align*} here, $\langle\cdot,\cdot\rangle$ denotes the standard inner product on $\mathbb{R}^n$, $\|\cdot\|$ denotes the Euclidean norm and $v^2,w^2 \in \mathbb{R}^n$ denote the elementwise squares of $v$ and $w$. Both sides of $(*)$ are nonnegative by the Cauchy-Schwarz inequality, and the LHS gives a nontrivial lower bound for the RHS, in general. What strikes me is that the RHS and the LHS of $(*)$ have different (linear) symmetry groups: the RHS does not change if we apply any orthogonal matrix $U \in \mathbb{R}^{n \times n}$ to both $v$ and $w$, while this is not true for the LHS. Hence, we can strengthen $(*)$ to \begin{align*} \sup_{U^*U = I}\Big(\|(Uv)^2\| \, \|(Uw)^2\| - \langle (Uv)^2, (Uw)^2 \rangle\Big) \le \|v\|^2 \|w\|^2 - \langle v,w \rangle^2. \quad (**) \end{align*} Unfortunately, I have no idea how to evaluate the LHS of $(**)$. Question. Can we explicitly evaluate the LHS of $(**)$? Or, more generally, is there a version of $(*)$ for which both sides are invariant under multiplying both $v$ and $w$ by (identical) orthogonal matrices? Admittedly, this question is a bit vague since it might depend on one's perspective which expressions one considers to be "explicit" and which inequalities one considers to be a "version" of $(*)$. Nevertheless, I'm wondering whether some people share my intuition that there should be a more symmetric version of $(*)$. Edit. Maybe it is worthwhile to add the following motivating example: If we choose $n = 2$ and $v = (1,1)/\sqrt{2}$, $w = (1,-1)/\sqrt{2}$, then those vectors are orthogonal and the RHS of $(*)$ equals $1$, while the LHS of $(*)$ vanishes since $v^2$ and $w^2$ are linearly dependent.
However, the LHS of $(**)$ is also equal to $1$; to see this, choose $U = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1\end{pmatrix}$and observe that $Uv = (0,1)$, $Uw = (1,0)$.
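The motivating example can be explored numerically. The sketch below (not a proof, just an experiment) checks $(*)$ for the vectors in the edit and approximates the supremum in $(**)$ by a grid scan over rotation angles; in 2D this suffices up to reflections, which give the same values of the LHS here because the LHS is invariant under permuting coordinates.

```python
import numpy as np

# Numerical check of (*) and a grid-scan approximation of the LHS of (**)
# for the n = 2 motivating example above.
def lhs(v, w):
    v2, w2 = v**2, w**2
    return np.linalg.norm(v2) * np.linalg.norm(w2) - v2 @ w2

def rhs(v, w):
    return (np.linalg.norm(v) * np.linalg.norm(w))**2 - (v @ w)**2

v = np.array([1.0, 1.0]) / np.sqrt(2)
w = np.array([1.0, -1.0]) / np.sqrt(2)

# (*) itself: the LHS vanishes here since v^2 and w^2 are parallel
print(lhs(v, w), rhs(v, w))   # ~0.0 and ~1.0

# (**): scan rotations U(t); the supremum should be attained near t = pi/4,
# where Uv = (0,1) and Uw = (1,0) as in the edit above
best = 0.0
for t in np.linspace(0, 2 * np.pi, 2001):
    U = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    best = max(best, lhs(U @ v, U @ w))
print(best)   # approaches 1.0, i.e. the scanned LHS of (**) meets the RHS
```

For these particular $v, w$ the scanned quantity works out to $\sin^2(2t)$, so the supremum $1$ exactly matches the RHS, consistent with $(**)$ being tight in this example.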
The passage from any small category $\mathrm{\bf C}$ to its set-valued functor category $\hat{\mathrm{\bf C}}:=\mathrm{Fun}(\mathrm{\bf C}^{\ast},\mathrm{\bf Set})$, i.e. the full Yoneda embedding $Y\colon \mathrm{\bf C} \to \hat{\mathrm{\bf C}}$ into the presheaf category, can be considered as a universal completion process. A functor category such as $\mathrm{Fun}(\mathrm{\bf C}^{\ast},\mathrm{\bf Set})$ is a category which is ``almost as good as the target category $\mathrm{\bf Set}$''. In particular, such a functor category is a topos and has an injective subobject classifier $\Omega$. In the simplest case of $\mathrm{\bf Set}$, which can be considered as the special case $\mathrm{\bf C}=\mathrm{\bf 1}$, the subobject classifier is a two-element set $\{0,1\}$ and has the property that it is a cogenerator. (An object $C$ of a category is called a cogenerator if, for any two distinct morphisms $f,g\colon X\to Y$, there is a morphism $s\colon Y\to C$ such that $s\circ f\neq s\circ g$.) A cogenerator $C$ allows one to separate the morphisms in the category and so to ``resolve'' the category, if $C$ happens to be an injective object. It seems natural to ask whether the subobject classifier of any set-valued functor category is a cogenerator. This cannot be the case: in the special case $\mathrm{\bf C}={\mathbb Z}_2$ (the category with one object generated by a non-trivial involution), which leads to the functor category $\hat{\mathbb Z}_2=\mathrm{Fun}({\mathbb Z}_2^{\ast},\mathrm{\bf Set})$ of sets with a ${\mathbb Z}_2$-action, the subobject classifier $\Omega$ is a two-element set $\{0,1\}$ with the trivial ${\mathbb Z}_2$-action. This object $\Omega$ is not a cogenerator of $\hat{\mathbb Z}_2$, since the two-element set with the nontrivial ${\mathbb Z}_2$-action gives an object $X$ of $\hat{\mathbb Z}_2$ whose nontrivial automorphism cannot be separated from the identity by any morphism from $X$ to $\Omega$.
Hence conditions are needed before the subobject classifier of a set-valued functor category can be a cogenerator. Under which precise conditions on the category $\mathrm{\bf C}$ is the subobject classifier $\Omega$ of its free cocompletion $\hat{\mathrm{\bf C}}$ a cogenerator?
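The ${\mathbb Z}_2$ counterexample above is small enough to verify by brute force. A sketch (objects of $\hat{\mathbb Z}_2$ are modelled as sets with an involution, morphisms as equivariant maps):

```python
from itertools import product

# Brute-force check of the Z2 counterexample: X = {0,1} with the swap
# involution, Omega = {0,1} with the trivial involution.
X = [0, 1]
swap = {0: 1, 1: 0}          # the nontrivial automorphism of X
ident = {0: 0, 1: 1}
omega_action = {0: 0, 1: 1}  # trivial Z2-action on the subobject classifier

def equivariant_maps(dom, dom_act, cod, cod_act):
    """All maps f: dom -> cod with f(sigma(x)) = sigma(f(x))."""
    maps = []
    for values in product(cod, repeat=len(dom)):
        f = dict(zip(dom, values))
        if all(f[dom_act[x]] == cod_act[f[x]] for x in dom):
            maps.append(f)
    return maps

homs = equivariant_maps(X, swap, X, omega_action)
print(homs)  # only the two constant maps survive equivariance
# ...so no s: X -> Omega separates the identity from the swap:
separated = any(any(s[ident[x]] != s[swap[x]] for x in X) for s in homs)
print(separated)  # False
```

Every equivariant map $X \to \Omega$ must satisfy $f(\sigma x) = f(x)$, i.e. be constant, so $s\circ\mathrm{id} = s\circ\sigma$ for all of them, exactly as claimed.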
I) A Lagrangian variational principle for Euler's equations for a rigid body $$ \tag{1} (DL)_i ~=~M_i, \qquad i\in\{1,2,3\}, $$ is e.g. explained in Ref. 1. Here the angular momentum $L_i$, $i\in\{1,2,3\}$, along the three principal axes of inertia is tied to the angular velocity $\omega_i$, $i\in\{1,2,3\}$, by the formula $$\tag{2} L_i~:=~I_i \omega_i, \qquad i\in\{1,2,3\}, \qquad (\text{no sum over }i).$$ The covariant time-derivative $D$ of a vector $\eta_i$, $i\in\{1,2,3\}$, is defined as $$\tag{3} (D\eta)_i ~:=~ \dot{\eta}_i+(\omega\times\eta)_i, \qquad i\in\{1,2,3\}. $$ The angular velocity vector $\omega$ plays the role of a non-Abelian gauge connection/potential. II) To see the $so(3)$ Lie algebra, we map an infinitesimal rotation vector $\alpha$ into an antisymmetric real $3\times3$ matrix $r(\alpha)\in so(3)$ as $$\tag{4} \alpha_i \quad\longrightarrow\quad r(\alpha)_{jk}~:=~\sum_{i=1}^3\alpha_i \varepsilon_{ijk}. $$ The $so(3)$ Lie-bracket is given by (minus) the vector cross product $$\tag{5} [r(\alpha),r(\beta)]~=~r(\beta\times \alpha). $$ Similarly, for the corresponding $SO(3)$ Lie group, a finite rotation vector $\alpha$ maps into an orthogonal $3\times3$ rotation matrix $R(\alpha)\in SO(3)$ as explained in this Phys.SE post. Infinitesimally, for an infinitesimal rotation $|\delta\alpha| \ll 1$, the correspondence is $$\tag{6} R(\delta\alpha)_{jk} ~=~\delta_{jk} + r(\delta\alpha)_{jk} + {\cal O}(\delta\alpha^2). 
$$ III) A finite non-Abelian gauge transformation $\omega\longrightarrow\omega^{\alpha}$ takes the form $$\tag{7} r(\omega^{\alpha})~=~R(-\alpha) \left(\frac{d}{dt}-r(\omega)\right)R(\alpha), \qquad \alpha\in \mathbb{R}^3.$$ An infinitesimal non-Abelian gauge transformation $\delta$ takes the form $$\tag{8} r(\delta\omega)~=~\frac{d}{dt}r(\delta\alpha)-[r(\omega),r(\delta\alpha)],$$ or equivalently $$\tag{9} \delta\omega_i~=~(D\delta\alpha)_i, \qquad i\in\{1,2,3\}, $$ where $\delta\alpha$ denotes an infinitesimal rotation vector corresponding to an $so(3)$ Lie algebra element $r(\delta\alpha)$. We call (7)-(9) gauge transformations for semantic reasons, because of their familiar form, but note that (most of) them are not unphysical/spurious transformations. We stress that the angular velocity $\omega$ is a physical variable. IV) Finally we are ready to discuss the action principle. The finite rotation vector $\alpha(t)\in \mathbb{R}^3$ plays the role of independent dynamical variables for the action principle. One may think of virtual rotation paths $\alpha:[t_i,t_f]\to \mathbb{R}^3$ as a reparametrization of virtual angular velocity paths $\omega:[t_i,t_f]\to \mathbb{R}^3$. The action reads $$\tag{10} S[\alpha,\omega]~=~\int_{t_i}^{t_f} \! dt ~L $$ with Lagrangian $$\tag{11} L~=~\frac{1}{2} L^{\alpha}\cdot \omega^{\alpha} + M\cdot \alpha , $$ where $$\tag{12} L^{\alpha}_i~:=~I_i \omega^{\alpha}_i, \qquad i\in\{1,2,3\}, \qquad (\text{no sum over }i).$$ The Lagrangian (11) consists of rotational kinetic energy plus a source term from the torque $M$. Here $\omega^{\alpha}$ is the actual angular velocity vector, while $\omega$ here (in contrast to above) is a fixed non-dynamical reference vector, which is not varied. It is sort of a gauge-fixing choice. 
Infinitesimal variation yields $$ \tag{13}\delta L ~\stackrel{(11)}{=}~ L^{\alpha}\cdot \delta\omega^{\alpha}+ M\cdot \delta\alpha ~\stackrel{(9)}{=}~ L^{\alpha}\cdot \left(\frac{d}{dt}\delta\alpha+ (\omega^{\alpha}\times\delta\alpha)\right) + M\cdot \delta\alpha ,$$ which (after integration by parts and appropriate boundary conditions) leads to Euler's equations (1) for the angular velocity vector $\omega^{\alpha}$. References: J.E. Marsden and T.S. Ratiu, Introduction to Mechanics and Symmetry, 1998.
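As a numerical sanity check of eqs. (1)-(3): in the torque-free case $M=0$, $DL=0$ gives $\dot{L} = L\times\omega$ with $L_i = I_i\omega_i$, and the flow must conserve both the kinetic energy $\frac{1}{2}\sum_i I_i\omega_i^2$ and $|L|^2$. A short RK4 integration confirms this (the moments of inertia and initial $\omega$ below are arbitrary illustrative values):

```python
import numpy as np

# Free rigid body: Ldot + omega x L = 0  =>  omega_i' = (L x omega)_i / I_i
I = np.array([1.0, 2.0, 3.0])   # principal moments of inertia (illustrative)
w = np.array([0.3, 1.0, 0.2])   # initial angular velocity (illustrative)

def wdot(w):
    L = I * w
    return np.cross(L, w) / I

def rk4_step(w, h):
    k1 = wdot(w)
    k2 = wdot(w + h / 2 * k1)
    k3 = wdot(w + h / 2 * k2)
    k4 = wdot(w + h * k3)
    return w + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

T0 = 0.5 * np.sum(I * w**2)     # rotational kinetic energy
L2_0 = np.sum((I * w)**2)       # |L|^2
for _ in range(2000):
    w = rk4_step(w, 0.01)
T1 = 0.5 * np.sum(I * w**2)
L2_1 = np.sum((I * w)**2)
print(abs(T1 - T0), abs(L2_1 - L2_0))   # both tiny: the invariants survive
```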
I chose to implement Scalable Ambient Obscurance [MML12] for my project, which is a fast improved screen space ambient occlusion method. This method improves on the performance and quality of a method introduced previously by McGuire et al., the Alchemy Ambient Obscurance algorithm [MOBH11]. Ambient occlusion methods attempt to approximate the effect of decreased ambient/environment light inside the crevices of an object. The effect being implemented can be seen in the illumination of an object on a cloudy day, like the statue below. The areas we're interested in here are the shadows beneath the collar, in the creases in the neck, where the jacket meets the shirt, and in the facial features. For comparison the lion head from the Crytek Sponza scene is shown, displaying the value of the ambient obscurance light modulation term. Note especially the darkening within the crevices on the model, giving a similar effect to that on the photo of the statue. Slides of [MML12, MOBH11] The geometric construction of the effect follows from an approximation of how ambient occlusion works. A hemisphere is placed at the point, aligned with the surface normal, and we determine how much of this hemisphere has objects in it that could be blocking environment light incident on this point. Due to this approximation the accuracy of the method is heavily dependent on the hemisphere capturing the local effects properly. [MML12, MOBH11] The Scalable AO (and Alchemy AO) methods choose some number of points around the hemisphere and compute a vector from the point we're computing the AO value for. This gives us information about the distance to the point and the difference in depth between the points, e.g. whether this point \(P\) would block ambient light from reaching \(C\), where the distance gives us an idea of how much it affects \(C\).
For performance reasons this computation is done in screen space using information from the scene's depth buffer to reconstruct camera space positions where we sample some pixel to find \(C\) and then sample on a circle of pixels around it to find neighboring points \(P_i\). [MML12, MOBH11] The ambient occlusion at a point \(C\) is computed by taking \(s\) samples distributed on a hemisphere around the point and recovering their camera space position \(P_i\). We then find \(v_i = P_i - C\) and sum up the occlusion contributions of each sample. On a simplified level the dot product \(v_i \cdot \hat{n_C}\) tells us how far in front of the surface \(P_i\) is. A small amount of bias is also applied in the form of \(\beta\) to prevent self shadowing, similar to what's done in shadow mapping bias. This contribution is then divided by the length of \(v_i\) to reduce the shadowing contribution of points further away from the point being computed, the \(\varepsilon\) term is a small value used to prevent division by zero. See [MOBH11] for full details of the derivation of the ambient occlusion estimator used, shown below.$$ A(C) = \text{max}\left(0, 1 - \frac{2 \sigma}{s} \sum_{i = 1}^{s} \frac{\text{max}(0, \vec{v_i} \cdot \hat{n_C} + z_C \beta)} {\vec{v_i} \cdot \vec{v_i} + \varepsilon}\right)^\kappa $$ My project takes advantage of a few recent OpenGL features like MultiDrawIndirect, shader storage buffer objects and texture arrays. By using these features it becomes possible to draw the entire sponza scene below using a single call to MultiDrawIndirect. This is achieved by packing all textures used by the objects in the scene into as few texture arrays as possible and storing information about material properties and texture location and index in a SSBO. Additionally all model data for objects in the scene are packed into a single buffer which contains the vertex and index data. 
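To make the estimator \(A(C)\) above concrete, here is a CPU-side sketch of it in Python (the real thing runs per-pixel in a fragment shader). The parameter values \(\sigma, \beta, \varepsilon, \kappa\) and the sample points are illustrative, not the paper's tuned settings.

```python
import numpy as np

# CPU sketch of the AO estimator A(C) from [MOBH11]/[MML12].
def ambient_obscurance(C, n_C, samples, sigma=1.0, beta=1e-4, eps=1e-4, kappa=1.0):
    s = len(samples)
    total = 0.0
    for P in samples:
        v = P - C                               # vector from shaded point to sample
        num = max(0.0, v @ n_C + C[2] * beta)   # how far in front of the surface,
                                                # biased by z_C * beta to avoid
                                                # self-shadowing (z_C is negative)
        total += num / (v @ v + eps)            # falloff with distance; eps avoids
                                                # division by zero
    return max(0.0, 1.0 - 2.0 * sigma / s * total) ** kappa

C = np.array([0.0, 0.0, -5.0])   # camera-space position (z negative)
n = np.array([0.0, 0.0, 1.0])    # surface normal facing the camera

# Samples lying in the surface plane: nothing occludes C, so A stays at 1
flat = [np.array([np.cos(t), np.sin(t), -5.0]) for t in np.linspace(0, 6, 8)]
# The same samples floated in front of the surface: they occlude C, so A drops
occl = [p + np.array([0.0, 0.0, 0.5]) for p in flat]

print(ambient_obscurance(C, n, flat))   # 1.0 (fully lit)
print(ambient_obscurance(C, n, occl))   # noticeably darker
```

This mirrors the structure of the formula directly: the clamped dot product in the numerator, the bias term, the squared-distance falloff, and the final clamp and exponent.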
The end result of this is that we can pass a per-instance parameter specifying which material to use and the object can access its material properties and texture data through the SSBO and texture arrays passed, allowing us to draw all objects in the scene with a single draw call. The Crytek Sponza model itself is not very complex by modern standards but has some nice texture work that makes it look very nice, including some techniques we learned about in class. The images rendered include bump mapping, alpha mapping and specular mapping effects along with a typical diffuse texture map. My scalable AO implementation is not quite scalable AO as described in the paper since I've chosen to be a bit lazy and not bother with reconstructing camera space positions from the depth buffer and to instead just render the positions out to an RGB32F texture. This allows me to not worry about the various precision issues discussed in the paper, however it also impacts the performance of my method a decent bit due to the increased bandwidth usage. I was also curious what it would look like using the bump mapped normals to compute AO and provide an option to toggle this, however I didn't find the results very nice. McGuire et al. also recommend a new AO estimator over the Alchemy AO estimator used originally but I couldn't get this new estimator to behave very well. This new estimator is supposed to smooth the transition into shadow and is discussed at the end of [MML12] and in their slides from HPG. To make it easier to compare against the original paper I rendered the Crytek Sponza scene used by McGuire et al. to demonstrate their results in [MML12]. My implementation is not as fast as the paper's since I've left some parameters tune-able and am passing the camera space positions directly in an RGB32F buffer, thus consuming much more bandwidth vs. reconstructing positions from a 32F depth buffer.
There are also areas where the quality of my implementation could be improved; the blur and smoothness of the ambient occlusion look much better in [MML12]'s implementation. The ambient occlusion effect can be a bit subtle in some areas; the best way to see the difference is to open the full and no-AO images in separate tabs and flip between them to compare. [MML12] McGuire, M., Mara, M., and Luebke, D.: Scalable Ambient Obscurance. In High Performance Graphics 2012. [MOBH11] McGuire, M., Osman, B., Bukowski, M., and Hennessy, P.: The Alchemy Screen-Space Ambient Obscurance Algorithm. In High Performance Graphics 2011.
6.1. From Dense Layers to Convolutions The models that we’ve discussed so far are fine options if you’re dealing with tabular data. By tabular we mean that the data consists of rows corresponding to examples and columns corresponding to features. With tabular data, we might anticipate that the patterns we seek could require modeling interactions among the features, but we do not assume anything a priori about which features are related to each other or in what way. Sometimes we truly may not have any knowledge to guide the construction of more cleverly-organized architectures, and in these cases, a multilayer perceptron is often the best that we can do. However, once we start dealing with high-dimensional perceptual data, these structure-less networks can grow unwieldy. For instance, let’s return to our running example of distinguishing cats from dogs. Say that we do a thorough job in data collection, collecting an annotated set of high-quality 1-megapixel photographs. This means that the input into a network has 1 million dimensions. Even an aggressive reduction to 1,000 hidden dimensions would require a dense (fully-connected) layer to support \(10^9\) parameters. Unless we have an extremely large dataset (perhaps billions of examples?), lots of GPUs, a talent for extreme distributed optimization, and an extraordinary amount of patience, learning the parameters of this network may turn out to be impossible. A careful reader might object to this argument on the basis that 1 megapixel resolution may not be necessary. However, while you could get away with 100,000 pixels, we grossly underestimated the number of hidden nodes that it typically takes to learn good hidden representations of images. Learning a binary classifier with so many parameters might seem to require that we collect an enormous dataset, perhaps comparable to the number of dogs and cats on the planet. Yet both humans and computers are able to distinguish cats from dogs quite well, seemingly contradicting these conclusions.
That’s because images exhibit rich structure that is typically exploited by humans and machine learning models alike. 6.1.1. Invariances Imagine that you want to detect an object in an image. It seems reasonable that whatever method we use to recognize objects should not be overly concerned with the precise location of the object in the image. Ideally we could learn a system that would somehow exploit this knowledge. Pigs usually don’t fly and planes usually don’t swim. Nonetheless, we could still recognize a flying pig were one to appear. This idea is taken to an extreme in the children’s game ‘Where’s Waldo’, an example of which is shown in Fig. 6.1.1. The game consists of a number of chaotic scenes bursting with activity, and Waldo shows up somewhere in each (typically lurking in some unlikely location). The reader’s goal is to locate him. Despite his characteristic outfit, this can be surprisingly difficult, due to the large number of confounders. Back to images: the intuitions we have been discussing can be made more concrete, yielding a few key principles for building neural networks for computer vision. Our vision systems should, in some sense, respond similarly to the same object regardless of where it appears in the image (translation invariance). Our vision systems should, in some sense, focus on local regions, without regard for what else is happening in the image at greater distances (locality). Let’s see how this translates into mathematics. 6.1.2. Constraining the MLP To start off, let’s consider what an MLP would look like with \(h \times w\) images as inputs (represented as matrices in math, and as 2D arrays in code), and hidden representations similarly organized as \(h \times w\) matrices / 2D arrays. Let \(x[i,j]\) and \(h[i,j]\) denote pixel location \((i,j)\) in an image and hidden representation, respectively.
Consequently, to have each of the \(hw\) hidden nodes receive input from each of the \(hw\) inputs, we would switch from using weight matrices (as we did previously in MLPs) to representing our parameters as four-dimensional weight tensors. We could formally express this dense layer as follows:

$$h[i,j] = \sum_{k,l} W[i,j,k,l] \, x[k,l] = \sum_{a,b} V[i,j,a,b] \, x[i+a, j+b]$$

The switch from \(W\) to \(V\) is entirely cosmetic (for now) since there is a one-to-one correspondence between coefficients in both tensors. We simply re-index the subscripts \((k,l)\) such that \(k = i+a\) and \(l = j+b\). In other words, we set \(V[i,j,a,b] = W[i,j,i+a,j+b]\). The indices \(a, b\) run over both positive and negative offsets, covering the entire image. For any given location \((i,j)\) in the hidden layer \(h[i,j]\), we compute its value by summing over pixels in \(x\), centered around \((i,j)\) and weighted by \(V[i,j,a,b]\).

Now let's invoke the first principle we established above: translation invariance. This implies that a shift in the inputs \(x\) should simply lead to a shift in the activations \(h\). This is only possible if \(V\) doesn't actually depend on \((i,j)\), i.e., we have \(V[i,j,a,b] = V[a,b]\). As a result we can simplify the definition for \(h\):

$$h[i,j] = \sum_{a,b} V[a,b] \, x[i+a, j+b]$$

This is a convolution! We are effectively weighting pixels \((i+a, j+b)\) in the vicinity of \((i,j)\) with coefficients \(V[a,b]\) to obtain the value \(h[i,j]\). Note that \(V[a,b]\) needs many fewer coefficients than \(V[i,j,a,b]\): for a 1-megapixel image it has at most a few million coefficients, a reduction by a factor of roughly one million, since the weights no longer depend on the location within the image. We have made significant progress!

Now let's invoke the second principle: locality. As motivated above, we believe that we shouldn't have to look very far away from \((i,j)\) in order to glean relevant information to assess what is going on at \(h[i,j]\).
This means that outside some range \(|a|, |b| > \Delta\), we should set \(V[a,b] = 0\). Equivalently, we can rewrite \(h[i,j]\) as

$$h[i,j] = \sum_{a=-\Delta}^{\Delta} \sum_{b=-\Delta}^{\Delta} V[a,b] \, x[i+a, j+b]$$

This, in a nutshell, is the convolutional layer. When the local region (also called a receptive field) is small, the difference as compared to a fully-connected network can be dramatic. While previously we might have required billions of parameters to represent just a single layer in an image-processing network, we now typically need just a few hundred. The price that we pay for this drastic modification is that our features will be translation invariant and that our layer can only take local information into account. All learning depends on imposing inductive bias. When that bias agrees with reality, we get sample-efficient models that generalize well to unseen data. But of course, if those biases do not agree with reality, e.g. if images turned out not to be translation invariant, our models might struggle even to fit the training data.

6.1.3. Convolutions

Let's briefly review why the above operation is called a convolution. In mathematics, the convolution between two functions, say \(f, g: \mathbb{R}^d \to \mathbb{R}\), is defined as

$$[f \circledast g](x) = \int f(z) \, g(x - z) \, dz$$

That is, we measure the overlap between \(f\) and \(g\) when both functions are shifted by \(x\) and 'flipped'. Whenever we have discrete objects, the integral turns into a sum. For instance, for vectors defined on \(\ell_2\), i.e., the set of square-summable infinite-dimensional vectors with index running over \(\mathbb{Z}\), we obtain the following definition:

$$[f \circledast g](i) = \sum_a f(a) \, g(i - a)$$

For two-dimensional arrays, we have a corresponding sum with indices \((i,j)\) for \(f\) and \((i-a, j-b)\) for \(g\) respectively. This looks similar to the definition above, with one major difference. Rather than using \((i+a, j+b)\), we are using the difference instead. Note, though, that this distinction is mostly cosmetic since we can always match the notation by using \(\tilde{V}[a,b] = V[-a, -b]\) to obtain \(h = x \circledast \tilde{V}\). Also note that the original definition is actually a cross-correlation.
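Concretely, the constrained layer above can be sketched in a few lines of NumPy. This is a toy illustration (the name `conv_layer` is mine, not a framework function); real libraries provide heavily optimized versions of the same computation:

```python
import numpy as np

def conv_layer(x, V):
    """h[i,j] = sum_{a,b} V[a,b] * x[i+a, j+b] over a local window.

    V has shape (2*delta + 1, 2*delta + 1); the offsets a, b run over
    -delta..delta, so each output depends only on a neighborhood of (i, j).
    As noted in the text, this is technically a cross-correlation; flipping
    V gives the convolution.
    """
    delta = V.shape[0] // 2
    out_h = x.shape[0] - 2 * delta
    out_w = x.shape[1] - 2 * delta
    h = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            h[i, j] = np.sum(V * x[i:i + 2 * delta + 1, j:j + 2 * delta + 1])
    return h

x = np.arange(25.0).reshape(5, 5)
V = np.ones((3, 3)) / 9.0        # a 3x3 averaging mask, i.e. delta = 1
h = conv_layer(x, V)
print(h.shape)                    # (3, 3): one value per valid window
```

The same mask `V` is applied at every location, which is exactly the translation-invariance constraint; the window size \(2\Delta + 1\) is the locality constraint.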
We will come back to this in the following section.

6.1.4. Waldo Revisited

Let's see what this looks like if we want to build an improved Waldo detector. The convolutional layer picks windows of a given size and weighs intensities according to the mask \(V\). We expect that wherever the 'waldoness' is highest, we will also find a peak in the hidden layer activations.

There's just one problem with this approach: so far we blissfully ignored that images consist of 3 channels: red, green and blue. In reality, images are not two-dimensional objects but rather 3rd-order tensors, e.g., with shape \(1024 \times 1024 \times 3\) pixels. Only two of these axes concern spatial relationships, while the 3rd can be regarded as assigning a multidimensional representation to each pixel location. We thus index \(\mathbf{x}\) as \(x[i,j,k]\). The convolutional mask has to adapt accordingly: instead of \(V[a,b]\) we now have \(V[a,b,c]\).

Moreover, just as our input consists of a 3rd-order tensor, it turns out to be a good idea to similarly formulate our hidden representations as 3rd-order tensors. In other words, rather than just having a single representation corresponding to each spatial location, we want to have a multidimensional hidden representation corresponding to each spatial location. We could think of the hidden representation as comprising a number of 2D grids stacked on top of each other. These are sometimes called channels or feature maps. Intuitively you might imagine that at lower layers, some channels specialize to recognizing edges while others pick up other local patterns. We can take care of multiple output channels by adding a fourth coordinate to \(V\) via \(V[a,b,c,d]\). Putting everything together we have:

$$h[i,j,d] = \sum_{a=-\Delta}^{\Delta} \sum_{b=-\Delta}^{\Delta} \sum_c V[a,b,c,d] \, x[i+a, j+b, c]$$

This is the definition of a convolutional neural network layer. There are still many operations that we need to address. For instance, we need to figure out how to combine all the activations to a single output (e.g. whether there's a Waldo in the image).
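The multi-channel layer can be sketched in NumPy in the same toy style as before (again, `conv_layer_multichannel` is an illustrative name, not library code):

```python
import numpy as np

def conv_layer_multichannel(x, V):
    """h[i,j,d] = sum_{a,b,c} V[a,b,c,d] * x[i+a, j+b, c].

    x has shape (H, W, C_in); V has shape (k, k, C_in, C_out) with
    k = 2*delta + 1. Each output channel d mixes every input channel c
    over a local spatial window.
    """
    k, _, c_in, c_out = V.shape
    out_h, out_w = x.shape[0] - k + 1, x.shape[1] - k + 1
    h = np.zeros((out_h, out_w, c_out))
    for i in range(out_h):
        for j in range(out_w):
            window = x[i:i + k, j:j + k, :]               # (k, k, C_in)
            # contract over the a, b, c axes simultaneously
            h[i, j, :] = np.tensordot(window, V, axes=([0, 1, 2], [0, 1, 2]))
    return h

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 3))       # e.g. an 8x8 RGB patch
V = rng.standard_normal((3, 3, 3, 4))    # 3x3 window, 3 in-, 4 out-channels
print(conv_layer_multichannel(x, V).shape)   # (6, 6, 4)
```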
We also need to decide how to compute things efficiently, how to combine multiple layers, and whether it is a good idea to have many narrow or a few wide layers. All of this will be addressed in the remainder of the chapter.

6.1.5. Summary

Translation invariance in images implies that all patches of an image will be treated in the same manner.

Locality means that only a small neighborhood of pixels will be used for computation.

Channels on input and output allow for meaningful feature analysis.

6.1.6. Exercises

Assume that the size of the convolution mask is \(\Delta = 0\). Show that in this case the convolutional mask implements an MLP independently for each set of channels.

Why might translation invariance not be a good idea after all? Does it make sense for pigs to fly?

What happens at the boundary of an image?

Derive an analogous convolutional layer for audio.

What goes wrong when you apply the above reasoning to text? Hint: what is the structure of language?

Prove that \(f \circledast g = g \circledast f\).
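For the last exercise, a quick numerical sanity check (not a proof, of course) of commutativity in the discrete case:

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 2.0, 4.0])

# discrete convolution: (f * g)[i] = sum_a f[a] g[i - a]
print(np.allclose(np.convolve(f, g), np.convolve(g, f)))  # True
```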
What follows is not rigorous, but hopefully conveys the main idea. First, you probably want to justify $$X_t-X_s \sim N\left(0, \int_s^t f(u)^2\, du \right),$$ which can be done by approximating $f$ by a simple function $g = a_1 1_{(s,t_1)} + a_2 1_{(t_1,t_2)} + \ldots + a_n 1_{(t_{n-1},t)}$ and then using $$\int_s^t g(u)\,dW_u = a_1(W_{t_1} - W_s) + a_2 (W_{t_2}-W_{t_1}) + \ldots + a_n (W_{t}-W_{t_{n-1}}) \sim N\left(0,\ a_1^2(t_1-s) + a_2^2 (t_2-t_1)+\ldots + a_n^2(t-t_{n-1})\right) = N\left(0, \int_s^t g(u)^2\, du \right),$$ since the increments $W_{t_i}-W_{t_{i-1}} \sim N(0,t_i-t_{i-1})$ are all independent. The trick then is to show that $X_s$ and $X_t-X_s$ are uncorrelated in order to help us compute the expectation $\mathbb{E}[\exp(a_1X_s + a_2 (X_t-X_s))]$. Now, $$\mathbb{E}[(X_t-X_s)^2] = \mathbb{E}[X_t^2] - 2\mathbb{E}[X_tX_s] + \mathbb{E}[X_s^2],$$ hence $$\mathbb{E}[X_sX_t] = \frac{1}{2}\left(\mathbb{E}[X_t^2] + \mathbb{E}[X_s^2] - \mathbb{E}[(X_t-X_s)^2] \right) = \frac{1}{2} \left( \int_0^t f(u)^2\,du + \int_0^s f(u)^2\,du - \int_s^t f(u)^2\, du \right) = \int_0^s f(u)^2\,du = \mathbb{E}[X_s^2],$$ and so $\mathbb{E}[X_s(X_t-X_s)]=0$; hence $X_s$ and $X_t-X_s$ are uncorrelated. Now, since two uncorrelated jointly normally distributed random variables $Y_1 \sim N(0,\sigma_1^2)$ and $Y_2 \sim N(0,\sigma_2^2)$ satisfy $\mathbb{E}[\exp(a_1Y_1 + a_2 Y_2)] = \exp\left(\frac{a_1^2\sigma_1^2}{2} + \frac{a_2^2\sigma_2^2}{2}\right) = \mathbb{E}[\exp(a_1Y_1)]\,\mathbb{E}[\exp(a_2Y_2)]$, we have $$\mathbb{E}[\exp(a_1X_s + a_2(X_t-X_s))] = \mathbb{E}[\exp(a_1X_s)]\, \mathbb{E}[\exp(a_2(X_t-X_s))]$$ for all $a_1$, $a_2$.
Taking $n$ derivatives with respect to $a_1$ and $m$ derivatives with respect to $a_2$, and then setting $a_1=a_2=0$, we get $$\mathbb{E}[X_s^n(X_t-X_s)^m] = \mathbb{E}[X_s^n]\,\mathbb{E}[(X_t-X_s)^m].$$ Using this, we now know that for any polynomial functions $p$ and $q$, we have $\mathbb{E}[p(X_s)q(X_t-X_s)] = \mathbb{E}[p(X_s)]\,\mathbb{E}[q(X_t-X_s)]$. Since polynomial functions are weakly dense, the joint probability density function $\rho_{X_s,X_t-X_s}$ of $(X_s, X_t-X_s)$ factors as the product of the marginal density functions $\rho_{X_s}$ of $X_s$ and $\rho_{X_t-X_s}$ of $X_t-X_s$. Hence, $X_s$ and $X_t-X_s$ are independent.
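The two key facts used above, $\mathrm{Var}(X_t - X_s) = \int_s^t f(u)^2\,du$ and $\mathbb{E}[X_s(X_t-X_s)] = 0$, can be checked by simulation. A minimal Euler-type Monte Carlo sketch (the choice $f = \cos$ and all step/path counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
f = lambda u: np.cos(u)             # any deterministic square-integrable f
s, t = 0.5, 1.5
n_steps, n_paths = 400, 50_000

du = t / n_steps
u = np.arange(n_steps) * du                         # left endpoints
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(du)
X = np.cumsum(f(u) * dW, axis=1)                    # X_t ~ sum f(u_k) dW_k

X_s = X[:, int(s / du) - 1]
X_t = X[:, -1]

var_incr = np.var(X_t - X_s)
theory = (t - s) / 2 + (np.sin(2 * t) - np.sin(2 * s)) / 4  # int_s^t cos^2
cov = np.mean(X_s * (X_t - X_s))
print(var_incr, theory, cov)   # variance matches the integral; cov ~ 0
```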
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah, it does seem unreasonable to expect a finite presentation. Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group O(V, b) is a composition of at most n reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on a chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on an $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative: why would someone come up with something like that? It looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms. This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$; the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$; and on the truncation edge it's $\omega([X, Y])$. Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$. So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ is the signed sum of the values of $\omega$ on the boundary of that Lie square: an infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described and taking $\text{vol}(I^2) \to 0$. For the general case, $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube. Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor: to evaluate $(\nabla_X s)(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
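The invariant formula $d\omega(X, Y) = X\omega(Y) - Y\omega(X) - \omega([X, Y])$ quoted above can at least be verified mechanically on $\Bbb R^2$; a sympy sketch with arbitrarily chosen components:

```python
import sympy as sp

x, y = sp.symbols('x y')

w1, w2 = x * y, x + y**2      # the 1-form w = w1 dx + w2 dy
X = (y, x**2)                 # X = y d/dx + x^2 d/dy
Y = (sp.sin(x), y)            # Y = sin(x) d/dx + y d/dy

def vf(V, h):                 # V(h): directional derivative of a function
    return V[0] * sp.diff(h, x) + V[1] * sp.diff(h, y)

def omega(V):                 # w(V)
    return w1 * V[0] + w2 * V[1]

# Lie bracket: [X, Y]^i = X(Y^i) - Y(X^i)
bracket = (vf(X, Y[0]) - vf(Y, X[0]), vf(X, Y[1]) - vf(Y, X[1]))

# dw = (dw2/dx - dw1/dy) dx ^ dy, so dw(X, Y) is:
lhs = (sp.diff(w2, x) - sp.diff(w1, y)) * (X[0] * Y[1] - X[1] * Y[0])
rhs = vf(X, omega(Y)) - vf(Y, omega(X)) - omega(bracket)
print(sp.simplify(lhs - rhs))   # 0
```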
You can take the directional derivative of a function at a point in the direction of a single vector at that point. Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with. (Technical point: note how contracting $X$ in $\nabla_X s$ gave a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homomorphisms $TM \to E$). Then this is the level-$0$ exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: a level-$0$ exterior derivative in a bundle-valued theory of differential forms. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$, the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, the Riemann curvature tensor! Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. In general this is the curvature of the bundle. Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$.
We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. The torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up. If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$? Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty. @Ultradark I don't know what you mean, but you seem down in the dumps, champ.
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
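The brute-force claim is easy to machine-check: close each proposed generating set under composition and count. A short sketch (permutations written 0-indexed as tuples; the generator choices mirror the ones above):

```python
def compose(p, q):                      # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def generate(gens):
    """Close a set of permutations under composition (a subgroup, since
    a finite subset of a finite group closed under products is one)."""
    group = set(gens)
    frontier = set(gens)
    while frontier:
        new = {compose(a, b) for a in frontier for b in group} \
            | {compose(a, b) for a in group for b in frontier}
        frontier = new - group
        group |= new
    return group

gens_by_order = {
    1: [(0, 1, 2, 3)],                    # identity only
    2: [(1, 0, 2, 3)],                    # a 2-cycle
    3: [(1, 2, 0, 3)],                    # a 3-cycle
    4: [(1, 2, 3, 0)],                    # a 4-cycle
    6: [(1, 0, 2, 3), (1, 2, 0, 3)],      # <(12), (123)>, an S_3
    8: [(1, 2, 3, 0), (1, 0, 3, 2)],      # a dihedral 2-Sylow
    12: [(1, 2, 0, 3), (1, 0, 3, 2)],     # a 3-cycle + double transposition: A_4
    24: [(1, 0, 2, 3), (1, 2, 3, 0)],     # all of S_4
}

for d, gens in gens_by_order.items():
    print(d, len(generate(gens)))         # each subgroup has the right order
```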
The Fundamental Theorem of Finitely Generated Abelian groups is like the Fundamental Theorem of Arithmetic: it describes a "canonical way" of expressing a finitely generated abelian group as a direct sum (in fact, two different ways), in a way that is "essentially unique", and where two groups are isomorphic if and only if they have the same "canonical way of being described." The analogy with the Fundamental Theorem of Arithmetic is that the latter tells you that there is a unique way (up to order) of expressing a positive integer as a product of powers of distinct primes; it does not tell you that there is only one way of expressing a positive integer as a product. So, the fact that we can write $36$ as $6\times 6$, with neither factor a prime power, does not contradict the Fundamental Theorem of Arithmetic. The Fundamental Theorem of Arithmetic is reflected in the fact that we can write $36$ as a product of powers of distinct primes (namely $2^2\times 3^2$) and that this is the only way to express $36$ as a product of powers of distinct primes (up to order). But it says nothing about other ways of expressing $36$ as a product. You can also use the Fundamental Theorem of Arithmetic to say that every positive integer can be written as $n=q_1q_2\cdots q_m$, where $q_1\leq q_2\leq\cdots\leq q_m$ are all primes, and this expression is unique in that if $n=p_1p_2\cdots p_n$ with $p_1\leq p_2\leq\cdots\leq p_n$ primes, then $m=n$ and $p_i=q_i$ for each $i$. Even though you have two different expressions, each one is "unique within its domain". 
The Fundamental Theorem for Finitely Generated Abelian groups says that you have two different "canonical decompositions": one into cyclic groups of prime power order, and one into numbers that divide each other:

Every finitely generated abelian group $G$ can be written as $$G\cong \mathbb{Z}^r \oplus \mathbb{Z}_{p_1^{a_1}}\oplus\cdots\oplus \mathbb{Z}_{p_k^{a_k}}$$ where $r,k$ are nonnegative integers, $p_1,\ldots,p_k$ are primes, and $a_1,\ldots,a_k$ are positive integers. Moreover, this expression is unique in the sense that if $$G\cong\mathbb{Z}^s\oplus\mathbb{Z}_{q_1^{b_1}}\oplus\cdots\oplus \mathbb{Z}_{q_{\ell}^{b_{\ell}}}$$ with $s,\ell$ nonnegative integers, $q_1,\ldots,q_{\ell}$ primes, and $b_1,\ldots,b_{\ell}$ positive integers, then $r=s$, $k=\ell$, and there is a permutation $\sigma$ of $\{1,\ldots,k\}$ such that $p_i=q_{\sigma(i)}$ and $a_i=b_{\sigma(i)}$ for all $i$.

Every finitely generated abelian group $G$ can be written as $$G\cong \mathbb{Z}^r\oplus\mathbb{Z}_{n_1}\oplus\cdots\oplus\mathbb{Z}_{n_t}$$ where $r,t$ are nonnegative integers, $n_1,\ldots,n_t$ are positive integers greater than $1$, and $n_t|n_{t-1}|\cdots|n_1$; moreover, the expression is unique in the sense that if $G$ can also be written as $$G\cong \mathbb{Z}^s\oplus\mathbb{Z}_{m_1}\oplus\cdots\oplus \mathbb{Z}_{m_u}$$ where $s,u$ are nonnegative integers, $m_1,\ldots,m_u$ are positive integers greater than $1$, and $m_u|m_{u-1}|\cdots|m_1$, then $r=s$, $t=u$, and $m_i=n_i$ for each $i$.

For $\mathbb{Z}_6$, the first format of the decomposition says that we can write it as $\mathbb{Z}_2\oplus\mathbb{Z}_3$, and that this is the only way to write it as a direct sum of cyclic groups of prime power order (except for the trivial $\mathbb{Z}_3\oplus\mathbb{Z}_2$, which is really "the same way").
The second part says that we can also write it as $\mathbb{Z}_6$, and that this is the only way to write it as a direct sum of cyclic groups in such a way that the order of each one divides the order of the previous one. That is, we have two different "unique factorizations", depending on which format you want to use.
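Passing between the two canonical forms is mechanical: factor each invariant factor into prime powers, or greedily recombine the largest remaining power of each prime. A small sketch using sympy's `factorint`:

```python
from collections import defaultdict
from sympy import factorint

def primary_decomposition(invariant_factors):
    """Invariant factors -> sorted list of prime powers."""
    return sorted(p**a for n in invariant_factors
                  for p, a in factorint(n).items())

def invariant_factors(prime_powers):
    """Prime powers -> divisor chain, returned in ascending order."""
    by_prime = defaultdict(list)
    for q in prime_powers:
        ((p, a),) = factorint(q).items()   # each q is a single prime power
        by_prime[p].append(q)
    for p in by_prime:
        by_prime[p].sort(reverse=True)     # largest power of each prime first
    width = max(len(v) for v in by_prime.values())
    chain = []
    for i in range(width):
        n = 1
        for p in by_prime:
            if i < len(by_prime[p]):
                n *= by_prime[p][i]
        chain.append(n)
    return sorted(chain)

print(primary_decomposition([6]))      # [2, 3]:  Z_6 = Z_2 + Z_3
print(invariant_factors([2, 2, 3]))    # [2, 6]:  Z_2 + Z_2 + Z_3 = Z_2 + Z_6
```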
I have a line $L$ in the plane expressed as the points in $L = \{(x,y) \in {\mathbb{R}}^2 : x \cos \theta + y \sin \theta = r \; \wedge \; 0 < \theta < \pi/2 \}$ (note that the line cannot be fully horizontal or fully vertical). This line can possibly intersect the $y$-axis in $w-m \leq y \leq w+m$ for a fixed frame width $w > 0$ and a margin $0 \leq m \ll w$ (the line is generally "stuck" to a certain distance from the origin). Let's call the point of intersection $Q$. I need to rotate line $L$ in either direction around $Q$ with angle $\phi$ (generally quite a small rotation; $\lvert\phi\rvert < \pi/20$). After the rotation I need to translate the line by a vector $\mathbf{t}$ perpendicular to the now rotated line. Again the distance $\lVert \mathbf{t} \rVert$ is generally small, but its direction is always in the direction of the previous rotation. Question: What is the relationship between the original line $L$'s parameters $\theta$ and $r$ and the new rotated and translated line's parameters ${\theta}_\text{new}$ and $r_\text{new}$? EDIT - Feb 8th 2012: Major changes. Original posing of the question was entirely wrong. The geometrical situation is now quite different and not quite as trivial as hinted at below in the comments.
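One way to pin the relationship down is to construct it numerically: $Q = (0, r/\sin\theta)$ is the $y$-intercept, rotating about $Q$ adds $\phi$ to $\theta$, and re-projecting $Q$ onto the new normal (plus the perpendicular shift $\lVert\mathbf{t}\rVert$) gives the new $r$. A sketch under those assumptions (signs of $\phi$ and $t$ to be chosen per the rotation direction):

```python
import math

def transform_line(theta, r, phi, t):
    """Rotate x cos(theta) + y sin(theta) = r by phi about its
    y-intercept Q, then translate by distance t along the new normal.

    Assumes sin(theta) != 0 so that Q = (0, r / sin(theta)) exists.
    Returns (theta_new, r_new) of the resulting line.
    """
    Qy = r / math.sin(theta)
    theta_new = theta + phi                # the normal turns with the line
    # the rotated line still passes through Q, so r' = Q . (cos', sin');
    # translating by t along the new normal then adds t to r'
    r_new = Qy * math.sin(theta_new) + t
    return theta_new, r_new

theta, r, phi, t = 0.6, 2.0, 0.05, 0.1     # made-up sample values
theta_new, r_new = transform_line(theta, r, phi, t)
print(theta_new, r_new)
```

So under these conventions, $\theta_\text{new} = \theta + \phi$ and $r_\text{new} = \dfrac{r}{\sin\theta}\,\sin(\theta+\phi) + \lVert\mathbf{t}\rVert$.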
Take $x>0$ large, $t\in \mathbb R$, $q\in \mathbb N$ and a non-principal character $\chi$ mod $q$. If you want, take $t\leq x$. How do I bound \[ \sum _{n\leq x}\frac {\chi (n)}{n^{it}}?\] My guess was that this is $\ll \sqrt {qt}$, based on thinking about the $q=1$ case, which is the relatively well known statement \[ \sum _{n\leq x}\frac {1}{n^{it}}=\text {main term }+\mathcal O\left (1\right ).\] Euler–Maclaurin and Pólya–Vinogradov show the sum in question to be $\ll t\sqrt q$, which is too weak. But in the $q=1$ case EM may be combined with Van der Corput's summation formula to get an $\mathcal O(1)$ bound. If I try to replicate that argument, I get stuck since I don't know how to bound (for $a\in \mathbb N$ with $(a,q)=1$) \[ \int _x^\infty \frac {e(ua/q)\,du}{u^{it}}.\] One idea would be complex analysis: Perron's formula says for some $c>1$ and any $T>0$ the sum is (the error terms really only being "essentially" as small as stated) $$\int _{c\pm iT}\frac {L_{\chi }(s+it)\,x^s\,ds}{s}+\mathcal O\left (x/T\right )$$ where by the Residue Theorem the main term is, for some $0<c'<1$, $$\int _{c'\pm iT}\frac {L_{\chi }(s+it)x^s\,ds}{s} +\int _{c'+iT}^{c+iT}\frac {L_{\chi }(s+it)x^s\,ds}{s}+(\text { similar integral }).$$ Taking absolute values gives a total error something like $$\ll x/T+\sqrt {T+|t|},$$ which is also too large. But explicitly computing the vertical integral via the functional equation seems to get better bounds, and gives my required result.
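Not an answer, but the cancellation is easy to observe numerically; a quick experiment with a quadratic character (all parameter values arbitrary):

```python
import cmath

def legendre(n, p):
    """Legendre symbol (n/p) via Euler's criterion; 0 if p divides n."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

p, t, x = 11, 50.0, 100_000
# n^{-it} = exp(-i t log n)
S = sum(legendre(n, p) * cmath.exp(-1j * t * cmath.log(n))
        for n in range(1, x + 1))
print(abs(S))   # far below the trivial bound x, consistent with cancellation
```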
I'm trying to play with bond-future options. A bond future is a future contract on a basket of bonds. The short side will deliver the so-called cheapest-to-deliver (CTD) bond. A bond-future option is therefore an option on this basket. Let's simplify things such that: the option is directly struck on the CTD; the CTD is a zero-coupon bond; the option is European, $t < T_{opt} \leq T_{for} < T_{ctd}$, thus paying at option's expiration $T_{opt}$: $$ \left( P(T_{opt},T_{for},T_{ctd}) - K \right)^+ $$ where: $T_{for}$ is the underlying forward maturity, $T_{ctd}$ the CTD bond maturity and $P(T_{opt},T_{for},T_{ctd})$ is the $T_{opt}$-value of the bond forward maturing in $T_{for}$. If $T_{opt} \equiv T_{for} = T$ then the bond-future option reduces to a standard option on the CTD bond, paying at $T$: $$ \left( P(T,T_{ctd}) - K \right)^+ $$ where $P(T,T_{ctd})$ is the price at the future date $T$ of the CTD bond and I have applied the identity $P(T_{opt} = T,T_{for} = T,T_{ctd}) \equiv P(T,T_{ctd})$. Now, the caplet (or floorlet) representation for options on (zero-coupon) bonds is well known (see, for example, equation 2.26 of Brigo–Mercurio, "Interest Rate Models - Theory and Practice: With Smile, Inflation and Credit"). My question is: does there exist any such representation for bond-future options in terms of options on the forward rates? Thanks in advance. gab Addendum: if it helps, the relation between bond and forward rate is (should be ;) ): $$ P(t,T_{for},T_{ctd}) = \frac{1}{1 + \tau(T_{for},T_{ctd}) F(t,T_{for},T_{ctd})} $$ where $F(t,T_{for},T_{ctd})$ denotes the time-$t$ value of the forward rate for accrual period $[T_{for};T_{ctd}]$.
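Not a full answer, but in the simplified zero-coupon setting the algebra does give a floorlet-style rewrite: with $P = 1/(1+\tau F)$, one gets $(P-K)^+ = K\tau P\,(\tilde K - F)^+$ where $\tilde K = (1-K)/(K\tau)$, i.e. a call on the bond is proportional to a floorlet-like payoff on the forward rate. A numerical check of that identity (the numbers are made up):

```python
tau, K = 0.5, 0.97                  # accrual fraction and bond strike

def bond_from_rate(F):
    return 1.0 / (1.0 + tau * F)

K_rate = (1.0 - K) / (K * tau)      # bond strike transformed to rate space

for F in [0.01, 0.05, 0.10]:
    P = bond_from_rate(F)
    bond_call = max(P - K, 0.0)
    floorlet_like = K * tau * P * max(K_rate - F, 0.0)
    print(F, bond_call, floorlet_like)   # the two payoffs coincide
```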
TL;DR: Your Maxwell–Boltzmann diagram up there is not sufficient to describe the variation of rate with $E_\mathrm{a}$. Simply evaluating the shaded area alone does not reproduce the exponential part of the rate constant correctly, and therefore the shaded area should not be taken as a quantitative measure of the rate (only a qualitative one). There is a subtle issue with the way you've presented your drawing. However, we'll come to that slightly later. First, let's establish that the "proportion of molecules with sufficient energy to react" is given by $$P(\varepsilon) = \exp \left(-\frac{\varepsilon}{kT}\right) \tag{1}$$ Therefore, for a reaction $\ce{X <=> Y}$ with uncatalysed forward activation energy $E_\mathrm{f}$ and uncatalysed backward activation energy $E_\mathrm{b}$, the rate constants are given by $$k_\mathrm{f,uncat} = A_\mathrm{f} \exp \left(-\frac{E_\mathrm{f}}{kT}\right) \tag{2} $$ $$k_\mathrm{b,uncat} = A_\mathrm{b} \exp \left(-\frac{E_\mathrm{b}}{kT}\right) \tag{3} $$ The equilibrium constant of this reaction is given by $$K_\mathrm{uncat} = \frac{k_\mathrm{f,uncat}}{k_\mathrm{b,uncat}} = \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \tag{4}$$ As you have noted, the change in activation energy due to the catalyst is the same for the forward and backward reactions. I would be a bit careful with using "$\mathrm{d}E$" as the notation for this change, since $\mathrm{d}$ implies an infinitesimal change, and if the change is infinitesimal, your catalyst isn't much of a catalyst. So, I'm going to use $\Delta E$.
We then have $$k_\mathrm{f,cat} = A_\mathrm{f} \exp \left(-\frac{E_\mathrm{f} - \Delta E}{kT}\right) \tag{5} $$ $$k_\mathrm{b,cat} = A_\mathrm{b} \exp \left(-\frac{E_\mathrm{b} - \Delta E}{kT}\right) \tag{6} $$ and the new equilibrium constant is $$\begin{align}K_\mathrm{cat} = \frac{k_\mathrm{f,cat}}{k_\mathrm{b,cat}} &= \frac{A_\mathrm{f}\exp[-(E_\mathrm{f} - \Delta E)/kT]}{A_\mathrm{b}\exp[-(E_\mathrm{b} - \Delta E)/kT]} \tag{7} \\[0.2cm]&= \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \frac{\exp(\Delta E/kT)}{\exp(\Delta E/kT)} \tag{8} \\[0.2cm]&= \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \tag{9}\end{align}$$ Equations $(9)$ and $(4)$ are the same, so there is no change in the equilibrium constant. The question then arises as to how eq. $(1)$ is obtained. The simplest way is to invoke a Boltzmann distribution, which almost by definition gives the desired form. However, since you have a Maxwell–Boltzmann curve, I guess I should talk about it a bit more. The fraction of molecules with energy $E_\mathrm{a}$ or greater is simply the shaded area under the curve, i.e. one can obtain it by integrating the curve over the desired range. $$P(\varepsilon) = \int_{E_\mathrm{a}}^\infty f(\varepsilon)\,\mathrm{d}\varepsilon \tag{10}$$ where the Maxwell–Boltzmann distribution of energies is given by (see Wikipedia) $$f(\varepsilon) = \frac{2}{\sqrt{\pi}}\left(\frac{1}{kT}\right)^{3/2} \sqrt{\varepsilon} \exp\left(-\frac{\varepsilon}{kT}\right) \tag{11}$$ At first glance, we would expect this to be directly proportional to the exponential part of the rate constant, i.e. $\exp(-E_\mathrm{a}/kT)$. Alas, it is not that simple. If you try to work out the integral $$\int_{E_\mathrm{a}}^{\infty} \frac{2}{\sqrt{\pi}}\left(\frac{1}{kT}\right)^{3/2} \sqrt{\varepsilon} \exp\left(-\frac{\varepsilon}{kT}\right) \,\mathrm{d}\varepsilon \tag{12}$$ you don't get anything close to the form of $\exp(-E_\mathrm{a}/kT)$. 
Instead, you get some "error function" rubbish, and some nasty square roots and exponentials. (You can use WolframAlpha to verify this.) Why is this so? Well, it turns out that there are other terms that also depend on $\varepsilon$ and therefore need to go inside that integral (they aren't constants and can't be taken out). The simplest example is that faster molecules tend to collide more often, so even though the right-hand tail of the diagram seems to contribute very little to the "proportion of molecules with sufficient energy", it actually contributes more significantly to the overall rate because these molecules collide more often. In collision theory this is described using the "relative velocity" of the particles, $v_\mathrm{rel}$. There is also another complication, in that in the Maxwell–Boltzmann distribution the direction of motion of the particles is not accounted for. (For more insight please refer to Levine, Physical Chemistry, 6th ed., p 467.) Therefore, there has to be yet another term that takes into account the direction of movement of the particles. The idea is that a head-on collision between two molecules is more likely to overcome the activation barrier than is a $90^\circ$ collision. The term that compensates for this is the "collision cross-section" $\sigma$. If you go through the maths (and I don't really intend to type it out here, it's rather long, but I will give some references) then you will find that at the end you will recover the form $\exp(-\varepsilon/kT)$. Once you have arrived at this, it's very straightforward to see that the increases in rate of both the forward and backward reactions cancel each other out. Now, as for the promised references, Pilling and Seakins's Reaction Kinetics (pp 61-2) has a short outline of the proof, and Atkins's Physical Chemistry (10th ed.) has a slightly longer proof on pp 883-4.
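To make the "error function rubbish" concrete: carrying out the integral in eq. (12) (with energy measured in units of $kT$) gives $P(E_\mathrm{a}) = \operatorname{erfc}(\sqrt{E_\mathrm{a}}) + \frac{2}{\sqrt{\pi}}\sqrt{E_\mathrm{a}}\,e^{-E_\mathrm{a}}$, and the ratio of this to $e^{-E_\mathrm{a}}$ is not constant, which is exactly why the shaded area alone is only a qualitative measure. A quick numerical check (the closed form is my own evaluation of eq. (12), cross-checked by crude quadrature):

```python
import math

def mb(e):
    """Maxwell-Boltzmann energy density, eq. (11), energy in units of kT."""
    return (2 / math.sqrt(math.pi)) * math.sqrt(e) * math.exp(-e)

def frac_above(Ea):
    """Closed form of eq. (12): erfc(sqrt(Ea)) + (2/sqrt(pi)) sqrt(Ea) e^-Ea."""
    return math.erfc(math.sqrt(Ea)) + mb(Ea)

def frac_above_numeric(Ea, upper=60.0, n=100_000):
    """Crude midpoint quadrature, as a cross-check of the closed form."""
    h = (upper - Ea) / n
    return sum(mb(Ea + (i + 0.5) * h) for i in range(n)) * h

for Ea in [1.0, 2.0, 4.0]:
    print(Ea, frac_above(Ea), frac_above(Ea) / math.exp(-Ea))
    # the ratio grows with Ea: the area is NOT proportional to e^-Ea
```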
I am reading "Introduction to Quantum Mechanics" by David Griffiths and I am having trouble understanding part of a derivation of $\frac{d\langle x\rangle }{dt}$ in section 1.5 - Momentum - of the text. The author gives EQN 1.29 as $$ \frac{d\langle x\rangle }{dt} = \frac{i \hbar}{2m} \int _{-\infty} ^{\infty} x \frac{\partial }{\partial x} \left[ \frac{\partial \Psi}{\partial x}\Psi^* - \frac{\partial \Psi^*}{\partial x}\Psi \right] dx $$ He then does integration by parts, saying in a footnote: Under the integral sign, then, you can peel a derivative off one factor in a product, and slap it onto the other one - it'll cost you a minus sign, and you pick up a boundary term. and gets EQN 1.30: $$ \frac{d\langle x\rangle }{dt} = -\frac{i\hbar}{2m} \int _{-\infty} ^{\infty} \left( \Psi^* \frac{\partial \Psi}{\partial x} - \frac{\partial \Psi^* }{\partial x}\Psi \right) dx $$ He repeats the integration by parts to derive 1.31: $$ \frac{d\langle x\rangle }{dt} = -\frac{i\hbar}{m} \int _{-\infty} ^{\infty} \Psi^* \frac{\partial \Psi}{\partial x} dx $$ I am not sure how this is integration by parts. Every integration by parts I have ever done has yielded two terms. He mentions a second term, saying: I used the fact that $\frac{\partial x}{\partial x} = 1$, and threw away the boundary term, on the ground that $\Psi$ goes to zero at $\pm$ infinity. I saw this equation posted on Stack Exchange in a similar question: $$ \int\left(\frac{\partial}{\partial x}f(x)\right)\ g(x)\ \text dx=\int\ f(x)\left(-\frac{\partial}{\partial x}g(x)\right)\ \text dx, $$ Is this true generally? How is this integration by parts? Why can we throw away the other term? How does integration by parts lead to 1.31?
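The quoted identity can be spot-checked symbolically. A small sketch with sympy, using Gaussians as stand-ins for $\Psi$ (they vanish at $\pm\infty$, so the boundary term drops and only the sign-flipped term survives):

```python
import sympy as sp

x = sp.symbols('x', real=True)
# f and g play the roles of the two factors in the product;
# both decay fast enough that the boundary term at +-infinity vanishes.
f = sp.exp(-x**2)
g = x * sp.exp(-x**2)

lhs = sp.integrate(sp.diff(f, x) * g, (x, -sp.oo, sp.oo))
rhs = -sp.integrate(f * sp.diff(g, x), (x, -sp.oo, sp.oo))
print(sp.simplify(lhs - rhs))  # 0: the derivative moved over at the cost of a sign
```

This is exactly ordinary integration by parts, $\int u\,dv = uv - \int v\,du$, with the $uv$ piece evaluated at $\pm\infty$ and discarded.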
There is some super-heated $NH_3$ in an isolated tank with initial pressure and temperature of $1600 \ kPa$ and $50^{\circ} C$, respectively. Gas leaks out through a valve on the tank very slowly, so that the gas exits reversibly. In the final state, the pressure is $1400 \ kPa$ and the $NH_3$ is still super-heated. The problem asks to show, using mass and entropy balances, that $s_1=s_2$, where $s$ is the molar entropy of the gas in the tank. Using the total entropy balance with the inside of the tank as the control volume, we get $$(m_2 s_2 -m_1 s_1)_{C.V.}=-\dot{m_e} s_e \Delta t$$ What is $s_e$ now? It's not constant! How can I reduce this equation to $s_2=s_1$?
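One way to see the reduction (a sketch, under the assumption that the slowly exiting gas always carries the instantaneous entropy of the tank contents, $s_e = s$): write the balances in differential rather than finite-difference form. The mass balance gives $dm = -\dot{m}_e\,dt$, so the entropy balance becomes $$d(ms) = -\dot{m}_e s_e\,dt = s_e\,dm.$$ Expanding the left side and setting $s_e = s$, $$m\,ds + s\,dm = s\,dm \quad\Rightarrow\quad m\,ds = 0 \quad\Rightarrow\quad ds = 0,$$ so the molar entropy of the gas remaining in the tank is constant, $s_2 = s_1$. The finite-difference form quoted above hides this because it freezes $s_e$ over the whole interval $\Delta t$.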
What is the instantaneous P&L of a variance swap? Is it $(\sigma^{2}_{t}-\sigma^{2}_{implied})dt$? A variance swap has a set of fixing times, and the volatility between those times has no specified effect. Therefore you end up wanting to apply a model. For a model-free approximation, though, your formula works up to a constant. The definition of a variance swap payoff is $ \int^{T+\Delta}_T \mathbb{E}_t[v_s]\, ds $, where $v_s$ is the variance and $\mathbb{E}_t[v_s]$ is the expectation, taken at time $t$, of the variance at time $s$. Therefore, the P&L over a short interval $\delta$ is $$ \int^{T+\Delta}_T \mathbb{E}_t[v_s]\, ds - \int^{T+\Delta}_{T} \mathbb{E}_{t-\delta}[v_s]\, ds. $$
Hello Everybody, Instead of solving the geodesic equations for the Schwarzschild metric, in many books (nearly all books that I consulted), conserved quantities are looked at instead. So take for e.g. Carroll: he looks at the Killing equation and extracts the equation [tex] K_\mu \frac{dx^\mu}{d \lambda}= constant, [/tex] and he then writes: "In addition we have another constant of the motion for geodesics", and he writes the normalization condition: [tex] \epsilon = -g_{\mu \nu} \frac{dx^\mu}{d \lambda} \frac{dx^\nu}{d \lambda}. [/tex] Now I don't understand why this set of equations is equivalent to the geodesic equations. And I do not understand why we are allowed to use these equations to extract information about the geodesics. Maybe the questions are the same, but I hope you get my point. Any help would be greatly appreciated!!
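For what it's worth, here is the standard one-line argument (a sketch) for why $K_\mu \frac{dx^\mu}{d\lambda}$ is conserved along any geodesic with tangent $u^\mu = dx^\mu/d\lambda$: [tex] \frac{d}{d\lambda}\left(K_\mu u^\mu\right) = u^\nu \nabla_\nu \left(K_\mu u^\mu\right) = u^\mu u^\nu \nabla_\nu K_\mu + K_\mu u^\nu \nabla_\nu u^\mu = 0. [/tex] The first term vanishes because Killing's equation $\nabla_{(\mu}K_{\nu)} = 0$ makes $\nabla_\nu K_\mu$ antisymmetric while $u^\mu u^\nu$ is symmetric; the second vanishes by the geodesic equation $u^\nu \nabla_\nu u^\mu = 0$. Likewise $g_{\mu\nu}u^\mu u^\nu$ is conserved because $\nabla_\alpha g_{\mu\nu} = 0$. So these conserved quantities are first integrals of the geodesic equations: each one is a first-order ODE, and with enough of them (energy, angular momentum, and $\epsilon$ in Schwarzschild) they determine the same orbits as the second-order geodesic equations, which is why the books work with them instead.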
Genus¶ This file contains a moderately-optimized implementation to compute the genus of a simple connected graph. It runs about a thousand times faster than the previous version in Sage, not including asymptotic improvements. The algorithm works by enumerating combinatorial embeddings of a graph, and computing the genus of these via the Euler characteristic. We view a combinatorial embedding of a graph as a pair of permutations \(v,e\) which act on a set \(B\) of \(2|E(G)|\) “darts”. The permutation \(e\) is an involution, and its orbits correspond to edges in the graph. Similarly, the orbits of \(v\) correspond to the vertices of the graph, and those of \(f = ve\) correspond to faces of the embedded graph. The requirement that the group \(\langle v,e \rangle\) acts transitively on \(B\) is equivalent to the graph being connected. We can compute the genus of a graph by \(2 - 2g = V - E + F\) where \(E\), \(V\), and \(F\) denote the number of orbits of \(e\), \(v\), and \(f\) respectively. We make several optimizations to the naive algorithm, which are described throughout the file. class sage.graphs.genus.simple_connected_genus_backtracker¶ Bases: object A class which computes the genus of a DenseGraph through an extremely slow but relatively optimized algorithm. This is “only” exponential for graphs of bounded degree, and feels pretty snappy for 3-regular graphs. The generic runtime is \(|V(G)| \prod_{v \in V(G)} (deg(v)-1)!\) which is \(2^{|V(G)|}\) for 3-regular graphs, and can achieve \(n(n-1)!^{n}\) for the complete graph on \(n\) vertices. We can handily compute the genus of \(K_6\) in milliseconds on modern hardware, but \(K_7\) may take a few days. Don’t bother with \(K_8\), or any graph with more than one vertex of degree 10 or worse, unless you can find an a priori lower bound on the genus and expect the graph to have that genus.
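The orbit-counting recipe above can be sketched in a few lines of plain Python (this is an illustration, not Sage's actual implementation; darts are integers and permutations are dicts):

```python
def orbits(perm):
    """Count the orbits (cycles) of a permutation given as a dict."""
    seen, count = set(), 0
    for start in perm:
        if start not in seen:
            count += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = perm[x]
    return count

def genus(v, e):
    """Genus of the embedding (v, e) via 2 - 2g = V - E + F, with f = v.e."""
    f = {d: v[e[d]] for d in e}
    chi = orbits(v) - orbits(e) + orbits(f)
    return (2 - chi) // 2

# K3 embedded in the plane: darts 0..5, e pairs the two darts of each edge,
# v rotates the darts around each of the three vertices.
e = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}
v = {0: 2, 2: 0, 1: 4, 4: 1, 3: 5, 5: 3}
print(genus(v, e))  # 0: the triangle embeds in the sphere with 2 faces
```

Here \(V=3\), \(E=3\), \(F=2\), so \(\chi = 2\) and \(g = 0\), as expected for a planar embedding.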
Warning THIS MAY SEGFAULT OR HANG ON: DISCONNECTED GRAPHS DIRECTED GRAPHS LOOPED GRAPHS MULTIGRAPHS EXAMPLES: sage: import sage.graphs.genus sage: G = graphs.CompleteGraph(6) sage: G = Graph(G, sparse=False) sage: bt = sage.graphs.genus.simple_connected_genus_backtracker(G._backend.c_graph()[0]) sage: bt.genus() #long time 1 sage: bt.genus(cutoff=1) 1 sage: G = graphs.PetersenGraph() sage: G = Graph(G, sparse=False) sage: bt = sage.graphs.genus.simple_connected_genus_backtracker(G._backend.c_graph()[0]) sage: bt.genus() 1 sage: G = graphs.FlowerSnark() sage: G = Graph(G, sparse=False) sage: bt = sage.graphs.genus.simple_connected_genus_backtracker(G._backend.c_graph()[0]) sage: bt.genus() 2 genus(style=1, cutoff=0, record_embedding=False)¶ Compute the minimal or maximal genus of self’s graph. Note, this is a remarkably naive algorithm for a very difficult problem. Most interesting cases will take millennia to finish, with the exception of graphs with max degree 3. INPUT: style – integer (default: 1); find minimum genus if 1, maximum genus if 2 cutoff – integer (default: 0); stop searching if search style is 1 and genus \(\leq\) cutoff, or if style is 2 and genus \(\geq\) cutoff. This is useful where the genus of the graph has a known bound. record_embedding – boolean (default: False); whether or not to remember the best embedding seen. This embedding can be retrieved with self.get_embedding(). OUTPUT: the minimal or maximal genus for self’s graph.
EXAMPLES: sage: import sage.graphs.genus sage: G = Graph(graphs.CompleteGraph(5), sparse=False) sage: gb = sage.graphs.genus.simple_connected_genus_backtracker(G._backend.c_graph()[0]) sage: gb.genus(cutoff=2, record_embedding=True) 2 sage: E = gb.get_embedding() sage: gb.genus(record_embedding=False) 1 sage: gb.get_embedding() == E True sage: gb.genus(style=2, cutoff=5) 3 sage: G = Graph(sparse=False) sage: gb = sage.graphs.genus.simple_connected_genus_backtracker(G._backend.c_graph()[0]) sage: gb.genus() 0 get_embedding()¶ Return an embedding for the graph. If min_genus_backtrackhas been called with record_embedding = True, then this will return the first minimal embedding that we found. Otherwise, this returns the first embedding considered. EXAMPLES: sage: import sage.graphs.genus sage: G = Graph(graphs.CompleteGraph(5), sparse=False) sage: gb = sage.graphs.genus.simple_connected_genus_backtracker(G._backend.c_graph()[0]) sage: gb.genus(record_embedding=True) 1 sage: gb.get_embedding() {0: [1, 2, 3, 4], 1: [0, 2, 3, 4], 2: [0, 1, 4, 3], 3: [0, 2, 1, 4], 4: [0, 3, 1, 2]} sage: G = Graph(sparse=False) sage: G.add_edge(0,1) sage: gb = sage.graphs.genus.simple_connected_genus_backtracker(G._backend.c_graph()[0]) sage: gb.get_embedding() {0: [1], 1: [0]} sage: G = Graph(sparse=False) sage: gb = sage.graphs.genus.simple_connected_genus_backtracker(G._backend.c_graph()[0]) sage: gb.get_embedding() {} sage.graphs.genus. simple_connected_graph_genus( G, set_embedding=False, check=True, minimal=True)¶ Compute the genus of a simple connected graph. Warning THIS MAY SEGFAULT OR HANG ON: DISCONNECTED GRAPHS DIRECTED GRAPHS LOOPED GRAPHS MULTIGRAPHS DO NOT CALL WITH check = FalseUNLESS YOU ARE CERTAIN. 
EXAMPLES: sage: import sage.graphs.genus sage: from sage.graphs.genus import simple_connected_graph_genus as genus sage: [genus(g) for g in graphs(6) if g.is_connected()].count(1) 13 sage: G = graphs.FlowerSnark() sage: genus(G) # see [1] 2 sage: G = graphs.BubbleSortGraph(4) sage: genus(G) 0 sage: G = graphs.OddGraph(3) sage: genus(G) 1 REFERENCES:
I am reading Steven Shreve's book "Stochastic Calculus for Finance 2 Continuous-Time Models", page 304. My intuition is that when the stock price gets closer to the barrier, it becomes more and more likely that the price will cross the barrier in the near future, hence the option has a large probability of becoming worthless. As a consequence, the price of the option should get closer and closer to zero. But I cannot justify this intuition from the formula on page 304. Can someone explain this? Thanks a lot. The formula is $$V(0)=S(0)I_1-KI_2-S(0)I_3+KI_4$$ where $$\quad I_1=\frac{1}{\sqrt{2\pi T}}\displaystyle\int_{k}^be^{\sigma w-rT+\alpha w-\frac{1}{2}\alpha^2T-\frac{1}{2T}w^2}dw$$ $$I_2=\frac{1}{\sqrt{2\pi T}}\displaystyle\int_{k}^be^{-rT+\alpha w-\frac{1}{2}\alpha^2T-\frac{1}{2T}w^2}dw$$ and $$\quad I_3=\frac{1}{\sqrt{2\pi T}}\displaystyle\int_{k}^be^{\sigma w-rT+\alpha w-\frac{1}{2}\alpha^2T-\frac{2}{T}b^2+\frac{2}{T}bw-\frac{1}{2T}w^2}dw$$ $$I_4=\frac{1}{\sqrt{2\pi T}}\displaystyle\int_{k}^be^{-rT+\alpha w-\frac{1}{2}\alpha^2T-\frac{2}{T}b^2+\frac{2}{T}bw-\frac{1}{2T}w^2}dw$$
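One way to see the intuition is to just integrate the four terms numerically. Note that as $S(0)\to B$ the reflection point $b\to 0$, the reflection terms in $I_3, I_4$ vanish, so $I_3\to I_1$ and $I_4\to I_2$ and $V(0)\to 0$. A sketch below checks this numerically; the parameter mapping $\alpha=(r-\sigma^2/2)/\sigma$, $k=\ln(K/S(0))/\sigma$, $b=\ln(B/S(0))/\sigma$ for an up-and-out call with barrier $B$ is my assumption about Shreve's notation, so treat the numbers as illustrative only:

```python
import numpy as np
from scipy.integrate import quad

def up_and_out_call(S0, K, B, r, sigma, T):
    # Assumed parameter mapping (my reading of Shreve's notation):
    alpha = (r - 0.5 * sigma**2) / sigma   # risk-neutral drift over sigma
    k = np.log(K / S0) / sigma             # lower integration limit
    b = np.log(B / S0) / sigma             # upper limit / reflection point

    def term(stock_weight, reflected):
        def integrand(w):
            expo = -r * T + alpha * w - 0.5 * alpha**2 * T - w**2 / (2 * T)
            if stock_weight:               # the extra sigma*w factor in I1, I3
                expo += sigma * w
            if reflected:                  # the reflection terms in I3, I4
                expo += -2 * b**2 / T + 2 * b * w / T
            return np.exp(expo)
        return quad(integrand, k, b)[0] / np.sqrt(2 * np.pi * T)

    I1, I2 = term(True, False), term(False, False)
    I3, I4 = term(True, True), term(False, True)
    return S0 * I1 - K * I2 - S0 * I3 + K * I4

v_far = up_and_out_call(100.0, 90.0, 130.0, r=0.05, sigma=0.2, T=1.0)
v_near = up_and_out_call(129.0, 90.0, 130.0, r=0.05, sigma=0.2, T=1.0)
print(v_far, v_near)  # the value shrinks as S(0) approaches the barrier B
```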
Astrophysics > Astrophysics of Galaxies Title: CO Multi-line Imaging of Nearby Galaxies (COMING). VII. Fourier decomposition of molecular gas velocity fields and bar pattern speed (Submitted on 3 Jan 2019) Abstract: The $^{12}$CO $(J=1\rightarrow0)$ velocity fields of a sample of 20 nearby spiral galaxies, selected from the CO Multi-line Imaging of Nearby Galaxies (COMING) legacy project of Nobeyama Radio Observatory, have been analyzed by Fourier decomposition to determine their basic kinematic properties, such as circular and noncircular velocities. On average, the investigated barred (SAB and SB) galaxies exhibit a ratio of noncircular to circular velocities of molecular gas larger by a factor of 1.5-2 than non-barred (SA) spiral galaxies at radii within the bar semimajor axis $a_\mathrm{b}$ at 1 kpc resolution, with a maximum at a radius of $R/a_\mathrm{b}\sim0.3$. Residual velocity field images, created by subtracting model velocity fields from the data, reveal that this trend is caused by kpc-scale streaming motions of molecular gas in the bar region. Applying a new method based on radial velocity reversal, we estimated the corotation radius $R_\mathrm{CR}$ and bar pattern speed $\Omega_\mathrm{b}$ in seven SAB and SB systems. The ratio of the corotation to bar radius is found to be in a range of $\mathcal{R}\equiv R_\mathrm{CR}/a_\mathrm{b}\sim0.8\mathrm{-}1.6$, suggesting that intermediate (SBb-SBc), luminous barred spiral galaxies host fast and slow rotator bars. Tentative negative correlations are found for $\Omega_\mathrm{b}$ vs. $a_\mathrm{b}$ and $\Omega_\mathrm{b}$ vs. total stellar mass $M_\ast$, indicating that bars in massive disks are larger and rotate slower, possibly a consequence of angular momentum transfer.
The kinematic properties of SAB and SB galaxies, derived from Fourier decomposition, are compared with recent numerical simulations that incorporate various rotation curve models and galaxy interactions. Submission history From: Dragan Salak [view email] [v1] Thu, 3 Jan 2019 07:48:03 GMT (2711kb)
14.3. Matrix Factorization¶ The first version of the matrix factorization model was proposed by Simon Funk in a famous blog post, in which he described the idea of factorizing the interaction matrix. It then became widely known due to the Netflix contest. In 2006, Netflix, a media-streaming and video-rental company, announced a contest to improve its recommender systems. The first team able to improve on the Netflix baseline, Cinematch, by 10 percent would win a one-million-USD prize. This contest attracted a lot of attention and spurred a wave of recommender system research. Later on, the grand prize was won by the BellKor's Pragmatic Chaos team, a combined team of BellKor, Pragmatic Theory, and BigChaos (you do not need to worry about these algorithms now). Although it was an ensemble method (a combination of many algorithms) that finally achieved the best score, matrix factorization played a critical role in the final blend. The technical report on the Netflix Grand Prize solution provides a detailed introduction to the employed model. In this section, we will dive into the details of the matrix factorization model and its implementation. 14.3.1. The Matrix Factorization Model¶ Matrix factorization is a class of collaborative filtering models. Simply put, this model factorizes the user-item interaction matrix (e.g., rating matrix) into the product of two lower-rank matrices. Let \(\mathbf{R} \in \mathbb{R}^{m \times n}\) denote the interaction matrix with \(m\) users and \(n\) items, and the values of \(\mathbf{R}\) represent explicit ratings. It will be factorized into a user latent matrix \(\mathbf{P} \in \mathbb{R}^{m \times k}\) and an item latent matrix \(\mathbf{Q} \in \mathbb{R}^{n \times k}\), where \(k \ll m, n\) is the latent factor size. For a given item \(i\), the elements of \(\mathbf{Q}_i\) measure the extent to which the item possesses characteristics such as the genres and languages of a movie.
For a given user \(u\), the elements of \(\mathbf{P}_u\) measure the extent of interest the user has in items' corresponding characteristics. These factors might measure obvious dimensions as mentioned in those examples or be completely uninterpretable. The predicted ratings can be estimated by \[\hat{\mathbf{R}} = \mathbf{P}\mathbf{Q}^\top\] where \(\hat{\mathbf{R}}\in \mathbb{R}^{m \times n}\) is the predicted rating matrix which has the same shape as \(\mathbf{R}\). One major problem of this prediction rule is that user/item biases cannot be modeled. For example, some users tend to give higher ratings or some items always get lower ratings due to poorer quality. These biases are commonplace in real-world applications. To capture these biases, user-specific and item-specific bias terms are introduced. Specifically, the predicted rating that user \(u\) gives to item \(i\) is calculated by \[\hat{\mathbf{R}}_{ui} = \mathbf{P}_u\mathbf{Q}_i^\top + b_u + b_i\] Then, we train the matrix factorization model by minimizing the mean squared error between predicted rating scores and real rating scores. The objective function is defined as follows: \[\underset{\mathbf{P}, \mathbf{Q}, b}{\operatorname{argmin}} \sum_{(u, i) \in \mathcal{K}} \| \mathbf{R}_{ui} - \hat{\mathbf{R}}_{ui} \|^2 + \lambda (\| \mathbf{P} \|^2_F + \| \mathbf{Q} \|^2_F + b_u^2 + b_i^2)\] where \(\lambda\) denotes the regularization rate. The regularizing term \(\lambda (\| \mathbf{P} \|^2_F + \| \mathbf{Q} \|^2_F + b_u^2 + b_i^2 )\) is used to avoid overfitting by penalizing the magnitudes of the parameters. The \((u, i)\) pairs for which \(\mathbf{R}_{ui}\) is known are stored in the set \(\mathcal{K}=\{(u, i) \mid \mathbf{R}_{ui} \text{ is known}\}\). The model parameters can be learned with an optimization algorithm, such as SGD and Adam. An intuitive illustration of the matrix factorization model is shown below: In the rest of this section, we will explain the implementation of matrix factorization and train the model on the MovieLens dataset.

import d2l
from mxnet import autograd, init, gluon, np, npx
from mxnet.gluon import nn
import mxnet as mx
npx.set_np()

14.3.2. Model Implementation¶ First, we implement the matrix factorization model described above.
The user and item latent factors can be created with nn.Embedding. The input_dim is the number of users/items and the output_dim is the dimension of the latent factors (\(k\)). We can also use nn.Embedding to create the user/item biases by setting the output_dim to one. In the forward function, user and item ids are used to look up the embeddings.

class MF(nn.Block):
    def __init__(self, num_factors, num_users, num_items, **kwargs):
        super(MF, self).__init__(**kwargs)
        self.P = nn.Embedding(input_dim=num_users, output_dim=num_factors)
        self.Q = nn.Embedding(input_dim=num_items, output_dim=num_factors)
        self.user_bias = nn.Embedding(num_users, 1)
        self.item_bias = nn.Embedding(num_items, 1)

    def forward(self, user_id, item_id):
        P_u = self.P(user_id)
        Q_i = self.Q(item_id)
        b_u = self.user_bias(user_id)
        b_i = self.item_bias(item_id)
        outputs = (P_u * Q_i).sum(axis=1) + np.squeeze(b_u) + np.squeeze(b_i)
        return outputs.flatten()

14.3.3. Evaluation Measures¶ We then implement the RMSE (root-mean-square error) measure, which is commonly used to measure the differences between rating scores predicted by the model and the actually observed ratings (ground truth). RMSE is defined as: \[\mathrm{RMSE} = \sqrt{\frac{1}{|\mathcal{T}|}\sum_{(u, i) \in \mathcal{T}}(\mathbf{R}_{ui} -\hat{\mathbf{R}}_{ui})^2}\] where \(\mathcal{T}\) is the set consisting of pairs of users and items that you want to evaluate on. \(|\mathcal{T}|\) is the size of this set. We can use the RMSE function provided by mx.metric.

def evaluator(net, test_iter, ctx):
    rmse = mx.metric.RMSE()  # get the RMSE
    rmse_list = []
    for idx, (users, items, ratings) in enumerate(test_iter):
        u = gluon.utils.split_and_load(users, ctx, even_split=False)
        i = gluon.utils.split_and_load(items, ctx, even_split=False)
        r_ui = gluon.utils.split_and_load(ratings, ctx, even_split=False)
        r_hat = [net(u, i) for u, i in zip(u, i)]
        rmse.update(labels=r_ui, preds=r_hat)
        rmse_list.append(rmse.get()[1])
    return float(np.mean(np.array(rmse_list)))

14.3.4. Training and Evaluating the Model¶ In the training function, we adopt the \(L_2\) loss with weight decay.
The weight decay mechanism has the same effect as the \(L_2\) regularization.

# Save to the d2l package.
def train_recsys_rating(net, train_iter, test_iter, loss, trainer, num_epochs,
                        ctx_list=d2l.try_all_gpus(), evaluator=None, **kwargs):
    num_batches, timer = len(train_iter), d2l.Timer()
    animator = d2l.Animator(xlabel='epoch', xlim=[0, num_epochs], ylim=[0, 2],
                            legend=['train loss', 'test RMSE'])
    for epoch in range(num_epochs):
        metric, l = d2l.Accumulator(3), 0.
        for i, values in enumerate(train_iter):
            timer.start()
            input_data = []
            values = values if isinstance(values, list) else [values]
            for v in values:
                input_data.append(gluon.utils.split_and_load(v, ctx_list))
            train_feat = input_data[0:-1] if len(values) > 1 else input_data
            train_label = input_data[-1]
            with autograd.record():
                preds = [net(*t) for t in zip(*train_feat)]
                ls = [loss(p, s) for p, s in zip(preds, train_label)]
            [l.backward() for l in ls]
            l += sum([l.asnumpy() for l in ls]).mean() / len(ctx_list)
            trainer.step(values[0].shape[0])
            metric.add(l, values[0].shape[0], values[0].size)
            timer.stop()
        if len(kwargs) > 0:  # it will be used in section AutoRec
            test_rmse = evaluator(net, test_iter, kwargs['inter_mat'], ctx_list)
        else:
            test_rmse = evaluator(net, test_iter, ctx_list)
        train_l = l / (i + 1)
        animator.add(epoch + 1, (train_l, None, test_rmse))
    print('train loss %.3f, test RMSE %.3f' % (metric[0] / metric[1], test_rmse))
    print('%.1f examples/sec on %s' % (metric[2] * num_epochs / timer.sum(), ctx_list))

Finally, let's put all things together and train the model. Here, we set the latent factor dimension to 50.
ctx = d2l.try_all_gpus()
num_users, num_items, train_iter, test_iter = d2l.split_and_load_ml100k(
    test_ratio=0.1, batch_size=128)
net = MF(50, num_users, num_items)
net.initialize(ctx=ctx, force_reinit=True, init=mx.init.Normal(0.01))
lr, num_epochs, wd, optimizer = 0.001, 25, 1e-5, 'adam'
loss = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), optimizer,
                        {"learning_rate": lr, 'wd': wd})
train_recsys_rating(net, train_iter, test_iter, loss, trainer, num_epochs,
                    ctx, evaluator)

train loss 0.587, test RMSE 1.061
20458.2 examples/sec on [gpu(0), gpu(1)]

Below, we use the trained model to predict the rating that a user (ID 20) might give to an item (ID 30).

scores = net(np.array([20], dtype='int', ctx=d2l.try_gpu()),
             np.array([30], dtype='int', ctx=d2l.try_gpu()))
print(scores)

[3.007846] @gpu(0)

14.3.5. Summary¶ The matrix factorization model is widely used in recommender systems. It can be used to predict ratings that a user might give to an item. We can implement and train matrix factorization for recommender systems. 14.3.6. Exercise¶ Vary the size of latent factors. How does the size of latent factors influence the model performance? Try different optimizers, learning rates, and weight decay rates. Check the predicted rating scores of other users for a specific movie. 14.3.7. Reference¶ Koren, Yehuda, Robert Bell, and Chris Volinsky. "Matrix factorization techniques for recommender systems." Computer 8 (2009): 30-37.
Search Now showing items 1-10 of 18 J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... 
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Search Now showing items 1-5 of 5 Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-02) The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ... Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2013-11) We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ... Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
Search Now showing items 1-10 of 32 The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ... 
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at √sNN=5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at 𝑠NN‾‾‾‾√=5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Search Now showing items 1-10 of 27 Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... 
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07)
The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...

$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...

Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...

Jet-like correlations with neutral pion triggers in pp and central Pb-Pb collisions at 2.76 TeV (Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16~\mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...

Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
I'm trying to teach some secondary school students how to complete the square. The goal is to rewrite: $$y = ax^2 + bx + c \ \ \Rightarrow \ \ y = a(x-h)^2 + k$$ The first thing I did was to ask them to confirm, via FOIL: $$\left( x + \tfrac{b}{2} \right) \left( x + \tfrac{b}{2} \right) \ \ = \ \ x^2 + bx + \tfrac{b^2}{4}$$ With this as a guide, they were able to rewrite something like: $$y = x^2 + 6x - 10 \ \ \Rightarrow \ \ y = (x + 3)^2 -19$$ Problems occur when the leading coefficient does not equal one. So far, I have not been able to explain why I need to reduce the leading coefficient to $1$ before completing the square. (Moreover, some of them are having problems with distribution.) For example: \begin{align} y \ \ & = \ \ 2x^2 + 8x - 9 \\ & = \ \ 2(x^2 + 4x - 4.5) \\ & = \ \ 2(x^2 + 4x + 4 - 8.5) \\ & = \ \ 2(x^2 + 4x + 4) - 17 \ \ = \ \ 2(x + 2)^2 - 17 \end{align}

This may not really answer your question, and it may be inappropriate to the level of your class, but I find that looking at an algebraic manipulation algebraically is at times helpful. The whole point of doing algebra is to engage in the kind of thinking I give below, so I think there is something to gain from taking this sort of "abstract" approach. The idea here is to discover how arbitrary $h$ and $k$, and indeed $A$, must relate to $a,b,c$ in order that $$ y = ax^2+bx+c = A(x-h)^2+k. $$ This is itself an algebra problem: do algebra to do algebra. Consider $$ ax^2+bx+c = A(x^2-2xh+h^2)+k $$ or $$ ax^2+bx+c = Ax^2-2hAx+Ah^2+k, $$ which requires that these be the same polynomial. Hence, equating coefficients, $$ \begin{array}{cc} x^2: & a=A \\ x: & b=-2Ah \\ 1: & c = Ah^2+k \end{array} $$ So, the $h$ and $k$ we seek are given by: $$ A = a, \ \ h = -\frac{b}{2a}, \ \ k = c - \frac{b^2}{4a}. $$ Therefore, $$ ax^2+bx+c = a\left(x+\frac{b}{2a}\right)^2+c-\frac{b^2}{4a}. $$ Then, give numeric examples galore until it sinks in.
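The coefficient-matching result above is easy to sanity-check numerically. Here is a minimal Python sketch (not part of the original answer; the function name `vertex_form` is mine), run on the question's example $2x^2 + 8x - 9$:

```python
# Sketch: checking A = a, h = -b/(2a), k = c - b^2/(4a) from the
# coefficient-matching derivation, on the question's example 2x^2 + 8x - 9.
def vertex_form(a, b, c):
    """Return (A, h, k) with a*x**2 + b*x + c == A*(x - h)**2 + k."""
    return a, -b / (2 * a), c - b * b / (4 * a)

A, h, k = vertex_form(2, 8, -9)   # expect A = 2, h = -2, k = -17

# Spot-check the identity at a few points.
for x in (-3, -2, 0, 1, 2.5):
    assert abs((2 * x * x + 8 * x - 9) - (A * (x - h) ** 2 + k)) < 1e-9
```

This reproduces the worked example in the question, $2x^2 + 8x - 9 = 2(x+2)^2 - 17$.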
It's likely as important, or more so, to explain why it is important to complete the square. Of course the danger of my method, and it is a real danger, is that students may black-box this solution. Heaven forbid teachers just say: well, you set $h = -\frac{b}{2a}$ and $k = c-\frac{b^2}{4a}$ and that's it. The universe in which I present this to a class is the same universe in which the students are also held responsible for reproducing the abstract result, in addition to the expectation that they do numerically specific examples with ease. Ease born of practice.

The goals of my approach are: Emphasize that completing the square is a process of writing something that is equal to the original expression. Emphasize that there are choices to make, differentiating between a choice being useful versus it being true (avoiding the words "correct" or "right", which conflate useful and true). Avoid unnecessary algebra and quantifiers; my students have a hard enough time with $x$; adding in $b$ and $c$ obscures my previous two goals.

Complete the square: $x^2 + 8x + 19$. First ask yourself: "is it a perfect square?" If it were, you would be able to find two numbers that add up to 8, multiply to 19, and are the same number. If not, ask yourself what the two numbers would have to be to add to 8 and be the same number. Then recognize that the constant 19 is the wrong constant. Ask yourself: what is the constant you want it to be? Then don't let your dreams be dreams, just write it: $\begin{align*} & x^2 + 8x + 19 \\ = & x^2 + 8x + 16 \end{align*}$ Notice that this is useful (it factors now) but not true (they aren't equal). Fix what you've written so that it is also true. $\begin{align*} & x^2 + 8x + 19 \\ = & x^2 + 8x + 16 & + 3\end{align*}$ Then factor the part of the result that you have engineered to work out.
$\begin{align*} & x^2 + 8x + 19 & \\ = & x^2 + 8x + 16 & + 3 \\ = &(x + 4)^2 & + 3\end{align*}$ I have tried a variety of approaches, and I like the abstract skills that this approach allows students to practice. For example, when a student does this: $\begin{align*} & x^2 + 8x + 19 \\ = & x^2 + 8x + 4 + 15 \end{align*}$ I can ask students if that was a good idea and tease out the distinction between something being true and something being useful. Since one of the primary goals of algebra courses is convincing students that they have the freedom to do whatever they want as long as it is true (sure, multiply both sides by 10 if you want man; worst case you can undo it later), this synergizes well with my approach to the rest of algebra.

I don't know if the following will be of much use to your specific case (our audiences are quite different; I teach this to college freshmen), but here is my usual approach: First, we typically start the quarter by getting a good handle on geometric transformations of graphs of functions. Given some function $f:\mathbb{R}\to\mathbb{R}$ with a known graph, the graph of the function defined by $$ x \mapsto B f\left( \frac{x-h}{A} \right) + k $$ is the same as the graph of $f$ shifted right $h$ units (where $h<0$ is a shift to the negative right, or left), scaled horizontally by a factor of $A$, reflected across the $y$-axis if $A < 0$, scaled vertically by a factor of $B$, reflected across the $x$-axis if $B < 0$, and shifted up $k$ units (again, down is negative up). We can then consider the humble parabola defined by $x \mapsto x^2$. This lovely function has a zero at zero, goes through the points $(1,1)$ and $(-1,1)$, and has inverses on the domains $[0,\infty)$ and $(-\infty,0]$ given by the positive and negative square roots, respectively.
We might first note that if we transform the graph of $x\mapsto x^2$, the two multiplicative constants can be combined: \begin{equation} B\left( \frac{x-h}{A} \right)^2 + k = \left( \frac{B}{A^2} \right)(x-h)^2 + k, \end{equation} so we may fairly conclude that any transformation of the graph will be given by a function of the form \begin{equation*} x \mapsto C(x-h)^2 + k, \end{equation*} where $C$ represents some kind of scaling (we may as well understand it as a vertical scaling, but it combines both the horizontal and vertical scalings into one gooey mess), and $h$ and $k$ are translations, as usual. In particular, the vertex of this graph is the point $(h,k)$, and the vertex will represent a minimum or maximum for the function, depending on whether $C > 0$ or $C < 0$. We can also expand this mess in order to obtain \begin{align} C\left( x-h \right)^2 + k &= C (x^2 - 2hx + h^2) + k \\ &= \left(C\right)x^2 + \left( -2hC \right)x + \left( h^2C + k\right) \\ &= ax^2 + bx + c, \end{align} which is a more familiar expression to most of my students, as they have mostly seen the material before in high school. The next "obvious" (for certain values of "obvious") question is: can we go the other way? That is, if we are told that $$ f(x) = ax^2 + bx + c,$$ can we understand the graph of $f$ as the graph of our basic parabola subject to some elementary transformations? That is, given arbitrary $a,b,c$ (with $a\ne0$), are there $C,h,k$ such that $$ f(x) = ax^2 + bx + c = C(x-h)^2+k?$$ The answer is "Yes!", with the details as provided by James S. Cook's answer to this question. In this exposition, the idea is that the geometry motivates exploration of the problem. We want to know how to "complete the square" so that we can find the vertex of the transformed parabola, or so that we can determine an inverse on some domain (i.e. solve a quadratic equation). I present completing the square as a recipe. Given $x^2 + bx$, add $\big(\frac{b}{2}\big)^2$.
You obtain a different expression, but your new expression can be written neatly as a perfect square: \begin{align*} x^2 + bx &\xrightarrow{\text{add }\left(\frac{b}{2}\right)^2} x^2 + bx + \big(\frac{b}{2}\big)^2 = \big(x + \frac{b}{2}\big)^2 \end{align*} At some point, I emphasize how important it is that we started with $x^2$ without a coefficient (other than one): the recipe produces a new expression of the form $(x + \text{something})^2$, which, when expanded, will never produce anything whose $x^2$ term has a coefficient other than one. So, maybe you can really play up that it's a "faithful, but simple" method that only works for expressions of the form $x^2 + bx$: the recipe deals exclusively with quadratics whose leading coefficient is $1$, end of story. Note: you face similar problems getting students to factor $2x^2 - 3x + 5$ using the "ac method" or whatever you prefer to call it. Students are used to the process find a pair of numbers that multiply to this, and add to that from the "easy" quadratic case. But, unlike the "easy" case, here we don't just get to jump right to a factorization. Why? Because $(x + \text{thing1})(x + \text{thing2})$ will never produce quadratics whose leading coefficient isn't $1$, and we finally need that. So, we have to add another step or two to the old method. So I will make a bit of a big deal about the jump from expressions like $x^2 + 6x + 1$ to $2x^2 + 6x + 1$: our version of completing the square requires us to have just $x^2$. Is all hope lost? Do we need a brand new version of completing the square? This is where I take the approach suggested by DRF's comment: we reduce a superficially new problem to a known, already solved problem. By adding the minor modification of factoring out the leading coefficient, we get the much vaunted $x^2 + [\text{something}]x$ on which our recipe relies.
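The factor-out-the-leading-coefficient recipe can be written out step by step in code. A small Python sketch (my own illustration, not from the answer; `complete_square` is a hypothetical helper name), applied to the $2x^2 + 6x + 1$ example:

```python
# Sketch: completing the square by first factoring out the leading
# coefficient, reducing to the leading-coefficient-1 recipe.
def complete_square(a, b, c):
    # Step 1: a*x^2 + b*x + c = a*(x^2 + (b/a)*x) + c
    inner_b = b / a
    # Step 2: complete the square on x^2 + inner_b*x by adding (inner_b/2)**2
    half = inner_b / 2
    # Since x^2 + inner_b*x = (x + half)**2 - half**2, we get
    # a*(x + half)**2 + (c - a*half**2), i.e. (A, h, k) in A*(x - h)**2 + k:
    return a, -half, c - a * half * half

A, h, k = complete_square(2, 6, 1)   # expect 2*(x + 1.5)**2 - 3.5
```

Expanding $2(x + 1.5)^2 - 3.5 = 2x^2 + 6x + 4.5 - 3.5$ recovers the original expression, which is the "true and useful" check from earlier in the thread.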
I'll add that there is a super slick, more general version of completing the square that can handle non-one leading coefficients without factoring/division, given here by André Nicolas (and I think the linked answer is the first place I saw it). It's more appropriate for solving equations by completing the square (instead of rewriting quadratic expressions), but it's so nice that I think it deserves to be more widely known. The main idea is that $(2ax + b)^2 = 4a^2x^2 + 4abx + b^2$, so given an expression like $ax^2 + bx$, you multiply by $4a$ and then add $b^2$, at which point you can write your new expression as a perfect square. I have yet to mention this for a remedial algebra / precalculus class, but occasionally I'll get a math ed student and show them this method in addition to the usual approach.

My approach: \begin{align} y \ \ & = \ \ 2x^2 + 8x - 9 \\ y+9 \ \ & = \ \ 2(x^2 + 4x + \underline{\quad}\ ) \\ y+9+8 \ \ & = \ \ 2(x^2 + 4x+4) \\ y+17 \ \ & = \ \ 2(x^2 + 4x + 4) \\ y+17 \ \ & = \ \ 2(x + 2)^2 \\ \end{align} Notes: moving the constant to the left in the first step avoids the errors some students might make dividing this number. Now, when you add the '4' to complete the square, you ask: "What did we add to the right side?" "4." "Don't forget the number multiplying the parentheses, here '2', so we added 8 to the right, and we'll add 8 to the left." Last, the format of that last line is the one I prefer. It's the "Vertex Form" and students can quickly see that $(-2,-17)$ is the vertex.

I think it is easier to practice doing this without the $y$ (with the equation set to zero). Of course it is the same thing, but for someone just learning, doing it as a single equation is easier and concentrates the mind on roots. I would also try to do it very mechanically, writing out each step (as when you are in pre-algebra and first learn how to sort things out with first-order equations in $x$ only). Just some thoughts.
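Returning to the multiply-by-$4a$ trick described at the top of this answer, here is a short Python sketch of it (my own, with an invented helper name, and assuming two real roots for simplicity):

```python
# Sketch of the "multiply by 4a, add b^2" trick: since
#   (2*a*x + b)**2 == 4*a*(a*x**2 + b*x) + b**2,
# the equation a*x^2 + b*x + c = 0 becomes (2*a*x + b)**2 = b**2 - 4*a*c,
# with no division by a needed until the very last step.
from math import sqrt

def solve_by_4a_trick(a, b, c):
    rhs = b * b - 4 * a * c      # what (2*a*x + b)**2 must equal
    r = sqrt(rhs)                # assumes two real roots for this sketch
    # 2*a*x + b = +/- r, so:
    return (-b - r) / (2 * a), (-b + r) / (2 * a)

x1, x2 = solve_by_4a_trick(2, 8, -9)   # the thread's example 2x^2 + 8x - 9 = 0
```

For $2x^2 + 8x - 9 = 0$ this gives $(4x+8)^2 = 136$, hence $x = (-8 \pm \sqrt{136})/4$.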
Illinois Journal of Mathematics, Volume 59, Number 1 (2015), 235-276.

Limit theorems for some critical superprocesses

Abstract: Let $X=\{X_{t},t\ge0;\mathbb{P}_{\mu}\}$ be a critical superprocess starting from a finite measure $\mu$. Under some conditions, we first prove that $\lim_{t\to\infty}t\,\mathbb{P}_{\mu}(\Vert X_{t}\Vert \ne0)=\nu^{-1}\langle\phi_{0},\mu\rangle$, where $\phi_{0}$ is the eigenfunction corresponding to the first eigenvalue of the infinitesimal generator $L$ of the mean semigroup of $X$, and $\nu$ is a positive constant. Then we show that, for a large class of functions $f$, conditioning on $\Vert X_{t}\Vert \ne0$, $t^{-1}\langle f,X_{t}\rangle$ converges in distribution to $\langle f,\psi_{0}\rangle_{m}W$, where $W$ is an exponential random variable, and $\psi_{0}$ is the eigenfunction corresponding to the first eigenvalue of the dual of $L$. Finally, if $\langle f,\psi_{0}\rangle_{m}=0$, we prove that, conditioning on $\Vert X_{t}\Vert \ne0$, $(t^{-1}\langle\phi_{0},X_{t}\rangle,t^{-1/2}\langle f,X_{t}\rangle )$ converges in distribution to $(W,G(f)\sqrt{W})$, where $G(f)\sim\mathcal{N}(0,\sigma_{f}^{2})$ is a normal random variable, and $W$ and $G(f)$ are independent.

Article information
Source: Illinois J. Math., Volume 59, Number 1 (2015), 235-276.
Dates: Received: 17 August 2015; Revised: 16 November 2015; First available in Project Euclid: 11 February 2016
Permanent link to this document: https://projecteuclid.org/euclid.ijm/1455203166
Digital Object Identifier: doi:10.1215/ijm/1455203166
Mathematical Reviews number (MathSciNet): MR3459635
Zentralblatt MATH identifier: 1338.60074
Subjects: Primary: 60F05 (Central limit and other weak theorems); 60J80 (Branching processes (Galton-Watson, birth-and-death, etc.)). Secondary: 60J25 (Continuous-time Markov processes on general state spaces); 60J35 (Transition functions, generators and resolvents [See also 47D03, 47D07])
Citation: Ren, Yan-Xia; Song, Renming; Zhang, Rui.
Limit theorems for some critical superprocesses. Illinois J. Math. 59 (2015), no. 1, 235--276. doi:10.1215/ijm/1455203166. https://projecteuclid.org/euclid.ijm/1455203166
Below is an approach I've been exploring for connecting the prime counting function with the logarithmic integral and expressing the error term between the two. I find it beguiling, but I've largely run out of ideas for further developing this approach. Hence this question. What I'd really, really appreciate is either 1) references to papers or research covering the general space of ideas I'm exploring here, 2) an explanation for why this approach is going to be a dead end, or especially 3) ideas for manipulating or further exploring my equation (4) below that I might have overlooked. [Edit] I'm specifically interested in techniques I might be unfamiliar with from the study of the divisor summatory function $D_k$, which is at the heart of (4). [End Edit]

Preliminary

Start by summing Linnik's identity from 2 to $n$: $\displaystyle\sum_{j=2}^n 1 - \frac{1}{2}\sum_{j=2}^n \sum_{k=2}^{\lfloor\frac{n}{j}\rfloor} 1 + \frac{1}{3}\sum_{j=2}^n \sum_{k=2}^{\lfloor\frac{n}{j}\rfloor}\sum_{l=2}^{\lfloor\frac{n}{j k}\rfloor} 1 - \frac{1}{4}...=$$\pi(n) + {\frac{1}{2}}\pi(n^{\frac{1}{2}}) + {\frac{1}{3}}\pi(n^\frac{1}{3})+...$ (1) where $\pi(n)$ is the prime counting function. The nested sums on the left only need to be computed up to a depth of $\log_2 n$ before they start equalling 0. Next, approximate the left-hand side by replacing sums with integrals, starting integration at 1 rather than 2, and removing floor functions from upper bounds. This immediately gives the logarithmic integral: $\displaystyle\int_{1}^n dx - \frac{1}{2}\int_{1}^n \int_{1}^{\frac{n}{x}} dy dx + \frac{1}{3}\int_{1}^n \int_{1}^{\frac{n}{x}}\int_{1}^{\frac{n}{x y}} dz dy dx - \frac{1}{4}... = li(n) - \log\log n - \gamma$ (2) where $li(n)$ is the logarithmic integral and $\gamma$ is the Euler-Mascheroni constant.
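Identity (1) is easy to check numerically before trusting the later manipulations. The following Python sketch (mine, not from the question; the helper names are invented) compares the alternating nested sums against $\pi(n) + \frac{1}{2}\pi(n^{1/2}) + \frac{1}{3}\pi(n^{1/3}) + \cdots$ for a small $n$:

```python
# Sketch: numerically verifying identity (1),
#   sum_k (-1)^(k+1)/k * D'_k(n) = pi(n) + (1/2) pi(n^(1/2)) + ...
# where D'_k(n) counts ordered k-tuples of integers >= 2 with product <= n.
from math import isqrt

def nested_sum(n, depth):
    # D'_depth(n): the depth-fold nested sum on the left of (1)
    if depth == 1:
        return max(n - 1, 0)
    return sum(nested_sum(n // j, depth - 1) for j in range(2, n // 2 + 1))

def prime_pi(x):
    # pi(x) by trial division; fine for small x
    return sum(1 for m in range(2, x + 1)
               if all(m % p for p in range(2, isqrt(m) + 1)))

def integer_root(n, m):
    # largest r with r**m <= n (avoids float rounding in n ** (1/m))
    r = round(n ** (1.0 / m))
    while r ** m > n:
        r -= 1
    while (r + 1) ** m <= n:
        r += 1
    return r

n = 200
max_depth = n.bit_length()  # tuples of 2s force depth <= log2(n)
lhs = sum((-1) ** (k + 1) / k * nested_sum(n, k) for k in range(1, max_depth))
rhs = sum(prime_pi(integer_root(n, m)) / m for m in range(1, max_depth))
# lhs and rhs agree up to floating-point rounding
```

For example, at $n = 8$ both sides equal $7 - \tfrac{5}{2} + \tfrac{1}{3} = \pi(8) + \tfrac{1}{2}\pi(2.83) + \tfrac{1}{3}\pi(2) = 4\tfrac{5}{6}$.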
(You can find a fairly straightforward sketch of the derivations of both (1) and (2) here.) Now, Riemann's explicit formula for the prime counting function, with a touch of term rearrangement and with $\rho$ the zeta zeroes, is of course $li(n) - \pi(n) - {\frac{1}{2}}\pi(n^{\frac{1}{2}}) - {\frac{1}{3}}\pi(n^\frac{1}{3})-...=\displaystyle\sum_{\rho} li(n^\rho) + \log 2 - \int_n^{\infty}\frac{dt}{t(t^2-1)\log t}$ (3)

Question

Define the following $O( \log \log n)$ function $E(n) = \log 2 - \int_n^{\infty}\frac{dt}{t(t^2-1)\log t} + \log \log n + \gamma$ Then, applying both (1) and (2) to (3) gives $\displaystyle \sum_{\rho} li(n^\rho) + E(n) =$ $\displaystyle - \frac{1}{2}\left(\int_{1}^n \int_{1}^{\frac{n}{x}} dy dx - \sum_{j=2}^n \sum_{k=2}^{\lfloor\frac{n}{j}\rfloor} 1 \right)$ $\displaystyle+ \frac{1}{3}\left(\int_{1}^n \int_{1}^{\frac{n}{x}}\int_{1}^{\frac{n}{x y}} dz dy dx - \sum_{j=2}^n \sum_{k=2}^{\lfloor\frac{n}{j}\rfloor}\sum_{l=2}^{\lfloor\frac{n}{j k}\rfloor} 1\right)$ $\displaystyle- \frac{1}{4}\left(\int_{1}^n \int_{1}^{\frac{n}{x}}\int_{1}^{\frac{n}{x y}}\int_{1}^{\frac{n}{x y z }} dw dz dy dx - \sum_{j=2}^n \sum_{k=2}^{\lfloor\frac{n}{j}\rfloor}\sum_{l=2}^{\lfloor\frac{n}{j k}\rfloor}\sum_{m=2}^{\lfloor\frac{n}{j k l}\rfloor} 1\right)$ $\displaystyle +\frac{1}{5}...$ (4) So here is, really, my main question: can anything interesting be done with the right-hand side of (4)? I've largely hit a dead end here. [Edit] What I'm looking for especially is any smart ideas about different ways of working with, or breaking up, or transforming those various nested sums, particularly in ways that might take advantage of symmetries or connections between them. As Eric Naslund mentions in his equation (2) below, these nested sums are closely related to, and can be expressed in terms of, the divisor summatory function. As a consequence, this opens up the door to the possibility of applying Voronoi summation or the Dirichlet hyperbola method in connection with these nested sums, for example.
I'm curious if there are any other smart techniques from the study of the divisor summatory function that might be brought to bear on (4). [End Edit]

Extra

I have explored, a bit, one approach with (4), to inconclusive results. The following are all equal: $\displaystyle\sum_{j=2}^n \sum_{k=2}^{\lfloor \frac{n}{j} \rfloor} 1 = \sum_{j=2}^n \left(\left\lfloor \frac{n}{j} \right\rfloor - 1\right) = \sum_{j=2}^n \left(\frac{n}{j} - 1\right) - \sum_{j=2}^n \left\{\frac{n}{j}\right\}$ where $\{x\}$ is the fractional part function. This approach can be generalized to arbitrary depths of nested sums. $\sum_{j=2}^n \left(\frac{n}{j} - 1\right)$ and its generalization appear to be relatively smooth curves, capable of relatively tight approximation, or that's my hunch based on empirical results. Anyway, relying on this, (4) can be rewritten as $\displaystyle \sum_{\rho} li(n^\rho) + E(n) =$ $\displaystyle - \frac{1}{2}\left(\int_{1}^n \int_{1}^{\frac{n}{x}} dy dx - \sum_{j=2}^n \left(\frac{n}{j} - 1\right) + \sum_{j=2}^n \left\{\frac{n}{j}\right\}\right)$ $\displaystyle+ \frac{1}{3}\left(\int_{1}^n \int_{1}^{\frac{n}{x}}\int_{1}^{\frac{n}{x y}} dz dy dx - \sum_{j=2}^n \sum_{k=2}^{\lfloor\frac{n}{j}\rfloor}\left(\frac{n}{j k} - 1\right) + \sum_{j=2}^n \sum_{k=2}^{\lfloor\frac{n}{j}\rfloor}\left\{\frac{n}{j k}\right\}\right)$ $\displaystyle- \frac{1}{4}\left(\int_{1}^n \int_{1}^{\frac{n}{x}}\int_{1}^{\frac{n}{x y}}\int_{1}^{\frac{n}{x y z }} dw dz dy dx - \sum_{j=2}^n \sum_{k=2}^{\lfloor\frac{n}{j}\rfloor}\sum_{l=2}^{\lfloor\frac{n}{j k}\rfloor}\left(\frac{n}{jkl}-1\right) + \sum_{j=2}^n \sum_{k=2}^{\lfloor\frac{n}{j}\rfloor}\sum_{l=2}^{\lfloor\frac{n}{j k}\rfloor}\left\{ \frac{n}{jkl} \right\}\right)$ $ \displaystyle +\frac{1}{5}...$ (5)

If you sum up just the terms that look like $\sum_{j=2}^n \left(\frac{n}{j} - 1\right)$, you get a smooth curve that bears a strong resemblance to the logarithmic integral - you can see part of it here. I'm very curious how to approximate these terms, and what relationship they bear to the logarithmic integral. So that's another question of mine.
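The floor/smooth/fractional split used in (5) can be verified directly at depth 1. A small Python sketch (mine, not part of the question):

```python
# Sketch: the depth-1 decomposition behind (5). Each floor term splits into
# a smooth part and a fractional part, since floor(n/j) = n/j - {n/j}.
n = 1000
floor_sum = sum(n // j - 1 for j in range(2, n + 1))       # exact nested-sum value
smooth_sum = sum(n / j - 1 for j in range(2, n + 1))       # the "smooth curve" part
frac_sum = sum(n / j - n // j for j in range(2, n + 1))    # the fractional-part sum
# floor_sum == smooth_sum - frac_sum, up to floating-point rounding
```

The same split applies at each depth by replacing the innermost floor in the nested sums.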
Finally, if you sum up just the fractional part sums in (5), you get this function. At least empirically, it appears that all the discontinuities of $\sum_\rho li(n^\rho)$ are confined to just these terms. All of which leads to my final question - are there any interesting tools or approaches that can be applied to these fractional part sums to make any interesting observations? I hope this question isn't too vague or broad for MO.
\documentclass[12pt,a4paper,onecolumn]{article}
\usepackage{chemfig}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\path node(A){\chemfig{@{H1}H-[@{HB1}:30]@{C1}C(=[@{CDB1}:90]@{O1}O)-[@{CB2}:-30]@{O2}O-[@{OBH1}:30]@{H2}H}};
\chemmove{%
  \draw (HB1) node[above] {$\alpha$};
  \draw (CDB1) node[right] {$\beta$};
  \draw (CB2) node[above] {$\gamma$};
  \draw (OBH1) node[above] {$\zeta$};
  \draw (C1) node[below] {$\theta_A$};
  \draw (O2) node[above] {$\theta_b$};
}
\end{tikzpicture}
\end{document}

But I am still far from what I would like to have. Wishes: Drawing the two red arrows to show where the angles are. Replacing alpha to zeta with a to d. Problem: the letters are too big. I would like to have them tiny (that didn't work). I would like to have them in grey. And it would look nice if they could rotate - they should (if possible) have the same rotation as the bonds they point to. The thetas are too big. I would like to have them smaller. And they are too close to the letters C and O. And it would be nice if they could have the two red arrows around them to show that they are angles. Using the correct angles to draw the molecule: $\theta_A$ has an angle of 112 degrees, while $\theta_B$ has an angle of 110 degrees. In the GIMP figure the size is really bad. The letters (not the molecule letters) are too big (in my opinion) in proportion to the bonds. But I think if you could make the letters smaller it would look nice? Kind regards! And thank you very much in advance! PS! The picture frame is nothing that should be written in LaTeX. It was just to show you the molecule in a fancy way.
Is there always a frame in which spatially separated events are simultaneous? The answer is no. Two events that are spatially separated in one frame of reference (1) will be co-located in another frame of reference and not simultaneous in any frame if the interval is time-like, (2) will be simultaneous in another frame of reference and not co-located in any frame if the interval is space-like, or (3) will be neither co-located nor simultaneous in any other frame if the interval is light-like.

Time-like interval: If the interval is time-like, the separation in time, $|c\Delta t|$, is larger than the separation in space, $|\Delta x|$: $$|c\Delta t| \gt |\Delta x|$$ Thus, there is a frame of reference in which $\Delta x' = 0$; the two events are co-located in this frame.

Space-like interval: If the interval is space-like, the separation in time is less than the separation in space: $$|c\Delta t| \lt |\Delta x|$$ Thus, there is a frame of reference in which $c\Delta t' = 0$; the two events are simultaneous in this frame.

Light-like interval: If the interval is light-like, the separation in time equals the separation in space: $$|c\Delta t| = |\Delta x|$$ Thus, in all frames of reference, the events are neither co-located nor simultaneous, i.e., $$|c\Delta t'| = |\Delta x'|$$

All of this follows directly from the Lorentz transformation. Let's take your example of two events with the spatial separation of a tennis court, so $$|\Delta x| = 78\,\mathrm m.$$ Light travels this distance in $\Delta t_c = \frac{78}{3 \times 10^8}\,\mathrm s = 260\,\mathrm{ns}$. Thus, if the two events occur within 260 ns of each other in this frame of reference, the events have a space-like interval and are thus simultaneous in another, relatively moving frame of reference. Since, in your example, the events occur 1 day apart, the events have a time-like interval and cannot be simultaneous in any reference frame.
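A quick numeric check of the space-like case (a sketch of my own, not part of the original answer): take the tennis-court separation with a 100 ns time difference, boost to the frame where the events are simultaneous, and confirm the interval is invariant.

```python
# Sketch: the space-like case via an explicit Lorentz boost.
# Frame S sees dx = 78 m (a tennis court) and dt = 100 ns, so
# |c*dt| ~ 30 m < 78 m: the interval is space-like.
c = 299_792_458.0           # speed of light, m/s

dx = 78.0                   # spatial separation in frame S, metres
dt = 100e-9                 # time separation in frame S, seconds

assert abs(c * dt) < abs(dx), "interval must be space-like for this construction"

# Setting dt' = gamma*(dt - v*dx/c**2) = 0 gives the boost velocity v = c**2*dt/dx.
v = c ** 2 * dt / dx
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5

dt_prime = gamma * (dt - v * dx / c ** 2)   # vanishes: events simultaneous in S'
dx_prime = gamma * (dx - v * dt)

# The invariant interval s^2 = (c*dt)^2 - dx^2 is preserved by the boost.
s2 = (c * dt) ** 2 - dx ** 2
s2_prime = (c * dt_prime) ** 2 - dx_prime ** 2
```

Here $v \approx 0.38c$, well below $c$, which is exactly why the construction fails once $|c\Delta t| \ge |\Delta x|$.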
__NOTOC__
= Spring 2019 =

<b>Thursdays in 901 Van Vleck Hall at 2:25 PM</b>, unless otherwise noted.

[mailto:join-probsem@lists.wisc.edu join-probsem@lists.wisc.edu]

== January 31, [https://www.math.princeton.edu/people/oanh-nguyen Oanh Nguyen], [https://www.math.princeton.edu/ Princeton] ==

Title: '''Survival and extinction of epidemics on random graphs with general degrees'''

Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.

== <span style="color:red">Wednesday, February 6 at 4:00pm in Van Vleck 911</span>, [https://lc-tsai.github.io/ Li-Cheng Tsai], [https://www.columbia.edu/ Columbia University] ==

Title: '''When particle systems meet PDEs'''

Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena.
In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.

== February 7, [http://www.math.cmu.edu/~yug2/ Yu Gu], [https://www.cmu.edu/math/index.html CMU] ==

Title: '''Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime'''

Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.

== February 14, [https://www.math.wisc.edu/~seppalai/ Timo Seppäläinen], UW-Madison ==

Title: '''Geometry of the corner growth model'''

Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).

== February 21, [https://people.kth.se/~holcomb/ Diane Holcomb], KTH ==

Title: '''On the centered maximum of the Sine beta process'''

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process.
The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

== Probability related talk in PDE Geometric Analysis seminar: Monday, 3:30pm to 4:30pm, Van Vleck 901, Xiaoqin Guo, UW-Madison ==

Title: '''Quantitative homogenization in a balanced random environment'''

Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process, a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

== <span style="color:red">Wednesday, February 27 at 1:10pm</span> [http://www.math.purdue.edu/~peterson/ Jon Peterson], [http://www.math.purdue.edu/ Purdue] ==

<div style="width: 520px;height:50px;border:5px solid black">
<b><span style="color:red">&nbsp; Please note the unusual day and time.
</span></b>
</div>

Title: '''Functional Limit Laws for Recurrent Excited Random Walks'''

Abstract: Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.

== March 7, TBA ==

== March 14, TBA ==

== March 21, Spring Break, No seminar ==

== March 28, [https://www.math.wisc.edu/~shamgar/ Shamgar Gurevitch], [https://www.math.wisc.edu/ UW-Madison] ==

Title: '''Harmonic Analysis on GLn over finite fields, and Random Walks'''

Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the ''character ratio'':

$$
\text{trace}( \rho(g) )/\text{dim}(\rho),
$$

for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant ''rank''. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).

== April 4, TBA ==

== April 11, [https://sites.google.com/site/ebprocaccia/ Eviatar Procaccia], [http://www.math.tamu.edu/index.html Texas A&M] ==

== April 18, [https://services.math.duke.edu/~agazzi/index.html Andrea Agazzi], [https://math.duke.edu/ Duke] ==

== April 25, [https://www.brown.edu/academics/applied-mathematics/kavita-ramanan Kavita Ramanan], [https://www.brown.edu/academics/applied-mathematics/ Brown] ==

== April 26, Colloquium, [https://www.brown.edu/academics/applied-mathematics/kavita-ramanan Kavita Ramanan], [https://www.brown.edu/academics/applied-mathematics/ Brown] ==

== May 2, TBA ==

<!--
== <span style="color:red"> Friday, August 10, 10am, B239 Van Vleck </span> András Mészáros, Central European University, Budapest ==

Title: '''The distribution of sandpile groups of random regular graphs'''

Abstract: We study the distribution of the sandpile group of random <math>d</math>-regular graphs.
For the directed model we prove that it follows the Cohen-Lenstra heuristics, that is, the probability that the <math>p</math>-Sylow subgroup of the sandpile group is a given <math>p</math>-group <math>P</math> is proportional to <math>|\operatorname{Aut}(P)|^{-1}</math>. For finitely many primes, these events become independent in the limit. Similar results hold for undirected random regular graphs, where for odd primes the limiting distributions are the ones given by Clancy, Leake and Payne.

Our results extend a recent theorem of Huang, which says that the adjacency matrices of random <math>d</math>-regular directed graphs are invertible with high probability, to the undirected case.

== September 20, [http://math.columbia.edu/~hshen/ Hao Shen], [https://www.math.wisc.edu/ UW-Madison] ==

Title: '''Stochastic quantization of Yang-Mills'''

Abstract: "Stochastic quantization" refers to a formulation of quantum field theory as stochastic PDEs. Interesting progress has been made in recent years in understanding these SPDEs, with examples including Phi4 and sine-Gordon. Yang-Mills is a type of quantum field theory which has gauge symmetry, and its stochastic quantization is a Yang-Mills flow perturbed by white noise. In this talk we start with an Abelian example where we take a symmetry-preserving lattice regularization and study the continuum limit. We will then discuss non-Abelian Yang-Mills theories, introduce a symmetry-breaking smooth regularization, and restore the symmetry using a notion of gauge-equivariance. With these results we can construct dynamical Wilson loop and string observables. Based on [S., arXiv:1801.04596] and [Chandra, Hairer, S., work in progress].
-->
I first learned big-Oh (little-Oh, big-Theta, ...) complexity for growth of functions using CLRS in a computer science class. Now I am doing a project on optimization. In our optimization class, we were introduced to the notion of rate of convergence, characterized by the ratio $\lim\limits_{k \to \infty} \dfrac{|x_{k+1} - L|}{|x_k-L|}$ where $L$ is the limit of the sequence $(x_k)$. From there we define linear, superlinear and sublinear convergence rates. However, when I looked at some references online, the above notion of rate of convergence is almost never used. Instead, all the convergence rates are characterized in terms of big-Oh. Quoting from the slides: Theorem: Gradient descent with fixed step size $t\le\frac{1}{L}$ satisfies $f(x^{(k)})-f(x^*)\le\frac{\|x^{(0)}-x^*\|^2}{2tk}$. I.e. gradient descent has convergence rate $O\left(\frac{1}{k}\right)$. I.e. to get $f(x^{(k)})-f(x^*)\le\epsilon$, we need $O\left(\frac{1}{\epsilon}\right)$ iterations. Unfortunately, these authors never define what their notation means. I need to cite the definitions of big-Oh for my class project. Is there any disparity between the rate of convergence in terms of big-Oh (and other asymptotics) used in optimization versus that used in CS (as can be found in a standard textbook such as CLRS)? Is there an optimization textbook that addresses big-Oh notation?
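For what it's worth, the ratio definition from class is easy to probe numerically. The helper below is my own naming, not from any textbook:

```python
import numpy as np

def convergence_ratios(xs, L):
    """Successive ratios |x_{k+1} - L| / |x_k - L| for a sequence xs with limit L."""
    errs = np.abs(np.asarray(xs, dtype=float) - L)
    return errs[1:] / errs[:-1]

# x_k = 2^{-k} converges linearly to 0: the ratio is constantly 1/2.
linear = convergence_ratios([2.0 ** -k for k in range(1, 20)], 0.0)

# x_k = 1/k converges sublinearly to 0: the ratio k/(k+1) tends to 1.
sublinear = convergence_ratios([1.0 / k for k in range(1, 20)], 0.0)
```

Under the ratio definition, a sequence with error $O(1/k)$ is sublinear, since $\frac{1/(k+1)}{1/k} \to 1$.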
I'd like to reproduce results from this paper, in which the authors used Group Delay-grams as an input to a Machine Learning model. The Machine Learning side is not the one I have problems with. I have a problem with computing the Group Delay function of an audio file. To begin with, I know that group delay per se is a property of a filter. The authors of the paper, however, wrote something like this: Group delay [10] is defined as the negative derivative of the phase spectrum of the STFT: $$\tau(\omega,t)=-\frac{d(\theta(\omega,t))}{d\omega}\quad(2)$$ As the implementation of Equation (2) requires the unwrapping of the phase spectrum, the group delay function can alternatively be calculated using only the amplitude values: $$\tau(\omega,t)=\frac{X_{R}(\omega,t)Y_{R}(\omega,t)+Y_{I}(\omega,t)X_{I}(\omega,t)}{|X(\omega,t)|^2}\quad(3)$$ where $R$ and $I$ denote the real and imaginary parts. $X(\omega,t)$ and $Y(\omega,t)$ denote the STFTs of $x(n)$ and $nx(n)$, respectively. This, I believe, is supposed to generate an image like Fig. 3 in the paper (after either truncating or padding to a length of 256 along the time axis). The problem is that when I try to compute this GD-gram from the equation above, the matrix multiplication cannot be done, since both $X$ and $Y$ are non-square. This is the original paper, referenced in both this paper and [10], where the authors derive equation (3). As I understood it, the matrix multiplication is a standard one, not element-wise (element-wise multiplication doesn't produce the result that is in the paper). In my implementation below I'm using a window of length 800 samples, since the authors of the paper use a 50 ms window, which corresponds to 800 samples for a 16 kHz sampling rate audio file.
I tried to implement that in Python as follows:

from scipy.io import wavfile
from scipy.signal import stft, get_window
import numpy as np
import matplotlib.pyplot as plt

rate, data = wavfile.read("test.wav")
n_data = np.multiply(data, np.arange(len(data)))
f, t, X = stft(data, rate, window="hamming", nperseg=800, return_onesided=False)
f_n, t_n, Y = stft(n_data, rate, window="hamming", nperseg=800, return_onesided=False)
group_delay = (np.dot(X.real, Y.real) + np.dot(Y.imag, X.imag)) / np.power(np.abs(X), 2)

but the code fails with

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-0058b9785caf> in <module>
----> 1 group_delay = (np.dot(X.real, Y.real) + np.dot(Y.imag, X.imag)) / np.power(np.abs(X), 2)

ValueError: shapes (800,109) and (800,109) not aligned: 109 (dim 1) != 800 (dim 0)

which is rather obvious. I tried to implement it in Matlab as well, hoping that maybe it was a Python fault, but it fails too.

[x,fs] = audioread('test.wav');
n_x = x .* [1:size(x,1)];
X = spectrogram(x, 800, 400, 2048, fs);
Y = spectrogram(n_x, 800, 400, 2048, fs);
T = (real(X) * real(Y) + imag(Y) * imag(X)) / abs(X)^2;

How can I compute the Group Delay function using the mentioned equation?
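For reference, here is what a per-bin (element-wise) reading of equation (3) would look like. I am not certain this matches the paper's intent, and the epsilon in the denominator is my own addition to avoid division by zero:

```python
import numpy as np
from scipy.signal import stft

def group_delay_gram(x, fs, nperseg=800, eps=1e-12):
    """Evaluate equation (3) bin-by-bin: tau = (X_R*Y_R + X_I*Y_I) / |X|^2,
    where Y is the STFT of n*x[n]. All products here are element-wise."""
    n = np.arange(len(x))
    _, _, X = stft(x, fs, window="hamming", nperseg=nperseg)
    _, _, Y = stft(x * n, fs, window="hamming", nperseg=nperseg)
    return (X.real * Y.real + X.imag * Y.imag) / (np.abs(X) ** 2 + eps)
```

The output then has the same shape as the STFT matrix, one group-delay value per time-frequency bin, which is what a GD-gram image would require.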
EDIT: I read more about it and got some help from someone else; here is the correct answer: The density forecast is the predictive likelihood of the process evaluated at the realized value, computed in a one-step-ahead way. Thus, for instance, for a standard ARMA-GARCH process with normal errors: You forecast the mean $u^{f}_{t|t-1}$ and variance $v^{f}_{t|t-1}$ processes at time $t-1$ for time $t$. The predictive density forecast for time $t$ of the realized value $u_{t}$ is $N(u^{f}_{t|t-1},v^{f}_{t|t-1})$. The predictive density forecast $u_{t}\sim N(u^{f}_{t|t-1},v^{f}_{t|t-1})$ is equivalent to the predictive residuals density forecast: $r_{t} = u_{t}-u^{f}_{t|t-1}\sim N(0,v^{f}_{t|t-1})$. The density forecast is the density of $r_{t}$ with respect to a $N(0,v^{f}_{t|t-1})$ distribution. Note that it is very similar to the "usual" likelihood, except that you are estimating the model in a one-step-ahead way (the parameters are re-estimated at every step). Previous post: It is not a "basic question". If I am correct: First you estimate your model on the return series and obtain parameters. You must estimate your model in such a way that you obtain one-step-ahead errors (which I will call computed errors in what follows) and the associated time series of the predictive error distribution parameters $\hat{\mu_{t}}$ and $\hat{\sigma_{t}^{2}}$ (note: these are not the parameters of the mean and variance processes but the parameters of the error distribution). I would take the original return series minus the fitted returns to obtain the computed errors: $$\hat{e_{t}} = r_{t} - \hat{r_{t}}$$ These computed errors should behave according to your predictive (error) density, which is defined by the parameters you obtain in the first step ($\hat{\mu_{t}}, \hat{\sigma_{t}}$). Note the subscript $_{t}$ for the parameters!
Then I would compute the predictive density of these residuals using the parameters obtained in the first-step estimation; indeed, if your forecasts are accurate, the density of $\hat{e_{t}}$ should ideally equal the predictive density defined as normal($\hat{\mu_{t}}, \hat{\sigma_{t}}$). If it fits perfectly (they have the same mean and variance), then the density forecast will return a high mass. If the model is misspecified, the errors $\hat{e_{t}}$ will fall outside the range implied by your predictive error density, and the density forecast will assign them very small probability. So in your case I would use the following function (again, note the time subscript): Density forecast($\hat{e_{t}}$) = normpdf($\hat{e_{t}}, \hat{\mu_{t}}, \hat{\sigma_{t}}$)
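As a sketch of that final recipe (the array names are hypothetical; the mean and volatility forecasts would come from your own one-step-ahead ARMA-GARCH fit):

```python
import numpy as np
from scipy.stats import norm

def density_forecast(realized, mu_pred, sigma_pred):
    """Predictive density evaluated at the realized values: for each t,
    normpdf(e_t; 0, sigma_t) with e_t = realized_t - mu_t, where mu_t and
    sigma_t are the forecasts made at t-1 for time t."""
    resid = np.asarray(realized) - np.asarray(mu_pred)  # one-step-ahead errors
    return norm.pdf(resid, loc=0.0, scale=np.asarray(sigma_pred))
```

Perfect mean forecasts give the highest attainable density $1/(\sigma_t\sqrt{2\pi})$ at each step; a misspecified model pushes the residuals into the tails and the assigned density collapses.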
10.10. Adam¶ Created on the basis of RMSProp, Adam also uses EWMA on the mini-batch stochastic gradient [1]. Here, we are going to introduce this algorithm. 10.10.1. The Algorithm¶ Adam [Kingma.Ba.2014] uses the momentum variable \(\boldsymbol{v}_t\) and the variable \(\boldsymbol{s}_t\), which is an EWMA on the squares of the elements of the mini-batch stochastic gradient from RMSProp, and initializes each element of these variables to 0 at time step 0. Given the hyperparameter \(0 \leq \beta_1 < 1\) (the authors of the algorithm suggest a value of 0.9), the momentum variable \(\boldsymbol{v}_t\) at time step \(t\) is the EWMA of the mini-batch stochastic gradient \(\boldsymbol{g}_t\): \(\boldsymbol{v}_t \leftarrow \beta_1 \boldsymbol{v}_{t-1} + (1 - \beta_1) \boldsymbol{g}_t.\) Just as in RMSProp, given the hyperparameter \(0 \leq \beta_2 < 1\) (the authors of the algorithm suggest a value of 0.999), after taking the squares of the elements of the mini-batch stochastic gradient to get \(\boldsymbol{g}_t \odot \boldsymbol{g}_t\), we perform an EWMA on it to obtain \(\boldsymbol{s}_t\): \(\boldsymbol{s}_t \leftarrow \beta_2 \boldsymbol{s}_{t-1} + (1 - \beta_2) \boldsymbol{g}_t \odot \boldsymbol{g}_t.\) Since we initialized the elements of \(\boldsymbol{v}_0\) and \(\boldsymbol{s}_0\) to 0, we get \(\boldsymbol{v}_t = (1-\beta_1) \sum_{i=1}^t \beta_1^{t-i} \boldsymbol{g}_i\) at time step \(t\). Summing the mini-batch stochastic gradient weights from each previous time step gives \((1-\beta_1) \sum_{i=1}^t \beta_1^{t-i} = 1 - \beta_1^t\). Notice that when \(t\) is small, the sum of the mini-batch stochastic gradient weights from the previous time steps will be small. For example, when \(\beta_1 = 0.9\), \(\boldsymbol{v}_1 = 0.1\boldsymbol{g}_1\). To eliminate this effect, for any time step \(t\) we can divide \(\boldsymbol{v}_t\) by \(1 - \beta_1^t\), so that the sum of the mini-batch stochastic gradient weights from the previous time steps is 1. This is also called bias correction.
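The effect of bias correction is easy to verify with a toy constant gradient (the values below are invented for illustration):

```python
import numpy as np

beta1 = 0.9
g = np.array([1.0, -2.0, 3.0])    # pretend every mini-batch gradient equals g
v = np.zeros_like(g)
for t in range(1, 6):
    v = beta1 * v + (1 - beta1) * g   # EWMA: here v_t = (1 - beta1**t) * g
    v_hat = v / (1 - beta1 ** t)      # bias correction recovers g exactly
```

At \(t = 1\), \(\boldsymbol{v}_1 = 0.1\boldsymbol{g}_1\) is badly biased toward 0, but \(\hat{\boldsymbol{v}}_1 = \boldsymbol{v}_1 / 0.1 = \boldsymbol{g}_1\).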
In the Adam algorithm, we perform bias corrections for the variables \(\boldsymbol{v}_t\) and \(\boldsymbol{s}_t\): \(\hat{\boldsymbol{v}}_t \leftarrow \frac{\boldsymbol{v}_t}{1 - \beta_1^t}, \qquad \hat{\boldsymbol{s}}_t \leftarrow \frac{\boldsymbol{s}_t}{1 - \beta_2^t}.\) Next, the Adam algorithm uses the bias-corrected variables \(\hat{\boldsymbol{v}}_t\) and \(\hat{\boldsymbol{s}}_t\) from above to re-adjust the learning rate of each element in the model parameters using element-wise operations: \(\boldsymbol{g}_t' \leftarrow \frac{\eta \hat{\boldsymbol{v}}_t}{\sqrt{\hat{\boldsymbol{s}}_t} + \epsilon}.\) Here, \(\eta\) is the learning rate while \(\epsilon\) is a constant added to maintain numerical stability, such as \(10^{-8}\). Just as for Adagrad, RMSProp, and Adadelta, each element in the independent variable of the objective function has its own learning rate. Finally, we use \(\boldsymbol{g}_t'\) to iterate the independent variable: \(\boldsymbol{x}_t \leftarrow \boldsymbol{x}_{t-1} - \boldsymbol{g}_t'.\) 10.10.2. Implementation from Scratch¶ We use the formulas from the algorithm to implement Adam. Here, the time step \(t\) is passed to the adam function through the hyperparams dictionary.

%matplotlib inline
import d2l
from mxnet import np, npx
npx.set_np()

def init_adam_states(feature_dim):
    v_w, v_b = np.zeros((feature_dim, 1)), np.zeros(1)
    s_w, s_b = np.zeros((feature_dim, 1)), np.zeros(1)
    return ((v_w, s_w), (v_b, s_b))

def adam(params, states, hyperparams):
    beta1, beta2, eps = 0.9, 0.999, 1e-6
    for p, (v, s) in zip(params, states):
        v[:] = beta1 * v + (1 - beta1) * p.grad
        s[:] = beta2 * s + (1 - beta2) * np.square(p.grad)
        v_bias_corr = v / (1 - beta1 ** hyperparams['t'])
        s_bias_corr = s / (1 - beta2 ** hyperparams['t'])
        p[:] -= hyperparams['lr'] * v_bias_corr / (np.sqrt(s_bias_corr) + eps)
    hyperparams['t'] += 1

Use Adam to train the model with a learning rate of \(0.01\).

data_iter, feature_dim = d2l.get_data_ch10(batch_size=10)
d2l.train_ch10(adam, init_adam_states(feature_dim),
               {'lr': 0.01, 't': 1}, data_iter, feature_dim);

loss: 0.243, 0.080 sec/epoch

10.10.3. Concise Implementation¶ Using the Trainer instance of the algorithm named "adam", we can implement Adam with Gluon.

d2l.train_gluon_ch10('adam', {'learning_rate': 0.01}, data_iter)

loss: 0.242, 0.034 sec/epoch

10.10.4.
Summary¶ Created on the basis of RMSProp, Adam also uses EWMA on the mini-batch stochastic gradient. Adam uses bias correction. 10.10.5. Exercises¶ Adjust the learning rate and observe and analyze the experimental results. Some people say that Adam is a combination of RMSProp and momentum. Why do you think they say this?
Cython helper functions for congruence subgroups¶ This file contains optimized Cython implementations of a few functions related to the standard congruence subgroups \(\Gamma_0, \Gamma_1, \Gamma_H\). These functions are for internal use by routines elsewhere in the Sage library. sage.modular.arithgroup.congroup.degeneracy_coset_representatives_gamma0(N, M, t)¶ Let \(N\) be a positive integer and \(M\) a divisor of \(N\). Let \(t\) be a divisor of \(N/M\), and let \(T\) be the \(2 \times 2\) matrix \((1, 0; 0, t)\). This function returns representatives for the orbit set \(\Gamma_0(N) \backslash T \Gamma_0(M)\), where \(\Gamma_0(N)\) acts on the left on \(T \Gamma_0(M)\). INPUT: N – int M – int (divisor of \(N\)) t – int (divisor of \(N/M\)) OUTPUT: list – list of lists [a,b,c,d], where [a,b,c,d] should be viewed as a 2x2 matrix. This function is used for computation of degeneracy maps between spaces of modular symbols, hence its name. We use that \(T^{-1} \cdot (a,b;c,d) \cdot T = (a,bt; c/t,d)\), that the group \(T^{-1} \Gamma_0(N) T\) is contained in \(\Gamma_0(M)\), and that \(\Gamma_0(N) T\) is contained in \(T \Gamma_0(M)\). ALGORITHM: Compute representatives for \(\Gamma_0(N/t,t)\) inside of \(\Gamma_0(M)\): COSET EQUIVALENCE: Two right cosets represented by \([a,b;c,d]\) and \([a',b';c',d']\) of \(\Gamma_0(N/t,t)\) in \({\rm SL}_2(\ZZ)\) are equivalent if and only if \((a,b)=(a',b')\) as points of \(\mathbf{P}^1(\ZZ/t\ZZ)\), i.e., \(ab' \cong ba' \pmod{t}\), and \((c,d) = (c',d')\) as points of \(\mathbf{P}^1(\ZZ/(N/t)\ZZ)\). ALGORITHM to list all cosets: Compute the number of cosets. Compute a random element \(x\) of \(\Gamma_0(M)\). Check if \(x\) is equivalent to anything generated so far; if not, add \(x\) to the list. Continue until the list is as long as the bound computed in step (a). There is a bijection between \(\Gamma_0(N)\backslash T \Gamma_0(M)\) and \(\Gamma_0(N/t,t) \backslash \Gamma_0(M)\) given by \(T r \leftrightarrow r\).
Consequently we obtain coset representatives for \(\Gamma_0(N)\backslash T \Gamma_0(M)\) by left multiplying by \(T\) each coset representative of \(\Gamma_0(N/t,t) \backslash \Gamma_0(M)\) found in step 1. EXAMPLES: sage: from sage.modular.arithgroup.all import degeneracy_coset_representatives_gamma0 sage: len(degeneracy_coset_representatives_gamma0(13, 1, 1)) 14 sage: len(degeneracy_coset_representatives_gamma0(13, 13, 1)) 1 sage: len(degeneracy_coset_representatives_gamma0(13, 1, 13)) 14 sage.modular.arithgroup.congroup.degeneracy_coset_representatives_gamma1(N, M, t)¶ Let \(N\) be a positive integer and \(M\) a divisor of \(N\). Let \(t\) be a divisor of \(N/M\), and let \(T\) be the \(2 \times 2\) matrix \((1,0; 0,t)\). This function returns representatives for the orbit set \(\Gamma_1(N) \backslash T \Gamma_1(M)\), where \(\Gamma_1(N)\) acts on the left on \(T \Gamma_1(M)\). INPUT: N – int M – int (divisor of \(N\)) t – int (divisor of \(N/M\)) OUTPUT: list – list of lists [a,b,c,d], where [a,b,c,d] should be viewed as a 2x2 matrix. This function is used for computation of degeneracy maps between spaces of modular symbols, hence its name. ALGORITHM: Everything is the same as for degeneracy_coset_representatives_gamma0(), except for coset equivalence. Here \(\Gamma_1(N/t,t)\) consists of matrices that are of the form \((1,*; 0,1) \bmod N/t\) and \((1,0; *,1) \bmod t\). COSET EQUIVALENCE: Two right cosets represented by \([a,b;c,d]\) and \([a',b';c',d']\) of \(\Gamma_1(N/t,t)\) in \({\rm SL}_2(\ZZ)\) are equivalent if and only if \[a \cong a' \pmod{t}, b \cong b' \pmod{t}, c \cong c' \pmod{N/t}, d \cong d' \pmod{N/t}.\] EXAMPLES: sage: from sage.modular.arithgroup.all import degeneracy_coset_representatives_gamma1 sage: len(degeneracy_coset_representatives_gamma1(13, 1, 1)) 168 sage: len(degeneracy_coset_representatives_gamma1(13, 13, 1)) 1 sage: len(degeneracy_coset_representatives_gamma1(13, 1, 13)) 168 sage.modular.arithgroup.congroup.
generators_helper(coset_reps, level)¶ Helper function for generators of Gamma0, Gamma1 and GammaH. These are computed using coset representatives, via an "inverse Todd-Coxeter" algorithm, and generators for \({\rm SL}_2(\ZZ)\). ALGORITHM: Given coset representatives for a finite index subgroup \(G\) of \({\rm SL}_2(\ZZ)\) we compute generators for \(G\) as follows. Let \(R\) be a set of coset representatives for \(G\). Let \(S, T \in {\rm SL}_2(\ZZ)\) be defined by \((0,-1; 1,0)\) and \((1,1; 0,1)\), respectively. Define maps \(s, t: R \to G\) as follows. If \(r \in R\), then there exists a unique \(r' \in R\) such that \(GrS = Gr'\). Let \(s(r) = rSr'^{-1}\). Likewise, there is a unique \(r'\) such that \(GrT = Gr'\) and we let \(t(r) = rTr'^{-1}\). Note that \(s(r)\) and \(t(r)\) are in \(G\) for all \(r\). Then \(G\) is generated by \(s(R)\cup t(R)\). There are more sophisticated algorithms using group actions on trees (and Farey symbols) that give smaller generating sets -- this code is now deprecated in favour of the newer implementation based on Farey symbols. EXAMPLES: sage: Gamma0(7).generators(algorithm="todd-coxeter") # indirect doctest [ [1 1] [-1 0] [ 1 -1] [1 0] [1 1] [-3 -1] [-2 -1] [-5 -1] [0 1], [ 0 -1], [ 0 1], [7 1], [0 1], [ 7 2], [ 7 3], [21 4], [-4 -1] [-1 0] [ 1 0] [21 5], [ 7 -1], [-7 1] ]
Subschemes of affine space¶ AUTHORS: David Kohel (2005): initial version. William Stein (2005): initial version. Ben Hutz (2013): affine subschemes class sage.schemes.affine.affine_subscheme.AlgebraicScheme_subscheme_affine(A, polynomials, embedding_center=None, embedding_codomain=None, embedding_images=None)¶ Construct an algebraic subscheme of affine space. Warning INPUT: A – ambient affine space polynomials – single polynomial, ideal or iterable of defining polynomials. EXAMPLES: sage: A3.<x, y, z> = AffineSpace(QQ, 3) sage: A3.subscheme([x^2-y*z]) Closed subscheme of Affine Space of dimension 3 over Rational Field defined by: x^2 - y*z dimension()¶ Return the dimension of the affine algebraic subscheme. OUTPUT: Integer. EXAMPLES: sage: A.<x,y> = AffineSpace(2, QQ) sage: A.subscheme([]).dimension() 2 sage: A.subscheme([x]).dimension() 1 sage: A.subscheme([x^5]).dimension() 1 sage: A.subscheme([x^2 + y^2 - 1]).dimension() 1 sage: A.subscheme([x*(x-1), y*(y-1)]).dimension() 0 Something less obvious: sage: A.<x,y,z,w> = AffineSpace(4, QQ) sage: X = A.subscheme([x^2, x^2*y^2 + z^2, z^2 - w^2, 10*x^2 + w^2 - z^2]) sage: X Closed subscheme of Affine Space of dimension 4 over Rational Field defined by: x^2, x^2*y^2 + z^2, z^2 - w^2, 10*x^2 - z^2 + w^2 sage: X.dimension() 1 intersection_multiplicity(X, P)¶ Return the intersection multiplicity of this subscheme and the subscheme X at the point P. The intersection of this subscheme with X must be proper, that is \(\mathrm{codim}(self\cap X) = \mathrm{codim}(self) + \mathrm{codim}(X)\), and must also be finite. We use Serre's Tor formula to compute the intersection multiplicity. If \(I\), \(J\) are the defining ideals of self, X, respectively, then this is \(\sum_{i=0}^{\infty}(-1)^i\mathrm{length}(\mathrm{Tor}_{\mathcal{O}_{A,p}}^{i} (\mathcal{O}_{A,p}/I,\mathcal{O}_{A,p}/J))\) where \(A\) is the affine ambient space of these subschemes. INPUT: X – subscheme in the same ambient space as this subscheme.
P – a point in the intersection of this subscheme with X. OUTPUT: An integer. EXAMPLES: sage: A.<x,y> = AffineSpace(QQ, 2) sage: C = Curve([y^2 - x^3 - x^2], A) sage: D = Curve([y^2 + x^3], A) sage: Q = A([0,0]) sage: C.intersection_multiplicity(D, Q) 4 sage: R.<a> = QQ[] sage: K.<b> = NumberField(a^6 - 3*a^5 + 5*a^4 - 5*a^3 + 5*a^2 - 3*a + 1) sage: A.<x,y,z,w> = AffineSpace(K, 4) sage: X = A.subscheme([x*y, y*z + 7, w^3 - x^3]) sage: Y = A.subscheme([x - z^3 + z + 1]) sage: Q = A([0, -7*b^5 + 21*b^4 - 28*b^3 + 21*b^2 - 21*b + 14, -b^5 + 2*b^4 - 3*b^3 \ + 2*b^2 - 2*b, 0]) sage: X.intersection_multiplicity(Y, Q) 3 sage: A.<x,y,z> = AffineSpace(QQ, 3) sage: X = A.subscheme([z^2 - 1]) sage: Y = A.subscheme([z - 1, y - x^2]) sage: Q = A([1,1,1]) sage: X.intersection_multiplicity(Y, Q) Traceback (most recent call last): ... TypeError: the intersection of this subscheme and (=Closed subscheme of Affine Space of dimension 3 over Rational Field defined by: z - 1, -x^2 + y) must be proper and finite sage: A.<x,y,z,w,t> = AffineSpace(QQ, 5) sage: X = A.subscheme([x*y, t^2*w, w^3*z]) sage: Y = A.subscheme([y*w + z]) sage: Q = A([0,0,0,0,0]) sage: X.intersection_multiplicity(Y, Q) Traceback (most recent call last): ... TypeError: the intersection of this subscheme and (=Closed subscheme of Affine Space of dimension 5 over Rational Field defined by: y*w + z) must be proper and finite is_smooth(point=None)¶ Test whether the algebraic subscheme is smooth. INPUT: point – A point or None (default). The point to test smoothness at. OUTPUT: Boolean. If no point was specified, returns whether the algebraic subscheme is smooth everywhere. Otherwise, smoothness at the specified point is tested.
EXAMPLES: sage: A2.<x,y> = AffineSpace(2,QQ) sage: cuspidal_curve = A2.subscheme([y^2-x^3]) sage: cuspidal_curve Closed subscheme of Affine Space of dimension 2 over Rational Field defined by: -x^3 + y^2 sage: smooth_point = cuspidal_curve.point([1,1]) sage: smooth_point in cuspidal_curve True sage: singular_point = cuspidal_curve.point([0,0]) sage: singular_point in cuspidal_curve True sage: cuspidal_curve.is_smooth(smooth_point) True sage: cuspidal_curve.is_smooth(singular_point) False sage: cuspidal_curve.is_smooth() False multiplicity(P)¶ Return the multiplicity of P on this subscheme. This is computed as the multiplicity of the local ring of this subscheme corresponding to P. This subscheme must be defined over a field. An error is raised if P is not a point on this subscheme. INPUT: P – a point on this subscheme. OUTPUT: An integer. EXAMPLES: sage: A.<x,y,z,w> = AffineSpace(QQ, 4) sage: X = A.subscheme([z*y - x^7, w - 2*z]) sage: Q1 = A([1,1/3,3,6]) sage: X.multiplicity(Q1) 1 sage: Q2 = A([0,0,0,0]) sage: X.multiplicity(Q2) 2 sage: A.<x,y,z,w,v> = AffineSpace(GF(23), 5) sage: C = A.curve([x^8 - y, y^7 - z, z^3 - 1, w^5 - v^3]) sage: Q = A([22,1,1,0,0]) sage: C.multiplicity(Q) 3 sage: K.<a> = QuadraticField(-1) sage: A.<x,y,z,w,t> = AffineSpace(K, 5) sage: X = A.subscheme([y^7 - x^2*z^5 + z^3*t^8 - x^2*y^4*z - t^8]) sage: Q1 = A([1,1,0,1,-1]) sage: X.multiplicity(Q1) 1 sage: Q2 = A([0,0,0,-a,0]) sage: X.multiplicity(Q2) 7 Check that trac ticket #27479 is fixed: sage: A1.<x> = AffineSpace(QQ, 1) sage: X = A1.subscheme([x^1789 + x]) sage: Q = X([0]) sage: X.multiplicity(Q) 1 projective_closure(i=None, PP=None)¶ Return the projective closure of this affine subscheme. INPUT: i – (default: None) determines the embedding to use to compute the projective closure of this affine subscheme. The embedding used is the one which has a 1 in the i-th coordinate, numbered from 0.
PP – (default: None) ambient projective space, i.e., ambient space of codomain of morphism; this is constructed if it is not given. OUTPUT: a projective subscheme. EXAMPLES: sage: A.<x,y,z,w> = AffineSpace(QQ,4) sage: X = A.subscheme([x^2 - y, x*y - z, y^2 - w, x*z - w, y*z - x*w, z^2 - y*w]) sage: X.projective_closure() Closed subscheme of Projective Space of dimension 4 over Rational Field defined by: x0^2 - x1*x4, x0*x1 - x2*x4, x1^2 - x3*x4, x0*x2 - x3*x4, x1*x2 - x0*x3, x2^2 - x1*x3 sage: A.<x,y,z> = AffineSpace(QQ, 3) sage: P.<a,b,c,d> = ProjectiveSpace(QQ, 3) sage: X = A.subscheme([z - x^2 - y^2]) sage: X.projective_closure(1, P).ambient_space() == P True projective_embedding(i=None, PP=None)¶ Return a morphism from this affine scheme into an ambient projective space of the same dimension. The codomain of this morphism is the projective closure of this affine scheme in PP, if given, or otherwise in a new projective space that is constructed. INPUT: i – integer (default: dimension of self = last coordinate) determines which projective embedding to compute. The embedding is that which has a 1 in the i-th coordinate, numbered from 0. PP – (default: None) ambient projective space, i.e., ambient space of codomain of morphism; this is constructed if it is not given.
EXAMPLES: sage: A.<x, y, z> = AffineSpace(3, ZZ) sage: S = A.subscheme([x*y-z]) sage: S.projective_embedding() Scheme morphism: From: Closed subscheme of Affine Space of dimension 3 over Integer Ring defined by: x*y - z To: Closed subscheme of Projective Space of dimension 3 over Integer Ring defined by: x0*x1 - x2*x3 Defn: Defined on coordinates by sending (x, y, z) to (x : y : z : 1) sage: A.<x, y, z> = AffineSpace(3, ZZ) sage: P = ProjectiveSpace(3,ZZ,'u') sage: S = A.subscheme([x^2-y*z]) sage: S.projective_embedding(1,P) Scheme morphism: From: Closed subscheme of Affine Space of dimension 3 over Integer Ring defined by: x^2 - y*z To: Closed subscheme of Projective Space of dimension 3 over Integer Ring defined by: u0^2 - u2*u3 Defn: Defined on coordinates by sending (x, y, z) to (x : 1 : y : z) sage: A.<x,y,z> = AffineSpace(QQ, 3) sage: X = A.subscheme([y - x^2, z - x^3]) sage: X.projective_embedding() Scheme morphism: From: Closed subscheme of Affine Space of dimension 3 over Rational Field defined by: -x^2 + y, -x^3 + z To: Closed subscheme of Projective Space of dimension 3 over Rational Field defined by: x0^2 - x1*x3, x0*x1 - x2*x3, x1^2 - x0*x2 Defn: Defined on coordinates by sending (x, y, z) to (x : y : z : 1) When taking a closed subscheme of an affine space with a projective embedding, the subscheme inherits the embedding: sage: A.<u,v> = AffineSpace(2, QQ, default_embedding_index=1) sage: X = A.subscheme(u - v) sage: X.projective_embedding() Scheme morphism: From: Closed subscheme of Affine Space of dimension 2 over Rational Field defined by: u - v To: Closed subscheme of Projective Space of dimension 2 over Rational Field defined by: x0 - x2 Defn: Defined on coordinates by sending (u, v) to (u : 1 : v) sage: phi = X.projective_embedding() sage: psi = A.projective_embedding() sage: phi(X(2, 2)) == psi(A(X(2, 2))) True
Suppose you are given two oriented manifolds $B, B'$ with the same boundary $M$: $\partial B = M = \partial B'$. Identify the boundaries and form $C = B \sqcup_{Id: M \to M} B'$. I want to see why, with the orientation induced by being submanifolds of $C$, $B$ and $B'$ induce opposite orientations on their boundary $M$. I'm particularly interested in the way the fundamental classes of $C, B, B'$ and $M$ behave. Thanks a lot! I'm assuming you're JuanOS from MO, and this question corresponds to the MO question: https://mathoverflow.net/questions/54278/orientation-of-a-glued-manifold Here's how I interpret your question. You have an oriented manifold $C$ which is compact without boundary, and you've decomposed it into the union of two submanifolds $B$ and $B'$ with $B \cap B' = M$, $M$ a compact manifold. Let $n=\dim(C)$, so $n-1=\dim(M)$. The global orientation class for $C$ is a generator $\mu_C \in H_n C$. There are restriction maps $H_n C \to H_n(B,M)$ and $H_n C \to H_n(B',M)$ which give the corresponding global orientations $\mu_B$ for $B$ and $\mu_{B'}$ for $B'$ respectively. Then there are the pairs $M \to B \to (B,M)$ and $M \to B' \to (B',M)$, and you want to know how the two generators of $H_n(B,M)$ and $H_n(B',M)$ compare when mapped to elements of $H_{n-1}M$ via the two connecting maps for the above pairs; specifically, you want to show that $\partial \mu_{B} + \partial \mu_{B'} = 0$. Moreover, you want an argument that's fairly generic, in particular not specific to triangulated smooth manifolds or anything like that. The above "restriction maps" are formally induced by the inclusions $C \to (C, C \setminus int(B') ) \leftarrow (B,M)$, one being an excision inclusion, the other just an inclusion. I don't believe this is as complicated as Kuperberg makes out -- the complication comes when attempting to bridge the gap between the smooth or simplicial views and the strictly homological view, especially, say, in the singular homology setting.
But the above formulation side-steps those complications, as your question is phrased entirely in terms of Mayer-Vietoris type constructions. Okay, so here's a cheap way to check that it's true. Since $M$ is a submanifold of $C$, given a point $p \in M$ you can find an orientation-preserving degree 1 map $f : C \to S^n$ such that $f(M) \subset S^{n-1} \times \{0\} \subset S^n$. Moreover, you can ensure that $f$ restricted to $M$ is an isomorphism on the top homology groups of $M$ and $S^{n-1}$ respectively, and that $f$ sends $B$ to the top hemisphere and $B'$ to the bottom hemisphere. So by naturality, you've reduced your problem to the case $D^n \sqcup_{S^{n-1}} D^n = S^n$, i.e. $C= S^n$, with $B$ and $B'$ both discs, which one way or another boils down to a cellular homology computation (we are using singular homology, but the singular homology of a CW-complex effectively reduces to cellular homology). This is the step equivalent to using outward-pointing normals and determinants for smooth manifolds. I don't know if this answers what you meant by "why", but to my mind this is why the orientations are opposite: Given a point $P$ on the boundary, consider a neighbourhood $U$ of $P$ in $C$ which is homeomorphic to an open ball in $\mathbb{R}^n$, with the boundary mapped to a hyperplane bisecting the ball. An orientation of $C$ determines an orientation of this ball. For the ball, it is clear that its two halves induce opposite orientations on the hyperplane bisecting it. Then this has to be true also of the intersections of $M$, $B$ and $B'$ with $U$, and hence, since $P$ was arbitrary, of $M$, $B$ and $B'$ in general. If this wasn't the sort of answer you were looking for, please elaborate.
Let $\{ e_{\alpha} \}_{\alpha\in\Lambda}$ be an orthonormal subset of a Hilbert space $H$, and let $x\in H$ be given. Then the following are equivalent: $\sum_{\alpha} \langle x,e_\alpha \rangle e_{\alpha} =x$. $\sum_{\alpha} |\langle x,e_{\alpha}\rangle|^2 = \|x\|^2$. $x$ is in the closure of the linear span of $\{ e_{\alpha} \}_{\alpha\in\Lambda}$. The orthonormal subset $\{ e_{\alpha} \}$ is complete if every $x\in H$ satisfies the above. If you let $M$ denote the subspace of all finite linear combinations of elements of $\{ e_{\alpha} \}$, then the following are equivalent: $\{ e_{\alpha} \}$ is complete. $\sum_{\alpha}\langle x,e_{\alpha}\rangle e_{\alpha}=x$ for all $x\in H$. $\sum_{\alpha}|\langle x,e_{\alpha}\rangle|^2 =\|x\|^2$ for all $x\in H$. The subspace $M$ spanned by the $\{e_{\alpha}\}$ is dense in $H$. $\langle x,e_\alpha\rangle =0$ for all $\alpha$ iff $x=0$. You can abstract to looking only at a subspace, which leads you to consider a subspace $M$ in a Banach space $X$ that is generated by taking finite linear combinations of the set of elements $\{ x_{\alpha} \}_{\alpha\in\Lambda}$. The closure $\overline{M}$ of $M$ consists of every element $x\in X$ such that, for every $\epsilon > 0$, there is a finite linear combination $\sum_{n=1}^{k}\mu_n x_{\alpha_n}$ such that $\|\sum_{n=1}^{k}\mu_n x_{\alpha_n} -x\|< \epsilon$. In the case of orthonormal sets $\{ e_{\alpha} \}_{\alpha\in\Lambda}$, any such $x$ must be equal to $\sum_{\alpha\in\Lambda}\langle x,e_{\alpha}\rangle e_{\alpha}$. But in a Banach space, there may be no way to write the element as such a sum. If the set $\{ x_{\alpha} \}_{\alpha\in\Lambda}$ is total in a Banach space, there is a sequence of finite sums of the $x_{\alpha}$ that converges to $x$, but that does not necessarily give $x$ as an infinite sum.
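The equivalences (1)–(2) above can be illustrated numerically in a finite-dimensional setting. This is a sketch I'm adding for illustration (the helper names `dot` and `gram_schmidt` are mine): build an orthonormal set in $\mathbb{R}^4$ by Gram–Schmidt, then compare $\sum_\alpha |\langle x,e_\alpha\rangle|^2$ with $\|x\|^2$ for a vector inside the span (Parseval, equality) and one outside it (Bessel, strict inequality).

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical
    Gram-Schmidt: subtract projections onto the basis built so far)."""
    basis = []
    for v in vectors:
        w = list(v)
        for e in basis:
            c = dot(v, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

# An orthonormal set {e_1, e_2} in R^4, spanning a proper subspace
E = gram_schmidt([[1, 1, 0, 0], [1, 0, 1, 0]])

# x in the span: Parseval's identity holds and the expansion recovers x
x = [3 * a + 2 * b for a, b in zip(E[0], E[1])]
coeffs = [dot(x, e) for e in E]
assert abs(sum(c * c for c in coeffs) - dot(x, x)) < 1e-12
recon = [sum(c * e[i] for c, e in zip(coeffs, E)) for i in range(4)]
assert all(abs(r - xi) < 1e-12 for r, xi in zip(recon, x))

# y outside the span: only Bessel's inequality, strict in this case
y = [1, 0, 0, 1]
assert sum(dot(y, e) ** 2 for e in E) < dot(y, y)
```

The asserts pass precisely because $x$ lies in the closed span while $y$ does not, mirroring equivalence (3).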
Having a sum representation in a Banach space $X$ is usually posed in terms of a countable Schauder basis $\{ x_n \}$ where, for every $x\in X$, one assumes the existence of unique constants $\alpha_n$ such that $x = \sum_{n} \alpha_n x_n$. Every complete orthonormal basis of a Hilbert space is a Schauder basis for that Hilbert space. $\ell^1$ has a Schauder basis consisting of the sequences $\{ 1,0,0,0,\cdots \}$, $\{ 0,1,0,0,0,\cdots \}$, $\{ 0,0,1,0,0,\cdots \}$, etc. Not every Banach space has a Schauder basis, and a natural candidate system can fail: even though you might guess the Fourier system $\{ e^{inx} \}_{n=-\infty}^{\infty}$ to be a Schauder basis for $L^1[-\pi,\pi]$, it isn't, because the Fourier series does not converge in $L^1$ to an arbitrary $f\in L^1$. However, $\{ e^{inx} \}_{n=-\infty}^{\infty}$ is total in $L^1$ because the finite span $M$ of these functions is dense in $L^1$. Of course $\{ e^{inx} \}$ is a Schauder basis for $L^2$ because it's an orthonormal basis of the Hilbert space $L^2$. For a Banach space $X$, a set of vectors $\{ x_\alpha \}_{\alpha\in\Lambda}$ is total if the subspace $M$ spanned by these vectors is dense in $X$. Equivalently, every $x$ can be approximated arbitrarily closely by a finite linear combination of the $x_\alpha$. Another equivalent condition is that the only $x^* \in X^*$ for which $x^*(x_\alpha)=0$ for all $\alpha$ is $x^* = 0$. This condition extends (5) from the Hilbert space case because every $x^*$ on a Hilbert space has the representation $x^*(y)=\langle y,x\rangle$ for a unique $x$.
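The failure of $L^1$ convergence mentioned above comes down to a standard fact: the $L^1$ norms of the Dirichlet kernels (the Lebesgue constants) grow like $\log n$, so the Fourier partial-sum operators are not uniformly bounded on $L^1$. A small numeric sketch, added here for illustration (the helper name and the midpoint-rule integration are my own, and only approximate):

```python
import math

def dirichlet_l1_norm(n, samples=200000):
    """Approximate (1/2pi) * integral over [-pi, pi] of |D_n(t)| dt, where
    D_n(t) = sin((n + 1/2) t) / sin(t/2) is the Dirichlet kernel.
    A midpoint rule with an even sample count never evaluates at t = 0,
    where the singularity of the quotient is removable."""
    h = 2 * math.pi / samples
    total = 0.0
    for k in range(samples):
        t = -math.pi + (k + 0.5) * h
        total += abs(math.sin((n + 0.5) * t) / math.sin(t / 2)) * h
    return total / (2 * math.pi)

# The Lebesgue constants grow roughly like (4/pi^2) log n, so the
# partial-sum operators on L^1 cannot be uniformly bounded.
L = [dirichlet_l1_norm(n) for n in (1, 10, 100)]
assert L[0] < L[1] < L[2]
```

By the uniform boundedness principle, this growth already rules out $L^1$-convergence of Fourier series for every $f \in L^1$, which is exactly why $\{e^{inx}\}$ fails to be a Schauder basis there.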
In the paper Nerovny et al. (2017), the commentary on the convergence of the series representing the absolute value function, and the corresponding equations, contains several mistakes (Sect. 2, Eqs. (4) to (6)). The series Eq. (3) $$\begin{aligned} |\hat{\mathbf {n}}\cdot \hat{\mathbf {s}}|=|x| = \frac{2}{\pi } - \frac{4}{\pi }\sum \limits _{n=1}^{\infty }\frac{(-1)^n T_{2n}(x)}{-1+4n^2} \end{aligned}$$ of Chebyshev polynomials of the first kind for \(|\hat{\mathbf {n}}\cdot \hat{\mathbf {s}}|=|x|\le 1\) is absolutely convergent. If we define \(x=\cos y\), then \(T_{2n}=\cos 2ny\), \(|T_{2n}|\le 1\), and we get the ordinary Fourier series, which is majorized by the following convergent series: $$\begin{aligned} \frac{2}{\pi } -\frac{4}{\pi }\sum \limits _{n=1}^{\infty }\frac{1}{-1+4n^2}. \end{aligned}$$ Additionally, for any \(x\) the original series is an alternating Leibniz series, so its partial sum differs from \(|x|\) by at most the absolute value of the first neglected term. These are the steps to produce a power series of the absolute value function from Eq. (3): $$\begin{aligned} |\hat{\mathbf {n}}\cdot \hat{\mathbf {s}}|&= -\lim \limits _{N_{\max }\rightarrow \infty } \frac{4}{\pi }\sum \limits _{n=1}^{N_{\max }}\sum \limits _{k=0}^{n-1}\frac{(-1)^n(-1)^k n (2n-k-1)!}{(-1+4n^2)k!(2n-2k)!}4^{n-k}(\hat{\mathbf {n}}\cdot \hat{\mathbf {s}})^{2(n-k)}=\\&(\text {let}\ m = n-k)\\&=-\lim \limits _{N_{\max }\rightarrow \infty }\sum \limits _{m=1}^{N_{\max }} \frac{(-1)^m 4^{m+1}}{\pi (2m)!}\sum \limits _{n=m}^{N_{\max }}\frac{n(n+m-1)!}{(-1+4n^2)(n-m)!} (\hat{\mathbf {n}}\cdot \hat{\mathbf {s}})^{2m}. \end{aligned}$$ That’s why the Eqs. (4) and (5) from Nerovny et al.
(2017) should be written as follows: $$\begin{aligned} |\hat{\mathbf {n}}\cdot \hat{\mathbf {s}}|= & {} \lim \limits _{N_{\max }\rightarrow \infty }\sum \limits _{m=1}^{N_{\max }} B_m (\hat{\mathbf {n}}\cdot \hat{\mathbf {s}})^{2m}\approx \sum \limits _{m=1}^{N_{\max }} B_m (\hat{\mathbf {n}}\cdot \hat{\mathbf {s}})^{2m},\\ B_m\approx & {} -\frac{(-1)^m 4^{m+1}}{\pi (2m)!}\sum \limits _{n=m}^{N_{\max }}\frac{n(n+m-1)!}{(-1+4n^2)(n-m)!}, \end{aligned}$$ and in the equations for \(N_{\max B}\), Eqs. (6) and (34), the \(\lfloor (N_{\max }-1)/2 \rfloor \) term should be replaced by \(N_{\max }\). The results of calculations in Sects. 7 and 8 are not affected by this error because in the numerical calculations we used the correct relations presented in this erratum.
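As a sanity check of the series in Eq. (3) (a sketch I'm adding, not part of the erratum; the helper name `abs_via_chebyshev` is mine), one can evaluate truncated sums with the three-term Chebyshev recurrence $T_{k+1}(x)=2xT_k(x)-T_{k-1}(x)$ and compare against $|x|$:

```python
import math

def abs_via_chebyshev(x, N):
    """Partial sum of |x| = 2/pi - (4/pi) * sum_{n>=1} (-1)^n T_{2n}(x)/(4n^2-1),
    computing T_k(x) with the recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t_cur = 1.0, x            # T_0(x), T_1(x)
    total = 2.0 / math.pi
    for k in range(2, 2 * N + 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev   # now t_cur = T_k(x)
        if k % 2 == 0:                # only even-index terms enter the series
            n = k // 2
            total -= 4.0 / math.pi * (-1) ** n * t_cur / (4 * n * n - 1)
    return total

# The absolute tail bound (4/pi) * sum_{n>N} 1/(4n^2-1) is O(1/N) on [-1, 1]
for x in (-1.0, -0.3, 0.0, 0.5, 1.0):
    assert abs(abs_via_chebyshev(x, 400) - abs(x)) < 1e-2
```

With $N=400$ the crude absolute tail bound is already below $10^{-3}$, consistent with the majorizing series given above.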
There is a natural notion of a presentation in the category of residually finite groups. Namely, if $X$ is a set and $R$ is a set of words in the free group $FG(X)$ on $X$, then define $G=RF\langle X\mid R\rangle$ to be $FG(X)/N$ where $N$ is the intersection of all finite index normal subgroups of $FG(X)$ containing $R$. Equivalently, $N$ is the closure in the profinite topology of the normal closure of $R$. The group $G$ is residually finite and is the universal residually finite quotient of the group $H$ with presentation $\langle X\mid R\rangle$ (equivalently, it is the quotient of $H$ by the closure of $\{1\}$ in the profinite topology). Every residually finite group has a presentation in this sense. Having a finite presentation in the residually finite category does not (or at least should not) imply being finitely presented in the usual sense. One can then ask about the uniform word problem for residually finite groups in this setting. That is, we can ask: given a finite set $X$, a finite set of relations $R\subseteq FG(X)$ and a word $w\in FG(X)$, is $w=1$ in $G=RF\langle X\mid R\rangle$? Notice that the set of such $w$ is co-r.e. because we can enumerate all $X$-generated finite groups satisfying $R$ and determine if $w\neq 1$ in some such finite group. But there is no procedure to enumerate the words $w$ equal to $1$ in $G$. Indeed, Slobodskoii proved that the uniform word problem is undecidable for residually finite groups. One can then ask the following restricted problem. Question. Is the uniform word problem decidable for groups with a one-relator residually finite presentation? That is, given a finite set $X$ and a word $r\in FG(X)$, is there an algorithm to determine if a word $w\in FG(X)$ is trivial in $G=RF\langle X\mid r\rangle$? This is basically asking for a Magnus theorem in the category of residually finite groups.
Not all $1$-relator groups (in the usual sense) are residually finite, so being $1$-relator as a residually finite group does not (a priori) mean being a $1$-relator group. My question is equivalent to asking whether there is an algorithm to compute membership in the profinite closure of the trivial subgroup of a $1$-relator group.
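The co-r.e. observation above can be sketched concretely: to certify $w \neq 1$ in $RF\langle X \mid R\rangle$ it suffices to exhibit one finite quotient in which all relators die but $w$ does not. The following toy search over maps into symmetric groups $S_k$ is something I'm adding for illustration; the word encoding (lowercase letter = generator, uppercase = its inverse) and all helper names are my own conventions.

```python
from itertools import permutations, product

def perm_mul(p, q):
    # composition (p o q)(i) = p[q[i]] on tuples representing permutations
    return tuple(p[q[i]] for i in range(len(p)))

def perm_inv(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def evaluate(word, assign, k):
    """Evaluate a free-group word under a generator -> permutation map.
    Lowercase letters are generators, uppercase letters their inverses."""
    result = tuple(range(k))
    for ch in word:
        g = assign[ch.lower()]
        if ch.isupper():
            g = perm_inv(g)
        result = perm_mul(result, g)
    return result

def finite_witness(gens, relators, w, max_k=4):
    """Search for a homomorphism to some S_k killing every relator but
    not w; success certifies w != 1 in the universal residually finite
    quotient RF<gens | relators>."""
    for k in range(2, max_k + 1):
        identity = tuple(range(k))
        for images in product(permutations(range(k)), repeat=len(gens)):
            assign = dict(zip(gens, images))
            if all(evaluate(r, assign, k) == identity for r in relators):
                if evaluate(w, assign, k) != identity:
                    return k, assign
    return None

# Toy example: in RF<a, b | a^2> the word 'a' survives in S_2 (a -> (0 1)),
# while 'aa' dies in every quotient, so no witness can ever be found.
assert finite_witness(['a', 'b'], ['aa'], 'a') is not None
assert finite_witness(['a', 'b'], ['aa'], 'aa') is None
```

Enumerating over all $k$ semi-decides $w \neq 1$; the content of Slobodskoii's theorem is that no complementary procedure semi-decides $w = 1$ uniformly.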
Articles 1 - 14 of 14 Full-Text Articles in Physics Unitarized Pseudoscalar Meson Scattering Amplitudes In Three Flavor Linear Sigma Models, Joseph Schechter, Deirdre Black, Amir H. Fariborz, Sherif Moussa, Salah Nasri Physics The three flavor linear sigma model is studied as a ``toy model'' for understanding the role of possible light scalar mesons in the \pi \pi, \pi K and \pi \eta scattering channels. The approach involves computing the tree level partial wave amplitude for each channel and unitarizing by a simple K-matrix prescription which does not introduce any new parameters. If the renormalizable version of the model is used there is only one free parameter. While this highly constrained version has the right general structure to explain \pi \pi scattering, it is ``not quite'' right. A reasonable fit can be made ... Phase Diagram Of Three-Dimensional Dynamical Triangulations With A Boundary, Simon Catterall, Simeon Warner, Ray Renken Physics We use Monte Carlo simulation to study the phase diagram of three-dimensional dynamical triangulations with a boundary. Three phases are identified and characterized. One of these phases is a new, boundary dominated phase; a simple argument is presented to explain its existence. First-order transitions are shown to occur along the critical lines separating phases.
Master Equation For Hydrogen Recombination On Grain Surfaces, Gianfranco Vidali, Ofer Biham, Itay Furman, Valerio Pirronello Physics Recent experimental results on the formation of molecular hydrogen on astrophysically relevant surfaces under conditions similar to those encountered in the interstellar medium provided useful quantitative information about these processes. Rate equation analysis of experiments on olivine and amorphous carbon surfaces provided the activation energy barriers for the diffusion and desorption processes relevant to hydrogen recombination on these surfaces. However, the suitability of rate equations for the simulation of hydrogen recombination on interstellar grains, where there might be very few atoms on a grain at any given time, has been questioned. To resolve this problem, we introduce a master equation ... Exploring The Structure Of A Possible Light Scalar Nonet, Joseph Schechter, Deirdre Black, Amir H. Fariborz Physics We first review the work of the Syracuse group, which uses an effective chiral Lagrangian approach, on meson-meson scattering. An illustration providing evidence for the existence of a strange scalar resonance of mass around 900 MeV is given. An attempt to fit this \kappa (900) together with a similarly obtained \sigma (560) and the well known a_0(980) and f_0(980) into a nonet pattern suggests that the underlying structure is closer to a dual quark-dual antiquark than to a quark-antiquark. A possible mechanism to explain a next higher-in mass scalar meson nonet is also discussed. This involves mixing between ...
Phase Diagram Of Four-Dimensional Dynamical Triangulations With A Boundary, Simon Catterall, Simeon Warner Physics We report on simulations of DT simplicial gravity for manifolds with the topology of the 4-disk. We find evidence for four phases in a two-dimensional parameter space. In two of these the boundary plays no dynamical role and the geometries are equivalent to those observed earlier for the sphere S^4. In another phase the boundary is maximal and the quantum geometry degenerates to a one dimensional branched polymer. In contrast we provide evidence that the fourth phase is effectively three-dimensional. We find discontinuous phase transitions at all the phase boundaries. The Cleo Iii Ring Imaging Cherenkov Detector, Raymond Mountain, Marina Artuso, R. Ayad, A. Efimov Physics The CLEO detector has been upgraded to include a state of the art particle identification system, based on the Ring Imaging Cherenkov Detector (RICH) technology, in order to take data at the upgraded CESR electron positron collider. The expected performance is reviewed, as well as the preliminary results from an engineering run during the first few months of operation of the CLEO III detector. A Lattice Path Integral For Supersymmetric Quantum Mechanics, Simon Catterall, Eric B. Gregory Physics We report on a study of the supersymmetric anharmonic oscillator computed using a euclidean lattice path integral. Our numerical work utilizes a Fourier accelerated hybrid Monte Carlo scheme to sample the path integral. Using this we are able to measure massgaps and check Ward identities to a precision of better than one percent.
We work with a non-standard lattice action which we show has an {\it exact} supersymmetry for arbitrary lattice spacing in the limit of zero interaction coupling. For the interacting model we show that supersymmetry is restored in the continuum limit without fine tuning. This is contrasted with ... Driven Vortices In Confined Geometry: The Corbino Disk, M. Cristina Marchetti Physics The fabrication of artificial pinning structures allows a new generation of experiments which can probe the properties of vortex arrays by forcing them to flow in confined geometries. We discuss the theoretical analysis of such experiments in both flux liquids and flux solids, focusing on the Corbino disk geometry. In the liquid, these experiments can probe the critical behavior near a continuous liquid-glass transition. In the solid, they probe directly the onset of plasticity. Energetics And Geometry Of Excitations In Random Systems, Alan Middleton Physics Methods for studying droplets in models with quenched disorder are critically examined. Low energy excitations in two dimensional models are investigated by finding minimal energy interior excitations and by computing the effect of bulk perturbations. The numerical data support the assumptions of compact droplets and a single exponent for droplet energy scaling. Analytic calculations show how strong corrections to power laws can result when samples and droplets are averaged over. Such corrections can explain apparent discrepancies in several previous numerical results for spin glasses. Essentials Of K-Essence, Christian Armendariz-Picon, V. Mukhanov, Paul J.
Steinhardt Physics We recently introduced the concept of "k-essence" as a dynamical solution for explaining naturally why the universe has entered an epoch of accelerated expansion at a late stage of its evolution. The solution avoids fine-tuning of parameters and anthropic arguments. Instead, k-essence is based on the idea of a dynamical attractor solution which causes it to act as a cosmological constant only at the onset of matter-domination. Consequently, k-essence overtakes the matter density and induces cosmic acceleration at about the present epoch. In this paper, we present the basic theory of k-essence and dynamical attractors based on evolving scalar fields ... An Improved Limit On The Rate Of The Decay K^+ -> Pi^+ Mu^+ E^-, Duncan Brown, R. Appel Physics The experiment E865 at BNL places an upper limit on the branching ratio for the decay K+ -> pi+ mu+ e- of 3.9x10^-11 (90% C.L.). Along with other results this yields a combined upper limit on this branching ratio of 2.8x10^-11. A new upper limit on the branching ratio for pi0 -> mu+ e- of 3.8x10^-10 (90% C.L.) is also established. The experiment and analysis are described. A Dynamical Solution To The Problem Of A Small Cosmological Constant And Late-Time Cosmic Acceleration, Christian Armendariz-Picon, V. Mukhanov, Paul J. Steinhardt Physics Increasing evidence suggests that most of the energy density of the universe consists of a dark energy component with negative pressure, a ``cosmological constant" that causes the cosmic expansion to accelerate. In this paper, we address the puzzle of why this component comes to dominate the universe only recently rather than at some much earlier epoch.
We present a class of theories based on an evolving scalar field where the explanation is based entirely on internal dynamical properties of the solutions. In the theories we consider, the dynamics causes the scalar field to lock automatically into a negative pressure state ... Viscoelastic Depinning Of Driven Systems: Mean-Field Plastic Scallops, Alan Middleton, M. Cristina Marchetti, Thomas Prellberg Physics We have investigated the mean field dynamics of an overdamped viscoelastic medium driven through quenched disorder. The model introduced incorporates coexistence of pinned and sliding degrees of freedom and can exhibit continuous elastic depinning or first order hysteretic depinning. Numerical simulations indicate mean field instabilities that correspond to macroscopic stick-slip events and lead to premature switching. The model is relevant for the dynamics of driven vortex arrays in superconductors and other extended disordered systems. The Statistical Mechanics Of Membranes, Mark Bowick, Alex Travesset Physics The fluctuations of two-dimensional extended objects (membranes) is a rich and exciting field with many solid results and a wide range of open issues. We review the distinct universality classes of membranes, determined by the local order, and the associated phase diagrams. After a discussion of several physical examples of membranes we turn to the physics of crystalline (or polymerized) membranes in which the individual monomers are rigidly bound. We discuss the phase diagram with particular attention to the dependence on the degree of self-avoidance and anisotropy. In each case we review and discuss analytic, numerical and experimental predictions of ...
I have a problem: calculating $E[e^X]$, where $X$ follows a normal distribution $N(\mu, \sigma^2)$ with mean $\mu$ and standard deviation $\sigma$. I still have no clue how to solve it. Assume $Y=e^X$. Trying to calculate this value directly by substituting $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{\frac{-(x-\mu)^2}{2\sigma^2}}$ and then finding $g(y)$ of $Y$ is a nightmare (and I don't know how to calculate this integral, to be honest). Another way is to find the inverse function. Assume $Y=\phi(X)$; if $\phi$ is differentiable, monotonic, and has an inverse function $X=\psi(Y)$, then $g(y)$ (the PDF of the random variable $Y$) is as follows: $g(y)=f[\psi(y)]|\psi'(y)|$. I think we don't need to find the PDF of $Y$ explicitly to find $E[Y]$. This seems to be a classic problem. Can anyone help?
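For reference, here is the standard computation (a worked step I'm adding; it is the well-known lognormal mean): completing the square in the exponent avoids finding the PDF of $Y$ entirely, since $E[e^X]=\int e^x f(x)\,dx$.

```latex
\begin{aligned}
E[e^X] &= \int_{-\infty}^{\infty} e^{x}\,
   \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,\mathrm{d}x\\
 &= \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty}
   \exp\left(-\frac{(x-\mu)^2 - 2\sigma^2 x}{2\sigma^2}\right)\mathrm{d}x\\
 &= e^{\mu+\sigma^2/2}\,\frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty}
   \exp\left(-\frac{\left(x-(\mu+\sigma^2)\right)^2}{2\sigma^2}\right)\mathrm{d}x
 = e^{\mu+\sigma^2/2},
\end{aligned}
```

where the middle step uses $(x-\mu)^2 - 2\sigma^2 x = \left(x-(\mu+\sigma^2)\right)^2 - 2\mu\sigma^2 - \sigma^4$, and the last integral is a Gaussian integral equal to $\sqrt{2\pi\sigma^2}$, so the prefactor cancels.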
I wish to add a remark to Noam D. Elkies' beautiful answer. From the integral representation for $f$, putting $e^{-t}=s$ in the integral,$$f(-x)=1-x\int_0^\infty e^{-xte^{-t}} e^{-t}dt = 1-x\int_0^1 s^{sx}ds\, ,$$so that, for $x\to \infty$, $ f(-x)=o(1)$ is equivalent to $$\int_0^1 xu(s)^xds=1+o(1)\, ,$$ where $u\in C([0,1])$ is the function $u(s):=s^s$. As a matter of fact, since $0\le u(s)\le 1$ for all $s$ and $u(s)=1$ only for $s=0$ or $s=1$, it turns out that the limit only depends on $u'(0)$ and $u'(1)$. Since $u'(1)=1$, for any $\lambda < 1 < \mu$ there exists a $b < 1$ such that for all $s\in [b,1]$ there holds$$1+\mu(s-1) \le u(s)\le 1+\lambda(s-1)\, ,$$so that$$x\big(1+\mu(s-1)\big)^x \le xu(s)^x\le x\big(1+\lambda(s-1)\big)^x\, .$$Similarly, since $u'(0)=-\infty$, for any $\nu > 0$ there exists an $a > 0$ such that for all $s\in [0,a]$ $$u(s)\le1-\nu s\, ,$$ so $$xu(s)^x\le x\big( 1-\nu s\big) ^ x \, .$$ Moreover, since on any interval $[a,b]\subset\subset(0,1)$ the function $u$ is bounded away from $1$, it is clear that $\int_a^b xu(s)^xds=o(1)$ by uniform convergence to $0$. Integrating over $s\in [ 0,1]$, and recalling that $\lambda < 1 < \mu$ and $\nu > 0$ were arbitrary, the inequalities above plainly give $$\int_0^1 xu(s)^xds=\int_0^a x u(s)^xds+\int_a^b xu(s)^xds+\int_b^1 xu(s)^xds=1+o(1) \, ,$$ for $x\to \infty$.
This is one of those things I never expected to be hard until I tried to prove it. Why is the right permutohedron order (a.k.a. weak Bruhat order, a.k.a. weak order -- not to be confused with the strong Bruhat order) on the symmetric group $S_n$ a lattice? Details: Let $n$ be a nonnegative integer. Consider the symmetric group $S_n$, with multiplication defined by $\left(\sigma\pi\right)\left(i\right)=\sigma\left(\pi\left(i\right)\right)$ for all $\sigma$ and $\pi$ in $S_n$ and all $i \in \left\lbrace 1,2,\cdots ,n \right\rbrace$. The right permutohedron order is a partial order on the set $S_n$ and can be defined in the following equivalent ways: Two permutations $u$ and $v$ in $S_n$ satisfy $u \leq v$ in the right permutohedron order if and only if the length of the permutation $v^{-1} u$ equals the length of $v$ minus the length of $u$. Here, the length (also known as the "Coxeter length") of a permutation is its number of inversions. Two permutations $u$ and $v$ in $S_n$ satisfy $u \leq v$ in the right permutohedron order if and only if every pair $\left(i, j\right)$ of elements of $\{ 1, 2, \cdots, n \}$ such that $i < j$ and $u^{-1}\left(i\right) > u^{-1}\left(j\right)$ also satisfies $v^{-1}\left(i\right) > v^{-1}\left(j\right)$. (In more vivid terms, this condition states that whenever two integers $i$ and $j$ satisfy $i < j$ but $i$ stands to the right of $j$ in the one-line notation of the permutation $u$, the integer $i$ must also stand to the right of $j$ in the one-line notation of the permutation $v$.) A permutation $v \in S_n$ covers a permutation $u \in S_n$ in the right permutohedron order if and only if we have $v = u \cdot \left(i, i + 1\right)$ for some $i \in \{ 1, 2, \cdots, n - 1 \}$ satisfying $u\left(i\right) < u\left(i + 1\right)$. Here, $\left(i, i + 1\right)$ denotes the transposition switching $i$ with $i + 1$. (I have mostly quoted these definitions from a part of Sage documentation I've written a while ago.
A "left permutohedron order" also exists, but differs from the right one merely by swapping a permutation with its inverse.) It is easy to prove the equivalence of the above three definitions using nothing but elementary reasoning about inversions and bubblesort. This made me believe that everything about the permutohedron order is simple. Now I have read in some sources (which all give either no or badly accessible references) that the poset $S_n$ with the right permutohedron order is a lattice. This is related to the Tamari lattice. (Specifically, there is an injection from the Tamari lattice to the permutohedron-ordered $S_n$ sending each binary search tree to a certain 132-avoiding permutation obtained from a postfix reading of the tree, and there is a surjection in the other direction sending each permutation to its binary search tree. If I am not mistaken, these two maps form a Galois connection.) But I am not able to prove the lattice property! I see some obstructions to the existence of overly simple proofs: The strong Bruhat order is not a lattice. One might think that the meet of two permutations $u$ and $v$ will be a permutation $p$ whose coinversion set (= the set of all pairs $\left(i, j\right)$ of elements of $\{ 1, 2, \cdots, n \}$ such that $i < j$ and $p^{-1}\left(i\right) > p^{-1}\left(j\right)$) will be the intersection of the coinversion sets of $u$ and $v$. This is not the case. A permutation having such a coinversion set might not exist, and the meet has a smaller coinversion set. In particular, it is not always possible to obtain the meet of $u$ and $v$ by bubblesorting each of $u$ and $v$ without ever killing inversions which are common to $u$ and $v$. The permutohedron-order lattice is not distributive.
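One can at least machine-check the lattice property for small $n$ using definition 2 (coinversion-set containment). This brute-force sketch is something I'm adding for illustration, not a proof; the helper names are mine:

```python
from itertools import permutations

def coinv(p):
    """Coinversion set of a permutation p in one-line notation (values 1..n):
    pairs (i, j) with i < j but i standing to the right of j."""
    pos = {v: k for k, v in enumerate(p)}
    n = len(p)
    return frozenset((i, j) for i in range(1, n)
                     for j in range(i + 1, n + 1) if pos[i] > pos[j])

def is_lattice(n):
    """Check that every pair in the right permutohedron order on S_n has a
    meet (a greatest common lower bound); an order-reversing bijection of
    the weak order then yields joins as well."""
    perms = list(permutations(range(1, n + 1)))
    c = {p: coinv(p) for p in perms}
    for u in perms:
        for v in perms:
            lower = [w for w in perms if c[w] <= c[u] and c[w] <= c[v]]
            greatest = [w for w in lower
                        if all(c[z] <= c[w] for z in lower)]
            if len(greatest) != 1:
                return False
    return True

assert is_lattice(3) and is_lattice(4)

# The obstruction mentioned above: the intersection of two coinversion
# sets need not itself be a coinversion set. For u = 231 and v = 312 the
# intersection is {(1, 3)}, realized by no permutation, so their meet is 123.
u, v = (2, 3, 1), (3, 1, 2)
inter = coinv(u) & coinv(v)
assert all(coinv(p) != inter for p in permutations(range(1, 4)))
```

The final assert exhibits concretely why the naive "intersect the coinversion sets" candidate for the meet fails, exactly as claimed in the second bullet point.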
Question Let $$ \pi_{rm_c}(x) = \sum_{ \substack{ {n\leq x}\\{(n+a,P(\sqrt{n}))=1}}} 1-1, $$ where $P(x)$ is the product of all primes less than or equal to $x$ and $a$ is a random integer constrained to those values such that $(n+a,P(\sqrt{n}))\leq n$ for all $n\leq x$. What is the asymptotic behaviour of the expected value of $\pi_{rm_c}(x)$? Below follows some background for the question, a possible partial approach, and numerical results, the latter suggesting that the answer I seek is $\sim x/\log x$. I'm interested in any advancements towards a solution. Background The prime counting function $\pi(x)$ can be written in the form $$ \pi(x) = \sum_{ \substack{ {n\leq x}\\{(n,P(\sqrt{n}))=1}}} 1-1 \sim \frac{x}{\log x}. $$ We construct a random model with the same multiplicative structure as the primes in terms of $$ \pi_{rm}(x) = \sum_{ \substack{ {n\leq x}\\{(n+a,P(\sqrt{n}))=1}}} 1-1, $$ where $a$ initially is any random integer. Thus, $(n+a,P(\sqrt{n}))$ can take any value from $1$ to $P(\sqrt{n})$. The expected value of $\pi_{rm}(x)$ in this case is simply $$ \mathbf{E}\left[ \pi_{rm}(x) \right] = \sum_{n\leq x} W(\sqrt{n}) \sim 2 \operatorname{e}^{-\gamma} \frac{x}{\log x}, $$ where $W(x)= \prod_{p\leq x} \left(1-1/p\right)$, and the asymptotic equality follows from Mertens' product theorem. The variance in this case satisfies $\operatorname{Var}(\pi_{rm}(x)) < \sum_{n\leq x} W(\sqrt{n})(1-W(\sqrt{n}))$ for $x\geq 2$. Consider now the fact that the constraint $(n, P(\sqrt{n}))\leq n$ is satisfied for the primes. The prime counting function $\pi(x)$ therefore lies in a subspace of $\pi_{rm}(x)$ corresponding to those values of $a$ such that $(n+a,P(\sqrt{n}))\leq n$ for all $n\leq x$. This gives us the random model $\pi_{rm_c}(x)$ in the question. Legendre sieve perspective Can the Legendre sieve be a possible approach? Let $A(a) = \left\{ m: 1+a \leq m \leq x + a \right\}$ where $a$ is an integer such that $(n+a,P(\sqrt{n}))\leq n$ for all $n\leq x$.
Also, let $A_d(a)$ be the set of integers in $A(a)$ divisible by $d$. In general, when $d\leq x$, $|A_d(a)|$ take either of the values $\lfloor x/d \rfloor=x/d -\{x/d\}$ or $\lfloor x/d \rfloor + 1 = x/d -\{x/d\}+1$. When $d>x$, $|A_d(a)| = \lfloor x/d \rfloor = 0$, meaning we only need to consider $d\leq x$. We then obtain the Legendre identity: \begin{align} S(A(a),P(\sqrt{x})) &= \sum_{\substack{ {d\mid P(\sqrt{x})}\\ {d\leq x}}}\mu(d)|A_d(a)|\\ &= x \sum_{\substack{ {d\mid P(\sqrt{x})}\\ {d\leq x} } } \frac{\mu(d) }{d} - \sum_{\substack{ {d\mid P(\sqrt{x})}\\ {d\leq x} } } \mu(d) \left\{ \frac{x}{d}\right\} + \sum_{\substack{ {d\mid P(\sqrt{x}) }\\ {d\leq x} \\ {|\mathcal{A}_d(a)| = \lfloor x/d \rfloor+1 } }} \mu(d). \end{align} In the case of the primes, $a=0$ and $|A_d(0)| = \lfloor x/d \rfloor$. The last term in the previous equation becomes zero and we obtain \begin{align} \pi(x) - \pi(\sqrt{x}) + 1 &= S(A(0),P(\sqrt{x})) \\ &= x \sum_{\substack{ {d\mid P(\sqrt{x})}\\ {d\leq x} } } \frac{\mu(d) }{d} - \sum_{\substack{ {d\mid P(\sqrt{x})}\\ {d\leq x} } } \mu(d) \left\{ \frac{x}{d}\right\}. \end{align} From the prime number theorem the left hand side of this equation is $\sim x/\log x$. Also, as shown by @Lucia in the MO post Asymptotic limit of truncated Legendre sieve, we have that $$ x \sum_{\substack{ {d\mid P(\sqrt{x})}\\ {d\leq x} } } \frac{\mu(d) }{d} \sim \frac{x}{\log x}. $$ It therefore follows that $$ \sum_{\substack{ {d\mid P(\sqrt{x})}\\ {d\leq x} } } \mu(d) \left\{ \frac{x}{d}\right\} = o\left(\frac{x}{\log x}\right). $$ To obtain an asymptotic estimate of $\mathbf{E}[S(A(a),P(\sqrt{x}))]$ one therefore needs to evaluate or bound the expected value of $$ \sum_{\substack{ {d\mid P(\sqrt{x}) }\\ {d\leq x} \\ {|\mathcal{A}_d(a)| = \lfloor x/d \rfloor+1 } }} \mu(d). $$ Numerical results Consider the random models with and without the constraint on $a$ for $x=p_{41}^2-1$. 
For this value of $x$ the sample space of the unconstrained model contains more than $1.6 \times 10^{68}$ elements, while the sample space of the constrained model contains only 88 elements. In Figure A we see 88 realisations of each of the two error terms $\pi_{rm}(x) - \mathbf{E}[\pi_{rm}(x)]$ (dark gray) and $\pi_{rm_c}(x) - \mathbf{E}[\pi_{rm}(x)]$ (light gray). The black line shows $\operatorname{li}(x)-\mathbf{E}[\pi_{rm}(x)]$. In Figure B we see the 88 realisations of the error term $\pi_{rm_c}(x) - \operatorname{li}(x)$ (light gray). The mean of the 88 realisations is displayed as dark gray. Also shown are $\pi(x) - \operatorname{li}(x)$ (black) and $R(x) - \operatorname{li}(x)$ (white), where $R(x)$ is the Riemann prime counting function. What seems to be the case is that the constraint on $a$ forces all elements in the constrained random model to be strongly correlated. This suggests that perhaps not only the expected value of the constrained random model is $\sim x/\log x$, but that all elements in this model have the same asymptotic mean.
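The truncated Legendre identity from the "Legendre sieve perspective" above can be checked numerically in the case $a=0$, where the last term vanishes and $S(A(0),P(\sqrt{x})) = \pi(x)-\pi(\sqrt{x})+1$. A small sketch I'm adding for illustration (`primes_upto` and `legendre_sum` are my own helper names):

```python
import math

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, b in enumerate(sieve) if b]

def legendre_sum(x):
    """Sum of mu(d) * floor(x/d) over the d <= x with d | P(sqrt(x)),
    i.e. squarefree d whose prime factors are all <= sqrt(x); terms with
    d > x contribute floor(x/d) = 0, so the truncation is exact."""
    small = primes_upto(math.isqrt(x))
    divisors = [(1, 1)]                      # pairs (d, mu(d))
    for p in small:
        divisors += [(d * p, -m) for d, m in divisors if d * p <= x]
    return sum(m * (x // d) for d, m in divisors)

pi = lambda t: len(primes_upto(t))
x = 100
# Legendre's identity: S(A(0), P(sqrt(x))) counts 1 and the primes in (sqrt(x), x]
assert legendre_sum(x) == pi(x) - pi(math.isqrt(x)) + 1   # both sides equal 22
```

This only verifies the exact identity for $a=0$; estimating the expected value of the extra $\mu(d)$ correction term for random admissible $a$ is the open part of the question.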
Impact Factor 2019: 1.204 Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Papers contributing the following are encouraged: - solutions by mathematical methods of problems emerging in computer science - solutions of mathematical problems inspired by computer science. Topics of interest include (but are not restricted to): theory of computing, complexity theory, algorithms and data structures, computational aspects of combinatorics and graph theory, programming language theory, theoretical aspects of programming languages, computer-aided verification, computer science logic, database theory, logic programming, automated deduction, formal languages and automata theory, concurrency and distributed computing, cryptography and security, theoretical issues in artificial intelligence, machine learning, pattern recognition, algorithmic game theory, bioinformatics and computational biology, quantum computing, probabilistic methods, and algebraic and categorical methods. Article Type: Research Article Abstract: We consider the inductive inference model of Gold [15]. Suppose we are given a set of functions that are learnable with a certain number of mind changes and errors. What can we consistently predict about those functions if we are allowed fewer mind changes or errors? In [20] we relaxed the notion of exact learning by considering some higher level properties of the input-output behavior of a given function. In this context, a learner produces a program that describes the property of a given function. Can we predict generic properties such as threshold or modality if we allow a smaller number of mind changes or errors?
These questions were completely answered in [20] when the learner is restricted to a single IIM. In this paper we allow a team of IIMs to collaborate in the learning process. The learning is considered to be successful if any one of the team members succeeds. A motivation for this extension is to understand and characterize properties that are learnable for a given set of functions in a team environment. Keywords: Inductive Inference, properties of functions, mind changes, errors, learning, teams DOI: 10.3233/FI-2013-833 Citation: Fundamenta Informaticae, vol. 124, no. 3, pp. 251-270, 2013 Article Type: Research Article Abstract: A schema algebra comprises operations on database schemata for a given data model. Such algebras are useful in database design as well as in schema integration. In this article we address the necessary theoretical underpinnings by introducing a novel notion of conceptual schema morphism that captures at the same time the conceptual schema and its semantics by means of the set of valid instances. This leads to a category of schemata that is finitely complete and co-complete. This is the basis for a notion of completeness of schema algebras, if it captures all universal constructions in the category of schemata. We exemplify this notion of completeness for a recently introduced particular schema algebra. Keywords: database design, schema category, completeness, schema algebra DOI: 10.3233/FI-2013-834 Citation: Fundamenta Informaticae, vol. 124, no. 3, pp. 271-295, 2013 Article Type: Research Article Abstract: Networks are known to be prone to node or link failures. A central issue in the analysis of networks is the assessment of their stability and reliability. The main aim is to understand, predict, and possibly even control the behavior of a networked system under attacks or dysfunctions of any type.
A central concept used to assess the stability and robustness of the performance of a network under failures is that of vulnerability. A network is usually represented by an undirected simple graph where vertices represent processors and edges represent links between processors. Different approaches to properly define a measure for graph vulnerability have been proposed so far. In this paper, we study the vulnerability of cycles and related graphs to the failure of individual vertices, using a measure called residual closeness which provides a more sensitive characterization of the graph than some other well-known vulnerability measures. Keywords: Graph vulnerability, Closeness, Network design and communication, Stability, Communication network, Cycles DOI: 10.3233/FI-2013-835 Citation: Fundamenta Informaticae, vol. 124, no. 3, pp. 297-307, 2013

Authors: Simson, Daniel Article Type: Research Article Abstract: Following the spectral Coxeter analysis of matrix morsifications for Dynkin diagrams, the spectral graph theory, a graph coloring technique, and algebraic methods in graph theory, we continue our study of the category $\mathcal{UB}igr_n$ of loop-free edge-bipartite (signed) graphs $\Delta$, with $n \ge 2$ vertices, by means of the Coxeter number $c_\Delta$, the Coxeter spectrum $\mathrm{specc}_\Delta$ of $\Delta$, that is, the spectrum of the Coxeter polynomial $\mathrm{cox}_\Delta(t) \in \mathbb{Z}[t]$, and the $\mathbb{Z}$-bilinear Gram form $b_\Delta : \mathbb{Z}^n \times \mathbb{Z}^n \to \mathbb{Z}$ of $\Delta$ [SIAM J. Discrete Math. 27 (2013)]. Our main inspiration for the study comes from the representation theory of posets, groups and algebras, Lie theory, and Diophantine geometry problems. We show that the Coxeter spectral classification of connected edge-bipartite graphs $\Delta$ in $\mathcal{UB}igr_n$ reduces to the Coxeter spectral classification of rational matrix morsifications $A \in \widehat{\mathrm{Mor}}_{D_\Delta}$ for a simply-laced Dynkin diagram $D_\Delta$ associated with $\Delta$.
Given $\Delta$ in $\mathcal{UB}igr_n$, we study the isotropy subgroup $Gl(n,\mathbb{Z})_\Delta$ of $Gl(n,\mathbb{Z})$ that contains the Weyl group $\mathbb{W}_\Delta$ and acts on the set $\widehat{\mathrm{Mor}}_\Delta$ of rational matrix morsifications $A$ of $\Delta$ in such a way that the map $A \mapsto (\mathrm{specc}_A, \det A, c_\Delta)$ is $Gl(n,\mathbb{Z})_\Delta$-invariant. It is shown that, for $n \le 6$, $\mathrm{specc}_\Delta$ is the spectrum of one of the Coxeter polynomials listed in Tables 3.11-3.11(a) (we determine them by computer search using symbolic and numeric computation). The question whether two connected positive edge-bipartite graphs $\Delta, \Delta'$ in $\mathcal{UB}igr_n$ with $\mathrm{specc}_\Delta = \mathrm{specc}_{\Delta'}$ are $\mathbb{Z}$-bilinear equivalent is studied in the paper. The problem whether any $\mathbb{Z}$-invertible matrix $A \in M_n(\mathbb{Z})$ is $\mathbb{Z}$-congruent with its transpose $A^{tr}$ is also discussed. Keywords: edge-bipartite graph, toroidal mesh algorithm, inflation algorithm, matrix morsification, Dynkin diagram, Coxeter spectrum, Weyl group, Euler bilinear form, mesh geometry of root orbits, sand-glass tube DOI: 10.3233/FI-2013-836 Citation: Fundamenta Informaticae, vol. 124, no. 3, pp. 309-338, 2013

Authors: Simson, Daniel Article Type: Research Article Abstract: By applying symbolic and numerical computation and the spectral Coxeter analysis technique of matrix morsifications introduced in our previous paper [Fund. Inform. 124 (2013)], we present a complete algorithmic classification of the rational morsifications and their mesh geometries of root orbits for the Dynkin diagram $\mathbb{D}_4$. The structure of the isotropy group $Gl(4, \mathbb{Z})_{\mathbb{D}_4}$ of $\mathbb{D}_4$ is also studied. As a byproduct of our technique we show that, given a connected loop-free positive edge-bipartite graph $\Delta$, with $n \ge 4$ vertices (in the sense of our paper [SIAM J. Discrete Math.
27 (2013)]) and the positive definite Gram unit form $q_\Delta : \mathbb{Z}^n \rightarrow \mathbb{Z}$, any positive integer $d \ge 1$ can be presented as $d = q_\Delta(v)$, with $v \in \mathbb{Z}^n$. In case $n = 3$, a positive integer $d \ge 1$ can be presented as $d = q_\Delta(v)$, with $v \in \mathbb{Z}^n$, if and only if $d$ is not of the form $4^a(16 \cdot b + 14)$, where $a$ and $b$ are non-negative integers. Keywords: edge-bipartite graph, toroidal mesh algorithm, inflation algorithm, matrix morsification, Dynkin diagram, Coxeter spectrum, Weyl group, Euler bilinear form, mesh geometry of root orbits DOI: 10.3233/FI-2013-837 Citation: Fundamenta Informaticae, vol. 124, no. 3, pp. 339-364, 2013
I'm trying to create a diagram like this with LaTeX. (I'm trying to learn to write LaTeX by myself.) The best code I can do is this:

\documentclass{article}
\usepackage[all]{xy}   % must be loaded in the preamble, before \begin{document}
\usepackage{amssymb}   % needed for \mathbb
\begin{document}
$$\xymatrix{
\overset{\supset \ker(f)}{\ker(w)\subset E} \ar[d]_f \ar[dr]_{\pi_f} \ar[r]^w & \mathbb{K} \\
F & E/\ker(f) \simeq \mathrm{Im}(f) \ar[u]_{w'}
}$$
\end{document}

That produces: But it isn't the same, because the arrows must start at "E" and not in the middle of $\ker(w) \subset E$, and it's obvious that \overset{} isn't the appropriate command to write \subset ker(f) as I want. Thank you for your attention, and sorry for my English.
Just a small thought popped up in my mind; and now I'm stuck on it. Any idea on how to find the value of $\tan 20^\circ$? I tried doing it by using the multiple angle formulas, but I didn't get an answer... How do I proceed? $20°$ is a third of $60°$, for which the value of the tangent is well known to be $\sqrt3$. Let us denote $x:=\tan(20°)$. Then, by the addition formula, $$\tan(40°)=\frac{2x}{1-x^2},$$ and $$\tan(60°)=\frac{x+\dfrac{2x}{1-x^2}}{1-x\dfrac{2x}{1-x^2}}=\frac{3x-x^3}{1-3x^2}=\sqrt3,$$ or $$x^3-3\sqrt3x^2-3x+\sqrt3=0.$$ It turns out that this cubic equation cannot be solved by real radicals, so you need to use numerical methods such as Newton's. Starting from $x=\dfrac\pi9$ ($20°$ in radians), the iterates are $0.349065850399\cdots\\ 0.364116885850\cdots\\ 0.363970248087\cdots\\ 0.363970234266\cdots\\ 0.363970234266\cdots\\ 0.363970234266\cdots\\ \cdots$ $\sin(20^\circ)$ and $\cos(20^\circ)$ can both be found using the triple-angle formulas. (I'm assuming your "20" was in degrees, not radians.) Unfortunately, both of those will end up with you having to solve a cubic; you can use Cardano's formula to do that. You can't solve it with just quadratics and algebra, for if you could, it'd be possible to trisect a 60-degree angle; but that is in fact exactly the example generally used to show that trisection is impossible, because $\cos(20^{\circ})$ is not a surd. We have that $\tan 20^{\circ} = \dfrac{\sin 20^{\circ}}{\cos 20^{\circ}} = \dfrac{\sin 20^{\circ}}{\sin 70^{\circ}}$. This list here gives you the exact value of the sine of every integer angle between $1$ and $90$. This allows you to compute $\tan 20^{\circ}$. Whilst this section doesn't answer your question the way you want it, it does provide an alternative to brutally disgusting surds or horrible calculators. You can construct a $20$-$70$-$90$ triangle yourself using a protractor and a ruler. Then take the ratio of the sides to get $\tan 20^{\circ}$.
This won't be accurate, but you may find it amusing to do. :-) $\cos(3t) = 4 \cos(t)^3 - 3 \cos(t)$ $\sin(3t) = 3 \sin(t) - 4 \sin(t)^3$. Substituting $t = 20^\circ$ gives cubics that can be solved in closed form if you allow the operation of taking cube roots. I'm assuming you mean via multiple angle formulas, and I'm assuming you want an exact answer. The simple answer is, it would require a ridiculous amount of work. You can write: $$ \tan \left(45^\circ-30^\circ+5^\circ\right)$$ You could apply the addition formula twice, and reduce the problem to finding $\tan 5^\circ$, which still isn't easily solvable. Here's a fun site to give you an idea of how bad these things can get :) Note that if you are happy with an approximate solution, you should look into the CORDIC algorithm, which is how many hand-held calculators still do trigonometry :P Since, obviously, numerical calculations would be required, I cannot resist the pleasure of reusing a 1400-year-old approximation proposed in the Mahabhaskariya of Bhaskara I, a seventh-century Indian mathematician: $$\sin(x) \simeq \frac{16 (\pi -x) x}{5 \pi ^2-4 (\pi -x) x}$$ A similar one is $$\cos(x)\simeq \frac{\pi^2-4x^2}{x^2+\pi^2}$$ Applied to $x=\frac \pi 9$, this leads to $$\tan(\frac \pi 9)\simeq \frac{10496}{28721} \approx 0.365447$$ Using this as a starting point $x_0$ for Newton's method, as Yves Daoust proposed, the iterative scheme will converge in a couple of iterations: $$x_1 \approx 0.363971632203$$ $$x_2\approx0.363970234267$$ $$x_3\approx0.363970234266$$ Another solution would be to use a Taylor expansion around $x=a$ $$\tan(x)=\tan (a)+(x-a) \left(\tan ^2(a)+1\right)+(x-a)^2 \left(\tan ^3(a)+\tan (a)\right)+$$ $$(x-a)^3 \left(\tan ^4(a)+\frac{4 \tan ^2(a)}{3}+\frac{1}{3}\right)+$$ $$(x-a)^4 \left(\tan ^5(a)+\frac{5 \tan ^3(a)}{3}+\frac{2 \tan (a)}{3}\right)+O\left((x-a)^5\right)$$ and to use $a=\frac \pi 8$, for which the tangent is $\sqrt 2 -1$ (easily obtained using the double angle formula).
Using one term, the result is $\approx 0.363094$; with two terms $\approx 0.364018$; with three terms $\approx 0.363969$; with four terms $\approx 0.363970$. Another solution is based on Pade approximants. The simplest would be $$\tan(x)\simeq \frac{(x-a)+\tan (a)}{1-(x-a) \tan (a)}$$ Using $a=\frac \pi 8$ would give $\approx 0.364002$. The next Pade approximant would be $$\tan(x)\simeq \frac{(x-a)+\tan(a)-\frac{1}{3} (x-a)^2 \tan (a)}{1-(x-a) \tan (a)-\frac{1}{3} (x-a)^2}$$ which would give $\approx 0.363970238$.
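Since several of the answers above fall back on numerics anyway, the Newton iteration on the cubic $x^3-3\sqrt3x^2-3x+\sqrt3=0$ is a few lines of Python (a sketch, standard library only):

```python
import math

# Newton's method on f(x) = x^3 - 3*sqrt(3)*x^2 - 3*x + sqrt(3),
# whose root near 0.36 is tan(20 degrees).
s3 = math.sqrt(3)
f = lambda x: x**3 - 3*s3*x**2 - 3*x + s3
fp = lambda x: 3*x**2 - 6*s3*x - 3        # derivative f'(x)

x = math.pi / 9            # 20 degrees in radians, a convenient first guess
for _ in range(6):
    x -= f(x) / fp(x)      # x_{k+1} = x_k - f(x_k)/f'(x_k)

# x is now approximately 0.363970234266, matching the iterates listed above
```

The first step already reproduces the iterate 0.364116885850… from the accepted answer, and six steps are far more than needed for double precision.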
In the lecture notes accompanying an introductory course in relativistic quantum mechanics, the Klein-Gordon probability density and current are defined as: $$ \begin{eqnarray} P & = & \dfrac{i\hbar}{2mc^2}\left(\Phi^*\dfrac{\partial\Phi}{\partial t}-\Phi\dfrac{\partial\Phi^*}{\partial t}\right) \\ \vec{j} &=& \dfrac{\hbar}{2mi}\left(\Phi^*\vec{\nabla}\Phi-\Phi\vec{\nabla}\Phi^*\right) \end{eqnarray} $$ together with the statement that: One can show that in the non-relativistic limit, the known expressions for the probability density and current are recovered. The 'known' expressions are: $$ \begin{eqnarray} \rho &=& \Psi^*\Psi \\ \vec{j} &=& \dfrac{\hbar}{2mi}\left(\Psi^*\vec{\nabla}\Psi-\Psi\vec{\nabla}\Psi^*\right) \end{eqnarray} $$ When taking a 'non-relativistic limit', I am used to taking the limit $c \to \infty$, which does give the right result for $\vec{j}$, but for the density produces $P=0$. How should one then take said limit to recover the non-relativistic equations?
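For what it's worth, the limit that recovers $\rho=\Psi^*\Psi$ is usually taken not by sending $c \to \infty$ in $P$ directly, but by first splitting off the rest-energy oscillation (a sketch): write $$\Phi(\vec{x},t)=\Psi(\vec{x},t)\,e^{-imc^2t/\hbar},\qquad \left|\hbar\,\frac{\partial\Psi}{\partial t}\right|\ll mc^2|\Psi|,$$ so that $$\Phi^*\frac{\partial\Phi}{\partial t}-\Phi\frac{\partial\Phi^*}{\partial t}=\Psi^*\frac{\partial\Psi}{\partial t}-\Psi\frac{\partial\Psi^*}{\partial t}-\frac{2imc^2}{\hbar}\Psi^*\Psi,$$ and hence $$P=\frac{i\hbar}{2mc^2}\left(\Psi^*\frac{\partial\Psi}{\partial t}-\Psi\frac{\partial\Psi^*}{\partial t}\right)+\Psi^*\Psi\;\longrightarrow\;\Psi^*\Psi\quad(c\to\infty),$$ while $\vec{j}$ is unchanged because the phase factor cancels between $\Psi^*$ and $\Psi$.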
GolfScript (23 chars) {:^((1${\.**2^?%}+*}:f; The sentinel result for a non-existent inverse is 0. This is a simple application of Euler's theorem. \$x^{\varphi(2^n)} \equiv 1 \pmod {2^n}\$, so \$x^{-1} \equiv x^{2^{n-1}-1} \pmod {2^n}\$ Unfortunately that's rather too big an exponential to compute directly, so we have to use a loop and do modular reduction inside the loop. The iterative step is \$x^{2^k-1} = \left(x^{2^{k-1}-1}\right)^2 \times x\$ and we have a choice of base case: either k=1 with {1\:^(@{\.**2^?%}+*}:f; or k=2 with {:^((1${\.**2^?%}+*}:f; I'm working on another approach, but the sentinel is more difficult. The key observation is that we can build the inverse up bit by bit: if \$xy \equiv 1 \pmod{2^{k-1}}\$ then \$xy \in \{ 1, 1 + 2^{k-1} \} \pmod{2^k}\$, and if \$x\$ is odd we have \$x(y + xy-1) \equiv 1 \pmod{2^k}\$. (If you're not convinced, check the two cases separately). So we can start at any suitable base case and apply the transformation \$y' = (x+1)y - 1\$ a suitable number of times. Since \$0x \equiv 1 \pmod {2^0}\$ we get, by induction \$x\left(\frac{1 - (x+1)^n}{x}\right) \equiv 1 \pmod {2^n}\$ where the inverse is the sum of a geometric sequence. I've shown the derivation to avoid the rabbit-out-of-a-hat effect: given this expression, it's easy to see that (given that the bracketed value is an integer, which follows from its derivation as a sum of an integer sequence) the product on the left must be in the right equivalence class if \$x+1\$ is even. That gives the 19-char function {1$)1$?@/~)2@?%}:f; which gives correct answers for inputs which have an inverse. However, it's not so simple when \$x\$ is even. One potentially interesting option I've found is to add x&1 rather than 1. {1$.1&+1$?@/~)2@?%}:f; This seems to give sentinel values of either \$0\$ or \$2^{n-1}\$, but I haven't yet proved that. 
Taking that one step further, we can ensure a sentinel of \$0\$ for even numbers by changing the expression \$1 - (x+1)^n\$ into \$1 - 1^n\$: {1$.1&*)1$?@/~)2@?%}:f; That ties with the direct application of Euler's theorem for code length, but is going to have worse performance for large \$n\$. If we take the arguments the other way round, as n x f, we can save one character and get to 22 chars: {..1&*)2$?\/~)2@?%}:f;
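The lifting step $y' = (x+1)y - 1$ is easy to sanity-check in Python (a sketch of the mathematics, not a port of the GolfScript):

```python
def inv_mod_pow2(x, n):
    # Lift y with x*y == 1 (mod 2**k) one bit per step via y' = (x+1)*y - 1;
    # after n steps, x*y == 1 (mod 2**n) whenever x is odd.
    y = 0
    for _ in range(n):
        y = (x + 1) * y - 1
    return y % 2**n if x % 2 else 0    # 0 as the sentinel for even x

# cross-check against the Euler's-theorem formula x**(2**(n-1) - 1) mod 2**n
n = 8
for x in range(1, 2**n, 2):
    assert inv_mod_pow2(x, n) == pow(x, 2**(n - 1) - 1, 2**n)
    assert x * inv_mod_pow2(x, n) % 2**n == 1
```

Both routes must agree, since the inverse modulo $2^n$ is unique when it exists.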
Sufficient conditions for a discrete energy spectrum: In one dimension, Sturm–Liouville theory implies that the spectrum is purely discrete provided that the system is restricted to a finite interval $[a,b]$ (with appropriate boundary conditions). I assume (but don't know) this can be extended to the case of a system restricted to a finite volume in higher dimensions. The next two conditions I found in this paper: In dimension $n\geq 3$ the Cwikel–Lieb–Rozenblum bound implies that the spectrum will be discrete for energies for which the classical phase-space volume of $\{(\vec{x},\vec{p})|\vec{p}^2+V(\vec{x})<E\}$ is finite. As a consequence of the Golden–Thompson inequality, the spectrum will be purely discrete in any system where the classical partition function is finite (at some value of temperature). Conditions for the spectrum to have a continuous part or to be purely continuous seem a lot more complicated. The main purpose of the paper cited above was to provide a counter-example to the 'rule-of-thumb' I gave in the question. Namely, it shows that the Hamiltonian$$ H = - \frac{\partial^2}{\partial x^2} -\frac{\partial^2}{\partial y^2} + x^2 y^2 $$has a purely discrete spectrum even though the volume of $\{\vec{x}|V(\vec{x})<E \}$ is infinite for any $E>0$. A reference suggested by yuggib (thanks!), and references therein, give some more general criteria for continuous spectra. It looks like a difficult problem, and research on classes of Hamiltonians with continuous spectra seems to be ongoing. Here are a few results: Consider the Hamiltonian$$ H = -\nabla^2 +V(\vec{x}) $$and suppose that $|V(\vec{x})|<C(1+|\vec{x}|)^{-\alpha}$ for some constants $C$ and $\alpha>0$ (i.e. the potential decays to zero at infinity). Then: If $\alpha >1$ then the Hamiltonian has purely continuous spectrum on $E\in [0,\infty)$. On the other hand, there exist $V(\vec{x})$ satisfying the above with $\alpha \leq \frac{1}{2}$ for which the spectrum is purely discrete.
The interesting stuff seems to happen for $\frac{1}{2}< \alpha \leq 1$. Here there are examples of continuous spectra on $E\in [0,\infty)$ with embedded discrete eigenvalues. The most well known seem to be the Wigner–von Neumann potentials (e.g. $V(r) \sim \frac{\sin 2 r}{r}$, which has an eigenvalue at $E=1$ embedded in the continuous spectrum). It looks like for $\frac{1}{2}< \alpha \leq 1$, while the spectrum may no longer be purely continuous, the continuous part on $E\in [0,\infty)$ is always preserved. However, the proofs I have found of this always make additional assumptions, so I'm not sure if this has been proved in general. (For example, yuggib's reference proves this result in 1D, and references therein based on scattering theory prove it in any dimension if $V(\vec{x})$ is differentiable and $|\frac{\partial V}{\partial x^i}|<C_1 (1+|\vec{x}|)^{-3/2-\epsilon}$ for some $C_1>0$ and all $i=1,\ldots,n$.) Finally, the assumption that the potential must decay to zero as $|\vec{x}|\to \infty$ can be relaxed (in some cases) to the assumption that it decays in this way only on some cone of $\mathbb{R}^n$. Here 'cone' is defined in the linear algebra sense. See, for example, this paper.
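The $x^2y^2$ counterexample is easy to probe numerically. The sketch below (assuming SciPy is available; a finite-difference Laplacian on a truncated box, which is itself an extra approximation) produces a handful of well-separated low-lying eigenvalues even though $\{V<E\}$ has infinite volume:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# H = -d^2/dx^2 - d^2/dy^2 + x^2 y^2 on an L x L box with Dirichlet walls.
n, L = 60, 10.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
I = sp.identity(n)
T = -(sp.kron(D2, I) + sp.kron(I, D2))            # 2D kinetic term
V = sp.diags((x[:, None]**2 * x[None, :]**2).ravel())
H = (T + V).tocsc()

# shift-invert about 0 to pull out the smallest eigenvalues
evals = np.sort(eigsh(H, k=6, sigma=0, which='LM',
                      return_eigenvectors=False))
# evals: isolated positive eigenvalues; a Gaussian variational estimate
# puts the true ground state near 1.2, and the grid value lands close by
```

The discreteness seen here is suggestive rather than conclusive, of course, since any finite box forces a discrete grid spectrum; the point of the paper is that it survives as $L \to \infty$.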
I think I understood the main idea behind this algorithm: We have the spatial coordinates of points that we want to bisect. We find a line (or, in general, a hyperplane) $L$ through these points, such that it is a total least squares fit of the nodes. We project the points onto this line. We find the "median projection", which is then used to split the points into two partitions: that is, the hyperplane perpendicular to $L$ that goes through the "median projection" divides the points into two partitions. Nonetheless, I have a doubt regarding the theory behind the inertial partitioning algorithm. Now, it turns out that the idea above can be framed as an eigenvalue-eigenvector problem (as for the spectral partitioning algorithm). More specifically, it turns out, after a derivation (see [2]), that we want to minimise the quantity $$\vec{u}^T M \vec{u} = \lambda,$$ i.e. find $\vec{u}$ such that $\lambda$ is minimised. According to [2], $u=[a, b]$ is the unit eigenvector of $M$ corresponding to the smallest eigenvalue. The fact that $\vec{u}$ is a unit eigenvector, I suppose, is related to the fact that the author of [2] assumes (w.l.o.g.) that $a^2 + b^2 = 1$. Why does the author assume $a^2 + b^2 = 1$? Why would that be convenient? Why is the assumption w.l.o.g.? Note that $-\frac{a}{b}$ is the slope of the line $L$. Given the assumption $a^2 + b^2 = 1$, I understand that $\vec{u} = [a, b]$ is a unit eigenvector, because $\|\vec{u}\| = \sqrt{a^2 + b^2} = 1 \iff a^2 + b^2 = 1$. Why is $\vec{u}$ the eigenvector corresponding to the smallest eigenvalue of $M$? Why do we care about the smallest eigenvalue of $M$? Once we have found $\vec{u}$, we have the slope of the line. We can also retrieve the center of mass, that is $(\bar{x},\bar{y})$, which is used to formulate the equation of the line $L$: it is basically an average of the points.
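For concreteness, here is a NumPy sketch of the whole procedure, under the convention (an assumption on my part, matching the question) that $\vec{u}=[a,b]$ is the unit *normal* of $L$, so the line itself runs along the other eigenvector:

```python
import numpy as np

# Inertial bisection sketch: total-least-squares line through the points,
# project onto it, split at the median projection.
rng = np.random.default_rng(0)
pts = rng.normal(size=(101, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

center = pts.mean(axis=0)        # center of mass (x-bar, y-bar)
d = pts - center
M = d.T @ d                      # 2x2 inertia matrix of the point cloud

vals, vecs = np.linalg.eigh(M)   # eigh: eigenvalues in ascending order
u = vecs[:, 0]                   # unit normal of L (smallest eigenvalue):
                                 # minimizes u^T M u, the sum of squared
                                 # perpendicular distances to L
line_dir = vecs[:, 1]            # L itself: direction of maximal spread

t = d @ line_dir                 # signed projections onto L
half = t <= np.median(t)
left, right = pts[half], pts[~half]
```

This also illustrates why $a^2+b^2=1$ is convenient: $\vec{u}^T M\vec{u}$ scales quadratically with $\|\vec{u}\|$, so without the unit-norm constraint the minimum would trivially be $\vec{u}=0$; fixing $\|\vec{u}\|=1$ loses no generality because only the direction of the normal matters.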
Given a polynomial with real coefficients, is there a method (e.g. from algebra or complex analysis) to calculate the number of complex zeros with a specified real part? Background. This question is motivated by my tests related to this problem. Let $p>3$ be a prime number. Let $G_p(x)=(x+1)^p-x^p-1$, and let $$F_p(x)=\frac{(x+1)^p-x^p-1}{px(x+1)(x^2+x+1)^{n_p}}$$ where the exponent $n_p$ is equal to $1$ (resp. $2$) when $p\equiv-1\pmod 6$ (resp. $p\equiv1\pmod 6$). The answer by Lord Shark the Unknown (loc. linked) implies that $F_p(x)$ is a monic polynomial with integer coefficients. The degree of $F_p$ is equal to $6\lfloor(p-3)/6\rfloor$. I can show that the complex zeros of $F_p(x)$ come in groups of six, each of the form $$\alpha,\ -\alpha-1,\ 1/\alpha,\ -1/(\alpha+1),\ -\alpha/(\alpha+1),\ -(\alpha+1)/\alpha,$$ that is, orbits of a familiar group (isomorphic to $S_3$) of fractional linear transformations. My conjecture. Exactly one third of the zeros of $F_p(x)$ have real part equal to $-1/2$. I tested this with Mathematica for a few of the smallest primes and it seems to hold. Also, each sextet of zeros of the above form seems to be stable under complex conjugation, and seems to contain a complex conjugate pair of numbers with real part $=-1/2$. Anyway, I am curious about the number of zeros $z=s+it$ of the polynomial $F_p(x)$ on the line $s=-1/2$. Summary and thoughts. Any general method or formula is welcome, but I will be extra grateful if you want to test a method on the polynomial $G_p(x)$ or $F_p(x)$ :-) My first idea was to try the following: Given a polynomial $P(x)=\prod_i(x-z_i)$, is there a way of getting $R(x):=\prod_i(x-z_i-\overline{z_i})$? If this can be done, then we get the answer by calculating the multiplicity of $-1$ as a zero of $R(x)$. Maybe a method for calculating the number of real zeros can be used, with a suitable substitution that maps the real axis to the line $s=-1/2$ (I need to check on this)?
Of course, if you can prove that $F_p(x)$ is irreducible it is better that you post the answer to the linked question. The previous bounty expired, but that can be fixed.
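The conjecture can at least be checked numerically for small $p$ (a sketch; floating-point root-finding, so the test for $\Re z = -1/2$ uses a tolerance rather than an exact comparison):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def F_coeffs(p):
    # G_p(x) = (x+1)^p - x^p - 1: slicing off degree p and fixing the
    # constant term implements the two subtractions exactly.
    G = P.polypow([1.0, 1.0], p, maxpower=p)[:p]
    G[0] -= 1.0
    n_p = 2 if p % 6 == 1 else 1
    div = P.polymul([0, p], P.polymul([1, 1], P.polypow([1, 1, 1], n_p)))
    quo, rem = P.polydiv(G, div)
    assert np.allclose(rem, 0)          # the division is exact
    return quo

for p in [11, 13, 17, 19, 23]:
    F = F_coeffs(p)
    roots = P.polyroots(F)
    on_line = int(np.sum(np.abs(roots.real + 0.5) < 1e-6))
    print(p, len(F) - 1, on_line)       # conjecture: on_line == (deg F)/3
```

This reproduces the degree formula $6\lfloor(p-3)/6\rfloor$ and, for these primes, the one-third count, consistent with the Mathematica tests mentioned above.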
First, let me say I'm not sure what is meant by a BEC with $T\gt 0$. Condensation is a finite temperature phenomenon, which occurs due to the presence of pair-wise interactions (generally attractive, but pairing can happen even for repulsive potentials) in a many-body system. For instance, in a superconductor below some critical temperature $T_c$, electrons with opposite momenta and spin (s-wave pairing) pair up to form a bound state called a Cooper pair. The ground state of the unpaired electron gas for $T \gt T_c$ is characterized by the Fermi energy $E_F$. After condensation, the many-body system has a new ground state at energy $E_{bcs} = E_F - \Delta $, where $\Delta \sim k_B T_c$ is the binding energy of a Cooper pair. $\Delta$ is also known as the gap. For $T\lt T_c$, not all the electrons are paired up, due to thermal fluctuations. However, the number of paired electrons as a fraction of the total number of electrons (the condensate fraction) goes as $N_{paired}/N_{all} = 1- (T/T_c)^\alpha$, where $\alpha\gt 0$; correspondingly, the number of free electrons drops rapidly as $T$ is decreased below $T_c$. In lab setups, BECs generally undergo some form of evaporative cooling to get rid of particles with energies greater than $\Delta$. At this point the condensate can be treated as a gas of interacting (quasi)particles (Cooper pairs) with an approximate hard-core repulsion. So the gas, before and after condensate formation, is always at finite temperature! This is reflected, for instance, in the dependence of the condensate fraction on $T$ as mentioned above. The mean-field solutions for low-energy excitations of the condensate are given by the Gross-Pitaevskii equation (GPE): $$ \left( - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial r^2} + V(r) + \frac{4\pi\hbar^2 a_s}{m} |\psi(r)|^2 \right) \psi(r) = \mu \psi(r) $$ where $a_s$ is the scattering length for the hard-core boson interaction, with $a_s \lt 0$ for an attractive interaction and $a_s \gt 0$ for a repulsive interaction.
Presumably one should be able to construct a canonical ensemble with solutions $\psi(k)$ ($k$ being a momentum label of the above equation), but this is by no means obvious because of the non-linearity represented by the $|\psi(r)|^2$ term. Here a "zero temperature" state would correspond to a perfect BEC with no inhomogeneities, i.e. the vacuum solution of the GPE. However, the entire system is at some finite temperature $T \lt T_c$ as noted above. The resulting thermal fluctuations will manifest in the form of inhomogeneities in the condensate, the exact form of which will be determined by the solutions of the GPE. Of course, the GPE's regime of validity is that of dilute Bose gases ($l_p \gg a_s$ - the average interparticle separation $l_p$ is much greater than the scattering length). For strong coupling I do not know of any similar analytical formalism. If I had to take a wild guess I'd say that the strong-coupling regime could be made analytically tractable by mapping it to a dual gravitational system, but that's another story altogether. As $l_p$ approaches $a_s$ from above, the GPE breaks down; it will have singular solutions for any given $T$, and these are likely the singularities that you are referring to. Reference: The single best reference I can suggest is Fetter and Walecka's book on many-body physics. I'm sure you can find more compact sources with a little effort. But generally the brief explanations leave one wanting for a comprehensive approach such as the one F&W provides.
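In the dilute regime the GPE is straightforward to solve numerically. Below is a minimal 1D imaginary-time split-step sketch (units $\hbar=m=1$, a harmonic trap, and an assumed toy coupling $g$; it finds the $T=0$ mean-field ground state, not the finite-$T$ fluctuations discussed above):

```python
import numpy as np

# Imaginary-time split-step solver for the 1D GPE ground state.
N, L, g, dt = 256, 16.0, 1.0, 1e-3
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2                             # harmonic trap

psi = np.exp(-x**2).astype(complex)        # any reasonable initial guess
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(5000):
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))   # half potential
    psi = np.fft.ifft(np.exp(-0.5 * dt * k**2) * np.fft.fft(psi))  # kinetic
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))   # half potential
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)           # renormalize

# chemical potential: mu = integral of |psi'|^2/2 + V|psi|^2 + g|psi|^4
dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))
mu = float(np.sum(0.5 * np.abs(dpsi)**2
                  + (V + g * np.abs(psi)**2) * np.abs(psi)**2) * dx)
# for g = 0 this would give exactly 0.5; g = 1 pushes it somewhat above
```

Imaginary-time evolution damps every mode by $e^{-Et}$, so repeated renormalization leaves only the lowest-energy solution, which is why this is a standard way to get GPE ground states.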
For a function $f:\{0,1\}^n \rightarrow \{0,1\}^m$, let $C(f)$ be the circuit complexity (for concreteness, constants and NOT gates are free, while 2-input AND gates cost 1). Let $k{\times}f : \{0,1\}^{kn} \rightarrow \{0,1\}^{km}$ be the function which computes $k$ copies of $f$ on independent inputs, so $k{\times}f(x_1, ..., x_{kn}) = (f(x_1, ..., x_n), ..., f(x_{(k-1)n+1}, ..., x_{kn}))$. For instance, $2{\times}{\oplus}(x,y,z,w) = (x\oplus y, z\oplus w)$. Define the asymptotic mass production complexity of $f$ to be $C_a(f) = \lim_{k\rightarrow\infty} \frac{C(k{\times}f)}{k}$ (since $C(k{\times}f)$ is a subadditive function of $k$, the limit always exists and is equal to the $\inf$). For example, it's possible to show that $C_a(\oplus) = C(\oplus) = 3$. A less trivial example is given by random linear functions: if $f : \mathbb{F}_2^n \rightarrow \mathbb{F}_2^n$ is a random linear function, then $C(f) = \Omega(\frac{n^2}{\log(n)})$ by a standard counting argument, while $C_a(f) \le \frac{C(n{\times}f)}{n} = O(n^{\omega_2-1})$, where $\omega_2 \le \log_2(7)$ is the matrix multiplication constant for $\mathbb{F}_2$. My question: Can we prove that for any $f : \{0,1\}^n \rightarrow \{0,1\}^m$, we always have $C_a(f) = O(n+m)$? There is an easy construction based on sorting networks that shows that $C_a(f) = O(n(n+m))$ - this is off by a factor of $n$ from what I want. In fact, I'll show that $C(2^n{\times}f) = O(n(n+m)2^n)$: Given $2^n$ input strings $v_0, ..., v_{2^n-1}$ each of length $n$, for each $i < 2^n$ we make a string $X_i$ of length $n+m+1$ by putting $m+1$ $0$s at the end of each $v_i$, and we make a string $Y_i$ of the same length by first encoding $i$ with $n$ binary bits, following this with a $1$, and filling the last $m$ bits with $f(i)$ ($i$ considered as a string of $n$ bits). 
Then we sort the combined list of $X_i$s and $Y_i$s using a sorting network, keeping track of the result of each comparison - this requires $O(n2^n)$ comparisons, and $O(n+m)$ gates per comparison. After sorting, each $X_i$ is either next to an identical $X_{i'}$ or to some $Y_j$ with $v_i = j$ and with last $m$ bits $f(j) = f(v_i)$, and we can use the $n+1$st bit of each string to tell whether we are looking at an $X_i$ or a $Y_j$. So now we propagate the values $f(v_i)$ to the left with $O(m2^n)$ gates (or if we prefer low-depth circuits, with $n$ layers of $O(m2^n)$ gates using power-of-2 shenanigans). Finally, we unsort, using our record of the comparisons we kept track of during the sorting process, to route each answer to the relevant output gate. It's plausible that this construction could be improved by using a sorting method which is not based on comparisons, but then I don't see how to unsort at the end. Edit: To unsort at the end, we just need to modify the beginning of the strategy by appending $i$ to the end of each $X_i$, making all the strings have length $2n+m+1$, and then finish by sorting based on the last $n$ bits. I tried searching the literature, and eventually found this paper which says in the abstract that improving the circuit complexity of sorting to $o(n(n+m)2^n)$ is an open problem.
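The combinatorial core of the construction (sort the queries together with a value table, propagate, unsort) can be simulated in a few lines of Python. Lists stand in for the circuit here, so this checks the routing argument, not the gate count:

```python
# High-level simulation of the sort/propagate/unsort routing argument.
def mass_produce(f, n, inputs):
    # tag 0 = query record (the X_i), tag 1 = table record (the Y_j);
    # equal keys sort query-before-table, mirroring the 0/1 marker bit
    recs = [(v, 0, i, None) for i, v in enumerate(inputs)]
    recs += [(j, 1, None, f(j)) for j in range(2 ** n)]
    recs.sort(key=lambda r: (r[0], r[1]))
    out = [None] * len(inputs)
    val = None
    for rec in reversed(recs):          # propagate table values leftward
        if rec[1] == 1:
            val = rec[3]
        else:
            out[rec[2]] = val           # "unsort": route by original index
    return out

f = lambda v: (3 * v + 1) % 8           # an arbitrary 3-bit example function
queries = [5, 0, 5, 2, 7]
assert mass_produce(f, 3, queries) == [f(v) for v in queries]
```

Because every query value $v$ has a matching table record $Y_v$ sorting immediately at or after it, the rightward-nearest table value picked up in the reverse sweep is always $f(v)$, which is exactly the invariant the circuit's propagation layer relies on.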
I am trying to understand the upper-bound calculation that is given in the book. Edit 1: I added 3,44. Can someone explain to me how to get from (5.5) to (5.7)? I think eq. 5.6 is unnecessary. To know where eq. 5.7 comes from, you need to realize that 5.7 is actually the average bit error rate. Which means that we find the expected value of eq. 5.5 with respect to the effective channel. To do that, let $$Y=\sum_{i=1}^M|h_i|^2$$ In the case that $h_i\sim\mathcal{CN}(0,1)$ and i.i.d., $Y$ is a central chi-square random variable with $2M$ degrees of freedom. The PDF of $Y$ is then given by $$f_Y(y)=\frac{1}{(M-1)!}y^{M-1}e^{-y}$$ The ABER is then given by $$\frac{1}{(M-1)!}\int_0^{\infty}\exp\left[-y\left(1+\frac{\rho\,d_{\text{min}}^2}{4M}\right)\right]y^{M-1}\,dy$$ From the table of integrals we have $$\int_0^{\infty}x^ne^{-\mu\,x}\,dx = n!\mu^{-n-1}$$ So, the ABER can be evaluated to $$\frac{1}{(M-1)!}(M-1)!\left[1+\frac{\rho\,d_{\text{min}}^2}{4M}\right]^{-M}$$ At high SNR $\rho\gg 1$, so the ABER reduces to $$\left(\frac{\rho\,d_{\text{min}}^2}{4M}\right)^{-M}$$ which means that the diversity order of this system is $M$. (You can add the scalar $\bar{N_c}$ to these equations, which has no effect on the calculations.)
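The expectation step is easy to verify by Monte Carlo (a sketch, standard library only; $c$ stands for $\rho\,d_{\text{min}}^2/4M$):

```python
import math, random

# Monte Carlo check of the expectation: if Y is the sum of M squared
# magnitudes of unit-variance complex Gaussians (central chi-square, 2M
# degrees of freedom), then E[exp(-c*Y)] = (1 + c)^(-M).
M, c = 4, 0.8
random.seed(1)
trials = 200_000
sigma = math.sqrt(0.5)         # real and imaginary parts of h_i ~ N(0, 1/2)
acc = 0.0
for _ in range(trials):
    Y = sum(random.gauss(0, sigma) ** 2 + random.gauss(0, sigma) ** 2
            for _ in range(M))
    acc += math.exp(-c * Y)
mc = acc / trials
closed = (1 + c) ** (-M)       # the closed form from the table of integrals
# mc and closed agree to Monte Carlo accuracy (~1e-3 here)
```

This is just eq. 5.7 with the gamma-function integral replaced by sampling, so it confirms the step from (5.5) to (5.7) without the table of integrals.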
There are reasons that any modern example is likely to resemble the status of Legendre's constant. Most (but not all) interesting numbers admit a polynomial-time algorithm to compute their digits. In fact, there is an interesting semi-review by Borwein and Borwein that shows that most of the usual numbers in calculus (for example, $\exp(\sqrt{2}+\pi)$) have a quasilinear time algorithm on a RAM machine, meaning $\tilde{O}(n) = O(n(\log n)^\alpha)$ time to compute $n$ digits. Once you have $n$ digits, you can use the continued fraction algorithm to find the best rational approximation with at most $n/2-O(1)$ digits in the denominator. The continued fraction algorithm is equivalent to the Euclidean algorithm, which also has a quasilinear time version according to Wikipedia. Euler's constant has been computed to almost 30 billion digits, using a quasilinear time algorithm due to Brent and McMillan. As a result, for any such number it's difficult to be surprised. You would need a mathematical coincidence that the number is rational, but with a denominator that is out of reach for modern computers. (This was Brent and McMillan's stated motivation in the case of Euler's constant.) I think that it would be fairly newsworthy if it happened. On the other hand, if you can only compute the digits very slowly, then your situation resembles Legendre's. I got e-mail asking for a reference to the paper of Borwein and Borwein. The paper is On the complexity of familiar functions and numbers. To summarize the relevant part of this survey paper, any value or inverse value of an elementary function in the sense of calculus, including also hypergeometric functions as primitives, can be computed in quasilinear time. So can the gamma or zeta function evaluated at a rational number.
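The digits-to-best-rational step is a few lines with exact arithmetic (a sketch using Python's `fractions`; the digits of $\pi$ here are just an example input):

```python
from fractions import Fraction

# Continued-fraction expansion of a decimal digit string, using exact
# rational arithmetic; n good digits pin down the best rational
# approximations with up to roughly n/2 digits in the denominator.
def continued_fraction(s, terms=10):
    x = Fraction(s)
    out = []
    for _ in range(terms):
        a = x.numerator // x.denominator
        out.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return out

pi_20 = "3.14159265358979323846"
print(continued_fraction(pi_20))
# begins [3, 7, 15, 1, 292, ...]: no anomalously huge term, so no small
# denominator lurks; compare a near-rational fed in as digits:
print(continued_fraction("0.33333333333333333333"))
# [0, 3, 33333333333333333333] - the giant partial quotient signals that
# the input is within 1e-20 of the rational 1/3
```

A hidden rationality like Legendre's constant would show up as exactly this kind of giant partial quotient early in the expansion.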
Little explorations with HP calculators (no Prime) 03-27-2017, 07:39 PM Post: #41 RE: Little explorations with the HP calculators Let's divide one of the right-triangles into four right-triangles (two with side lengths equal to r and x and two with side lengths r and 1-x) and a square with side r. Then its area is S = r*x + r(1-x) + r^2 = r^2 + r The area of the larger square, which we know is equal to 1, is the sum of the areas of these four right-triangles plus the area of the smaller square in the center, with side 2*r: S = 4(r^2 + r) + (2*r)^2 = 1 Or 8*r^2 + 4*r - 1 = 0 Here I am tempted to just take my wp34s and do 8 ENTER 4 ENTER 1 +/- SLVQ and get a valid numerical answer for the quadratic equation, but I decide to go through a few more steps by hand and get the exact answer: r = (sqrt(3) - 1)/4 03-27-2017, 07:58 PM Post: #42 RE: Little explorations with the HP calculators Gerson, do you mean the following subdivision? Wikis are great, Contribute :) 03-27-2017, 08:01 PM Post: #43 RE: Little explorations with the HP calculators (03-27-2017 07:24 PM)Joe Horn Wrote: So it SEEMS to be zeroing on something close to -LOG(LOG(2)), but I give up. Some multivariate calculus and probability theory will get you: \[ \frac{2+\sqrt{2}+5\ln(1+\sqrt{2})}{15} \approx 0.521405433164720678330982356607243974914031567779008341796 \] Graph 3D | QPI | SolveSys 03-27-2017, 08:07 PM Post: #44 RE: Little explorations with the HP calculators (03-27-2017 07:24 PM)Joe Horn Wrote: After running 100 million iterations several times in UBASIC, I'm surprised that each run SEEMS to be converging, but each run ends with a quite different result: I wonder if this may be due to precision. The shorter the distance between two points, the higher the probability of seeing that distance appear. Thus, you're looking at a lot of tiny values (as they are more likely) that represent the distance, which may or may not get picked up in the sum if your machine precision is not large enough.
Graph 3D | QPI | SolveSys

03-27-2017, 08:08 PM Post: #45 RE: Little explorations with the HP calculators

(03-27-2017 08:01 PM)Han Wrote: (03-27-2017 07:24 PM)Joe Horn Wrote: So it SEEMS to be zeroing in on something close to -LOG(LOG(2)), but I give up.

Awesome! Time to dust off these old math texts.... <0|ɸ|0> -Joe-

03-27-2017, 08:14 PM (This post was last modified: 03-27-2017 08:19 PM by pier4r.) Post: #46 RE: Little explorations with the HP calculators

(03-27-2017 08:01 PM)Han Wrote: Some multivariate calculus and probability theory will get you:

Where does this formula come from? I suppose a double integral (over x and y)? On a side note: by chance could you tell me why my algorithm screws up so many digits? Did I make a mistake somewhere or is it again a problem of precision?

Wikis are great, Contribute :)

03-27-2017, 08:36 PM (This post was last modified: 03-27-2017 09:00 PM by Han.) Post: #47 RE: Little explorations with the HP calculators

(03-27-2017 08:14 PM)pier4r Wrote: (03-27-2017 08:01 PM)Han Wrote: Some multivariate calculus and probability theory will get you:

Since the distance between two points \( (x_1 , y_1) \) and \( (x_2, y_2) \) is \[ d = \sqrt{ (x_2 - x_1)^2 + (y_2 - y_1)^2}, \] I took the approach of looking at the probability density function for the distance between each of the coordinates: \( |x_2 - x_1| \) and \( |y_2 - y_1| \). Since they are independent and identically distributed, just consider the probability density of \( x=|x_2 - x_1| \). Once I got the probability distribution function (it's a triangular distribution with \( 0\le x \le 1 \)), the integral I ended up with was indeed a double integral. EDIT: there were two of them; one to find the PDF and the other to compute the expected value. (I had to pull out my calculus textbook because the second one is quite a tedious computation to do by hand. I started with a double integral in x and y, but had to convert over to polar coordinates.)
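Han's triangular-density setup can also be checked without doing the polar-coordinates integral by hand: evaluate \( \int_0^1\!\int_0^1 \sqrt{x^2+y^2}\;2(1-x)\,2(1-y)\,dx\,dy \) numerically, where \( 2(1-x) \) is the triangular density of \( |x_2-x_1| \). A crude midpoint-rule sketch (grid size is an arbitrary choice of mine):

```python
import math

def expected_distance(n: int = 1000) -> float:
    """Midpoint-rule evaluation of
    E[d] = integral over [0,1]^2 of sqrt(x^2 + y^2) * 2(1-x) * 2(1-y),
    where 2(1-x) is the triangular density of |x2 - x1|."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        wx = 2.0 * (1.0 - x)
        for j in range(n):
            y = (j + 0.5) * h
            total += math.hypot(x, y) * wx * 2.0 * (1.0 - y)
    return total * h * h

print(expected_distance())  # ≈ 0.5214
```

Even a modest grid reproduces the first several digits of Han's closed form, which is a useful cross-check on both the density and the integral setup.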
Quote: On a side note: by chance could you tell me why my algorithm screws up so many digits? Did I make a mistake somewhere or is it again a problem of precision?

My suspicion is that it is due to precision. This is just a hunch, though. Here is my thought process. For 1000000 iterations, the sum must be close to 520000 so that the average comes out to be about .52. However, the small distances that are added into the running average (the ones that appear the most frequently) would be very close to zero (but with sufficiently many occurrences to add up to a significant value if there were enough precision). Each incremental sum, however, would likely be computed as adding 0 once the partial sum has reached a large enough magnitude.

Graph 3D | QPI | SolveSys

03-27-2017, 08:44 PM (This post was last modified: 03-27-2017 09:18 PM by pier4r.) Post: #48 RE: Little explorations with the HP calculators

(03-27-2017 08:36 PM)Han Wrote: Since the distance between two points \( (x_1 , y_1) \) and \( (x_2, y_2) \) is

Interesting, both parts. I do indeed take only the integer part of the random value, after 3 digits (with IP). I will check what happens if I extend it to 6. Now I compute... poor batteries.

Wikis are great, Contribute :)

03-27-2017, 08:55 PM Post: #49 RE: Little explorations with the HP calculators

03-27-2017, 10:12 PM Post: #50 RE: Little explorations with the HP calculators

03-27-2017, 10:35 PM Post: #51 RE: Little explorations with the HP calculators

(03-27-2017 08:55 PM)Dieter Wrote:

Thanks, Dieter! Honestly, I was a bit lucky on this one. I had only introduced the x, and while I was still worried about how to get rid of it, it just magically disappeared in the second line :-) Probably there are better solutions around. Gerson.

03-28-2017, 10:44 AM (This post was last modified: 03-28-2017 10:47 AM by pier4r.)
Post: #52 RE: Little explorations with the HP calculators

So there is the summation function built into the 50g; I searched for the product function (like \PI ) in the AUR and I had no luck. So using a quick search on this site using a search engine I found this: http://www.hpmuseum.org/cgi-sys/cgiwrap/...ead=249353 Message #10

Quote: Outside the equation writer one could use a combination of SEQ and PILIST (from the MTH/LIST menu). This would also work for negative members of the series.

I have to say that this is pretty neat, but do you know any better way to achieve the same (function / user defined program)? Of course I could do a rough program by myself, but I do like to reuse code, especially if the code is well tested and refined. I may also study the following post, maybe it yields neat results: https://groups.google.com/forum/#!topic/...discussion (Simone knows a lot / has great searching skills)

Wikis are great, Contribute :)

03-28-2017, 11:07 AM (This post was last modified: 03-28-2017 05:39 PM by pier4r.) Post: #53 RE: Little explorations with the HP calculators

Quote: brilliant.org (this site is very nice on certain topics, other topics are a bit, well, low profile still)

On this I have at first no useful direction whatsoever. Anyway, since the intention is to burn the 50g somehow, I will try with a trial and error approach in the range of possible values (knowing that the diameter cannot be smaller than, say, 10, and bigger than 10 * sqrt(2)) until I fit the mentioned form.

edit: the min value 10 is a tad too much, since the diameter is included in the radius of the big circle. So the max value is 10 actually.

Ok, I wrote the program; not so nice, but it is the first iteration. Its output is all the possible "valid" lengths of the diameter, given that the diameter is expected to be greater than or equal to 7 and smaller than or equal to 10, plus the values of a, b and c.
The problem is to determine the right value, if the right value is there (there could be the case that a is large, while sqrt(b) and c have a small difference; this is not captured by the program). Code:

Wikis are great, Contribute :)

03-28-2017, 05:33 PM (This post was last modified: 03-28-2017 05:35 PM by Dieter.) Post: #54 RE: Little explorations with the HP calculators

What? No clue? Really ?-) Take a look at the picture. From the upper left corner to the point where the circle touches the arc it's √2 · d/2 plus d/2, and this sum is 10. This directly leads to d = 20/(1+√2) or 8,284. Expand this with 1–√2 to get d = 20 · (√2–1). So a=20, b=2, c=1, and the desired value is floor(1600/23) = 69. Dieter

03-28-2017, 05:46 PM (This post was last modified: 03-28-2017 05:54 PM by Dieter.) Post: #55 RE: Little explorations with the HP calculators

(03-21-2017 10:40 PM)pier4r Wrote: So I got to another problem and I'm stuck.

Better use your brain. ;-) You know the summation formulas. For any positive upper limit (not just 2014), B is the square of A. So \( \log_A B = 2 \) and \( \log_{\sqrt{A}} B = 2 \cdot 2 = 4 \). No calculator required. Dieter

03-28-2017, 06:13 PM Post: #56 RE: Little explorations with the HP calculators

(03-28-2017 05:33 PM)Dieter Wrote: What? No clue? Really ?-)

Thanks for the contribution; my only clue (see image below) died immediately because I tried to determine the x in the image but I failed with my rusty knowledge. And if your solution is correct (we need peer review here) then my program fails even to capture the solution. http://i.imgur.com/24nmK1G.jpg Could you explain (or hint at a known relationship) how you got that d/2+x = d/2*sqrt(2)?

Wikis are great, Contribute :)

03-28-2017, 06:17 PM (This post was last modified: 03-28-2017 06:20 PM by pier4r.)
Post: #57 RE: Little explorations with the HP calculators

But the idea of the explorations is: whether I know the shortcut or solution or not, can I let the calculator solve most of the problem? In particular, that problem raised the issue of "hidden digits in the real representation", which was then solved first with a homemade program (and proper flags), then with the knowledge of Joe Horn with flags and the built-in summation function. The problem of the circle above, where I ask you to justify how you get that x+r = r*sqrt(2), let me refresh a couple of userRPL commands and the usage of flags as booleans. I mean, the more I analyze the failures, the better.

Wikis are great, Contribute :)

03-28-2017, 07:32 PM Post: #58 RE: Little explorations with the HP calculators

(03-28-2017 06:13 PM)pier4r Wrote: Could you explain (or hint at a known relationship) how you got that d/2+x = d/2*sqrt(2)?

The right picture can be worth 1000 words, as they say :-) Draw a line segment from the point of tangency (where the circle touches the radial segments of length 10) toward the center of the inner circle. This length is d/2, and the line segment we created is perpendicular to the segments of length 10. (You can produce a square whose diagonal lies at the center of the two circles, and whose side lengths are d/2.)

Graph 3D | QPI | SolveSys

03-28-2017, 07:36 PM Post: #59 RE: Little explorations with the HP calculators

Also, for the summation problem, one does not actually need to know either summation formula. The value 2014 is not that significant (likely chosen because the problem may have been created in 2014). You can make up a conjecture about \( \log_{\sqrt{A}}B \) by using smaller values instead of 2014, and computing the individual sums on your calculator (no program needed, really), which should lead you to the conclusion that the result is always 4.
Moreover, this result would enable one to deduce the formula for the sum of cubes if one knew only the formula for the sum of the integers.

Graph 3D | QPI | SolveSys

03-28-2017, 08:15 PM (This post was last modified: 03-29-2017 08:47 AM by pier4r.) Post: #60 RE: Little explorations with the HP calculators

(03-28-2017 07:32 PM)Han Wrote: Draw a line segment from the point of tangency (where the circle touches the radial segments of length 10) toward the center of the inner circle. This length is d/2, and the line segment we created is perpendicular to the segments of length 10. (You can produce a square whose diagonal lies at the center of the two circles, and whose side lengths are d/2.)

Understood. I thought about that but I could not prove it... frick. I relied too much on the visual image. Instead of remembering that when a line is tangent to a circle the radius is perpendicular to it (otherwise the circle would cross the line), I looked at the picture and said "hmm, here I cannot build a square with the radius, I do not see perpendicularity". So it was actually trivial, but I relied too much on the visual hint. Damn me. Well, experience for the next time. Thanks!

Wikis are great, Contribute :)
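Han's precision hypothesis from earlier in the thread (tiny distances being "added as 0" once the running sum is large) is the classic floating-point absorption problem. A short Python sketch of my own shows the effect and the standard fix, compensated (Kahan) summation; the magnitudes are chosen to trigger absorption in double precision:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: keeps a running correction c for
    the low-order bits that plain accumulation drops."""
    total = 0.0
    c = 0.0
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y   # the part of y that did not make it into t
        total = t
    return total

# Plain summation: once the partial sum is huge, adding 1.0 does nothing,
# because 1.0 is at most half an ulp of 1e16 and is rounded away.
naive = 1e16
for _ in range(1000):
    naive += 1.0
print(naive - 1e16)                              # 0.0

# Compensated summation recovers the lost contributions:
print(kahan_sum([1e16] + [1.0] * 1000) - 1e16)   # 1000.0
```

On a 12-digit calculator the same mechanism kicks in at much smaller magnitudes, which is consistent with runs that "converge" to noticeably different values.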
By results of Aoki and Ibukiyama [MR:2130626, 10.1142/S0129167X05002837], and of Hayashida and Ibukiyama [MR:1840071], the ring $M_{*}(\Gamma_0(4))$ of Siegel modular forms of degree 2 with respect to the group $\Gamma_0(4)$ is generated by the following functions, which are specified in terms of theta constants: $X$, a form of weight 2, with formula $X = ((\theta_{0000})^4+(\theta_{0001})^4+(\theta_{0010})^4+(\theta_{0011})^4)/4.$ $X(2\Omega)$, a form of weight 2. $f_2(2\Omega)$, a form of weight 4, with formula $f_2 = (\theta_{0000})^4.$ $K(2\Omega)$, a form of weight 6, with formula $$K = (\theta_{0100}\theta_{0110}\theta_{1000}\theta_{1001}\theta_{1100}\theta_{1111})^2/4096.$$ $f_{11}(2\Omega)$, a cusp form of weight 11, with formula $f_{11} = f_6\chi_5,$ where $$ \chi_5=\theta_{0000}\theta_{0001}\theta_{0010}\theta_{0011}\theta_{0100}\theta_{0110}\theta_{1000}\theta_{1001}\theta_{1100}\theta_{1111},\qquad f_6 = ((\theta_{0001})^{4}-(\theta_{0010})^{4}) ((\theta_{0001})^{4}-(\theta_{0011})^{4})((\theta_{0010})^{4}-(\theta_{0011})^{4}). $$ Note that we write $F(2\Omega)$ to mean "apply $F$ after doubling the input". The generators $X, X(2\Omega),f_2(2\Omega), K(2\Omega)$ are algebraically independent. Let $B={\Bbb C}[X, X(2\Omega),f_2(2\Omega), K(2\Omega)]$. The ring of modular forms is $$ M(\Gamma_0(4)) = B + Y(2\Omega) B + f_{11}(2\Omega)(B + Y(2\Omega) B),$$ where $Y = (\theta_{0000}\theta_{0001}\theta_{0010}\theta_{0011})^2$. Review status: beta. Last edited by Andrew Sutherland on 2016-07-01 02:52:45.
Let $a > 0$. We consider the function $f: (0, \infty) \to (0, \infty)$ defined by $f(x) = \frac{1}{2}\left(x + \frac{a}{x}\right)$, and let $(x_n)_{n \in \mathbb{N}_0}$ be defined by $x_0 \in (0, \infty)$ and $x_{n+1} := f(x_n)$. What is the smallest $b > 0$ such that $f$ is a contraction on $[b, \infty)$, and what is the Lipschitz constant $L > 0$ of $f$ there? Also, I want to prove, using the Banach fixed point theorem, that $(x_n)$ as defined above converges to $\sqrt{a}$. Finally, why is it that $|f(x_n) - \sqrt{a}| \le \frac{1}{2^n}\left|\frac{a}{x_0} - x_0\right|$? Thanks in advance. I'm not very familiar with the Banach fixed point theorem, so I've been struggling with these questions so far.
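Not a proof, but a numerical sketch of the iteration may help build intuition. The bound in the question is exactly the a-priori estimate from Banach's theorem with $L = \frac12$ (since $|x_1 - x_0| = \frac12\left|\frac{a}{x_0} - x_0\right|$), and it can be checked directly; the values $a = 7$, $x_0 = 3$ are arbitrary choices of mine:

```python
import math

def heron(a: float, x0: float, n: int) -> float:
    """Iterate x_{k+1} = (x_k + a / x_k) / 2, n times, starting from x0."""
    x = x0
    for _ in range(n):
        x = 0.5 * (x + a / x)
    return x

a, x0 = 7.0, 3.0                     # arbitrary test values with x0 > sqrt(a)
for n in range(1, 8):
    err = abs(heron(a, x0, n) - math.sqrt(a))
    bound = abs(a / x0 - x0) / 2**n  # the a-priori Banach estimate
    assert err <= bound
    print(n, err, bound)
```

The printed errors shrink far faster than the $2^{-n}$ bound: the fixed point theorem only certifies linear convergence, while this particular map (Heron's method, i.e. Newton's method for $x^2 - a$) converges quadratically near $\sqrt{a}$.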
$(a+\sqrt{a^2+1})\,(b+\sqrt{b^2+1})=1$ is supposed to imply $b = -a$, but how do I get that? I've been trying to solve it for two days now.

If $$ (a+\sqrt{a^2+1})(b+\sqrt{b^2+1})=1$$ we have $$ a+\sqrt{a^2+1}=\sqrt{b^2+1}-b \ \ \ \ \ (1) $$ $$ b+\sqrt{b^2+1}=\sqrt{a^2+1}-a \ \ \ \ \ (2) $$ Adding $(1)$ and $(2)$, the square roots cancel and we get $a+b = -(a+b)$, hence $a=-b$.

First approach. You can do it quite directly. This is not an elegant path, but it's easy to grasp and it is reliable. Regard your equality as an equation, where $b$ is fixed and $a$ is an unknown. How would you go about solving it? It seems that the natural thing to do is to get $\sqrt{a^2+1}$ on one side of the equation, and then square everything to get rid of the radicals. Like this: $$ \begin{align} a + \sqrt{a^2+1} &= \frac{1}{\sqrt{b^2+1} + b} \\ \sqrt{a^2 + 1} &= \frac{1}{\sqrt{b^2+1} + b} - a \\ a^2 + 1 &= \left(\frac{1}{\sqrt{b^2+1} + b} - a\right)^2 = a^2 - \frac{2a}{\sqrt{b^2+1} + b} + \frac{1}{\left(\sqrt{b^2+1} + b\right)^2} \\ \frac{2a}{\sqrt{b^2+1}+b} &= \frac{1}{\left(\sqrt{b^2+1} + b\right)^2} - 1 \\ 2a &= \frac{1}{\sqrt{b^2+1}+b} - \sqrt{b^2+1} - b = \frac{1 - (b^2 + 1) - 2b\sqrt{b^2+1} - b^2}{\sqrt{b^2+1} + b} = -2b \end{align} $$ And therefore $a=-b$.

Second approach. Another way to go about the problem is to exercise your curiosity. You might try to look at the converse statement, just to see what comes out of it: is it true that whenever $a=-b$, the equality $(a + \sqrt{a^2+1})(b + \sqrt{b^2+1})=1$ holds? If this is so, then it would mean that $(a + \sqrt{a^2+1})(-a + \sqrt{a^2+1})=1$ is true for any $a$. Does it look like that's indeed the case? Yes it does, by the formula for the difference of two squares:$$(\sqrt{a^2+1}+a)(\sqrt{a^2+1}-a) = (\sqrt{a^2+1})^2 - a^2 = 1.$$So, this is always true. Can we use this equality somehow to prove the original statement? We can, but I won't do it, because it's already been done in other answers.
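Both directions are easy to check numerically as well; a small sketch (the helper `g` and the sample values are mine):

```python
import math

def g(t: float) -> float:
    """g(t) = t + sqrt(t^2 + 1), the factor appearing on both sides."""
    return t + math.sqrt(t * t + 1.0)

# Difference of squares: g(t) * g(-t) = (t^2 + 1) - t^2 = 1 for every t.
for a in (-2.5, -1.0, 0.0, 0.3, 4.0):
    assert abs(g(a) * g(-a) - 1.0) < 1e-12

# And solving g(a) * g(b) = 1 for b as in the first approach,
# via 2b = 1/g(a) - g(a), recovers b = -a:
a = 1.7
b = 0.5 * (1.0 / g(a) - g(a))
print(b)   # ≈ -1.7
```

Since $g$ is strictly increasing, $g(b)=1/g(a)$ pins down $b$ uniquely, which is why the numerical solve always lands on $b=-a$.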