Probability Seminar

Revision as of 01:09, 26 February 2019

Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements, please send an email to join-probsem@lists.wisc.edu

January 31, Oanh Nguyen, Princeton

Title: Survival and extinction of epidemics on random graphs with general degrees

Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it survives for exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University

Title: When particle systems meet PDEs

Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how partial differential equations (PDEs) play a role in understanding the systems.

Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime

Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.

February 14, Timo Seppäläinen, UW-Madison

Title: Geometry of the corner growth model

Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).

February 21, Diane Holcomb, KTH

Title: On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process.
The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

Title: Quantitative homogenization in a balanced random environment

Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process, a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue

Title: Functional Limit Laws for Recurrent Excited Random Walks

Abstract: Excited random walks (also called cookie random walks) are models of self-interacting random motion in which the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that determines many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more.
In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.

March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison

Title: Harmonic Analysis on GLn over finite fields, and Random Walks

Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
The Catenary Problem

Introduction

A chain with uniformly distributed mass hangs from the endpoints \((0,1)\) and \((1,1)\) on a 2-D plane. Gravitational force acts in the negative \(y\) direction. Our goal is to find the shape of the chain in equilibrium, which is equivalent to determining the \((x,y)\) coordinates of every point along its curve when its potential energy is minimized. This is the famous catenary problem.

A Discrete Version

To formulate as an optimization problem, we parameterize the chain by its arc length and divide it into \(m\) discrete links. The length of each link must be no more than \(h > 0\). Since mass is uniform, the total potential energy is simply the sum of the \(y\)-coordinates. Therefore, our (discretized) problem is

\[ \begin{array}{ll} \underset{x,y}{\mbox{minimize}} & \sum_{i=1}^m y_i \\ \mbox{subject to} & x_1 = 0, \quad y_1 = 1, \quad x_m = 1, \quad y_m = 1 \\ & (x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2 \leq h^2, \quad i = 1,\ldots,m-1 \end{array} \]

The basic catenary problem has a well-known analytical solution (see Gelfand and Fomin (1963)), which we can easily verify with CVXR.

## Problem data
m <- 101
L <- 2
h <- L / (m - 1)

## Form objective
x <- Variable(m)
y <- Variable(m)
objective <- Minimize(sum(y))

## Form constraints
constraints <- list(x[1] == 0, y[1] == 1, x[m] == 1, y[m] == 1,
                    diff(x)^2 + diff(y)^2 <= h^2)

## Solve the catenary problem
prob <- Problem(objective, constraints)
result <- solve(prob)

We can now plot it and compare it with the ideal solution. Below we use alpha blending and differing line thickness to show the ideal in red and the computed solution in blue.
xs <- result$getValue(x)
ys <- result$getValue(y)

catenary <- ggplot(data.frame(x = xs, y = ys)) +
    geom_line(mapping = aes(x = x, y = y), color = "blue", size = 1) +
    geom_point(data = data.frame(x = c(xs[1], xs[m]), y = c(ys[1], ys[m])),
               mapping = aes(x = x, y = y), color = "red")

ideal <- function(x) { 0.22964 * cosh((x - 0.5) / 0.22964) - 0.02603 }

catenary + stat_function(fun = ideal, colour = "brown", alpha = 0.5, size = 3)

Additional Ground Constraints

A more interesting situation arises when the ground is not flat. Let \(g \in {\mathbf R}^m\) be the elevation vector (relative to the \(x\)-axis), and suppose the right endpoint of our chain has been lowered by \(\Delta y_m = 0.5\). The analytical solution in this case would be difficult to calculate. However, we need only add two lines to our constraint definition,

constraints[[4]] <- (y[m] == 0.5)
constraints <- c(constraints, y >= g)

to obtain the new result. Below, we define \(g\) as a staircase function and solve the problem.

## Lower right endpoint and add staircase structure
ground <- sapply(seq(0, 1, length.out = m), function(x) {
    if (x < 0.2) return(0.6)
    else if (x >= 0.2 && x < 0.4) return(0.4)
    else if (x >= 0.4 && x < 0.6) return(0.2)
    else return(0)
})
constraints <- c(constraints, y >= ground)
constraints[[4]] <- (y[m] == 0.5)
prob <- Problem(objective, constraints)
result <- solve(prob)

The figure below shows the solution of this modified catenary problem for \(m = 101\) and \(h = 0.04\). The chain is shown hanging in blue, bounded below by the staircase structure, which represents the ground.
xs <- result$getValue(x)
ys <- result$getValue(y)

ggplot(data.frame(x = xs, y = ys)) +
    geom_line(mapping = aes(x = x, y = y), color = "blue") +
    geom_point(data = data.frame(x = c(xs[1], xs[m]), y = c(ys[1], ys[m])),
               mapping = aes(x = x, y = y), color = "red") +
    geom_line(data.frame(x = xs, y = ground),
              mapping = aes(x = x, y = y), color = "brown")

Session Info

sessionInfo()

## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin18.5.0 (64-bit)
## Running under: macOS Mojave 10.14.5
##
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices datasets utils methods base
##
## other attached packages:
## [1] ggplot2_3.1.1 CVXR_0.99-6
##
## loaded via a namespace (and not attached):
## [1] gmp_0.5-13.5      Rcpp_1.0.1       highr_0.8
## [4] compiler_3.6.0    pillar_1.4.1     plyr_1.8.4
## [7] R.methodsS3_1.7.1 R.utils_2.8.0    tools_3.6.0
## [10] digest_0.6.19    bit_1.1-14       evaluate_0.14
## [13] tibble_2.1.2     gtable_0.3.0     lattice_0.20-38
## [16] pkgconfig_2.0.2  rlang_0.3.4      Matrix_1.2-17
## [19] yaml_2.2.0       blogdown_0.12.1  xfun_0.7
## [22] withr_2.1.2      dplyr_0.8.1      Rmpfr_0.7-2
## [25] ECOSolveR_0.5.2  stringr_1.4.0    knitr_1.23
## [28] tidyselect_0.2.5 bit64_0.9-7      grid_3.6.0
## [31] glue_1.3.1       R6_2.4.0         rmarkdown_1.13
## [34] bookdown_0.11    purrr_0.3.2      magrittr_1.5
## [37] scales_1.0.0     htmltools_0.3.6  scs_1.2-3
## [40] assertthat_0.2.1 colorspace_1.4-1 labeling_0.3
## [43] stringi_1.4.3    lazyeval_0.2.2   munsell_0.5.0
## [46] crayon_1.3.4     R.oo_1.22.0

References

Gelfand, I. M., and S. V. Fomin. 1963. Calculus of Variations. Prentice-Hall.

Griva, I. A., and R. J. Vanderbei. 2005. "Case Studies in Optimization: Catenary Problem." Optimization and Engineering 6 (4): 463-82.
Differential and Integral Equations, Volume 20, Number 3 (2007), 293-308.

On the finite-time blow-up of a non-local parabolic equation describing chemotaxis

Abstract

The non-local parabolic equation \[ v_t=\Delta v+\frac{\lambda e^v}{\int_\Omega e^v}\quad\mbox{in $\Omega\times (0,T)$} \] associated with Dirichlet boundary and initial conditions is considered here. This equation is a simplified version of the full chemotaxis system. Let $\lambda^*$ be such that the corresponding steady-state problem has no solutions for $\lambda>\lambda^*$; it is then expected that blow-up should occur in this case. In fact, for $\lambda>\lambda^*$ and any bounded domain $\Omega\subset {\bf R}^2$ it is proven, using the Trudinger-Moser inequality, that $\int_{\Omega} e^{v(x,t)}dx\to \infty$ as $t\to T_{max}\leq \infty.$ Moreover, in this case, some properties of the blow-up set are provided. For the two-dimensional radially symmetric problem, i.e. when $\Omega=B(0,1),$ where it is known that $\lambda^*=8\,\pi,$ we prove that $v$ blows up in finite time $T^* < \infty$ for $\lambda>8\,\pi$ and this blow-up occurs only at the origin $r=0$ (single-point blow-up, mass concentration at the origin).

Article information

Source: Differential Integral Equations, Volume 20, Number 3 (2007), 293-308.

Dates: First available in Project Euclid: 20 December 2012

Permanent link to this document: https://projecteuclid.org/euclid.die/1356039503

Mathematical Reviews number (MathSciNet): MR2293987

Zentralblatt MATH identifier: 1212.35233

Subjects: Primary: 35K60: Nonlinear initial value problems for linear parabolic equations. Secondary: 35B05: Oscillation, zeros of solutions, mean value theorems, etc. 35Q80: PDEs in connection with classical thermodynamics and heat transfer. 92C17: Cell movement (chemotaxis, etc.)

Citation: Kavallaris, Nikos I.; Suzuki, Takashi. On the finite-time blow-up of a non-local parabolic equation describing chemotaxis.
Differential Integral Equations 20 (2007), no. 3, 293--308. https://projecteuclid.org/euclid.die/1356039503
Note that, unlike discrete random variables, continuous random variables have zero point probabilities, i.e., the probability that a continuous random variable equals a single value is always 0. Formally, this follows from properties of integrals: $$P(X=a) = P(a\leq X\leq a) = \int\limits^a_a\! f(x)\, dx = 0.\notag$$ Informally, if we realize that probability for a continuous random variable is given by areas under pdf's, then, since a line has no area, there is no probability assigned to a random variable taking on a single value. This does not mean that a continuous random variable will never equal a single value, only that we don't assign any probability to single values for the random variable. For this reason, we only talk about the probability of a continuous random variable taking a value in an INTERVAL.

Recall Definition 3.3.1, the definition of the cdf, which applies to both discrete and continuous random variables. For continuous random variables we can further specify how to calculate the cdf with a formula as follows. Let \(X\) have pdf \(f\); then the cdf \(F\) is given by $$F(x) = P(X\leq x) = \int\limits^x_{-\infty}\! f(t)\, dt, \quad\text{for}\ x\in\mathbb{R}.\notag$$ In other words, the cdf for a continuous random variable is found by integrating the pdf. Note that the Fundamental Theorem of Calculus implies that the pdf of a continuous random variable can be found by differentiating the cdf. This relationship between the pdf and cdf for a continuous random variable is incredibly useful.

Relationship between pdf and cdf for a continuous random variable: Let \(X\) be a continuous random variable with pdf \(f\) and cdf \(F\). By definition, the cdf is found by integrating the pdf: \( \displaystyle{F(x) = \int\limits^x_{-\infty}\!
f(t)\, dt}\)

By the Fundamental Theorem of Calculus, the pdf can be found by differentiating the cdf: \(\displaystyle{f(x) = \frac{d}{dx}\left[F(x)\right]}\)

Example \(\PageIndex{1}\):

Continuing in the context of Example 17, we find the corresponding cdf. First, let's find the cdf at two possible values of \(X\), \(x=0.5\) and \(x=1.5\):

\begin{align*} F(0.5) &= \int\limits^{0.5}_{-\infty}\! f(t)\, dt = \int\limits^{0.5}_0\! t\, dt = \frac{t^2}{2}\bigg|^{0.5}_0 = 0.125 \\ F(1.5) &= \int\limits^{1.5}_{-\infty}\! f(t)\, dt = \int\limits^{1}_0\! t\, dt + \int\limits^{1.5}_1 (2-t)\, dt = \frac{t^2}{2}\bigg|^{1}_0 + \left(2t - \frac{t^2}{2}\right)\bigg|^{1.5}_1 = 0.5 + (1.875-1.5) = 0.875 \end{align*}

Now we find \(F(x)\) more generally, working over the intervals on which \(f(x)\) has different formulas:

\begin{align*} \text{for}\ x<0: \quad F(x) &= \int\limits^x_{-\infty}\! 0\, dt = 0 \\ \text{for}\ 0\leq x\leq 1: \quad F(x) &= \int\limits^{x}_{0}\! t\, dt = \frac{t^2}{2}\bigg|^x_0 = \frac{x^2}{2} \\ \text{for}\ 1<x\leq2: \quad F(x) &= \int\limits^{1}_0\! t\, dt + \int\limits^{x}_1 (2-t)\, dt = \frac{t^2}{2}\bigg|^{1}_0 + \left(2t - \frac{t^2}{2}\right)\bigg|^x_1 = 0.5 + \left(2x - \frac{x^2}{2}\right) - (2 - 0.5) = 2x - \frac{x^2}{2} - 1 \\ \text{for}\ x>2: \quad F(x) &= \int\limits^x_{-\infty}\! f(t)\, dt = 1 \end{align*}

Putting this all together, we write \(F\) as a piecewise function, and Figure 2 gives its graph:

$$F(x) = \left\{\begin{array}{l l} 0, & \text{for}\ x<0 \\ \frac{x^2}{2}, & \text{for}\ 0\leq x \leq 1 \\ 2x - \frac{x^2}{2} - 1, & \text{for}\ 1< x\leq 2 \\ 1, & \text{for}\ x>2 \end{array}\right.\notag$$

Figure 2: Graph of cdf in Example 4.2.1

Recall that the graph of the cdf for a discrete random variable is always a step function. Looking at Figure 2 above, we note that the cdf for a continuous random variable is always a continuous function. In the next two sections, we look at two common continuous distributions.
There is a list of other common continuous distributions in section 4.7.
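The pdf/cdf relationship in the example above is easy to check numerically. Below is a minimal Python sketch (the function names are mine, not from the text) that integrates the triangular pdf from the example with the trapezoid rule and reproduces the worked values \(F(0.5)=0.125\) and \(F(1.5)=0.875\):

```python
def f(t):
    """Triangular pdf from the example: f(t) = t on [0,1], 2 - t on (1,2], else 0."""
    if 0 <= t <= 1:
        return t
    if 1 < t <= 2:
        return 2 - t
    return 0.0

def F(x, n=10_000):
    """Approximate cdf F(x) = integral of f from -inf to x via the trapezoid rule.

    The pdf is zero outside [0, 2], so we only integrate over [0, min(x, 2)].
    """
    if x <= 0:
        return 0.0
    a, b = 0.0, min(x, 2.0)
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

# F(0.5) is approximately 0.125 and F(1.5) approximately 0.875,
# matching the integrals computed by hand in the example.
```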
Dear Uncle Colin, I keep forgetting how to integrate $\sec(x)$ and $\cosec(x)$. Do you have any tips? - Literally Nothing Memorable Or Distinctive

Hi, LNMOD, and thanks for your message! Integrating $\sec(x)$ and $\cosec(x)$ relies on a trick, and one the average mathematician probably wouldn't come up with without a... Read More →

For 2019, I'm trying an experiment: every couple of weeks, writing a post about a mathematical object that a) I don't know much about and b) is named after somebody. These posts are a trial run - let me know how you find them! The chief use of the Ackermann... Read More →

Dear Uncle Colin, I have the simultaneous equations $3x^2 - 3y = 0$ and $3y^2 - 3x = 0$. I've worked out that $x^2 = y$ and $y^2 = x$, but then I'm stuck! - My Expertise Regarding Simultaneous Equations? Not Nearly Enough!

Hi, MERSENNE, and thanks for your message!... Read More →

Because I'm insufferably vain, I have a search running in my Twitter client for the words "The Maths Behind", in case someone mentions my book (which is, of course, available wherever good books are sold). On the minus side, it rarely is; on the plus side, the search occasionally throws... Read More →

Dear Uncle Colin, I have been given the series $\frac{1}{2} + \frac{1}{3} + \frac{1}{8} + \frac{1}{30} + \frac{1}{144} + ...$, which appears to have a general term of $\frac{1}{k! + (k+1)!}$ - but I can't see how to sum that! Any ideas? - Series Underpin Maths!

Hi, SUM, and thanks... Read More →

Since it's Christmas (more or less), let's treat ourselves to a colourful @solvemymaths puzzle: Have a go, if you'd like to! Below the line will be spoilers. Consistency: The first and most obvious thing to ask is, is Ed's claim reasonable? At a glance, yes, it makes sense: there's a... Read More →

Dear Uncle Colin, How do I verify the identity $\frac{\cos(\theta)}{1 - \sin(\theta)} \equiv \tan(\theta) + \sec(\theta)$ for $\cos(\theta) \ne 0$?
- Struggles Expressing Cosines As Nice Tangents

Hi, SECANT, and thanks for your message! The key questions for just about any trigonometry proof are "what's ugly?" and "how can I... Read More →

Midway through the second half of You Can’t Polish A Nerd, Steve Mould neatly encapsulates the show in one line: “It creates images on your oscilloscope. It’s so cool!” Because of course you have an oscilloscope. And of course you would use it - or failing that, a balloon and... Read More →

Dear Uncle Colin, How would you factorise $63x^2 + 32x - 63$? I tried the method where you multiply $a$ and $c$ (it gives you -3969) - but I'm not sure how to find factors of that that sum to 32! - Factors Are Troublesomely Oversized, Urgh

Hi, FATOU, and thanks... Read More →

In this month’s episode of Wrong, But Useful, we’re joined by @ch_nira, who is Dr Nira Chamberlain in real life - and the World’s Most Interesting Mathematician. Nira is a professional mathematical modeller, president-designate of the IMA, and a visiting fellow at Loughborough University. We discuss Nira’s entry in the... Read More →
Angular deflection, abbreviated as \( \theta \) (Greek symbol theta), occurs when a flex connector is bent along its centerline: one end of the hose assembly is deflected or bent while the other end remains parallel.

Angular Deflection formula

\(\large{ \theta = \frac {F \;l}{2\; \lambda\; I} }\)

Where:

\(\large{ \theta }\) (Greek symbol theta) = angular deflection
\(\large{ F }\) = force acting on the tip of the beam or hose
\(\large{ l }\) = beam or hose length
\(\large{ \lambda }\) (Greek symbol lambda) = modulus of elasticity
\(\large{ I }\) = area moment of inertia
\(\large{ C }\) = connector / coupling
\(\large{ r }\) = minimum centerline bend radius for constant flexing
\(\large{ \pi }\) = Pi

Solve for length (with \( \theta \) in degrees):

\(\large{ l = \frac {\pi\; r\;\theta}{180} }\)
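The two formulas above translate directly into code. Here is a small Python sketch (function names are mine; consistent SI units are assumed, with the deflection formula returning radians and the length formula taking degrees):

```python
import math

def angular_deflection(F, l, E, I):
    """Angular deflection theta = F*l / (2*E*I).

    F: force on the tip of the beam or hose,
    l: beam or hose length,
    E: modulus of elasticity (the lambda in the formula above),
    I: area moment of inertia.
    """
    return (F * l) / (2 * E * I)

def required_length(r, theta_deg):
    """Hose length for a bend radius r and deflection angle in degrees:
    l = pi * r * theta / 180."""
    return math.pi * r * theta_deg / 180

# Example (made-up numbers): a 0.5 m hose, 100 N tip force,
# E = 2e11 Pa, I = 1e-8 m^4 gives theta = 0.0125 rad.
```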
I know this is an old question, but I've been thinking about this lately. I don't think that using \text is the ideal solution; I think we need to differentiate math mode from text mode. That's all. For me, \text should only hold portions of text that, because of their nature, can't be typed naturally within display math. All the rest, which is math mode, should be typed with another command. In this “rest” we have at least two kinds of text, for example:

\{ x,\ \firstkind{such that $x$ is son of Julia} \}

and

x = \secondkind{number of cats}.

For me, neither of these two should be written with the outer text mode that \text provides.

\[ \text{Let } x = \var{number of cats} \text{, and also } \var{Overlap Area} = \frac{\Area(\var{Detection} \cap \var{Ground Truth})} {\Area(\var{Detection} \cup \var{Ground Truth})} \]

If we think in terms of {center} and $, I think it's clear that

\begin{center} Let $x = \var{number of cats}$, and also $\var{Overlap Area} = \frac{\Area(\var{Detection} \cap \var{Ground Truth})} {\Area(\var{Detection} \cup \var{Ground Truth})}$ \end{center}

Now the question is how to define \var, or whatever name we choose, for that thing that is kind of text but is really math mode and not just text linking math parts; maybe one wants \textmath, \mtext, \mthtxt, etc. I used \var for clarity, but maybe that command name is taken, so one would need to define another name. I think it should be typeset in the math font \mathrm (ideally) or in the closest thing to it (which means not \text, which changes depending on the outer text font), like \textnormal if the text font is from the family of the math font. The problem with \mathrm is the spacing, but that could be sorted out. The other case,

\{ x,\ \mathtext{such that $x$ is son of Julia} \}

should also not depend on the outer font, so we should define \mathtext, probably with \textnormal or something like that.
I hope these different forms get differentiated in LaTeX3, which would bring more “robustness” and fewer ambiguities to the process of writing documents with math involved. Last, imagine a beamer presentation with a sans serif font for text and the usual Computer Modern for math mode, and think about which parts should be in sans serif and which not:

\[ \text{Let } x = \text{number of cats} \text{, and also } \{ x,\ \text{such that $x$ is son of Julia} \} \]

I do think it's clear that only two of those four \text should be in sans serif.
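One way to realize the two commands discussed above is sketched below; the names \var and \mathtext follow the answer's own proposals, and the implementations are just one option among several:

```latex
% Sketch only: \var and \mathtext are not standard LaTeX commands.
% "Second kind": a multi-word math variable, pinned to the math roman font,
% independent of the surrounding text font.
\newcommand{\var}[1]{\mathrm{#1}}
% "First kind": ordinary prose inside math, also pinned to the normal font
% (so it stays upright serif even inside a sans serif beamer slide).
\newcommand{\mathtext}[1]{\textnormal{#1}}
% Usage:
% \[ \{\, x \mid \mathtext{$x$ is son of Julia} \,\}, \quad
%    x = \var{number\ of\ cats} \]
```

Note that \mathrm collapses interword spaces, which is why the usage line escapes them with `\ `; this is the spacing problem the answer mentions.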
For a positive real number \(x > 1,\) the Riemann zeta function \(\zeta(x)\) is defined by \(\zeta(x) = \sum_{n = 1}^\infty \frac{1}{n^x}.\) Compute \(\sum_{k = 2}^\infty \{\zeta(2k - 1)\}.\) Note: For a real number \(x,\) \(\{x\}\) denotes the fractional part of \(x.\)

The answer is 1/4. If you reply I might explain why.

One key fact is that for \(k\geq 2\), \(\{\zeta(k)\}=\zeta(k)-1\), which applies to every term of the series we are to compute.

What you do is this: consider the sum of the fractional parts of the zeta values for all integers 2 and larger, and write out horizontally a few terms of each series in the sum:

\begin{align*} \{\zeta(2)\} &= \zeta(2)-1 = (1/2)^2 + (1/3)^2 + (1/4)^2 + \dots\\ \{\zeta(3)\} &= \zeta(3)-1 = (1/2)^3 + (1/3)^3 + (1/4)^3 + \dots\\ \{\zeta(4)\} &= \zeta(4)-1 = (1/2)^4 + (1/3)^4 + (1/4)^4 + \dots \end{align*}

Now sum vertically, column by column:

\[ \sum_{k=2}^\infty (1/2)^k = \frac{1}{2}, \qquad\ldots\qquad \sum_{k=2}^\infty (1/n)^k = \frac{1}{n(n-1)} \]

And then take the sum over these simplified terms:

\[ \sum_{n=2}^\infty \frac{1}{n(n-1)} = 1 \]

I leave it to you to modify this whole thing slightly to sum just the odd exponents greater than 2, as the problem asks. You'll find the odd terms \(> 2\) sum to \(\frac{1}{4}\) and the even terms \(\geq 2\) sum to \(\frac{3}{4}\).
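The telescoping argument above is easy to check numerically. This short Python sketch (names are mine) truncates each series \(\zeta(2k-1)-1\) and sums over odd exponents:

```python
def zeta_minus_one(s, terms=2000):
    """Partial sum of zeta(s) - 1 = sum_{n >= 2} n^(-s); converges quickly for s >= 3."""
    return sum(n ** -s for n in range(2, terms))

# Sum the fractional parts {zeta(2k - 1)} = zeta(2k - 1) - 1 for k = 2, 3, ...
# Exponents beyond ~77 contribute essentially nothing.
total = sum(zeta_minus_one(2 * k - 1) for k in range(2, 40))
# total comes out very close to 1/4, matching the claimed answer.
```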
Dictionary:Q (Quality)

Revision as of 18:43, 21 July 2017

'''1'''. Quality factor, the ratio of 2π times the [[peak energy]] to the energy dissipated in a cycle; the ratio of 2π times the power stored to the power dissipated. The seismic ''Q'' of rocks is of the order of 50 to 300. ''Q'' is related to other measures of absorption (see below):

<center><math>\frac{1}{Q} = \frac{\alpha V}{\pi f} = \frac{\alpha \lambda}{\pi} = \frac{hT}{\pi} = \frac{\delta}{\pi} = \frac{2\Delta f}{f_\mathrm{r}} </math></center>

where ''V'', ''f'', ''λ'', and ''T'' are, respectively, velocity, frequency, wavelength, and period.<ref>Sheriff, R. E. and Geldart, L. P., 1995, Exploration Seismology, 2nd Ed., Cambridge Univ. Press.</ref> The [[Dictionary:absorption coefficient|absorption coefficient]] ''α'' is the term for the exponential decrease of amplitude with distance because of absorption; the amplitude of plane harmonic waves is often written as

<center><math>A\mathrm{e}^{-\alpha x} \sin 2 \pi f ( t - \tfrac{x}{V} ) </math></center>

where ''x'' is the distance traveled. The [[Dictionary:logarithmic decrement|logarithmic decrement]] ''δ'' is the natural log of the ratio of the amplitudes of two successive cycles. The last equation above relates ''Q'' to the sharpness of a resonance condition; ''f''<sub>r</sub> is the resonance frequency and <math>\Delta f</math> is the change in frequency that reduces the amplitude by <math>\frac{1}{\sqrt{2}}</math>. The [[Dictionary:damping factor|damping factor]] ''h'' relates to the decrease in amplitude with time,

<center><math>A(t) = A_0\mathrm{e}^{-ht} \cos \omega t \ </math></center>

[[File:Sega2.jpg|center|thumb|600px|[[Dictionary:Absorption|Absorption terminology]]. Sometimes this terminology is used for attenuation because of factors other than absorption. ''E'' = energy, <math>\Delta E</math> = energy lost in one cycle, <math>\lambda</math> = wavelength, ''f'' = frequency, ''x'' = distance, ''t'' = time, <math>\frac{A}{A_{0}} = \frac{\text {amplitude}}{\text {initial amplitude}}</math>, <math>\frac{A_{1}}{A_{2}} = \frac{\text {amplitude}}{\text {amplitude one cycle later}}</math>.<ref>Sheriff, R.E., 1989, ''Geophysical methods'', pg. 330: Prentice Hall Inc.</ref>]]

'''2'''. The ratio of the reactance of a circuit to the resistance.

'''3'''. A term to describe the sharpness of a [[Dictionary:filter|filter]]; the ratio of the midpoint frequency to the bandpass width (often at 3 dB).

'''4'''. A designation for ''[[Dictionary:Love wave|Love wave]]s'' (q.v.).

'''5'''. Symbol for the [[Dictionary:Koenigsberger_ratio_(Q)|''Koenigsberger ratio'']] (q.v.).

'''6'''. See Q-type section.

==See also==
* [[Dictionary:Attenuation|Attenuation]]
* [[Dictionary:Attenuation factor|Attenuation factor]]

==References==
{{reflist}}

==External links==
{{search}}
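The relations in definition 1 invert directly; for instance, 1/Q = 2Δf/f_r gives Q from resonance sharpness, and 1/Q = αλ/π gives Q from the absorption coefficient. A small Python sketch (function names are mine, not from the dictionary):

```python
import math

def q_from_resonance(f_r, delta_f):
    """Q from resonance sharpness: 1/Q = 2*delta_f/f_r, where delta_f is the
    frequency change that reduces the amplitude by 1/sqrt(2)."""
    return f_r / (2 * delta_f)

def q_from_absorption(alpha, wavelength):
    """Q from the absorption coefficient: 1/Q = alpha*lambda/pi."""
    return math.pi / (alpha * wavelength)

# Example: a resonance at 100 Hz whose amplitude drops by 1/sqrt(2)
# at +/- 1 Hz has Q = 50, within the 50-300 range quoted for rocks.
```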
Overview of calculus

In this lesson, we'll give a broad overview of single-variable calculus. Single-variable calculus is a toolkit for finding the slope of, or the area under, any function \(f(x)\) that is smooth and continuous. If the slope of \(f(x)\) is constant, we don't need calculus to find the slope or area; but when the slope of \(f(x)\) varies, we must use a tool called the derivative to find the slope and another tool called the integral to find the area.

Limits

Limits describe what one quantity approaches as some other quantity approaches a given value. This concept is the basis of calculus because it is used to define both derivatives and integrals. In this lesson, we'll try to wrap our minds around the notion of a limit and use it to define the derivative function.

In this lesson, we'll prove that \(\lim_{\theta\to 0}\frac{\sin\theta}{\theta}=1\). We'll prove this result using the squeeze theorem together with basic geometry, algebra, and trigonometry. In a future lesson, we'll learn why this result is important: knowing that \(\lim_{\theta\to 0}\frac{\sin\theta}{\theta}=1\) is required to find the derivatives of the sine and cosine functions. But we'll save that for a future lesson.

Derivatives

In previous lessons, we learned how the derivative \(f'(x)\) gives us the steepness at each point along a function \(f(x)\). In this lesson, we'll discuss how, using the concept of a partial derivative, we can find the steepness at each point along a surface \(z=f(x,y)\). To find a partial derivative, we treat one of the variables as a constant and then take the ordinary derivative of \(f(x,y)\).
Using this concept, we can specify how steep a surface \(f(x,y)\) is along the \(x\) direction and along the \(y\) direction at each point on the surface. In other words, at every point on the surface there is a steepness associated with both the \(x\) and the \(y\) directions at that point.

Optimization problems

Calculus—specifically, derivatives—can be used to find the values of \(x\) at which a function \(f(x)\) attains a minimum or maximum value. For example, suppose we let \(x\) denote the horizontal distance from the beginning of a hiking trail near a mountain and let \(f(x)\) denote the altitude of the mountainous terrain at each \(x\) value. \(f(x)\) reaches a minimum or maximum value where the function "flattens out"—that is, where \(f'(x)\) equals zero. These particular values of \(x\) are associated with the bottom and the top of the mountain. The condition \(f'(x)=0\) only tells us that \(f(x)\) is at either a minimum or a maximum; to determine which, we must use the concept of the second derivative. This will be the topic of discussion in this lesson.

Given that the perimeter \(2x+2y\) of a rectangle is held constant, we can use calculus to find the particular rectangle with the greatest area. The solution to this problem has practical applications. For example, suppose someone had only 30 meters of fencing to enclose their backyard and wanted to know which fencing layout would maximize the backyard's total area. Using calculus, we can answer such questions.

If \((x,y)\) represents any point on the circle, \(P\) is a point fixed at the coordinate point \((4,0)\), and \(d\) represents the distance between those two points, then, using only calculus, we can find the point \((x,y)\) on the circle associated with the minimum distance \(d\).
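The fencing problem above can be sketched numerically. This is a toy check in which the 30-meter perimeter comes from the backyard example and the grid search simply stands in for solving \(A'(x)=0\) by hand:

```python
# Fixed perimeter P = 2x + 2y gives y = P/2 - x, so the area is
# A(x) = x * (P/2 - x). Calculus: A'(x) = P/2 - 2x = 0 yields x = P/4,
# i.e. a square. A coarse grid search agrees with the analytic answer.
P = 30.0  # meters of fencing, from the backyard example

def area(x):
    return x * (P / 2 - x)

best_x = max((i / 1000 for i in range(int(P / 2) * 1000 + 1)), key=area)
print(best_x, area(best_x))  # x = 7.5 m (a 7.5 x 7.5 square), area 56.25 m^2
```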
The law of reflection had been well known as early as the first century, but it took more than another millennium to discover Snell's law, the law of refraction. The law of reflection was readily observable and could be easily determined by making measurements; this law states that if a light ray strikes a surface at an angle \(θ_i\) relative to the normal and gets reflected off of the surface, it will be reflected at an angle \(θ_r\) relative to the normal such that \(θ_i=θ_r\). The law of refraction, however, is a little less obvious, and it required calculus to prove. The mathematician Pierre de Fermat postulated the principle of least time: light travels along the path from one place to another for which the time \(t\) required to traverse that path is shorter than the time required by any other path. In this lesson, we shall use this principle to derive Snell's law.

Chain rule

Integrals

To find the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\) outside of that sphere, we must first subdivide that sphere into many very skinny shells and find the gravitational force exerted by any one of those shells on \(m\). We'll see, however, that finding the gravitational force exerted by such a shell is in and of itself a somewhat tedious exercise. In the end, we'll see that the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\) outside of the sphere (where \(D\) is the center-to-center separation distance between the sphere and the particle) is identical to the gravitational force exerted by a particle of mass \(M\) on the mass \(m\) at separation distance \(D\).

In previous lessons, we learned that by taking the integral of some function \(f(x)\) we can find the area underneath that curve by summing the areas of infinitely many, infinitesimally skinny rectangles.
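The skinny-rectangle picture in the last paragraph can be sketched directly. A minimal midpoint Riemann sum (the function \(x^2\) and the interval \([0,1]\) are illustrative choices of mine):

```python
# Approximate the area under f(x) = x^2 on [0, 1] by summing the areas
# of n skinny rectangles of width h, sampling heights at midpoints.
# The exact area is 1/3; the sum converges to it as n grows.
def riemann_area(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(riemann_area(lambda x: x * x, 0.0, 1.0, 1000))  # close to 0.3333...
```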
In this lesson, we'll use the concept of a double integral to find the volume underneath any smooth and continuous surface \(f(x,y)\) by summing the volumes of infinitely many, infinitesimally skinny columns.

In the previous lesson, we defined the concept of a line integral and derived a formula for calculating line integrals. We learned that a line integral gives the volume between a surface \(f(x,y)\) and a curve \(C\). In this lesson, we'll learn about some of the applications of line integrals for finding the volumes of solids and for calculating work. In particular, we'll use line integrals to calculate the volume of a cylinder, the work done by a proton on another proton moving in the presence of its electric field, and the work done by gravity on a swinging pendulum.

For a vector field \(\vec{F}(x,y)\) defined at each point \((x,y)\) within the region \(R\) and along the smooth, closed, piecewise curve \(c\) such that \(R\) is the region enclosed by \(c\), we shall derive a formula (known as Green's Theorem) which will allow us to calculate the line integral of \(\vec{F}(x,y)\) over the curve \(c\).

Solids of revolution

In this lesson, we'll use the concept of a definite integral to calculate the volume of a sphere. First, we'll find the volume of a hemisphere by taking the infinite sum of infinitesimally skinny cylinders enclosed inside of the hemisphere. Then we'll multiply our answer by two and we'll be done.

In this lesson, we'll discuss how, by using the concept of a definite integral, one can calculate the volume of something called an oblate spheroid. An oblate spheroid is essentially just a sphere which is compressed or stretched along one of its dimensions while leaving its other two dimensions unchanged. For example, the Earth is technically not a sphere—it is an oblate spheroid. To find the volume of an oblate spheroid, we'll start out by finding the volume of a paraboloid.
(If you cut an oblate spheroid in half, each of the two leftover pieces is approximately a paraboloid.) To do this, we'll draw \(n\) cylindrical shells inside of the paraboloid; by taking the Riemann sum of the volumes of the cylindrical shells, we can obtain an estimate of the volume enclosed inside of the paraboloid. If we then take the limit of this sum as the number of cylindrical shells approaches infinity and their volumes approach zero, we'll obtain a definite integral which gives the exact volume inside of the paraboloid. After computing this definite integral, we'll multiply the result by two to get the volume of the oblate spheroid.

Series

In this lesson, we'll derive Maclaurin/Taylor polynomials, which are used to approximate arbitrary functions which are smooth and continuous. More precisely, they are used to give a local approximation of such functions. We'll also derive Maclaurin/Taylor series, for which the approximation becomes exact.
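As a concrete sketch of how the polynomial approximations improve, here are partial sums of the Maclaurin series for \(e^x\) (the choice of \(e^x\) and the evaluation point are my own illustrative picks):

```python
import math

# Maclaurin polynomial of degree n for e^x: sum_{k=0}^{n} x^k / k!
# Higher n gives a better local approximation near x = 0, and the full
# series converges to exp(x) exactly.
def maclaurin_exp(x, n):
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 0.5
for n in (1, 2, 4, 8):
    print(n, maclaurin_exp(x, n), math.exp(x))
```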
Uma, S and Das, Puspendu Kumar (1996) Production of $I^*(^2P_{1/2})$ in the ultraviolet photodissociation of $\alpha$-branched alkyl iodides. In: The Journal of Chemical Physics, 104 (12). pp. 4470-4474.

Abstract

Photodissociation dynamics of a series of $\alpha$-branched alkyl iodides at excitation wavelengths of 222, 266, and ~305 nm has been investigated by measuring the quantum yield $(\phi^*)$ of $I^*(^2P_{1/2})$ production. $I^*$ is found to be the major product at 222 nm and 266 nm from methyl and ethyl iodides but not from the higher $\alpha$-branched homologs. On the contrary, $I(^2P_{3/2})$ is the major product at ~305 nm for all the iodides. Assuming that $I^*$ originates from the $^3Q_0$ state over the entire A-band, production of both I and $I^*$ in methyl and ethyl iodides at 222 and 266 nm is explained by invoking the curve-crossing mechanism in the upper state. The crossing probability (P) between the $^3Q_0$ and $^1Q_1$ surfaces for these two molecules has been estimated. At ~305 nm, simultaneous excitation to the $^3Q_0$ and $^3Q_1$ states remains a distinct possibility. For higher branched (i.e., i-propyl and t-butyl) alkyl iodides, the mechanism for $I^*$ production is qualitatively different from that of unbranched iodides. Coupling of $\alpha$-carbon bending vibrational modes with the C–I bond excitation, as well as the actual time spent in the excited state surfaces in i-propyl and t-butyl iodides, seem to be the reasons for altering the dynamics of dissociation drastically in comparison with that of methyl iodide.

Item Type: Journal Article
Additional Information: Copyright of this article belongs to American Institute of Physics.
Department/Centre: Division of Chemical Sciences > Inorganic & Physical Chemistry
Depositing User: Sumana K
Date Deposited: 05 Jan 2007
Last Modified: 19 Sep 2010 04:33
URI: http://eprints.iisc.ac.in/id/eprint/9328
Yes, there is a more efficient algorithm. Your algorithm can take exponential time. You can check whether there exists any match in $O(nm)$ time, where $n$ is the length of text and $m$ is the length of mask, and find all matches in $O(n^2m)$ time. I'll show two solutions, one using dynamic programming and one using graph search. You can pick whichever you find easier to understand. I don't know whether you can do even better. Dynamic programming Build an array $A[i,j]$, where $A[i,j]$ is true if some prefix of $\text{text}[i..n]$ matches $\text{mask}[j..m]$. There's a recursive formula for $A[i,j]$: $$\begin{align*}A[i,j] &= \text{True} \qquad &&\text{if } j=m+1\\A[i,j] &= A[i+1,j+1] \qquad &&\text{if } \text{mask}[j]=\text{text}[i]\\A[i,j] &= A[i,j+1] \lor \cdots \lor A[n,j+1] \qquad &&\text{if } \text{mask}[j]=*\\A[i,j] &= \text{False} \qquad &&\text{otherwise}\end{align*}$$ If you fill this in, in the usual way, you get a $O(nm^2)$ time algorithm. If you additionally keep track of $B[i,j] = A[i,j+1] \lor \cdots \lor A[n,j+1]$ and fill in entries in the right order, you get a $O(nm)$ time algorithm. Once you have filled in the matrix, you can find all substrings of text that match: each entry $A[i,1]$ that is true corresponds to one or more substrings of text that match (the substring starts at index $i$ of text). You can adapt the above algorithm to enumerate all those matching substrings by repeating the above computation once per possible ending place of the match. There may be even faster methods, using ideas from string matching and/or regular expression matching. Graph search Build a directed graph on $nm$ vertices. Each vertex is of the form $\langle i,j \rangle$, which we think of as corresponding to the problem of checking whether some prefix of $\text{text}[i..n]$ matches $\text{mask}[j..m]$. Now add the following edges: Add the edge $\langle i,j \rangle \to \langle i+1,j+1 \rangle$ if mask$[j]$ = text$[i]$. 
Add the edge $\langle i,j \rangle \to \langle i+1,j \rangle$ if mask$[j] = *$ (the star consumes text$[i]$), and the edge $\langle i,j \rangle \to \langle i,j+1 \rangle$ if mask$[j] = *$ (the star stops matching). Finally, mark each vertex $\langle i,m+1 \rangle$ as "accepting". This is a directed acyclic graph; it has no cycles. Now, for each $i$, find all accepting vertices that are reachable by some path starting at the vertex $\langle i,1 \rangle$. If $\langle i',m+1 \rangle$ is reachable from $\langle i,1 \rangle$, that means that the substring text$[i..i'-1]$ matches mask, so you can output this substring. This computation can be done in $O(nm)$ time per starting point using breadth-first search, for a total of $O(n^2m)$ time.
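A sketch of the dynamic program in Python (the function name is mine, and I use an equivalent reformulation of the `*` case: the star either consumes one character or stops, which replaces the suffix-OR and keeps the table fill at $O(nm)$):

```python
def match_starts(text, mask):
    """Return all i such that some prefix of text[i:] matches mask,
    where '*' matches any (possibly empty) run of characters.
    A[i][j] is True iff some prefix of text[i:] matches mask[j:]."""
    n, m = len(text), len(mask)
    A = [[False] * (m + 1) for _ in range(n + 1)]
    for i in range(n, -1, -1):
        for j in range(m, -1, -1):
            if j == m:
                A[i][j] = True                              # empty mask matches
            elif mask[j] == '*':
                # star stops here, or eats text[i] and stays on the same '*'
                A[i][j] = A[i][j + 1] or (i < n and A[i + 1][j])
            elif i < n and mask[j] == text[i]:
                A[i][j] = A[i + 1][j + 1]                   # literal match
    return [i for i in range(n) if A[i][0]]

print(match_starts("abcb", "a*b"))  # matches start only at index 0
```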
I recently asked (and then attempted to answer) a question about spontaneous symmetry breaking in the Heisenberg model: Spontaneous symmetry breaking in the Heisenberg model? The question and the conclusion I came to in the answer can be summarized as follows*:

Spontaneous symmetry breaking is when a ground state $|GS\rangle$ of the Hamiltonian does not possess the same symmetry as the Hamiltonian itself, and the reason we see spontaneously broken systems is due to imperfections (e.g. symmetry-breaking fields). Looking at the 1D Ising model, the ground states have either all spins up or all spins down. Thus we have a spontaneous breaking of the $Z_2$ symmetry of the Hamiltonian. That said, at any finite temperature: $$\lim_{h\rightarrow 0}\lim_{N\rightarrow \infty}\frac{1}{N} \sum_i\mathrm{Tr}(\rho_e \sigma_i)=0$$ where $\rho_e=e^{-\beta H}/\mathrm{Tr}(e^{-\beta H})$. I.e. the thermal average of the magnetization is zero in the limit of the symmetry-breaking field $h$ going to zero and the system size $N$ going to infinity. This means the symmetry breaking does not show at finite temperature. I.e. we appear to have the following:

The 1D Ising model does have spontaneous symmetry breaking. At any finite temperature the symmetry breaking is not manifest.

I have seen several sources (e.g. here; pg 1) state that (exact quote from linked source):

Ising model cannot have spontaneous symmetry breaking at finite temperature,...

I assume that this is an abuse of terminology and what is meant is that the spontaneous symmetry breaking does not manifest itself. I am not sure the way I am using the terminology in the above is correct and as such want to ask the following clarifying question:

If, for a system, there is spontaneous symmetry breaking of a continuous symmetry which is not manifest at temperature $T$, will the system have Goldstone modes at temperature $T$?

*If this is wrong please feel free to answer that question correctly.
**Sorry for the long rambling before the actual question - I am trying to prevent it being an XY problem.
Category Theory for Programmers

Chapter 14: Representable Functors

The youtube video of this chapter was very helpful.

Show that the hom-functors map identity morphisms in $C$ to corresponding identity functions in Set.

Let $C(a,-)$ be a hom-functor. We want to show that an identity morphism $\mathrm{Id}_x : x \to x$ in $C$ is sent to the identity function on the lifted space $C(a,x)$; specifically, for any element $x' \in C(a,x)$ we get back $x'$. This is pretty easy to see: since $x' : a \to x$ is a morphism in $C$ and lifting $\mathrm{Id}_x$ is done by postcomposition, we have $\mathrm{Id}_x \circ x' = x'$.

Show that Maybe is not representable.

So, to be representable, we need to pick an object $a \in C$ from which to create a natural transformation $H^a \to F$, where in this case $F$ is the Maybe functor. In particular, we need to pick some function $f : a \to x$ from which we can recreate some Maybe x. Similarly, we need to go the other way: given a Maybe x we can create some function $f : a \to x$. So, let's assume we have those natural transformations $\alpha$ and $\beta$. $\alpha$ needs to encapsulate $f$ entirely into a Maybe; however, there are two values it could take, Nothing or Just x'. $\beta$ needs to accept a Maybe and recreate our $f$. However, for $\alpha$ to be a function, it can only give one value when applied to $f$.

Is the Reader functor representable?

Reader a x is a function from $a \to x$. To be representable, we need to be able to encapsulate some function $f : a \to x$, and in this instance $\alpha$ and $\beta$ are just the identity natural transformations.

Using Stream, memoize a function that squares its argument.

A Stream is represented by $\mathbb{N}$, and so all we need to do is find an isomorphism between $\mathbb{N}$ and the domain $X$ of our function mapping $x \in X$ to $x^2$. Then all we need to do is use $X \to \mathbb{N}$ to tabulate our Stream. This is suitable for any countably infinite domain $X$.
Show that tabulate and index for Stream are indeed the inverse of each other.

instance Representable Stream where
  type Rep Stream = Integer
  tabulate f = Cons (f 0) (tabulate (f . (+1)))
  index (Cons b bs) n = if n == 0 then b else index bs (n - 1)

We need to do two things: show tabulate . index is an identity on Stream x, and show index . tabulate is an identity on Integer -> x.

Let s = Cons a0 (Cons a1 (Cons a2 (...))). To show this is the same as (tabulate . index) s, we start with the base case s = Cons a0 ((tabulate . index) (Cons a1 (...))). Rewriting and expanding we have:

tabulate (index s)
Cons (index s 0) (tabulate ((index s) . (+1)))
Cons (index (Cons a0 (...)) 0) (tabulate ((index s) . (+1)))
Cons a0 (tabulate (\n -> index s (n+1)))
Cons a0 (tabulate (\n -> index (Cons a0 (Cons a1 (...))) (n+1)))
Cons a0 (tabulate (\n -> if (n+1 == 0) then a0 else index (Cons a1 (...)) n))
Cons a0 (tabulate (\n -> index (Cons a1 (...)) n))
Cons a0 (tabulate (index (Cons a1 (...))))
Cons a0 ((tabulate . index) (Cons a1 (...)))

That was painful but proves our base case. Now the hypothesis is

Cons a0 (... (Cons aN ((tabulate . index) (Cons aN1 (...)))))

Using pretty much everything the same as above, it's not a stretch to see

Cons a0 (... (Cons aN (Cons aN1 ((tabulate . index) (Cons aN2 (...))))))

I don't want to go the other way, but it's basically the same game of just expanding definitions.

The functor Pair a = Pair a a is representable. Guess the type.

It can be represented by any type of cardinality 2, e.g., Bool.

tabulate f = Pair (f True) (f False)
index (Pair x y) b = if b then x else y
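The same Pair exercise can be sketched outside Haskell. In Python (naming is my own), Pair is represented by a Boolean index, and tabulate/index are mutually inverse:

```python
# Pair a ~ functions Bool -> a: tabulate converts a function into the
# pair of its two values, and index looks one value back up.
def tabulate(f):
    return (f(True), f(False))

def index(pair, b):
    x, y = pair
    return x if b else y

f = lambda b: "yes" if b else "no"
p = tabulate(f)
assert all(index(p, b) == f(b) for b in (True, False))  # index . tabulate = id
assert tabulate(lambda b: index(p, b)) == p             # tabulate . index = id
```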
From the outside looking in: what can Milky Way analogues tell us about the star formation rate of our own galaxy? Abstract The Milky Way has been described as an anaemic spiral, but is its star formation rate (SFR) unusually low when compared to its peers? To answer this question, we define a sample of Milky Way analogues (MWAs) based on stringent cuts on the best literature estimates of non-transient structural features for the Milky Way. This selection yields only 176 galaxies from the whole of the SDSS DR7 spectroscopic sample which have morphological classifications in Galaxy Zoo 2, from which we infer SFRs from two separate indicators. The mean SFRs found are $$\log (\rm {SFR}_{SED}/\rm {M}_{\odot }~\rm {yr}^{-1})=0.53$$ with a standard deviation of 0.23 dex from SED fits, and $$\log (\rm {SFR}_{W4}/\rm {M}_{\odot }~\rm {yr}^{-1})=0.68$$ with a standard deviation of 0.41 dex from a mid-infrared calibration. The most recent estimate for the Milky Way’s SFR of $$\log (\rm {SFR}_{MW}/\rm {M}_{\odot }~\rm {yr}^{-1})=0.22$$ fits well within 2$$\sigma$$ of these values, where $$\sigma$$ is the standard deviation of each of the SFR indicator distributions. We infer that the Milky Way, while being a galaxy with a somewhat low SFR, is not unusual when compared to similar galaxies. Authors: School of Physics & Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD, UK Publication Date: Sponsoring Org.: USDOE OSTI Identifier: 1566187 Resource Type: Published Article Journal Name: Monthly Notices of the Royal Astronomical Society Additional Journal Information: Journal Name: Monthly Notices of the Royal Astronomical Society Journal Volume: 489 Journal Issue: 4; Journal ID: ISSN 0035-8711 Publisher: Oxford University Press Country of Publication: United Kingdom Language: English Citation Formats Fraser-McKelvie, Amelia, Merrifield, Michael, and Aragón-Salamanca, Alfonso.
From the outside looking in: what can Milky Way analogues tell us about the star formation rate of our own galaxy?. United Kingdom: N. p., 2019. Web. doi:10.1093/mnras/stz2493. Fraser-McKelvie, Amelia, Merrifield, Michael, & Aragón-Salamanca, Alfonso. From the outside looking in: what can Milky Way analogues tell us about the star formation rate of our own galaxy?. United Kingdom. doi:10.1093/mnras/stz2493. Fraser-McKelvie, Amelia, Merrifield, Michael, and Aragón-Salamanca, Alfonso. Tue . "From the outside looking in: what can Milky Way analogues tell us about the star formation rate of our own galaxy?". United Kingdom. doi:10.1093/mnras/stz2493. @article{osti_1566187, title = {From the outside looking in: what can Milky Way analogues tell us about the star formation rate of our own galaxy?}, author = {Fraser-McKelvie, Amelia and Merrifield, Michael and Aragón-Salamanca, Alfonso}, abstractNote = {ABSTRACT The Milky Way has been described as an anaemic spiral, but is its star formation rate (SFR) unusually low when compared to its peers? To answer this question, we define a sample of Milky Way analogues (MWAs) based on stringent cuts on the best literature estimates of non-transient structural features for the Milky Way. This selection yields only 176 galaxies from the whole of the SDSS DR7 spectroscopic sample which have morphological classifications in Galaxy Zoo 2, from which we infer SFRs from two separate indicators. The mean SFRs found are $\log (\rm {SFR}_{SED}/\rm {M}_{\odot }~\rm {yr}^{-1})=0.53$ with a standard deviation of 0.23 dex from SED fits, and $\log (\rm {SFR}_{W4}/\rm {M}_{\odot }~\rm {yr}^{-1})=0.68$ with a standard deviation of 0.41 dex from a mid-infrared calibration. The most recent estimate for the Milky Way’s SFR of $\log (\rm {SFR}_{MW}/\rm {M}_{\odot }~\rm {yr}^{-1})=0.22$ fits well within 2$\sigma$ of these values, where $\sigma$ is the standard deviation of each of the SFR indicator distributions. 
We infer that the Milky Way, while being a galaxy with a somewhat low SFR, is not unusual when compared to similar galaxies.}, doi = {10.1093/mnras/stz2493}, journal = {Monthly Notices of the Royal Astronomical Society}, number = 4, volume = 489, place = {United Kingdom}, year = {2019}, month = {9} }
I read somewhere the following sentence: the homotopy type of an aspherical manifold is determined by its fundamental group. Recall that $M$ is called aspherical if $\pi_n(M)=0$ for $n>1$. By Whitehead's theorem we know if $f\colon X \to Y$ is a mapping between spaces having a homotopy type of a CW-complex and $f$ induces isomorphisms on homotopy groups, then $f$ is a homotopy equivalence. Since every smooth manifold has a homotopy type of a CW-complex we can use this theorem: however if $N$ is another CW-complex (up to homotopy) and $N$ has the same homotopy groups as $M$ (where $M$ is aspherical) why do we know that there is a single mapping $f\colon M \to N$ inducing an isomorphism on homotopy groups? The thing you're stating is not really about manifolds. An Eilenberg MacLane space $K(G,n)$ is a space $X$ with $\pi_n X = G$ and all other homotopy groups equal to zero. These are unique up to weak homotopy equivalence (hence, by Whitehead, CW-complex Eilenberg MacLane spaces are unique up to homotopy equivalence). In fact, there exists a unique continuous map $K(G,n) \to K(H,n)$ up to homotopy that induces a given homomorphism on the level of $\pi_n$. The proof is, essentially, to just do what you want on the 1-skeleton and check that the 2-skeleton doesn't get mad when you do so. An actual proof is given in Hatcher, 1B.9, in the case $n=1$, and 4.30 for the general case. The thing you might be thinking about in the case of manifolds is even cooler: the Borel conjecture says that if $M, N$ are closed aspherical manifolds with isomorphic fundamental groups, there is a homeomorphism $M \to N$ inducing any given isomorphism on the fundamental groups. This seems likely to be true, and it's known for a wide class of manifolds, including manifolds of dimension $n \leq 3$ (by results of Waldhausen and geometrization) and all hyperbolic manifolds (by the even stronger Mostow rigidity).
I was watching a set of lectures on effective field theory and the lecturer said that you can always integrate the covariant derivative by parts due to gauge symmetry. For example, if I understand correctly, we can write: \begin{equation} \int {\cal D} \phi \exp \left\{ i \int d ^4 x D _\mu \phi D ^\mu \phi + \, ...\right\} = \int {\cal D} \phi \exp \left\{ i \int d ^4 x - \phi D ^2 \phi + \,...\right\} \end{equation} where $D_\mu = \partial_\mu + i g T ^a G_{a, \mu} $. This would be obvious if we didn't have the gauge boson contribution, but why does integration by parts hold for the covariant derivative?

The Leibniz rule holds for covariant derivatives, both in gauge theories and in gravity. Mathematically, a derivation is a map for which the Leibniz rule holds. How does it work for non-abelian covariant derivatives? I will give you an example. Let $\Phi^\dagger \Phi$ be invariant under local non-abelian gauge transformations. Then $$ \partial_\mu (\Phi^\dagger \Phi) = (D_\mu\Phi)^\dagger \Phi + \Phi^\dagger (D_\mu\Phi) = [D_\mu(\Phi^\dagger)]\ \Phi + \Phi^\dagger (D_\mu\Phi) $$ The term on the left-hand side is the ordinary derivative, as the combination is gauge invariant. By construction, it is easy to see that each term on the right-hand side is invariant under local gauge transformations. You need to further show that the terms linear in the gauge field cancel. That more or less follows from representation theory. It is also easy to work out the rules for integrating by parts using the above formula. For your Lagrangian, you would consider the ordinary derivative of the scalar, i.e., $\partial_\mu (\Phi^\dagger D^\mu \Phi)$. Remarks: The ordinary derivative satisfies an additional property, i.e., $\partial_\mu\partial_\nu\ f(x) = \partial_\nu \partial_\mu f(x)$ for all smooth functions. This is not assumed for all derivations. In fact, the obstruction to commutativity of the derivatives defines the field strength. $$ [D_\mu,D_\nu]\ \Phi \sim g\ (F_{\mu\nu}^a T_a)\ \Phi\ .
$$ In supersymmetry, one uses a graded version of the Leibniz rule for (covariant) derivatives involving Grassmann coordinates.
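Putting the Leibniz rule above to work, the integration-by-parts identity from the question follows by integrating a total derivative over spacetime and discarding the surface term (assuming the fields vanish at infinity):

```latex
0 = \int d^4x\, \partial_\mu\left(\Phi^\dagger D^\mu \Phi\right)
  = \int d^4x\, \left[(D_\mu \Phi)^\dagger D^\mu \Phi
      + \Phi^\dagger D_\mu D^\mu \Phi\right]
\;\Longrightarrow\;
\int d^4x\, (D_\mu \Phi)^\dagger D^\mu \Phi
  = -\int d^4x\, \Phi^\dagger D^2 \Phi .
```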
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states

If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2−b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$

I am having a tough time visualising what this is?

Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; is the point $z = 0$ (a) a removable singularity, (b) a pole, (c) an essential singularity, or (d) a non-isolated singularity? Since $\cos(1/z) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!...

I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...

No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...

The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?

mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it

Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."

That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above)

In other news:

> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999

probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s

But I think that to prove the implication for transitivity the inference rule and use of MP seem to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or only on the FOL axioms (without equality axioms). This would allow us in some cases to define an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality.

Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.

@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.

@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.

Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdots,a_{n-1}$ to be zero because by triangul...

@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?

Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
Why does the derivative of the Fermi-Dirac distribution function at absolute zero temperature become the negative of a Dirac delta function? The Fermi-Dirac distribution function is \begin{equation} f_{0}(E)=\frac{1}{e^{\frac{E-E_{F}}{k_{B}T}}+1}. \end{equation} As $T\rightarrow0$, the Fermi-Dirac distribution becomes a step function \begin{equation} f_{0}(E)=\Theta({E}-E_{F}), \end{equation} and \begin{equation} \frac{\partial f_{0}}{\partial E}=\frac{-2}{\left(2\pi\hbar\right)^{3}} \delta\left(E-E_{F}\right). \end{equation} How can we get this, and why does the derivative become negative? Should it not be \begin{equation} \frac{\partial f_{0}}{\partial E}=\frac{2}{\left(2\pi\hbar\right)^{3}} \delta\left(E-E_{F}\right)? \end{equation} The Fermi-Dirac distribution is $$ f_T(E)=\frac{1}{\exp\left(\frac{E-E_F}{k_{\text B}T}\right)+1}$$ When $T\to0$, the denominator of $\frac{E-E_F}{k_{\text B}T}$ goes to zero and this ratio goes to $+\infty$ if $E>E_F$ and to $-\infty$ if $E<E_F$. Therefore the exponential $\exp\left(\frac{E-E_F}{k_{\text B}T}\right)$ goes to $0$ if $E<E_F$ and to $+\infty$ if $E>E_F$. Thus the distribution $f_0$ goes to $1$ if $E<E_F$ and to $0$ if $E>E_F$. This is expressed by the limiting expression $$f_0=\Theta(E_F-E)$$ (and not $\Theta(E-E_F)$ as you wrote). So the distribution $f_0$ is constant except at the point $E=E_F$, where it decreases from $1$ to $0$. That is why the derivative is negative. By the way, we have $\frac{\partial f_0}{\partial E} =-\delta(E_F-E)$.
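To make the answer above concrete, here is a small numeric check (pure Python; the values $E_F = 0$ and $k_BT = 0.01$ are illustrative choices of mine): the derivative of the Fermi-Dirac function is nowhere positive, and its integral tends to $-1$, exactly what $-\delta(E-E_F)$ gives.

```python
import math

def fermi(E, EF=0.0, kT=0.01):
    """Fermi-Dirac occupation 1 / (exp((E - EF)/kT) + 1), guarded against overflow."""
    z = (E - EF) / kT
    if z > 500.0:
        return 0.0
    if z < -500.0:
        return 1.0
    return 1.0 / (math.exp(z) + 1.0)

h = 1e-3
grid = [i * h for i in range(-5000, 5001)]          # E from -5 to 5
deriv = [(fermi(E + h) - fermi(E - h)) / (2 * h) for E in grid]

# f is monotonically decreasing, so its derivative is nowhere positive ...
assert all(d <= 0.0 for d in deriv)

# ... and the integral of the derivative is f(5) - f(-5) ≈ 0 - 1 = -1,
# consistent with df/dE -> -delta(E - E_F) as T -> 0.
integral = sum(d * h for d in deriv)
assert abs(integral + 1.0) < 1e-3
```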
Section 29.4 "Duality" of CLRS (3rd Edition) describes the way of reading off an optimal dual solution from the last slack form of the primal as follows: Suppose that the last slack form of the primal is $$ \begin{align} z &= v' + \sum_{j \in N} c'_j x_j \\ x_i &= b'_i - \sum_{j \in N} a'_{ij} x_j, \; i \in B. \end{align} $$ Then, to produce an optimal dual solution, we set $$ \overline{y_i} = \begin{cases} - c'_{n+i} & \text{if } (n + i) \in N, \\ 0 & \text{otherwise}. \end{cases} $$ I am able to follow the proof of a later Theorem (Theorem 29.10: LP Duality) to convince myself that this $\overline{y_i}$ is indeed an optimal dual solution. However, what is the intuition behind the way the optimal dual solution is constructed? I notice that each non-zero $\overline{y_i}$ corresponds to a tight constraint in the optimal primal solution. Is this fact helpful to understand the optimal dual solution?
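The duality relationship itself is easy to check numerically. Below is a self-contained sketch (my own toy LP, not an example from CLRS) that enumerates the vertices of a tiny two-variable primal and its dual and confirms that the optima coincide, as Theorem 29.10 (LP duality) guarantees:

```python
from itertools import combinations

def best_vertex(c, cons):
    """Maximize c.x over {x : a1*x1 + a2*x2 <= rhs for each row of cons} by
    enumerating vertices (pairwise intersections of constraint lines).
    Suitable only for tiny 2-variable LPs with a bounded optimum."""
    best = None
    for p, q in combinations(cons, 2):
        det = p[0] * q[1] - p[1] * q[0]
        if abs(det) < 1e-12:
            continue                      # parallel lines, no vertex
        x = (p[2] * q[1] - p[1] * q[2]) / det
        y = (p[0] * q[2] - p[2] * q[0]) / det
        if all(r[0] * x + r[1] * y <= r[2] + 1e-9 for r in cons):
            val = c[0] * x + c[1] * y
            best = val if best is None else max(best, val)
    return best

# primal: maximize 3*x1 + 2*x2  s.t.  x1 + x2 <= 4,  x1 <= 2,  x >= 0
primal_cons = [[1, 1, 4], [1, 0, 2], [-1, 0, 0], [0, -1, 0]]
primal_opt = best_vertex([3, 2], primal_cons)

# dual: minimize 4*y1 + 2*y2  s.t.  y1 + y2 >= 3,  y1 >= 2,  y >= 0
# (written as maximization of the negated objective with <= constraints)
dual_cons = [[-1, -1, -3], [-1, 0, -2], [-1, 0, 0], [0, -1, 0]]
dual_opt = -best_vertex([-4, -2], dual_cons)

# strong duality: both optima equal 10 (at x = (2, 2) and y = (2, 1))
assert abs(primal_opt - 10.0) < 1e-6
assert abs(dual_opt - primal_opt) < 1e-6
```

Note how the optimal dual solution $y=(2,1)$ makes both primal constraints tight, in line with the observation above that each non-zero $\overline{y_i}$ corresponds to a tight primal constraint.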
The short answer is yes. The longer answer is no. The short answer is yes: that's a fundamental computation operation and it's pretty much the definition of a function. The equivalence between

    def f(x):
        do something with x
    f(foo)

and

    do something with foo

is in fact the definition of a function, or more precisely of function application. This is so fundamental that it's the basis of the lambda calculus. In the lambda calculus, there are just three syntactic constructs: variables ($x$, $y$, …); applying a function to an argument ($F X$ where $F$ is a function and $X$ is the argument); lambda abstraction $\lambda x. M$ where $x$ is the parameter name and $M$ is the function body. The lambda calculus gets its name from that $\lambda$ notation, and it's where Python got lambda. In the lambda calculus, there is a single computation rule, called beta conversion (beta reduction when done from left to right, beta expansion from right to left):$$ (\lambda x. M) N \equiv M[x \leftarrow N] $$where $M[x \leftarrow N]$ means to replace $x$ by $N$ in $M$. (Details omitted because it would take a book chapter or two.) That single rule is enough to express all possible computations, in the sense that the lambda calculus is Turing-complete. Beta conversion can be done in any language that has something that can reasonably be called a function. But you need to take care of the details, and there are some language features that require additional effort or make it impossible in certain cases. Pretty much any language that isn't purely functional has restrictions on when beta expansion is correct. In any language, the lines that you move to the new function must form a syntactic block. For example, if you have lines that are part of a multi-line construct like

    while condition():
        instruction1()
        instruction2()

then you can move the loop out as a whole, but you can't move out

    while condition():
        instruction1()

and keep instruction2() in place.
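A minimal Python illustration of that equivalence (with a made-up computation standing in for "do something"):

```python
foo = 21

# the computation written inline ...
inline_result = foo * 2 + 1

# ... and the same computation extracted into a function ("beta expansion");
# calling f(foo) beta-reduces back to the inline expression
def f(x):
    return x * 2 + 1

assert f(foo) == inline_result == 43
```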
One superficial but easily understood feature where beta conversion changes the behavior is introspection features. For example, if the language exposes a way to identify the current function or the function call stack trace, such as traceback.extract_stack() in Python, beta conversion changes that trace. If you put a call to traceback.extract_stack in a new auxiliary function, it's going to return something different. To make a beta-expansion that preserves the behavior, you'd need to modify calls to traceback.extract_stack to remove the new function from the trace. Note that this includes calls that may be deeply nested (if a function called inside the moved code calls a function that calls a function that … that calls traceback.extract_stack), so doing a fully behavior-preserving beta expansion turns into a global program transformation. Another introspection feature of Python that breaks beta conversion is that it exposes local variables through locals(): locals()['x'] evaluates to the same value as x. If the code that you move calls locals(), you also need to pass the variables accessed through locals() as arguments to the new function and return their new values. So it isn't a purely syntactic transformation anymore. A more interesting interaction is with flow control features. If the instructions that you put in the new function have self-contained flow, meaning that they're executed by starting at the top and either finishing at the bottom or raising an exception, then beta expansion or beta conversion doesn't change anything. It's ok if the code has loops and function calls inside it. But if the block of code that you move contains a non-local exit, i.e. an instruction that makes the execution jump outside that block of code such as return or break, you can't just move it. Likewise if the block contains a jump target (in imperative languages that have goto). 
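The stack-trace point is easy to demonstrate; here is a small sketch using `traceback.extract_stack` (the function names are mine):

```python
import traceback

def depth_inline():
    # inspect the call stack directly at this point
    return len(traceback.extract_stack())

def helper():
    return len(traceback.extract_stack())

def depth_extracted():
    # the same inspection, but moved into an auxiliary function
    return helper()

# Both are called from the same place, yet the extracted version sees one
# extra stack frame: the transformation is observable through introspection.
assert depth_extracted() == depth_inline() + 1
```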
It's possible to get around this with a local transformation: make the auxiliary function take one more argument which indicates the entry point (if there's a way to jump into the middle of the code), and one more return value which indicates where to exit to (if there's a way to jump out to a place other than the end of the code block). For example:

    def outer_function(x):
        if x == 1:
            return 2   #
        else:
            x = x - 1  #
        return x

If you want to extract the two lines marked with # on the right into a function, you need to remember whether to return the 2 or continue on to return x.

    def new_auxiliary_function(x):
        if x == 1:
            return "RETURN", 2
        else:
            x = x - 1
            return "FALLTHROUGH", x

    def new_outer_function(x):
        tmp = new_auxiliary_function(x)       #
        if tmp[0] == "RETURN": return tmp[1]  #
        x = tmp[1]                            #
        return x

Your transformation also changes exactly when variables are modified. This can become an issue due to aliasing. Aliasing is not normally an issue in Python since there's no way for a variable to designate another variable, as opposed to designating the same object as another variable. I wouldn't swear that it's never an issue, but I can't think of a way to do it. So instead I'll give an example in C, where aliasing is common due to pointers.

    int x = 3;
    int *p = &x;
    *p = 2;             //
    printf("%d\n", x);  //

This code prints 2, since the pointer p points to x and the line *p = 2 therefore sets x to 2. Now let's create an auxiliary function for the part marked with //. Since C can't create compound values on the fly, we need to define a structure type for the return values, but that's just a cosmetic change compared with Python.

    typedef struct {
        int x;
        int *p;
    } values;

    values new_function(int x, int *p) {
        *p = 2;
        printf("%d\n", x);
        values result = { x, p };
        return result;
    }

    …
    int x = 3;
    int *p = &x;
    values tmp = new_function(x, p);
    x = tmp.x;
    p = tmp.p;

The line *p = 2 in new_function sets the outer variable x to 2, since that's where p points to. It does not change the variable x that is inside new_function.
Therefore this program prints 3. When a compiler performs a beta reduction, it's called inlining. This is a common optimization, which typically makes the program run faster at the expense of larger code size. Compilers much more rarely do beta expansions. It's a worthwhile optimization when the same block of code (or more generally similar-enough blocks) appears more than once and code size is more important than execution speed, but it's difficult to detect worthwhile cases. Both transformations have limitations as to exactly when they're correct.
I'm trying to come up with a context-free grammar for the following language: $$L = \{a^mb^nc^{m+n}\mid 0 \le n \le m\}$$ My thinking is that I can rewrite this to $$L = \{a^mb^nc^nc^m\mid 0 \le n \le m\}$$ and then create the grammar $$ \begin{align*} S &\rightarrow aSc \mid B\\ B &\rightarrow bBc \mid \epsilon \end{align*} $$ The problem is that this grammar doesn't capture $0 \le n \le m$ and I can't figure out how to do this. Any help would be greatly appreciated.
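Not an answer, but a brute-force sanity check (helper names are mine) makes the over-generation concrete: the proposed grammar derives strings such as $bc$ ($m=0$, $n=1$) that violate $n \le m$.

```python
def in_L(w):
    """Membership test for L = { a^m b^n c^(m+n) : 0 <= n <= m }."""
    m = len(w) - len(w.lstrip('a'))
    rest = w[m:]
    n = len(rest) - len(rest.lstrip('b'))
    cs = rest[n:]
    return set(cs) <= {'c'} and len(cs) == m + n and n <= m

def grammar_language(max_len):
    """Strings of S -> aSc | B, B -> bBc | eps: a^m b^n c^(m+n) for ALL m, n."""
    out = set()
    for m in range(max_len // 2 + 1):
        for n in range((max_len - 2 * m) // 2 + 1):
            out.add('a' * m + 'b' * n + 'c' * (m + n))
    return out

overgenerated = sorted(w for w in grammar_language(8) if not in_L(w))
assert 'bc' in overgenerated      # n = 1 > m = 0, so the constraint n <= m fails
```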
I am having trouble understanding the connection between equilibrium and non-equilibrium thermodynamics. I am studying a mixture of molecules $A,B,C$ and solvent $S$. The free energy $F$ is given by the functional $ F[\{\phi_i\}_i]=\int \left[ f(\{\phi_i(x)\}_{i})+\sum_i \gamma_i(\nabla\phi_i)^2 \right] dx $ with $\phi_i$ the concentrations, $f$ the free energy density, and $\gamma_i$ the surface tension coefficients. The reversible chemical reaction $A+B\Leftrightarrow_{k_2}^{k_1} C$ takes place. My goal is to write the equations of motion for the concentrations. In a paper I find for the first concentration $\phi_A$: $ \frac{\partial \phi_A}{\partial t}=-\nabla\cdot \mathbf J_A - k_1 \phi_A\phi_B + k_2 \phi_C $ with $ \mathbf J_i=-\sum_j M_{ij}\frac{\delta F}{\delta \phi_j}$ I don't get the first term $-\nabla\cdot \mathbf J_A$. Apparently $\mathbf J_A$ is a vector (bold font), but why? I don't see it from the definition above. I roughly understand that this is the contribution of the chemical potential gradient, but I wish to understand how to derive it. I don't know where to look, and so far I haven't found it in the non-equilibrium books I checked. I'd be happy if you could explain or direct me toward a book.
Since we don't know the volume of the ball, we may approximate it as a point particle. Some mention has been made of the friction between the ground and the ball - all I will say is that we should ignore it because a) the moment of inertia (or some way to find it) isn't specified, and b) if the ball did start spinning, the friction would become static, not kinetic. Still a doable problem, but you'd need the moment of inertia. $$\begin{eqnarray}U_0 &=& \frac{m \dot x_0^2}{2} + m g h = 1490\text{ J}\\U_n &=& \frac{m \dot x_0^2}{2} + m g h - n\end{eqnarray}$$ Where $m$ is the ball's mass, $\dot x_0$ is the initial (horizontal) velocity, $g$ is the local surface gravity (assumed to be $9.8 \frac{\text m}{\text s^2}$), $h$ is the initial height, and $n$ is the number of bounces and has units of Joules. Obviously, we must devise a sum over the energy from 1489 to 1 and add the duration of the first half parabola. Even though the collision isn't perfectly elastic, I can think of no preference for a shallower or steeper angle after the bounce. Incidentally, that angle is $35^\circ$, and upon reflection, the ratio $\frac{\dot y_n}{\dot x_n} = \frac{\sqrt{2 g h}}{\dot x_0} = \eta \approx \frac{7}{10}$ would then be constant. The duration of the $n$th bounce is $t = \frac{2 \dot y_n}{g}$ (from kinematics), so all we must do is put $\dot y_n$ in terms of $U_n$. When the ball hits the ground, it only has kinetic energy.
$$\begin{eqnarray}U_n &=& \frac{m(\dot x_n^2 + \dot y_n^2)}{2} \\U_n &=& \frac{m(\frac{\dot y_n^2}{\eta^2} + \dot y_n^2)}{2} \\U_n &=& \frac{m\dot y_n^2}{2}\left( 1 + \frac{1}{\eta^2}\right) \\\dot y_n &=& \sqrt{\frac{2 U_n}{m \left(1 + \frac{1}{\eta^2} \right)}} \end{eqnarray}$$ Doing a little algebra and writing the sum, $$t_{total} = \sqrt{\frac{2 h}{g}} + \frac{2 \eta}{g} \sqrt{\frac{2}{m\left(1 + \eta^2\right)}} \sum_{n=1489}^1 \sqrt{U_n} \approx 1471 \text s$$ If you were writing a test, the sum of square roots is a formidable problem, but you can avoid that (using the apocryphal Gauss trick) if you ask for the distance traveled. I leave it as a (quite fun) exercise for the reader.
The point of problems like these is not that a long expression can be reduced to a short one. Instead, the idea is that expressions need to be well-defined, i.e., non-ambiguous. To attain such a goal, one introduces the notion of an "order of operations"; for you, this means BODMAS, while those in the U.S. are probably more familiar with PEMDAS. One way to get this point across would be to present a single expression and talk about different ways you could interpret it (i.e., depending on the order in which you carry out the various operations). You could then enter the expression into a calculator several times. Note that the calculator gives you the same answer every time, and ask the student if he thinks the calculator will ever give a different answer. It won't. Why? Because the calculator is programmed to follow a particular order of operations; that way, two people with the same computation to carry out will not somehow end up with different results (assuming that they don't make any errors of their own). Just as there is an order of operations implemented in the calculator, there is one to which we subscribe, as well. But the reasoning is the same: to eliminate ambiguity. You might go further and ask him about different values an expression can take on depending on where you put the brackets (or, as those in the U.S. would say, parentheses). Even for a single operation: Does it matter where the brackets go? This can lead naturally to a discussion of binary operations for which associativity holds. For example, $a+(b+c) = (a+b)+c$ for any $a,b,c \in \mathbb{Z}$; that is to say, for the set of integers, addition is an associative binary operation. But what happens when the operation is subtraction rather than addition? Similarly, $a\times(b\times c) = (a \times b)\times c$ for any $a,b,c \in \mathbb{Z}$; that is to say, for the set of integers, multiplication is an associative binary operation. But what about division? 
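The associativity contrast is easy to demonstrate concretely; a short check with integers of my choosing:

```python
a, b, c = 10, 5, 2

# addition and multiplication are associative on the integers ...
assert (a + b) + c == a + (b + c)
assert (a * b) * c == a * (b * c)

# ... but subtraction and division are not: bracket placement matters
assert (a - b) - c == 3 and a - (b - c) == 7
assert (a / b) / c == 1.0 and a / (b / c) == 4.0
```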
As to the student's astute observation: Expressions can be written in many ways. For example, the relatively "short" expression $3^{3^{3}}$ has only three numbers, but is equal to $7,625,597,484,987$. (If you raise it to $3$ once more, then you get an obscenely large number.) Meanwhile, you could write $1 - 1$ a bunch of times with a $+$ in between each of them; the sum would then be $0$. So here we could have a "long" expression that, when evaluated, gives a short answer. The salient point is that a well-defined expression for your purpose refers to exactly one number. There are many equivalent ways to write a single expression; some short, some long. In fact, our decimal notation is really a way of abbreviating, so that, e.g., $365 = (3\times 100) + (6 \times 10) + (5 \times 1)$. In fact, I could have written the previous expression without brackets, given that BODMAS makes it unambiguous. As a final note: Students are often asked to "simplify" expressions. The idea of such exercises is to remove ambiguity, and, the hope is, it will be clear for any particular class and teacher what constitutes a correct simplification. For example, many teachers ask that students "rationalize denominators" and re-write something like $\displaystyle \frac{1}{\sqrt{2}}$ as $\displaystyle \frac{\sqrt{2}}{2}$, thereby ensuring the denominator is an integer. Generalizing far beyond your student's query, one might ask whether this notion of simplifying is itself unambiguous. You can find more on this question in a MathOverflow post here, though I mention this more for the curious reader than as a matter of responding to your student.
I am currently reading some notes in QFT and I came across the conserved current equation for a complex scalar field that has a transformation given by $$ \hat{\phi}\longrightarrow\hat{\phi}+i\theta\hat{\phi} $$ The notes said that the current is: $$ J^{\mu}=\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi_a)}\delta\phi_a=i\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\delta\phi-i\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi^\dagger)}\delta\phi^\dagger $$ but I thought that $\delta\hat{\phi}=i\theta\hat{\phi}$, so shouldn't there be a factor of $\theta$ in all terms? In the complex scalar field Lagrangian, you have two different dynamical fields: $\psi$ and $\psi^{\dagger}$. The conserved Noether current in this case is defined as $J^{\mu} = \sum_{i = 1}^2 \Pi^{\mu}_i \Delta \psi_i$, where the indices 1 and 2 denote the two different fields. The symmetry you want is $\psi \rightarrow \psi e^{i\theta}$, and therefore $\psi^{\dagger} \rightarrow \psi^{\dagger} e^{-i\theta}$. If you define $\delta\psi_i \equiv \Delta \psi_i \,\delta \theta$, you get that $\Delta \psi = i \psi$ and $\Delta \psi^{\dagger} = -i \psi^{\dagger}$, and you remove the $\delta \theta$ from the definition of the current. Replacing this into the current gives you $$J^{\mu} = i(\Pi^{\mu}_{\psi^{\dagger}}\psi - \Pi^{\mu}_{\psi}\psi^{\dagger})$$ which is what you obtained. The factor $\delta \theta$ (with $\theta$ a continuous parameter) is not included in the definition of the conserved current because it is just a variation of a parameter and it is irrelevant in the conservation equation $\partial_{\mu}J^{\mu} = 0$. If you were to include it (using $\delta \psi$ instead of $\Delta \psi$ in the definition of $J^{\mu}$), you would get $(\partial_{\mu}J^{\mu})\delta\theta = 0$, which just implies $\partial_{\mu}J^{\mu} = 0$ for non-zero variations. So you just eliminate $\delta \theta$ from the definition of $J^{\mu}$ because it's irrelevant to the physics.
The definitions in your question are somewhat ambiguous. Remember that $\delta \psi = \frac{\partial \psi}{\partial \theta} \delta \theta$, so you could never have factors of $\theta$ in the definition ($\frac{\partial}{\partial \theta} (e^{i \theta}) = ie^{i \theta}$, NOT $i \theta e^{i \theta}$), only factors of $\delta \theta$.
Makoto Yamashita, Ochanomizu University, will give a talk with title: Drinfeld center and representation theory for monoidal categories Abstract: Motivated by the recently found relation between central completely positive multipliers and the spherical unitary representations of the Drinfeld double for discrete quantum groups, we construct and analyze the representations of the fusion algebra of a rigid C*-tensor category from unitary half-braidings. Through the correspondence of the Drinfeld center and the generalized Longo-Rehren construction in subfactor theory, these representations are also related to Popa’s theory of correspondences and subfactors. This talk is based on joint work with Sergey Neshveyev. Christian Voigt (Glasgow) will give a talk with title: The structure of quantum permutation groups Abstract: Quantum permutation groups, introduced by Wang, are a quantum analogue of permutation groups. These quantum groups have a surprisingly rich structure, and they appear naturally in a variety of contexts, including combinatorics, operator algebras, and free probability. In this talk I will give an introduction to these quantum groups, and review some results on their structure. I will then present a computation of the K-groups of the C*-algebras associated with quantum permutation groups, relying on methods from the Baum-Connes conjecture. Alfons van Daele, University of Leuven (Belgium), will give a talk with title: Separability idempotents and quantum groupoids Martijn Caspers (Münster) will give a talk with title: The Haagerup property for arbitrary von Neumann algebras Abstract: The Haagerup property is an approximation property for both groups and operator algebras that has important applications in for example the Baum-Connes conjecture or von Neumann algebra theory. In this talk we show that the Haagerup property is an intrinsic invariant of an arbitrary von Neumann algebra.
We also discuss stability properties of the Haagerup property under constructions such as free products, graph products and crossed products. Finally we discuss alternative characterizations in terms of the existence of suitable quadratic forms. Marco Matassa (UiO) will give a talk with title: Dirac Operators on Quantum Flag Manifolds Abstract: I will review the paper "Dirac Operators on Quantum Flag Manifolds" by Ulrich Krähmer. The aim is to define Dirac operators on quantized irreducible flag manifolds. These will yield Hilbert space realizations of some distinguished covariant first-order differential calculi. Adam Sørensen (UiO) will give a talk with title: Almost commuting matrices Abstract: Two matrices A,B are said to almost commute if AB is close to BA (in a suitable norm). A question of Halmos, answered by Lin, asks if two almost commuting self-adjoint matrices are always close to two exactly commuting self-adjoint matrices. We will survey what is known about this and similar questions, and report on recent work with Loring concerning how the questions change if we look at real rather than complex matrices. Abstract: We show that the discrete duals of the so-called free orthogonal quantum groups have the completely contractive approximation property, in analogy with the free groups. The proof relies on the structure of representation categories of these quantum groups, on the C*-algebraic structure of SU_q(2), and on the free product techniques of Ricard and Xu. This talk is based on joint work with Kenny De Commer and Amaury Freslon. Abstract: Independence has been introduced as a regularity property for pairs of commuting injective group endomorphisms of a discrete abelian group with finite cokernel by Joachim Cuntz and Anatoly Vershik. We discuss various characterisations of this regularity property and show how the statements need to be adjusted when removing the restrictions that the group has to be abelian and that the cokernels have to be finite.
Somewhat surprisingly, this leads to the concept of *-commutativity. This property is defined for pairs of commuting self-maps of an arbitrary set. As an example of *-commutativity, we explain a construction related to the Ledrappier shift and indicate how one obtains examples of independent group endomorphisms from this construction. If time permits, we will point out instances where the two notions have been readily used to obtain C*-algebraic results. Roughly speaking, both notions are designed to give rise to pairs of doubly commuting isometries, which significantly simplifies the analysis of the constructed C*-algebras. This is particularly useful when one tries to generalise results from the case of a single transformation to an action generated by finitely many transformations. Abstract: We talk about independent resolutions for dynamical systems on totally disconnected spaces. Building on earlier work by Cuntz, Echterhoff and Li that allows one to compute the K-theory of totally disconnected systems that admit a so-called independent invariant regular basis, we show how any totally disconnected dynamical system admits a resolution of such systems, which in some cases allows for K-theory computations. Based on work by me and X. Li. Marco Matassa (UiO) will give a talk with title: On dimension and integration for spectral triples associated to quantum groups Abstract: I will discuss some aspects of the notions of spectral dimension and non-commutative integral in the context of modular spectral triples. I will focus on two examples: the modular spectral triple for SU_q(2) introduced by Kaad and Senior and the family of spectral triples for quantum projective spaces introduced by D'Andrea and Dąbrowski. Jens Kaad (Trieste) will give a talk with title "Joint torsion line bundles of commuting operators" Abstract: In this talk I’ll associate a holomorphic line bundle to any commuting tuple of bounded operators on a Hilbert space.
The transition functions for this bundle are given by the joint torsion which compares determinants of Fredholm complexes. The joint torsion is an invariant of the second algebraic K-group of the Calkin algebra (bounded operators modulo trace class operators). The main step is to prove that the transition functions for the joint torsion line bundle are indeed holomorphic. This is carried out by studying the Quillen-Freed holomorphic determinant line bundle over the space of Fredholm complexes. In particular I will construct a holomorphic section of a certain pull-back of this bundle. The talk is based on joint work with Ryszard Nest. Magnus D. Norling will give a talk with title "Universal coefficient theorem in KK-theory". This presentation is part of the final act of the course on "The Baum-Connes conjecture and KK-theory". Bas Jordans will give a talk with title "Higson's characterization of KK-theory". This presentation is part of the final act of the course on "The Baum-Connes conjecture and Kasparov's KK-theory". Roberto Conti (Sapienza Università di Roma) will give a talk with title: Asymptotic morphisms in local quantum physics and study of some models Abstract: We discuss a notion of asymptotic morphisms that is suitable for a description of superselection sectors of a scaling limit theory. In some models, this leads to interesting questions about the explicit form of certain modular operators. (This talk is based on joint work with D. Guido and G. Morsella). Magnus Landstad will give a talk with title: Quantum groups from almost matched pairs of groups - the groupoid approach Abstract: If G is a locally compact group with two closed subgroups H,K s.t. G=HK, then (H,K) is called a matched pair of subgroups. The construction of a quantum group from such a pair goes back a long time. 
We shall look at the more general case where the subgroups are almost matched (the complement of HK in G has measure 0); then a groupoid approach to the construction is very useful and many formulas are obtained for free. I shall start by explaining the concepts needed (quantum groups, groupoids, etc.) and then how the groupoid is constructed. Finally we shall look at the special case where G has a compact open subgroup. This is joint work with A. Van Daele. Antoine Julien (NTNU) will give a talk with title: Tiling spaces, groupoids and K-theory Abstract: In this talk, I will describe how spaces, groupoids and C*-algebras can be associated with aperiodic tilings. In some cases, it is possible to describe the structure of the groupoid combinatorially in terms of augmented Bratteli diagrams. (joint work with Jean Savinien) Time permitting, I will expose a strategy for computing the K-theory of the tiling algebra in terms of the K-theory of AF-algebras (work in progress). Takuya Takeishi, University of Tokyo, will give a talk with title: Bost-Connes system for local fields of characteristic zero Abstract: The Bost-Connes system, which describes the relation between quantum statistical mechanics and class field theory, was first constructed by Bost and Connes for the rational field, and generalized to arbitrary number fields by the contribution of many researchers. In this talk, we will introduce a generalization of the Bost-Connes system for local fields of characteristic zero, and discuss some of its properties. Judith Packer, University of Colorado (Boulder), will give a talk with title "Noncommutative solenoids and their projective modules" Abstract: "Noncommutative solenoids" are certain twisted group $C^*$-algebras, where the groups in question are countably infinitely generated; these algebras can also be obtained as direct limits of rotation algebras.
From examining the range of the trace of the $K_0$-groups of the noncommutative solenoids, their finitely generated projective modules can be constructed. We also discuss a way to construct Morita equivalence bimodules between noncommutative solenoids that goes back to work of M. Rieffel, with the new wrinkle of $p$-adic analysis appearing. This work is joint with F. Latrémolière. Bas Jordans (UiO) will give a talk with title: Real dimensional spaces in noncommutative geometry Abstract: In noncommutative geometry geometric spaces are given by spectral triples. In this talk we consider a generalisation of these spectral triples to semifinite spectral triples. In analogy with the classical case it is possible to construct the product of two semifinite spectral triples. We will construct this product and derive properties thereof. We will also describe, for each z\in(0,\infty), a semifinite spectral triple which can be considered as having dimension z. As an application, these "z-dimensional" semifinite triples will be used for two regularisation methods in physics. Yusuke Isono from the University of Tokyo will give a talk with title: Strong solidity of II_1 factors of free quantum groups Abstract: We generalize Ozawa's bi-exactness to discrete quantum groups and give a new sufficient condition for strong solidity, which implies the absence of Cartan subalgebras. As a corollary, we prove that II_1 factors of free quantum groups are strongly solid. We also consider similar conditions on non-Kac type quantum groups, namely non-finite von Neumann algebras. Erik Bédos will give a talk with title: On equivariant representations of C*-dynamical systems Abstract: Let \Sigma=(A, G, \alpha, \sigma) denote a unital discrete twisted C*-dynamical system. In our recent work with Roberto Conti (Rome), it has emerged that the so-called equivariant representations of \Sigma on Hilbert A-modules play an interesting role, complementing the one played by covariant representations.
We will discuss some aspects of this notion and illustrate its usefulness in the study of the crossed products associated with \Sigma. Adam Skalski (IMPAN) will give a talk with title: Closed quantum subgroups of locally compact quantum groups and some questions of noncommutative harmonic analysis (based on joint work with Matt Daws, Pawel Kasprzak and Piotr Soltan) Abstract: The notion of a closed subgroup of a locally compact group is a very straightforward concept, often featuring in classical harmonic analysis. I will discuss the possible extensions of this notion to the quantum setting, focusing on the comparison of the two definitions proposed by S. Vaes and S.L. Woronowicz. I will describe some reformulations of these definitions and explain how they can be shown to be equivalent in many cases; I will also mention certain connections to other problems of quantum harmonic analysis. Nicolai Stammeier, Westfälische Wilhelms-Universität Münster, will give a talk with title: Product Systems of Finite Type for Certain Algebraic Dynamics and their C*-algebras Abstract: Let P be a lattice-ordered semigroup with unit acting on a discrete, abelian group G by injective endomorphisms with finite cokernel. Building on the work of Jeong Hee Hong, Nadia S. Larsen and Wojciech Szymanski on product systems of Hilbert bimodules and their KMS-states, one can associate a product system to this dynamical system that turns out to be of finite type. Imposing two additional conditions on the dynamics, namely independence of the endomorphisms for relatively prime elements in P and exactness, we derive presentations of the Nica-Toeplitz algebra and the Cuntz-Nica-Pimsner algebra. Moreover, the latter has lots of nice descriptions and is shown to be a unital UCT Kirchberg algebra.
Graph theory may be one of the most widely applicable topics I’ve seen in mathematics. It’s used in chemistry, coding theory, operations research, electrical and network engineering, and so many other places. The subject is generally credited as having begun with the famous Seven Bridges of Königsberg problem posed by Leonhard Euler in 1736. Frank Harary should also be credited for his massive work in bringing applications of graph theory to the sciences and engineering, notably with his famous textbook written in 1969. My own research forced me to stumble into this area once my research partner, Jason Hathcock, suggested we explore the idea of viewing dependency relations in the sequences of variables we were studying as digraphs. Since then, I’ve been buried in graph theory texts, finding a wealth of fascinating topics to explore. Of particular interest in this article is finding all maximal independent sets in a graph using Boolean algebra. What’s a maximal independent set? Firstly, what’s an independent set? Definition (Independent Set) : A set of vertices of a graph is independent if no two vertices in the set are adjacent. If we take a look at the digraph above (from our paper on vertical dependence), and look at the underlying graph 1, \{1,6,11\} forms an independent set, as an example. There are lots more, and of varying sizes. Of particular interest here are maximal independent sets. Definition (Maximal Independent Set): An independent set to which no other vertex in the graph can be added while retaining the independence property. An example from the graph above is \{2,3,4,5,13\}. If we added any other vertex to that set, it would be adjacent to some vertex already in there. A few notes: (1) There are many maximal independent sets in a graph, and they may not all have the same cardinality. (2) Maximal and maximum are not the same thing. An independent set may be a maximal independent set without being the largest independent set in the graph.
The largest cardinality among all the maximal independent sets is called the independence number of the graph and is denoted \beta(G). Why do we care about maximal independent sets? Of the many applications that arise, one in particular is in coding theory. We want to find the largest error-correcting codes we can, particularly for internet transmissions that can lose packets. A paper discussing this can be found here. (Paywall warning). We’ve discussed some basics of coding theory on this site as well. Finding error-correcting codes with desirable properties is equivalent to solving the problem of finding maximal independent sets. The purpose of this article isn’t to discuss the applications here, but I’ve learned long ago that no one will keep reading unless I mention at least one application. Finding a maximal independent set Finding a maximal independent set is relatively simple. Start with any vertex v \in V(G). Add another vertex u that is not adjacent to v. Continue adding vertices that are not adjacent to any already in the set. For a finite graph 2, this process will terminate and the result will be a maximal independent set. Will it be one of the largest cardinality? Not necessarily. For example, using one more of our dependency graphs generated by \alpha(n) = \sqrt{n}, we can take the order to be 24 as shown, and build a maximal independent set starting with vertex 3. Note that none of vertices 9-15 or 1 can be in the set, since they’re all adjacent to vertex 3. Vertex 2 is not adjacent to vertex 3, so we add it into our set: V = \{2,3\}. Now, the next vertex we add can’t be adjacent to either 2 or 3, so that rules out 1, 9-15, and 4-8. Grab vertex 16. Now V = \{2,3,16\}. Notice that none of the remaining vertices are adjacent to any of the previous vertices. Continuing this process, we’ll get that V = \{2,3,16,17,18,19,20,21,22,23,24\}. Notice that if we add any other vertices to this set, they’ll be adjacent to something already in it. 
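The greedy procedure just described is easy to code. Here is a minimal sketch in Python; the path graph used as the adjacency list is a small made-up stand-in, not the dependency graph from the article:

```python
# Hypothetical example graph: the path 1 - 2 - 3 - 4 - 5.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}

def greedy_maximal_independent_set(adj, order):
    """Scan the vertices in the given order, adding each vertex that is
    not adjacent to anything already chosen."""
    chosen = set()
    for v in order:
        if not (adj[v] & chosen):  # v is adjacent to nothing chosen so far
            chosen.add(v)
    return chosen

print(greedy_maximal_independent_set(adj, [1, 2, 3, 4, 5]))  # {1, 3, 5}
print(greedy_maximal_independent_set(adj, [2, 1, 3, 4, 5]))  # {2, 4}
```

Note that starting from vertex 2 instead of vertex 1 yields a smaller maximal set, illustrating the point above: the greedy result need not have the largest cardinality.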
Finding all Maximal Independent Sets We’re rarely interested in just finding one maximal independent set. We’d prefer to find them all, and doing it by inspection is not very palatable. The heart of this article is an admittedly suboptimal but still interesting way to find all maximal independent sets for reasonably small graphs. We’ll illustrate the method on the 6-node graph above. Getting started First, we’ll assign a Boolean variable to each vertex according to its inclusion in a maximal independent set. For example, A = 1 means vertex A is in the maximal independent set. Recall from Boolean algebra that x+y = \left\{\begin{array}{lr}1, & x = 1 \text{ or } y = 1 \text{ or } (x=y=1)\\0,&x=0 \text{ and } y=0\end{array}\right. xy=\left\{\begin{array}{lr}1, & x = 1 =y\\0,&\text{ otherwise}\end{array}\right. Remark: x+y is just another way of writing a union. This isn’t addition mod 2 here. What we’ve done here is set up inclusion into our maximal independent sets in a Boolean fashion. So x+y = 1 corresponds to the inclusion of either vertex x OR vertex y OR both vertices x and y. Similarly, xy = 1 corresponds to the inclusion of both vertices x and y. Now, we can express an edge of a graph as a Boolean product xy, where x and y are the vertices at either end of the edge. Finally, set up the sum of all edges and call it \phi:\phi = \sum xy \text{ for all edges } (x,y) \text{ of } G For our graph above,\phi = AB + AD + AE + BC + CE + CF + DE + EF Why did we do this? For a vertex to be in an independent set, it can’t be adjacent to any other vertices in the set. Put another way, for each edge, we can include at most one of the vertices that make it up. If we include A in the independent set V, then B cannot be in there. Returning to our \phi, note that its value under Boolean algebra can only be 0 or 1. If \phi = 1, then at least one edge has both of its vertices “on”. This means only combinations of A, B, C, D, E, F that yield \phi = 0 will give us a maximal independent set. 
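As a quick sanity check, \phi for this graph can be evaluated mechanically. A sketch in Python, with the edge list read off from \phi above:

```python
# Edges of the 6-vertex example graph, read off from phi = AB + AD + ... + EF.
edges = [("A", "B"), ("A", "D"), ("A", "E"), ("B", "C"),
         ("C", "E"), ("C", "F"), ("D", "E"), ("E", "F")]

def phi(included):
    """Boolean sum over edges of the product of endpoint indicators.
    phi == 0 exactly when `included` contains no full edge, i.e. when
    `included` is an independent set."""
    return int(any(x in included and y in included for x, y in edges))

print(phi({"A", "C"}))  # 0: A and C are not adjacent
print(phi({"A", "B"}))  # 1: edge AB has both of its vertices "on"
```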
Solving the problem Our goal now is to find all combinations of our Boolean vertex variables that yield \phi = 0. As it turns out, solving this directly is pretty annoying 3. If we want \phi = 0, that’s logically equivalent to seeking \phi^{c} = 1, where \phi^{c} is the Boolean complement (or negation) of \phi. Recall De Morgan’s laws from Boolean algebra 4: (x+y)^{c} = x^{c}y^{c} and (xy)^{c} = x^{c} + y^{c}. So, if we take \phi^{c} for our graph above,\begin{aligned}\phi^{c}&=(A^{c}+B^{c})(A^{c}+D^{c})(A^{c}+E^{c})(B^{c}+C^{c})(C^{c}+E^{c})\\&\quad(C^{c}+F^{c})(D^{c}+E^{c})(E^{c}+F^{c})\end{aligned} What does the negation here actually mean? By taking the complement, instead of finding vertices to include, now we’re finding vertices to exclude . When we multiply this expression out, we’ll get a sum of terms, where each term is a product of complements of our original Boolean variables. To get \phi^{c} = 1, all we need is one of those terms to be 1. To get a term to be 1, all members of the product must themselves be 1, meaning each term gives us a set of variables to exclude. Excluding these variables gives us one maximal independent set for each term, so this gives us all the maximal independent sets. The nice thing about dealing with Boolean arithmetic is that we can program a computer to do this for us. Any time we can invoke a relationship with Boolean algebra, we can enlist a friendly, helpful computer. Finishing the example We’ll do it by hand here, because I’m old-school like that. For larger graphs, obviously one would want to enlist some computational help, or just be very patient. We’ll recall a few more rules of Boolean algebra before we finish 5: xx = x, x + x = x, and the absorption law x + xy = x. After an insane amount of tedious Boolean algebra,\phi^{c} = A^{c}C^{c}E^{c}+A^{c}B^{c}E^{c}F^{c}+A^{c}C^{c}D^{c}F^{c}+B^{c}C^{c}D^{c}E^{c}+B^{c}D^{c}E^{c}F^{c} Recall that each term now tells us which sets of vertices to exclude from a maximal independent set. We negated the question logically. 
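Since the tedium is entirely mechanical, a computer can do the multiply-and-absorb for us. A sketch in Python, representing each product term of \phi^{c} as the set of vertices it excludes:

```python
# Edges of the 6-vertex example graph.
edges = [("A", "B"), ("A", "D"), ("A", "E"), ("B", "C"),
         ("C", "E"), ("C", "F"), ("D", "E"), ("E", "F")]
vertices = {"A", "B", "C", "D", "E", "F"}

def complement_terms(edges):
    """Multiply out the product of (x' + y') over all edges, applying the
    absorption law x + xy = x after each step. Each surviving term is a
    set of vertices to EXCLUDE."""
    terms = {frozenset()}  # the empty product = 1
    for x, y in edges:
        expanded = {t | {x} for t in terms} | {t | {y} for t in terms}
        # absorption: a term absorbs any strict superset of itself
        terms = {t for t in expanded if not any(u < t for u in expanded)}
    return terms

# The complements of the exclusion sets are the maximal independent sets.
mis = sorted(sorted(vertices - t) for t in complement_terms(edges))
print(mis)
# [['A', 'C'], ['A', 'F'], ['B', 'D', 'F'], ['B', 'E'], ['C', 'D']]
print(max(len(s) for s in mis))  # 3, the independence number beta(G)
```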
That means we have 5 maximal independent sets, the vertices left over after each exclusion: \{B,D,F\}, \{C,D\}, \{B,E\}, \{A,F\}, and \{A,C\}. We can actually say what the independence number is as well, since we just have to find the maximum cardinality among the sets listed. For this graph, \beta(G) = 3. Conclusion I happened to find this interesting, and ended up obsessed with it for a day, much to the chagrin of my daily planner, which expected me to be working on writing my research monograph. I tried several different ways of solving this beyond the one given. I tried using the direct equation \phi, and tried using regular arithmetic on just \{0,1\}, setting up a sort-of structure function similar to the reliability block diagrams detailed here. I always hesitate to blame the method rather than my own arithmetic errors, but I didn’t have much luck with the structure-function method, though I may try again to see if it’s an equivalent method. I believe it should be. Looking at \phi^{c} makes more sense after playing with this problem for some hours. The sum/union is quite nice, because it neatly separates out the various sets to exclude. It’s a better exploitation of Boolean algebra than trying to work with \phi but aiming for a sum of 0. I still think it should be possible to work with it directly, even if not advisable. If I decide to torture myself with it further, and end up with something to write about, perhaps I’ll append it here. I always end up ending my articles with some takeaway. I don’t have much of one here, except it was a curiosity worth sharing. Perhaps a decent takeaway is to reveal a bit of the tedium and dead-ends mathematicians can run into when exploring something. That’s just part of research and understanding. It’s entirely possible to spend hours, days, weeks on something and all you conclude is that the original method you saw is definitely superior to the one you were trying to develop. 
Footnotes meaning we remove the directedness I’m only going to stick with finite graphs here I spent an amount of time I will not confess to, trying to do it this way just to see if it could be done. These may not be the symbols you’re accustomed to. Just replace them with union and intersection and verify for yourself Again, play with these yourself, substituting multiplication for intersection and + for union. All these say is that x AND x is just x, x OR x is just x, and x OR (x AND y) = x, since x also includes its intersection with y
Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Now showing items 1-10 of 26 Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... 
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
In this section and the next two, we introduce families of common discrete probability distributions, i.e., probability distributions for discrete random variables. We refer to these as "families" of distributions because in each case we will define a probability mass function by specifying an explicit formula, but that formula will incorporate a constant (or set of constants) referred to as parameters. By specifying values for the parameter(s) in the pmf, we define a specific probability distribution for a specific random variable. For each family of distributions introduced, we will list a set of defining characteristics that will help determine when to use a certain distribution in a given context. Bernoulli Distribution Consider the following example. Example \(\PageIndex{1}\) Let \(A\) be an event in a sample space \(S\). Suppose we are only interested in whether or not the outcome of the underlying probability experiment is in the specified event \(A\). To track this we can define an indicator random variable, denoted \(I_A\), given by $$I_A(s) = \left\{\begin{array}{l l} 1, & \textrm{if}\ s\in A,\\ 0, & \textrm{if}\ s\in A^c. \end{array}\right.\notag$$ In other words, the random variable \(I_A\) will equal 1 if the resulting outcome is in event \(A\), and \(I_A\) equals 0 if the outcome is not in \(A\). Thus, \(I_A\) is a discrete random variable. We can state the probability mass function of \(I_A\) in terms of the probability that the resulting outcome is in event \(A\), i.e., the probability that event \(A\) occurs, \(P(A)\): \begin{align*} p(0) &= P(I_A = 0) = P(A^c) = 1 - P(A) \\ p(1) &= P(I_A = 1) = P(A) \end{align*} In Example 3.3.1, the random variable \(I_A\) is a Bernoulli random variable because its pmf has the form of the Bernoulli probability distribution, which we define next. 
Definition \(\PageIndex{1}\) A random variable \(X\) has a Bernoulli distribution with parameter \(p\), where \(0\leq p\leq 1\), if it has only two possible values, typically denoted \(0\) and \(1\). The probability mass function (pmf) of \(X\) is given by \begin{align*} p(0) &= P(X=0) = 1-p,\\ p(1) &= P(X=1) = p. \end{align*} The cumulative distribution function (cdf) of \(X\) is given by $$F(x) = \left\{\begin{array}{r r} 0, & x<0 \\ 1-p, & 0\leq x<1, \\ 1, & x\geq1. \end{array}\right.\label{Berncdf}$$ In Definition 3.3.1, note that the defining characteristic of the Bernoulli distribution is that it models random variables that have only two possible values. As noted in the definition, the two possible values of a Bernoulli random variable are usually 0 and 1. In the typical application of the Bernoulli distribution, a value of 1 indicates a "success" and a value of 0 indicates a "failure", where "success" refers to the event or outcome of interest. The parameter \(p\) in the Bernoulli distribution is given by the probability of a "success". In Example 3.3.1, we were interested in tracking whether or not event \(A\) occurred, and so that is what a "success" would be, which occurs with probability given by the probability of \(A\). Thus, the value of the parameter \(p\) for the Bernoulli distribution in Example 3.3.1 is given by \(p = P(A)\). Exercise \(\PageIndex{1}\) Derive the general formula for the cdf of the Bernoulli distribution given in Equation \ref{Berncdf}. Hint First find \(F(0)\) and \(F(1)\). Answer Recall that the only two values of a Bernoulli random variable \(X\) are 0 and 1. So, first, we find the cdf at those two values: \begin{align*} F(0) &= P(X\leq0) = P(X=0) = p(0) = 1-p \\ F(1) &= P(X\leq1) = P(X=0\ \text{or}\ 1) = p(0) + p(1) = (1-p) + p = 1 \end{align*} Now for the other values, a Bernoulli random variable will never be negative, so \(F(x) = 0\), for \(x<0\). 
Also, a Bernoulli random variable will always be less than or equal to 1, so \(F(x) = 1\), for \(x\geq 1\). Lastly, if \(x\) is in between 0 and 1, then the cdf is given by $$F(x) = P(X\leq x) = P(X=0) = p(0) = 1-p,\ \text{for}\ 0\leq x < 1.$$ Binomial Distribution To introduce the binomial distribution, we use our continuing example of tossing a coin, adding another toss. Example \(\PageIndex{2}\) Suppose we toss a coin three times and record the sequence of heads (\(h\)) and tails (\(t\)). Supposing that the coin is fair, each toss results in heads with probability \(0.5\), and tails with probability \(0.5\) as well. Since the three tosses are mutually independent, the probability assigned to any outcome is \(0.5^3\). More specifically, consider the outcome \(hth\). We could write the probability of this outcome as \((0.5)^2(0.5)^1\) to emphasize the fact that two heads and one tails occurred. Note that there are two other outcomes with two heads and one tails: \(hht\) and \(thh\). Recall from Example 2.1.2 in Section 2.1, that we can count the number of outcomes with two heads and one tails by counting the number of ways to select positions for the two heads to occur in a sequence of three tosses, which is given by \(\binom{3}{2}\). In general, note that \(\binom{3}{x}\) counts the number of possible sequences with exactly \(x\) heads, for \(x=0,1,2,3\). We generalize the above by defining the discrete random variable \(X\) to be the number of heads in an outcome. The possible values of \(X\) are \(x=0,1,2,3\). 
Using the above facts, the pmf of \(X\) is given as follows: \begin{align} p(\textcolor{red}{0}) = P(X=\textcolor{red}{0}) = P(\{ttt\}) = \textcolor{orange}{\frac{1}{8}} &= \binom{3}{\textcolor{red}{0}}(0.5)^{\textcolor{red}{0}}(0.5)^3 \notag \\ p(\textcolor{red}{1}) = P(X=\textcolor{red}{1}) = P(\{htt, tht, tth\}) = \textcolor{orange}{\frac{3}{8}} &= \binom{3}{\textcolor{red}{1}}(0.5)^{\textcolor{red}{1}}(0.5)^2 \notag \\ p(\textcolor{red}{2}) = P(X=\textcolor{red}{2}) = P(\{hht, hth, thh\}) = \textcolor{orange}{\frac{3}{8}} &= \binom{3}{\textcolor{red}{2}}(0.5)^{\textcolor{red}{2}}(0.5)^1 \label{binomexample} \\ p(\textcolor{red}{3}) = P(X=\textcolor{red}{3}) = P(\{hhh\}) = \textcolor{orange}{\frac{1}{8}} &= \binom{3}{\textcolor{red}{3}}(0.5)^{\textcolor{red}{3}}(0.5)^0 \notag \end{align} In the above, the fractions in orange are found by calculating the probabilities directly using equally likely outcomes (note that the sample space \(S\) has 8 outcomes, see Example 2.1.1). In each line, the value of \(x\) is highlighted in red so that we can see the pattern forming. For example, when \(x=2\), we see in the expression on the right-hand side of Equation \ref{binomexample} that "2" appears in the binomial coefficient \(\binom{3}{2}\), which gives the number of outcomes resulting in the random variable equaling 2, and "2" also appears in the exponent on the first \(0.5\), which gives the probability of two heads occurring. The pattern exhibited by the random variable \(X\) in Example 3.3.2 is referred to as the binomial distribution, which we formalize in the next definition. Definition \(\PageIndex{2}\) Suppose that \(n\) independent trials of the same probability experiment are performed, where each trial results in either a "success" (with probability \(p\)), or a "failure" (with probability \(1-p\)). 
If the random variable \(X\) denotes the total number of successes in the \(n\) trials, then \(X\) has a binomial distribution with parameters \(n\) and \(p\), which we write \(X\sim\text{binomial}(n,p)\). The probability mass function of \(X\) is given by $$p(x) = P(X=x) = \binom{n}{x}p^x(1-p)^{n-x}, \quad\textrm{for}\ x=0, 1, \ldots, n. \label{binompmf}$$ In Example 3.3.2, the independent trials are the three tosses of the coin, so in this case we have parameter \(n=3\). Furthermore, we were interested in counting the number of heads occurring in the three tosses, so a "success" is getting a heads on a toss, which occurs with probability 0.5 and so parameter \(p=0.5\). Thus, the random variable \(X\) in this example has a binomial\((3,0.5)\) distribution and applying the formula for the binomial pmf given in Equation \ref{binompmf} when \(x=2\) we get the same expression on the right-hand side of Equation \ref{binomexample}: $$p(x) = \binom{n}{x}p^x(1-p)^{n-x} \quad\Rightarrow\quad p(2) = \binom{3}{2}0.5^2(1-0.5)^{3-2} = \binom{3}{2}0.5^20.5^1 \notag$$ In general, we can connect binomial random variables to Bernoulli random variables. If \(X\) is a binomial random variable, with parameters \(n\) and \(p\), then it can be written as the sum of \(n\) independent Bernoulli random variables, \(X_1, \ldots, X_n\). (Note: We will formally define independence for random variables later, in Chapter 5.) Specifically, if we define the random variable \(X_i\), for \(i=1, \ldots, n\), to be 1 when the \(i^{th}\) trial is a "success", and 0 when it is a "failure", then the sum $$X = X_1 + X_2 + \cdots + X_n\notag$$ gives the total number of successes in \(n\) trials. This connection between the binomial and Bernoulli distribution will be useful in a later section. One of the main applications of the binomial distribution is to model population characteristics as in the following example. Example \(\PageIndex{3}\) Consider a group of 100 voters. 
If \(p\) denotes the probability that a voter will vote for a specific candidate, and we let random variable \(X\) denote the number of voters in the group that will vote for that candidate, then \(X\) follows a binomial distribution with parameters \(n=100\) and \(p\).
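The binomial pmf is easy to compute directly. A short sketch in Python; the voter probability p = 0.3 below is a made-up illustration, not a value from the text:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) for X ~ binomial(n, p): C(n, x) p^x (1-p)^(n-x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# The coin-toss pmf with n = 3, p = 0.5:
print([binomial_pmf(x, 3, 0.5) for x in range(4)])  # [0.125, 0.375, 0.375, 0.125]

# Hypothetical voter example: n = 100 and an assumed p = 0.3; the pmf sums to 1.
total = sum(binomial_pmf(x, 100, 0.3) for x in range(101))
print(round(total, 10))  # 1.0
```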
The classic methods of elimination and substitution in simultaneous equations become more annoying when we have more than two variables. Whilst we can keep track of things, usually we don't want to because it is easy to get lost in it all. So we use Gaussian elimination as a more 'systematic' approach when we have lots of linear equations. The system of equations is presented in its corresponding augmented matrix form. There are three different row operations we can then use on each row. - Multiply an entire row by a constant: \( R_i = \alpha R_i \). However, don't multiply a row by 0 - that wrecks everything! - Swap two rows: \( R_i \leftrightarrow R_j \). - Add a multiple of one row to another row: \( R_i = R_i + \alpha R_j \). For an example, I purposely picked one where the numbers would not be nice. I personally like performing my Gaussian elimination in 'clever' ways, but there are still some important rules of thumb you're recommended to follow. We will attempt to solve the system \begin{align*} 3x+y+5z&=10\\ -x+4y+7z&=-7\\ 9x+2y+3z&=19\end{align*} The augmented matrix for this system is \[ \left(\begin{array}{c c c|c} 3 & 1 & 5 & 10\\ -1 & 4 & 7 & -7\\ 9 & 2 & 3 & 19 \end{array}\right)\] Here I do something convenient at the start. I see that the leading entry in the second row is already \(-1\). When some other row's leading entry is \(1\) or \(-1\), and the one in the first row is not, I like to swap it there. That way I have to deal with fewer fractions for as long as possible - I only have to put up with them for the easy bit. Note: In a row, the leading entry is the first entry that's not a 0, when you read from left to right. \[ \xrightarrow{R_2\leftrightarrow R_1} \left(\begin{array}{c c c|c} -1 & 4 & 7 & -7\\ 3 & 1 & 5 & 10\\ 9 & 2 & 3 & 19 \end{array}\right) \] Your teacher might disagree with me here now, and that's fair enough. 
If he/she does, listen to them. But personally when my leading entry on the highest row I require is \(-1\), I just leave it there. Some teachers would recommend multiplying it by -1 right now (because ultimately we want 1's down the diagonal), but I'll just do it later to save writing. Now we wish for the first column to be filled with as many 0's as possible, excluding the first row. To do this, our leading entries are - \(-1\) in row 1. - \(3\) in row 2. - \(9\) in row 3. So to eliminate the leading entry in row 2, we can add \(3\) lots of row 1. In the first column, we'd be computing \(3 + 3\times(-1)\), which is \(0\)! Similarly, we add \(9\) lots of row 1 to row 3. \[ \xrightarrow[R_3 = R_3 + 9R_1]{R_2 = R_2+3R_1} \left( \begin{array}{c c c|c} -1 & 4 & 7 & -7\\ 0 & 13 & 26 & -11\\ 0 & 38 & 66 & -44 \end{array}\right) \] Now we move onto the second row. At this point, the standard thing to do is to multiply by a number, so that the leading entry of row 2 becomes 1. Here, the leading entry is \(13\) - remember that we look at the first non-zero value! So we multiply \(R_2\) by \( \frac1{13} \). But I will also use a trick again. Just by looking at \(R_3\), I can clearly see that \(2\) is a common factor of every term! So I'll halve every term in \(R_3\) as well for some simplification. \[ \xrightarrow[R_3 = \frac12 R_3]{R_2 = \frac1{13}R_2} \left( \begin{array}{c c c|c} -1 & 4 & 7 & -7\\ 0 & 1 & 2 & -11/13\\ 0 & 19 & 33 & -22 \end{array}\right) \] Now we do the same thing to create as many 0's as possible in the second column. See if you can figure out why this works by yourself. Note how the leading entry in \(R_2\) is now \(1\), instead of \(13\)! \[ \xrightarrow{R_3 = R_3 - 19R_2} \left( \begin{array}{c c c|c} -1 & 4 & 7 & -7\\ 0 & 1 & 2 & -11/13\\ 0 & 0 & -5 & -77/13 \end{array}\right) \] Once we're here, there's nothing else to eliminate. Here, I multiply whatever rows need to be multiplied, so that their leading entries will all be 1's. 
\[ \xrightarrow[R_3 = -\frac15 R_3]{R_1 = -R_1} \left( \begin{array}{c c c|c} 1 & -4 & -7 & 7\\ 0 & 1 & 2 & -11/13\\ 0 & 0 & 1 & 77/65 \end{array}\right) \] This augmented matrix is now in a row-echelon form. When this occurs, the entries down the main diagonal are all 1's, and everything below the diagonal is 0. Note that a matrix can have multiple row-echelon forms, so don't panic if you get a different one along the way. The important thing is that the final answer is the same. Once we are in a row-echelon form, we can perform back substitution. This involves converting each row back into a Cartesian equation, but we go from bottom to top due to the convenient form we have now. Here, we start with \(R_3\): \[ \boxed{z = \frac{77}{65}}. \] Basically, the key is to remember that the numbers are actually the coefficients of \(x\), \(y\) and \(z\)! Gaussian elimination does what substitution and (old) elimination did - the idea is that the manipulations are only done on the coefficients to help keep track of everything! So in a similar way, when we equate \(R_2\) we have: \[ y + 2z = -\frac{11}{13}. \] Of course, we already know what \(z\) is, so subbing that in: \[ y + \frac{154}{65} = -\frac{11}{13} \implies \boxed{y = -\frac{209}{65}} \] Then we do the same thing in \(R_1\): \[ x - 4y - 7z = 7. \] Since we know what \(y\) and \(z\) are, upon subbing them both in: \[ x + \frac{836}{65} - \frac{539}{65} = 7 \implies \boxed{x=\frac{158}{65}} \] Hence our solutions are \( x = \frac{158}{65} \), \(y = -\frac{209}{65} \) and \(z = \frac{77}{65}\). Of course, if you want to do it without clever tricks, that's still alright. Possibly recommended if you don't mind writing out more working, because it's even more systematic at least. Just look up some random row-echelon form calculator and sub your numbers in! Note that some calculators are programmed to find reduced row-echelon forms.
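The whole procedure, without any clever tricks, can be mechanized. A sketch in Python using exact fractions; it always scales the pivot to 1 rather than deferring, so the intermediate matrices differ from the ones worked above, but the answer agrees:

```python
from fractions import Fraction

def gaussian_solve(aug):
    """Solve a square system given as an augmented matrix, using the three
    row operations (swap, scale, add-a-multiple) plus back substitution.
    Exact arithmetic via Fraction avoids rounding error."""
    n = len(aug)
    m = [[Fraction(v) for v in row] for row in aug]
    for col in range(n):
        # swap a row with a nonzero entry in this column into pivot position
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # scale so the leading entry is 1
        m[col] = [v / m[col][col] for v in m[col]]
        # eliminate the entries below the pivot
        for r in range(col + 1, n):
            factor = m[r][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    # back substitution, bottom row to top
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))
    return x

print(gaussian_solve([[3, 1, 5, 10], [-1, 4, 7, -7], [9, 2, 3, 19]]))
# [Fraction(158, 65), Fraction(-209, 65), Fraction(77, 65)]
```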
Say $$V=V_1\oplus V_2\oplus\dots \oplus V_n$$ where $V$ is a vector space and $V_1,V_2,\dots, V_n$ its subspaces. Is it necessary that $$V_i\cap V_j=\{0\}$$ I think it is, for unique representation of a vector, but I am not sure. Thanks That any two distinct subspaces $V_i$ having a zero intersection is a necessary condition for the sum to be direct is immediate from the definition of a direct sum. If $v\neq 0$ were a vector in $V_i\cap V_j$ then one could write $0=0+\cdots+0+v+0+\cdots+0-v+0+\cdots+0$ with the nonzero terms in positions $i,j$, showing that the expression for the zero vector as element of $V_1+\cdots+V_n$ is not unique. Beware though that this is very far from being a sufficient condition if $n>2$.
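To make the last remark concrete, here is the standard counterexample for $n>2$ (my addition, not part of the original answer):

```latex
In $V=\mathbb{R}^2$, take
\[ V_1=\operatorname{span}\{(1,0)\},\qquad V_2=\operatorname{span}\{(0,1)\},\qquad V_3=\operatorname{span}\{(1,1)\}. \]
Then $V_i\cap V_j=\{0\}$ for all $i\neq j$, yet
\[ 0 = (1,0)+(0,1)-(1,1) \]
is a nontrivial representation of the zero vector, so $V_1+V_2+V_3$ is not a direct sum.
```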
By the end of the 19th century all gases had been liquefied apart from helium (He). What is it about helium that makes it so hard to liquefy compared to the other gases? And why does it need to be pre-cooled in the Joule-Kelvin expansion? The next approximation beyond the ideal gas is given by the Van der Waals fluid equation. It is a phenomenological law which takes into account the finite size of the molecules and their mutual interactions. When you plot several Van der Waals isotherms for a given substance, you observe that some of them show a phase transition from gas to liquid while others do not. The ones which do not show a phase transition are above a so-called critical temperature $T_c$. Above this temperature you can decrease the volume or increase the pressure of the gas and it will not liquefy. Actually, the isotherms below the critical temperature need a correction given by Maxwell. To avoid instability (lower pressure giving lower volume giving lower pressure...) the actual path in the $PV$ diagram must avoid the "bumps" and follow the dashed line, as in the figure below The dashed line is the phase transition region. To see this, notice that if you keep decreasing volume further below $V_L$ you will need a huge amount of pressure. This means we have a liquid. Also notice that if the substance is above the critical temperature there is no need to apply that Maxwell correction. So there is no phase transition. The phase transition prediction by Van der Waals gave him the 1910 Nobel prize in Physics. Examples of critical temperatures are (in degrees Celsius): \begin{align} T_c(H_2O)&=+374.35,\\ T_c(O_2)&=-118.55,\\ T_c(N_2)&=-147.15,\\ T_c(H_2)&=-240.17,\\ T_c(He^4)&=-267.96. \end{align} As you can see, we are only able to liquefy helium when it is below $-267.96^oC$. For a long time chemists called the gases $O_2$, $N_2$, $H_2$ and $He^4$ permanent gases, since they were not able to drop the temperature enough to turn them liquid. 
Edit: I basically said that the great difficulty in liquefying helium is due to its extremely low critical temperature. The next question would be: why is the helium critical temperature so low? Let me try to answer that question too. The van der Waals equation for one mole of gas reads $$\left(P+\frac{a}{v^2}\right)(v-b)=RT.$$ The parameter $a$ characterizes the strength of the attractive intermolecular interaction while $b$ is related to the effective volume occupied by the molecules. The critical temperature can be calculated in terms of these parameters (remember the temperatures are always given in kelvin), $$T_c=\frac{8a}{27bR}.$$ So a small $T_c$ means either small $a$ (weak interaction) or large $b$ (big molecules) or a combination of both. For the gases mentioned above we have, \begin{array}{|c|c|c|} \hline & a\,(\mathrm{Pa\cdot m^6/mol^2}) & b\,(\mathrm{m^3/mol}) \\ \hline H_2O & 554\cdot 10^{-3} & 3.05\cdot 10^{-5} \\ \hline O_2 & 138\cdot 10^{-3} & 3.19\cdot 10^{-5} \\ \hline N_2 & 137\cdot 10^{-3} &3.87\cdot 10^{-5} \\ \hline H_2 & 24.8\cdot 10^{-3}& 2.66\cdot 10^{-5} \\ \hline He^4 & 3.46\cdot 10^{-3} & 2.38\cdot 10^{-5} \\ \hline \end{array} These data suggest that the extremely weak (compared to the others) intermolecular interaction is the reason helium has such a low critical temperature. Getting from gas to liquid is a matter of interparticle interaction winning over thermal agitation. There are several reasons why interparticle interactions are very weak in the case of helium atoms. On one hand, it is a noble gas and thus cannot form covalent bonds. On the other hand, it is very light and only weakly polarizable: its Van der Waals interactions are weak. Throttling the gas (Joule-Kelvin expansion) only lowers the temperature of the gas when the Joule–Thomson coefficient is positive. For helium, that point (the "J–T inversion temperature") is reached at 43 K (source: Cryogenic Society of America; the Wikipedia article gives an incorrect value of 51 K). 
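The table above can be turned back into the critical temperatures quoted earlier via $T_c=8a/(27bR)$. A quick numerical sketch:

```python
# Critical temperature from van der Waals parameters: T_c = 8a / (27 b R).
R = 8.314  # gas constant, J/(mol K)

params = {  # (a in Pa m^6 / mol^2, b in m^3 / mol), values from the table above
    "H2O": (554e-3, 3.05e-5),
    "O2":  (138e-3, 3.19e-5),
    "N2":  (137e-3, 3.87e-5),
    "H2":  (24.8e-3, 2.66e-5),
    "He4": (3.46e-3, 2.38e-5),
}

for gas, (a, b) in params.items():
    Tc = 8 * a / (27 * b * R)
    print(f"{gas}: T_c = {Tc:.1f} K ({Tc - 273.15:.2f} C)")
```

The printed values reproduce the Celsius figures listed earlier to within rounding of the tabulated $a$ and $b$ (helium comes out near 5.2 K).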
Above that temperature, Joule-Kelvin expansion will increase the temperature of the gas instead of lowering it; that's why pre-cooling is required. Throttling is an isenthalpic process; the definition and formula for the Joule–Thomson coefficient are (see the link for more details): $\mu_{\mathrm{JT}} = \left( {\partial T \over \partial P} \right)_H = \frac{V}{C_{\mathrm{p}}}\left(\alpha T - 1\right)\,$ where $V$ is the gas volume, $C_p$ the heat capacity at constant pressure and $\alpha$ the coefficient of thermal expansion. $\mu_{\mathrm{JT}}$ gives the temperature drop in kelvin per bar. Only helium, hydrogen and neon have an inversion temperature below ambient (neon: 250 K) and require pre-cooling. For the first question, it is the low boiling temperature, 4.21 K for helium-4 and 3.19 K for helium-3, that makes helium difficult to liquefy. Hydrogen's boiling temperature at 1 atm is 20.27 K, about 4-5 times higher. For pre-cooling, one can take a look at the entropy $$\delta S = \frac {dQ}{T}.$$ We can see that, because $T$ is very small, a slight change in heat increases the entropy a lot, which makes liquefaction difficult. Thus pre-cooling and creating an extremely cold environment are critical in the process. Simply said: helium doesn't attract itself enough to solidify; there is so much resistance to attraction that it is very hard to liquefy. Since it has electrons, it sometimes becomes polar for a very short amount of time. This helps pull the atoms together, letting the gas liquefy, but only at very low temperature. This is called the London dispersion force. Since helium has few electrons and a full shell, it has a very low probability of becoming momentarily polar, which lowers the boiling point even further.
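The inversion temperatures quoted above can themselves be tied back to the van der Waals picture: a standard textbook estimate (not from the answer above, and only an order-of-magnitude result) is that the maximum Joule–Thomson inversion temperature of a van der Waals gas is $T_{inv} \approx 2a/(Rb)$. A rough sketch using the constants quoted earlier:

```python
R = 8.314  # gas constant, J/(mol K)

def inversion_temperature(a, b):
    """Rough van der Waals estimate of the maximum J-T inversion temperature.

    T_inv ~ 2a / (R b); a in Pa m^6/mol^2, b in m^3/mol.
    """
    return 2 * a / (R * b)

# Helium: ~35 K from this estimate, vs ~43 K measured -- right order of magnitude
t_he = inversion_temperature(3.46e-3, 2.38e-5)
# Hydrogen: ~224 K, well below ambient, hence hydrogen also needs pre-cooling
t_h2 = inversion_temperature(24.8e-3, 2.66e-5)
print(f"He: {t_he:.0f} K, H2: {t_h2:.0f} K")
```

The estimate is crude (real gases are not van der Waals gases), but it correctly predicts that helium and hydrogen, the gases with the smallest $a$, are the ones whose inversion temperatures fall below room temperature.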
You approach a bridge and see a troll. He throws down a finite number $n>0$ of bags. Each bag $B_i$ contains 1 or more of 4 possible buttons: $B_i$ contains $S_i \subseteq \{green, red, blue, yellow\}$ with $S_i \neq \emptyset$. The contents of each bag are written on the bag - e.g. bag $B_i$ may have $\{red, green\}$ written on it. The troll says: I know buttons are valuable in your neighbourhood, and you can't get past me until you pick a set of bags that minimises the ratio $\frac{\text{distinct colours}}{\text{bags}}$. What is the best way to find the set of bags you should take? Bonus points if you can tell me what this (NP?) problem is called.
Up to this point, we have discussed inferences regarding a single population parameter (e.g., μ, p, \(\sigma^2\)). We have used sample data to construct confidence intervals to estimate the population mean or proportion and to test hypotheses about the population mean and proportion. In both of these chapters, all the examples involved the use of one sample to form an inference about one population. Frequently, we need to compare two sets of data and make inferences about two populations. This chapter deals with inferences about two means, proportions, or variances. For example: You are studying turkey habitat and want to see if the mean number of brood hens is different in New York compared to Pennsylvania. You want to determine if the treatment used in Skaneateles Lake has reduced the number of milfoil plants over the last three years. Is the proportion of people who support alternative energy in California greater compared to New York? Is the variability in application different between two mist blowers? These questions can be answered by comparing the differences of: Mean number of hens in NY to the mean number of hens in PA. Number of plants in 2007 to the number of plants in 2010. Proportion of people in CA to the proportion of people in NY. Variances between the mist blowers. This chapter comprises five sections. The first and second sections examine inferences about two means with two independent samples. The third section examines inferences about means with two dependent samples, the fourth section examines inferences about two proportions, and the fifth section examines inferences between two variances. Inferences about Two Means with Independent Samples (Assuming Unequal Variances) Using independent samples means that there is no relationship between the groups. The values in one sample have no association with the values in the other sample.
For example, we want to see if the mean life span for hummingbirds in South Carolina is different from the mean life span in North Carolina. These populations are not related, and the samples are independent. We look at the difference of the independent means. In Chapter 3, we did a one-sample t-test where we compared the sample mean (\(\bar {x}\)) to the hypothesized mean (μ). We expect that \(\bar {x}\) would be close to μ. We use the sample mean, the sample standard deviation, and the sample size for the one-sample test. With a two-sample t-test, we compare the population means to each other, again by looking at the difference. We expect that \(\bar {x_1}-\bar {x_2}\) would be close to \(\mu_{1} - \mu_{2}\). The test statistic will use both sample means, sample standard deviations, and sample sizes. For a one-sample t-test we used \(\frac {s}{\sqrt{n}}\) as a measure of the standard deviation (the standard error). We can rewrite $$\frac {s}{\sqrt{n}} \rightarrow \sqrt {\frac {s^2}{n}}.$$ The numerator of the test statistic will be \((\bar {x_1} - \bar{x_2})-(\mu_{1} - \mu_{2})\). This has a standard deviation of \(\sqrt {\frac {s^2_1}{n_1}+\frac {s^2_2}{n_2}}\). A two-sample t-test follows the same four steps we saw in Chapter 3. Write the null and alternative hypotheses. State the level of significance and find the critical value. The critical value, from the student's t-distribution, has the lesser of n1 - 1 and n2 - 1 degrees of freedom. Compute the test statistic. Compare the test statistic to the critical value and state a conclusion. The assumptions we saw in Chapter 3 must still be met: both samples are independent random samples, and the populations must be normally distributed or both sample sizes must be large enough (n1 and n2 ≥ 30). We will also use the same three pairs of null and alternative hypotheses. Table 1. Null and alternative hypotheses. Rewriting the null hypothesis μ1 = μ2 as μ1 - μ2 = 0 simplifies the numerator.
The test statistic is Welch's approximation (the Satterthwaite adjustment) under the assumption that the independent population variances are not equal. $$t=\frac {(\bar {x_1}-\bar {x_2})-(\mu_{1}-\mu_{2})}{\sqrt {\frac {s^2_1}{n_1}+\frac {s^2_2}{n_2}}}$$ This test statistic follows the student's t-distribution with the degrees of freedom adjusted by $$df=\frac {(\frac {s^2_1}{n_1} + \frac {s^2_2}{n_2})^2}{\frac {1}{n_1-1}(\frac {s^2_1}{n_1})^2+\frac {1}{n_2-1}(\frac {s^2_2}{n_2})^2}$$ A simpler alternative for determining degrees of freedom when working a problem long-hand is to use the lesser of n1 - 1 or n2 - 1 as the degrees of freedom. This method results in a smaller value for the degrees of freedom and therefore a larger critical value. This makes the test more conservative, requiring more evidence to reject the null hypothesis. Example \(\PageIndex{1}\): A forester is studying the number of cavity trees in old growth stands in Adirondack Park in northern New York. He wants to know if there is a significant difference between the mean number of cavity trees in the Adirondack Park and in the old growth stands of the Monongahela National Forest. He collects an independent random sample from each forest. Use a 5% level of significance to test this claim. Adirondack Park Monongahela Forest \(n_1\) = 51 stands \(n_2\) = 56 stands \(\bar {x_1}\) = 39.6 \(\bar {x_2}\) = 43.9 \(s_1\) = 9.4 \(s_2\) = 10.7 1) \(H_0: \mu_1 = \mu_2\) (or \(\mu_1 - \mu_2 = 0\)): there is no difference between the two population means. \(H_1: \mu_1 \neq \mu_2\): there is a difference between the two population means. 2) The level of significance is 5%. This is a two-sided test, so alpha is split between the two tails. Computing the degrees of freedom using the equation above gives approximately 105 degrees of freedom.
$$df = \frac {(\frac {9.4^2}{51}+\frac {10.7^2}{56})^2}{\frac {1}{51-1}(\frac {9.4^2}{51})^2+\frac {1}{56-1}(\frac {10.7^2}{56})^2}=104.9$$ The critical value \(t_{\frac {\alpha}{2}}\), based on 100 degrees of freedom (the closest value in the t-table), is ±1.984. Using 50 degrees of freedom, the critical value is ±2.009. 3) The test statistic is $$t=\frac {(\bar {x_1} - \bar {x_2}) - (\mu _1 - \mu_2)}{\sqrt {\frac {s_1^2}{n_1}+\frac {s_2^2}{n_2}}} =\frac {(39.6-43.9)-(0)}{\sqrt{\frac {9.4^2}{51}+\frac {10.7^2}{56}}} = -2.213$$ 4) The test statistic falls in the rejection zone. Figure 1. A comparison of the critical values and test statistic. We reject the null hypothesis. We have enough evidence to support the claim that there is a difference in the mean number of cavity trees between the Adirondack Park and the Monongahela National Forest. Construct and Interpret a Confidence Interval about the Difference of Two Independent Means A hypothesis test will answer the question about the difference of the means. BUT, we can answer the same question by constructing a confidence interval about the difference of the means. This process is just like the confidence intervals from Chapter 2. Find the critical value. Compute the margin of error. Point estimate ± margin of error. Because we are working with two samples, we must modify the components of the confidence interval to incorporate the information from the two populations. The point estimate is \(\bar {x_1} -\bar {x_2}\). The standard error comes from the test statistic: \(\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}}\). The critical value \(t_{\frac {\alpha}{2}}\) comes from the student's t-table. The confidence interval takes the form of the point estimate plus or minus the standard error of the differences. $$\bar {x_1} -\bar {x_2} \pm t_{\frac {\alpha}{2}}\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}}$$ We will use the same three steps to construct a confidence interval about the difference of the means.
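Before moving on, the arithmetic of Example 1 can be reproduced in a few lines. This is only a sketch of the computation from the summary statistics above (it is not part of the textbook's own materials):

```python
import math

def welch_t(x1, s1, n1, x2, s2, n2):
    """Welch's two-sample t statistic and Satterthwaite degrees of freedom."""
    v1, v2 = s1**2 / n1, s2**2 / n2          # the two variance-over-n terms
    t = (x1 - x2) / math.sqrt(v1 + v2)
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Cavity-tree example: Adirondack Park vs. Monongahela National Forest
t, df = welch_t(39.6, 9.4, 51, 43.9, 10.7, 56)
print(round(t, 3), round(df, 1))  # -2.213 104.9, as in the text
```

Note that the adjusted degrees of freedom (104.9) are much larger than the conservative shortcut value of min(51, 56) - 1 = 50 used for the table lookup.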
critical value \(t_{\frac {\alpha}{2}}\) \(E = t_{\frac {\alpha}{2}}\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}}\) \(\bar {x_1} -\bar {x_2} \pm E\) Example \(\PageIndex{2}\): Let’s look at the mean number of cavity trees in old growth stands again. The forester wants to know if there is a difference between the mean number of cavity trees in old growth stands in the Adirondack forests and in the Monongahela Forest. We can answer this question by constructing a confidence interval about the difference of the means. 1) \(t_{\frac {\alpha}{2}}\) = 2.009 2) \(E = t_{\frac {\alpha}{2}}\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}} = 2.009 \sqrt {\frac {9.4^2}{51}+\frac {10.7^2}{56}}=3.904\) 3) \(\bar {x_1} -\bar {x_2} \pm 3.904\) The 95% confidence interval for the difference of the means is (-8.204, -0.396). We can be 95% confident that this interval contains the mean difference in number of cavity trees between the two locations. BUT, this doesn’t answer the question the forester asked. Is there a difference in the mean number of cavity trees between the Adirondack and Monongahela forests? To answer this, we must look at the confidence interval interpretations. Confidence Interval Interpretations If the confidence interval contains all positive values, we find a significant difference between the groups, AND we can conclude that the mean of the first group is significantly greater than the mean of the second group. If the confidence interval contains all negative values, we find a significant difference between the groups, AND we can conclude that the mean of the first group is significantly less than the mean of the second group. If the confidence interval contains zero (it goes from negative to positive values), we find NO significant difference between the groups. In this problem, the confidence interval is (-8.204, -0.396). 
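As a check, the interval in Example 2 can be recomputed directly from the same summary statistics and table value; a minimal sketch:

```python
import math

x1, s1, n1 = 39.6, 9.4, 51    # Adirondack Park
x2, s2, n2 = 43.9, 10.7, 56   # Monongahela Forest
t_crit = 2.009                # t_{alpha/2} with 50 df, from the t-table

se = math.sqrt(s1**2 / n1 + s2**2 / n2)   # standard error of the difference
E = t_crit * se                           # margin of error
lower, upper = (x1 - x2) - E, (x1 - x2) + E
print(round(E, 3), round(lower, 3), round(upper, 3))  # 3.904 -8.204 -0.396
```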
We have all negative values, so we can conclude that there is a significant difference in the mean number of cavity trees AND that the mean number of cavity trees in the Adirondack forests is significantly less than the mean number of cavity trees in the Monongahela Forest. The confidence interval gives an estimate of the mean difference in number of cavity trees between the two forests. There are, on average, 0.396 to 8.204 fewer cavity trees in the Adirondack Park than in the Monongahela Forest. P-value Approach We can also use the p-value approach to answer the question. Remember, the p-value is the area under the t-distribution curve beyond the test statistic. This example is a two-sided test (H1: μ1 ≠ μ2), so the p-value, when computed by hand, will be multiplied by two. The test statistic equals -2.213, so the p-value is two times the area to the left of -2.213. We can only estimate the p-value using the student's t-table. Using the lesser of n1 - 1 or n2 - 1 as the degrees of freedom, we have 50 degrees of freedom. Go across the 50 row in the student's t-table until you find the absolute value of the test statistic. In this case, 2.213 falls between 2.109 and 2.403. Going up to the top of each of those columns gives you the one-tailed bounds on the p-value (between 0.01 and 0.02). Table 2. Student t-Distribution Doubling these bounds gives 0.02 < p < 0.04: the p-value is greater than 0.02 but less than 0.04. This is less than the level of significance (0.05), so we reject the null hypothesis. There is enough evidence to support the claim that there is a significant difference in the mean number of cavity trees between the areas. Example \(\PageIndex{3}\): Researchers are studying the relationship between logging activities in the northern forests and amphibian habitats. They were comparing moisture levels between old-growth and post-harvest habitats. The researchers believe that the post-harvest habitat has a lower moisture level.
They collected data on moisture levels from two independent random samples. Test their claim using a 5% level of significance. Old growth: n1 = 26, \(\bar{x}_1\) = 0.62 g/cm³, s1 = 0.12 g/cm³. Post-harvest: n2 = 31, \(\bar{x}_2\) = 0.56 g/cm³, s2 = 0.17 g/cm³. H0: μ1 = μ2 or μ1 - μ2 = 0. There is no difference between the two population means. H1: μ1 > μ2. The mean moisture level in old growth forests is greater than the post-harvest level. We will use the critical value based on the lesser of n1 - 1 or n2 - 1 degrees of freedom. In this problem, there are 25 degrees of freedom and the critical value is 1.708. Now compute the test statistic. $$t=\frac {(0.62-0.56)-0}{\sqrt {\frac {0.12^2}{26}+\frac {0.17^2}{31}}} = 1.556$$ The test statistic does not fall in the rejection zone. We fail to reject the null hypothesis. There is not enough evidence to support the claim that the moisture level is significantly lower in the post-harvest habitat. Now answer this question by constructing a 90% confidence interval about the difference of the means. 1) \(t_{\frac {\alpha}{2}}\) = 1.708 2) E = \(t_{\frac {\alpha}{2}}\)\(\sqrt {\frac {s_1^2}{n_1}+\frac {s^2_2}{n_2}}=1.708\sqrt {\frac {0.12^2}{26}+\frac {0.17^2}{31}}=0.0658\) 3) \(\bar {x_1} -\bar {x_2} \pm E= (0.62-0.56) ±0.0658\) The 90% confidence interval for the difference of the means is (-0.0058, 0.1258). The values in the confidence interval run from negative to positive, indicating that there is no significant difference in the mean moisture levels between old growth and post-harvest stands. Software Solutions Minitab Two-sample T for old vs. post N Mean StDev SE Mean old 26 0.620 0.121 0.024 post 31 0.559 0.172 0.031 Difference = \(\mu_{(old)} – \mu_{(post)}\) Estimate for difference: 0.0603 95% lower bound for difference: -0.0049 T-Test of difference = 0 (vs >): T-Value = 1.55 p-Value = 0.064 DF = 53 The p-value (0.064) is greater than the level of significance, so we fail to reject the null hypothesis. Additional example: www.youtube.com/watch?v=7pIb-GVixFo.
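The moisture-level example works out the same way; a compact sketch computing both the test statistic and the 90% interval from the summary statistics (again, an illustration, not part of the textbook):

```python
import math

x1, s1, n1 = 0.62, 0.12, 26   # old growth
x2, s2, n2 = 0.56, 0.17, 31   # post-harvest
t_crit = 1.708                # 25 df (lesser of n1-1, n2-1), from the t-table

se = math.sqrt(s1**2 / n1 + s2**2 / n2)
t = (x1 - x2) / se            # one-sided test statistic
E = t_crit * se               # margin of error for the 90% interval
print(round(t, 3))                                        # 1.556
print(round((x1 - x2) - E, 4), round((x1 - x2) + E, 4))   # -0.0058 0.1258
```

The hand computation, the Minitab output (T = 1.55) and the Excel output (t Stat = 1.5574) differ only because the software uses the Satterthwaite degrees of freedom and unrounded intermediate values.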
Excel Variable 1 Variable 2 Mean 0.619615 0.559355 Variance 0.014708 0.02948 Observations 26 31 Hypothesized Mean Difference 0 df 54 t Stat 1.557361 \(P(T\le t)\) one-tail 0.063809 t Critical one-tail 1.673565 \(P(T\le t)\) two-tail 0.127617 t Critical two-tail 2.004879 The one-tail p-value (0.063809) is greater than the level of significance, therefore, we fail to reject the null hypothesis.
Given an irreducible quartic $f(x) \in F[x]$ with roots $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ and Galois group $G = S_4$, what is the degree of the extension $E = F(\alpha_1+\alpha_2)$ over $F$? Find all subfields of $E$. I began by trying to find the subgroup $H$ of $S_4$ that corresponds to $E$. I believe $H$ would need to fix the sum of the first two roots, which would automatically fix the sum of the remaining two roots. That is, $F(\alpha_1+\alpha_2) = F(\alpha_1+\alpha_2,\alpha_3+\alpha_4)$. Thus $H$ can permute within each pair of roots as well as swap the two pairs: $$H = \{ (),(12),(34),(12)(34),(13)(24),(14)(23),(1423),(1324) \} \simeq D_8. $$ This would mean that $[E:F] = 24/8 = 3$. And there are no proper intermediate subfields since $D_8$ is maximal in $S_4$. Is this correct? It seems to make sense to me, but I wonder, since the question asks for the subfields and there do not appear to be any.
Let $S_n$ be the symmetric group on $\{1, \ldots, n\}$. Let \begin{align} T=\sum_{g\in S_n} g. \end{align} Are there some references about the factorization of $T$? In the case of $n=3$, we have \begin{align} & T=1 + (12) + (23) + (12)(23) + (23)(12) + (12)(23)(12) \\ & = 1 + (12) + (23) + (12)(23) + (23)(12) + (23)(12)(23) \\ & = (1 + (12))((12) + (23) + (23)(12)) \\ & = (1 + (23))((12) + (23) + (12)(23)). \end{align} Has this problem been studied in some references? Thank you very much. Edit: the group algebra I consider is $\mathbb{C} S_n$.
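As a quick illustration (my own check, not from a reference), the $n=3$ factorization above can be verified mechanically by multiplying permutations and comparing multisets of products:

```python
from itertools import permutations

def compose(p, q):
    """(p.q)(i) = p(q(i)): apply q first, then p. Permutations are tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

e   = (0, 1, 2)          # identity
t12 = (1, 0, 2)          # the transposition (12), 0-indexed
t23 = (0, 2, 1)          # the transposition (23)
c   = compose(t23, t12)  # the product (23)(12), a 3-cycle

# Expand (1 + (12)) * ((12) + (23) + (23)(12)) term by term
left, right = [e, t12], [t12, t23, c]
products = sorted(compose(a, b) for a in left for b in right)

# The 6 products are exactly the 6 elements of S_3, each appearing once,
# so the product equals T = sum over all of S_3.
assert products == sorted(permutations(range(3)))
print("factorization verified")
```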
I'm working my way through Griffiths' Introduction to Electrodynamics. In Ch. 10, gauge transformations are introduced. The author shows that, given any magnetic potential $\textbf{A}_0$ and electric potential $V_0$, we can create a new set of equivalent potentials given by: $$ \textbf{A} = \textbf{A}_0 + \nabla\lambda \\ V = V_0 - \frac{\partial \lambda}{\partial t}. $$ These transformations are defined as "gauge transformations". The author then introduces two particular gauges, the Coulomb and Lorenz gauges, defined respectively by: $$ \nabla \cdot \textbf{A} = 0 \\ \nabla \cdot \textbf{A}= -\mu_0\epsilon_0\frac{\partial V}{\partial t}. $$ This is where I am confused. I do not understand how picking the divergence of $\textbf{A}$ to be either of these two values actually constitutes a gauge transformation, i.e., how it meets the conditions of the top two equations. How do we know that such a $\lambda$ even exists for setting the divergence of $\textbf{A}$ to either of these values? Can someone convince me that such a function exists for either transformation, or somehow show me that these transformations are indeed "gauge transformations" as they are defined above?
Consider a system of interacting electrons. Using the path integral formalism, we introduce the Hubbard-Stratonovich transformation to decouple the interaction in the density channel. Then we integrate out the fermionic degrees of freedom and extremize the action. The new effective action involves a term of the form $$ \ln\left(-\hat G^{-1}\right) + \ln\left(1 - i\hat G\hat\phi\right)\,, $$ where $\hat G$ is the diagonal bare propagator for the electrons and $\hat\phi$ is the auxiliary field. The first term just gives the partition function for the non-interacting system. The trace of the second term can be expanded as $$ \mathrm{tr} \sum_n \frac{\left(-i\hat G\hat\phi\right)^n}{n}\,. $$ The first term is $$ \mathrm{tr}\left(\hat G\hat\phi\right) = \phi_0\sum_n G_n\,, $$ where $G_n$ is the diagonal element of the propagator matrix. The second term is $$ \sum_q \phi_q \phi_{-q}\left(\sum_p G_p G_{p+q}\right)\,. $$ The stuff inside the parentheses is the RPA polarization bubble. So far so good. The third term becomes $$ \sum_{kp}\phi_k\phi_p\phi_{-k-p}\left(\sum_qG_qG_{k+q}G_{q-p}\right)\,. $$ This is a triangular loop diagram. Intuitively, it seems that it should cancel. Odd powers just seem, well, odd. In fact, consulting Altland and Simons' book (second edition, after Eq. (6.6)), it does say that the odd powers cancel by symmetry. However, I don't see it. Am I missing something obvious? Thank you
If I try to measure the field at one point in spacetime, I should get a real value which should be an eigenvalue of the quantum field, right? I guess the eigenvectors of the quantum field also live in Fock space? Yes, that's basically correct. If the value of the field at a point is observable, the eigenvalues of the operator representing it are the values the field can attain at that point. And the eigenvectors live in the Hilbert space of states, which you can think of (at least conceptually) as $L^2(\{\mbox{initial boundary conditions}\})$. This Hilbert space is a Fock space in free field theories. There are a couple of subtleties worth mentioning: The value of the field at a point might not be a physical observable. In electrodynamics, for example, you can't actually measure the value $A_\mu(x)$ of a component of the connection 1-form; instead, you can measure gauge-invariant quantities like the curvature $F_A(x)$ and the holonomy $Hol_L(A)$ along a loop $L$. Likewise, in nonlinear sigma models, where the classical fields are maps $\phi: \Sigma \to X$ to some curved manifold, you can't measure the value $\phi(x)$: eigenvalues are complex numbers, not points on a manifold. But you do get a real observable $\mathcal{O}_f(x)$ for every function $f: X \to \mathbb{R}$; measure the value of $f(\phi(x))$. It's also not strictly correct to say that quantum fields are operator-valued functions on spacetime. The physical problem is that if you measure the value of the field at one point, you'll disturb the field near that point, affecting the values at other nearby points. The closer you look to the place where you made the measurement, the bigger the disturbance; even in free scalar field theory, the 2-point correlation function $\langle \phi(x) \phi(y) \rangle$ blows up as $x \to y$. This tells you that the fields aren't quite functions, because you can't multiply the 'value at a point' observables when they live at the exact same point.
The mathematically correct thing to do is to think of the field (and more generally, local observables constructed from fields) as an operator-valued distribution. Distributions are a mild generalization of functions; they are objects which don't have values at a point, but which do have average values in an arbitrarily small (but finite) region. Basically, for any test function $f$ on your spacetime, you get an operator $\phi(f)$ which you can think of as measuring the value "$\int f(x) \phi(x) dx$" of $\phi$ sampled by a probe with resolution $f$. Distributions can only be multiplied when their singularities don't coincide; they exhibit the same obnoxious behavior that quantum field operators do. Probably you don't have to worry about this too much. For one thing, even if you can't (strictly speaking) define an operator $\phi(x)$, you can still safely talk about the correlation function $\langle \phi(x)\phi(y)\rangle$. (It's the kernel function of the bilinear map $(f,g)\mapsto \langle \phi(f)\phi(g) \rangle$.) Physicists don't spend a lot of time worrying about solving the eigenvalue problem for the field operators. Usually the spectrum is all of $\mathbb{R}$, and finding the eigenvectors isn't worth the trouble. There is one important exception, though: in the Standard Model, it's pretty important that the vacuum vector is an eigenvector of the Higgs field operator, with non-zero eigenvalue.
The Annals of Applied Probability, Volume 3, Number 4 (1993), 1151-1169. Greedy Lattice Animals I: Upper Bounds. Abstract: Let $\{X_\nu: \nu \in \mathbb{Z}^d\}$ be an i.i.d. family of positive random variables. For each set $\xi$ of vertices of $\mathbb{Z}^d$, its weight is defined as $S(\xi) = \sum_{\nu \in \xi}X_\nu$. A greedy lattice animal of size $n$ is a connected subset of $\mathbb{Z}^d$ of $n$ vertices, containing the origin, and whose weight is maximal among all such sets. Let $N_n$ denote this maximal weight. We show that if the expectation of $X^d_\nu(\log^+ X_\nu)^{d+a}$ is finite for some $a > 0$, then w.p.1 $N_n \leq Mn$ eventually for some finite constant $M$. Estimates for the tail of the distribution of $N_n$ are also derived. Article information: First available in Project Euclid: 19 April 2007. Permanent link: https://projecteuclid.org/euclid.aoap/1177005277. DOI: 10.1214/aoap/1177005277. Mathematical Reviews number (MathSciNet): MR1241039. Zentralblatt MATH identifier: 0818.60039. Subjects: Primary 60G50 (sums of independent random variables; random walks); Secondary 60K35 (interacting random processes; statistical mechanics type models; percolation theory). Citation: Cox, J. Theodore; Gandolfi, Alberto; Griffin, Philip S.; Kesten, Harry. Greedy Lattice Animals I: Upper Bounds. Ann. Appl. Probab. 3 (1993), no. 4, 1151-1169. doi:10.1214/aoap/1177005277. See also Part II: Gandolfi, Alberto; Kesten, Harry. Greedy Lattice Animals II: Linear Growth. Ann. Appl. Probab. 4 (1994), no. 1, 76-107.
Almost every textbook in probability or statistics will speak of classifying distributions into two different camps: discrete (singular in some older textbooks) and continuous. Discrete distributions have either a finite or a countably infinite sample space (a set of Lebesgue measure 0), such as the Poisson or binomial distribution, or simply rolling a die. The probability of each point in the sample space is nonzero. Continuous distributions have a continuous sample space, such as the normal distribution. A distribution in either of these classes is characterized by either a probability mass function (pmf) or a probability density function (pdf), the latter derived from the distribution function by taking a derivative. There is, however, a third kind. One rarely talked about, or mentioned quickly and then discarded. This class of distributions is supported on a set of Lebesgue measure 0, yet the probability of any point in the set is 0, unlike discrete distributions. The distribution function is continuous, even uniformly continuous, but not absolutely continuous, meaning it's not a continuous distribution. The pdf doesn't exist, but one can still find moments of the distribution (e.g. mean, variance). They are almost never encountered in practice, and the only real example I've been able to find thus far is based on the Cantor set. This class is the set of red-headed step-distributions: the singular continuous distributions. Back up, what is Lebesgue measure? Measure theory itself can get extremely complicated and abstract. The idea of measures is to give the "size" of subsets of a space. Lebesgue measure is one type of measure, and is actually something most people are familiar with: the "size" of subsets of Euclidean space in n dimensions. For example, when n=1, we live in 1D space. Intervals. The Lebesgue measure of an interval [a,b] on the real line is just the length of that interval: b-a.
When we move to two dimensions, \mathbb{R}\times \mathbb{R}, the Cartesian product of 1D space with itself, our intervals combine to make rectangles. The Lebesgue measure in 2D space is area; so a rectangle built from [a,b]\times [c,d] has Lebesgue measure (b-a)(d-c). Lebesgue measure in 3D space is volume. And so forth. Now, points are 0-dimensional in Euclidean space. They have no size, no mass. They have Lebesgue measure 0. Intuitively, we can simply see that Lebesgue measure helps us see how much "space" something takes up in the Euclidean world, and points take up no space, and hence should have measure 0. In fact, any countable set of points has Lebesgue measure 0. Even an infinite but countable set. The measure of a countable union of disjoint Lebesgue measurable sets equals the sum of the measures of the individual sets. Points are certainly disjoint, each has measure 0, and a countable sum of 0s is still 0. So, the set \{0,1,2\} has Lebesgue measure 0. But so do the natural numbers \mathbb{N} and the rational numbers \mathbb{Q}, even though the rational numbers contain the set of natural numbers. It is actually possible to construct an uncountable set that has Lebesgue measure 0, and we will need that in constructing our example of a singular continuous distribution. For now, we'll examine discrete and continuous distributions briefly. Discrete (Singular) Distributions These are the ones most probability textbooks begin with, and most of the examples are familiar. Roll a fair die. The sample space for a roll of a fair die X is S =\{1,2,3,4,5,6\}. The PMF is P(X = x) = 1/6, where x \in S. The CDF is given by the function P(X\leq x) = \sum_{j\leq x}P(X=j). For example, P(X \leq 4) = \sum_{j\leq 4}\frac{1}{6} = \frac{2}{3}. Example: Binomial Distribution A binomial random variable X counts the number of "successes" or 1s in a binary sequence of n Bernoulli random variables. Think of a sequence of coin tosses, and counting the number of heads.
In this case, the sample space is finite: S = \{0,1,2,\ldots,n\}. If the probability of a 1, or "success", is p, then the PMF of X is given by P(X=x) = {n \choose x}p^{x}(1-p)^{n-x}. Note here again that the sample space has Lebesgue measure 0, but the probability of any point in that space is a positive number. Continuous Distributions Continuous distributions operate on a continuous sample space, usually an interval, a Cartesian product of intervals, or even a union of intervals. Continuous distribution functions F are absolutely continuous, meaning that (in one equivalent definition) the distribution function has a derivative f=F' almost everywhere that is Lebesgue integrable and obeys the Fundamental Theorem of Calculus: F(b)-F(a) = \int_{a}^{b}f(x)\,dx for a< b. This f is the probability density function (PDF), derived by differentiating the distribution function. Let's mention some examples of these: The Continuous Uniform Distribution Suppose we have a continuous interval [a,b], and the probability mass is spread equally along this interval, meaning that the probability that our random variable X lies in a subinterval of length s is the same regardless of that subinterval's location. Suppose we do not allow the random variable to take any values outside the interval. The sample space is continuous but over a finite interval. The distribution function for this X is given by F(x) = \left\{\begin{array}{lr}0&x< a\\\frac{x-a}{b-a}&a\leq x \leq b\\1&x > b\end{array}\right. This is an absolutely continuous function, so we may easily derive the PDF by differentiating F: f(x) = \mathbb{1}_{x \in [a,b]}\frac{1}{b-a}, where \mathbb{1}_{x \in [a,b]} is the indicator function that takes value 1 if x is in the interval, and 0 otherwise. This distribution is the continuous version of a die roll: the die roll is the discrete uniform distribution, and here we just allow for a die with uncountably many sides with values in [a,b].
The probability of any particular point is 0, however, even though it is possible to draw a random number from this interval. To see this, note that the probability that the random variable X lies between two points in the interval, say x_{1} and x_{2}, is given by multiplying the height of the PDF by the length (Lebesgue measure) of the subinterval. The Lebesgue measure of a point is 0, so even though the PDF has a value at that point, the probability of that point is 0. We don't run into issues here mathematically because we are on a continuous interval. The Normal Distribution Likely the most famous continuous distribution, the normal distribution is given by the famous "bell curve." In this case, the sample space is the entire real line. The probability that a normally distributed random variable X lies between any two points a and b is given by P(a\leq X \leq b) = \int_{a}^{b}\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)dx, where \mu is the mean and \sigma^{2} is the variance. Singular Continuous Distributions We're going to begin this section by discussing everyone's favorite counterexample in mathematics: the Cantor set. The Cantor set The Cantor set is given by the limit of the following construction: Take the interval [0,1]. Remove the middle third: (1/3, 2/3), so you're left with [0,1/3]\cup[2/3,1]. Remove the middle third of each of the remaining intervals. So you remove (1/9,2/9) from [0,1/3] and (7/9,8/9) from [2/3,1], leaving you with the set [0,1/9]\cup[2/9,1/3]\cup[2/3,7/9]\cup[8/9,1]. Continue this process infinitely. The result is an example of a set that is uncountable, yet has Lebesgue measure 0. Earlier, when we discussed Lebesgue measure, we noted that all countable sets have measure 0. From this we may conclude that only uncountable sets (like intervals) can have nonzero Lebesgue measure. However, the Cantor set illustrates that not all uncountable sets have positive Lebesgue measure.
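The middle-thirds construction is easy to mirror in code; a small sketch (my own illustration) generating the closed intervals remaining after each step, using exact rational arithmetic:

```python
from fractions import Fraction

def cantor_step(intervals):
    """Remove the open middle third of each closed interval (a, b)-pair."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.extend([(a, a + third), (b - third, b)])
    return out

intervals = [(Fraction(0), Fraction(1))]
for _ in range(2):
    intervals = cantor_step(intervals)

# After two steps: [0,1/9] u [2/9,1/3] u [2/3,7/9] u [8/9,1], as in the text
print(intervals)
```

Each step doubles the number of intervals while shrinking each to a third of its length, which is exactly why the total remaining length (2/3)^n vanishes in the limit even though points survive.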
To see why the Cantor set has Lebesgue measure 0, we will look at the measure of the sets that are removed (the complement of the Cantor set). At the first step, we have removed one interval of size 1/3. At the second step, we remove two intervals of size 1/9. At the third step, we remove four intervals of size 1/27. Let’s call S_{n} the subset removed from the interval [0,1] by the nth step. By the end of the third step, we have removed a set of size

m(S_{3}) = \frac{1}{3} + \frac{2}{3^{2}} + \frac{4}{3^{3}}

By the nth step,

m(S_{n}) = \sum_{j=0}^{n-1}\frac{2^{j}}{3^{j+1}}

This is the partial sum of a geometric series, so

m(S_{n}) = 1-\left(\frac{2}{3}\right)^{n}

Now, the Cantor set is formed when n \to \infty. The complement of the Cantor set, which we called S_{\infty}, then has measure

m(S_{\infty}) = \lim_{n \to \infty}m(S_{n}) = \lim_{n \to \infty}1-\left(\frac{2}{3}\right)^{n} = 1

But the original interval we started with had Lebesgue measure 1, and the union of the Cantor set with its complement S_{\infty} is the interval [0,1]. That means that the measure of the Cantor set plus the measure of its complement must add to 1, which implies that the Cantor set is of measure 0. However, since we only ever removed open intervals during the construction, there must be something left (the endpoints such as 1/3 and 2/3 are never removed, for a start); in fact, there are uncountably many points left. Now we have an uncountable set of Lebesgue measure 0. We’re going to use this set to construct the only example I could find of a singular continuous distribution. It is very important that the Cantor set is an uncountable set of Lebesgue measure 0.

Building the Cantor distribution

Update: following a correction to an earlier version, I’m going to show how to construct this distribution both directly and via the complement of the Cantor set. The latter approach was used in a textbook I found, and is a bit convoluted in its construction, but I’m going to leave it in.
The direct construction is to look at the intervals left behind at each stage n of constructing the Cantor set. Assign a probability mass of \frac{1}{2^{n}} to each of the 2^{n} intervals left behind, and this is your distribution. It’s basically a continuous uniform distribution, but on the stages of the Cantor set construction. Sending n \to \infty yields the Cantor set, with the probability mass now concentrated on a set of measure 0. Thus, unlike the continuous uniform distribution, where the probability of any single point was 0 but the support had positive measure, we essentially have the continuous uniform distribution occurring on a set of measure 0. That is, we have a continuous distribution function whose support is singular (measure 0) yet uncountable, and thus not discrete. This distribution is therefore neither absolutely continuous nor discrete.

Another way to construct this is by complement, via Kai Lai Chung’s A Course in Probability Theory. (Note: after a second glance at this, I found this to be a relatively convoluted way of constructing the distribution, since it can be fairly easily constructed directly. However, I imagine the author’s purpose was to be very rigid and formal to cover all his bases, so I present a review of it here.) Let’s go back to the construction of the Cantor set. At each step n we have removed, in total, 2^{n}-1 disjoint intervals. Let’s number those intervals, going from left to right, as J_{n,k}, where k = 1,2,\ldots, 2^{n}-1. For example, at n=2 we have J_{2,1} = (1/9,2/9), J_{2,2} = (1/3,2/3), and J_{2,3} = (7/9,8/9). Now let c_{n,k} = \frac{k}{2^{n}}. This will be the value the distribution function takes on the interval J_{n,k}. So we define the distribution function as

F(x) = c_{n,k}, \quad x \in J_{n,k}

Let U_{n} = \cup_{k=1}^{2^{n}-1}J_{n,k}, and U = \lim_{n\to\infty}U_{n}. The function F is indeed a distribution function and can be shown to be uniformly continuous on the set D = (-\infty,0)\cup U \cup (1,\infty).
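Chung's definition F(x) = c_{n,k} on J_{n,k} has a neat computational form: read off ternary digits of x until you hit a removed middle third (a digit 1), converting the 0/2 digits seen so far into binary digits 0/1. The sketch below is my own, and it is an approximation in that it truncates at a fixed depth for points of the Cantor set itself:

```python
def cantor_cdf(x, depth=40):
    """Approximate the Cantor distribution function F(x) on [0, 1].
    Ternary digit 1 means x lies in a removed interval J_{n,k}, where F
    is the constant c_{n,k}; digits 0/2 contribute binary digits 0/1."""
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    result, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:
            # Landed in a removed middle third: F is constant here.
            return result + scale
        if digit == 2:
            result += scale
        scale *= 0.5
    return result

# Every x in J_{1,1} = (1/3, 2/3) gives c_{1,1} = 1/2;
# every x in J_{2,1} = (1/9, 2/9) gives c_{2,1} = 1/4.
print(cantor_cdf(0.5), cantor_cdf(0.15))
```

Note how the function is constant on each removed interval yet still climbs continuously from 0 to 1, which is exactly the singular behavior being described.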
However, none of the points in D is in the support of F, so the support of F is contained in the Cantor set (and in fact is the Cantor set). The support (the Cantor set) has measure 0, so it is singular, but the distribution function is continuous, so it cannot be a discrete distribution. This distribution fits in neither of our previous two classes, so we must now create a third class: the singular continuous distribution. (By the way, even though the PDF doesn’t exist, the Cantor distribution still has a mean of 1/2 and a variance of 1/8, but no mode. It does have a moment generating function.)

Any other examples?

With some help, I spent some time poring through quite a few probability books to seek further study and other treatments of singular continuous distributions. Most said absolutely nothing at all, as if the class didn’t exist. One book, Modern Probability Theory and Its Applications, has a rather grumpy approach:

There also exists another kind of continuous distribution function, called singular continuous, whose derivative vanishes at almost all points. This is a somewhat difficult notion to picture, and examples have been constructed only by means of fairly involved analytic operations. From a practical point of view, one may act as if singular continuous distribution functions do not exist, since examples of these functions are rarely, if ever, encountered in practice.

This notion has also led me to a couple of papers, which I intend to review, and I will continue presenting my findings. I happen to have a great fondness for these “edge cases” and forgotten areas of mathematics. I believe they are the most ripe for groundbreaking new material.

Footnotes

The proof of why this is true gets a bit abstract: one first defines Lebesgue outer measure, then shows that a point can be covered by a sequence of closed intervals of total measure as small as you like, so its outer measure is 0.
This isn’t a formal proof, merely a way to establish the intuition.
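The mean of 1/2 and variance of 1/8 quoted above for the Cantor distribution can be checked by simulation: choosing each ternary digit uniformly from {0, 2} samples exactly from the Cantor distribution (up to truncation of the digit expansion). A quick sketch of my own:

```python
import random

def sample_cantor(digits=30, rng=random):
    """X = sum of d_i / 3^i with each ternary digit d_i uniform on {0, 2};
    this samples from the Cantor distribution, up to a 3^-30 truncation."""
    return sum(rng.choice((0, 2)) / 3**i for i in range(1, digits + 1))

random.seed(0)
xs = [sample_cantor() for _ in range(50_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(round(mean, 3), round(var, 3))  # near 0.5 and 0.125
```

The exact values follow from the digit representation: each digit has mean 1 and variance 1, so Var(X) = sum of 1/9^i = 1/8.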
Let $G$ be a finite state automaton (FSA) with transition function $\delta(\cdot,\cdot)$ and initial state $q_0$. Suppose also that $\Sigma_{G}$ is its alphabet. Assume that its closed behavior is the set of strings defined as: $L(G) := \{s \in \Sigma_{G}^{*} \mid \delta(q_0,s) \text{ is defined in }G\}$ Now, consider two particular FSAs $G_1$ and $G_2$ such that $\Sigma_{G_{1}} \subseteq \Sigma_{G_{2}}$. Clearly, then, we have $\Sigma_{G_{1}}^{*} \subseteq \Sigma_{G_{2}}^{*}$. I'm wondering whether or not we can necessarily conclude that $L(G_{1}) \subseteq L(G_{2})$. To me, it's not always true. Am I right? Here is a simple example to contradict it, in which $\Sigma_{G_{1}}^{*} \subseteq \Sigma_{G_{2}}^{*}$ but $L(G_{1}) \supseteq L(G_{2})$.
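Indeed, the containment can fail. Here is a small counterexample of my own (independent of the example above), sketched in Python with the partial transition function represented as a dict:

```python
def runs(delta, q0, s):
    """True iff delta(q0, s) is defined, i.e. s is in the closed behavior
    L(G).  The partial transition function delta maps (state, symbol) -> state."""
    q = q0
    for c in s:
        if (q, c) not in delta:
            return False
        q = delta[(q, c)]
    return True

# G1 over alphabet {a}: a single transition q0 -a-> q1, so L(G1) = {"", "a"}.
delta1 = {(0, "a"): 1}
# G2 over alphabet {a, b}: no transitions at all, so L(G2) = {""}.
delta2 = {}

# Sigma_{G1} = {"a"} is a subset of Sigma_{G2} = {"a", "b"}, and yet
# L(G1) strictly contains L(G2):
print(runs(delta1, 0, "a"), runs(delta2, 0, "a"))  # True False
```

The alphabet inclusion says nothing about which transitions are defined, which is what the closed behavior actually depends on.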
Computer Science > Data Structures and Algorithms

Title: Explicit near-Ramanujan graphs of every degree
(Submitted on 16 Sep 2019 (v1), last revised 5 Oct 2019 (this version, v2))

Abstract: For every constant $d \geq 3$ and $\epsilon > 0$, we give a deterministic $\mathrm{poly}(n)$-time algorithm that outputs a $d$-regular graph on $\Theta(n)$ vertices that is $\epsilon$-near-Ramanujan; i.e., its eigenvalues are bounded in magnitude by $2\sqrt{d-1} + \epsilon$ (excluding the single trivial eigenvalue of~$d$).

Submission history
From: Sidhanth Mohanty [view email]
[v1] Mon, 16 Sep 2019 05:03:38 GMT (57kb)
[v2] Sat, 5 Oct 2019 22:03:49 GMT (57kb)
Inferential testing uses the sample mean (\(\bar{x}\)) to estimate the population mean (\(\mu\)). Typically, we use the data from a single sample, but there are many possible samples of the same size that could be drawn from that population. As we saw in the previous chapter, the sample mean (\(\bar{x}\)) is a random variable with its own distribution. The distribution of the sample mean will have a mean equal to \(\mu\) and a standard deviation (standard error) equal to \(\frac{\sigma}{\sqrt {n}}\). Because our inferences about the population mean rely on the sample mean, we focus on the distribution of the sample mean. Is it normal? What if our population is not normally distributed, or we don’t know anything about the distribution of our population?

The Central Limit Theorem (CLT)

The Central Limit Theorem states that the sampling distribution of the sample mean will approach a normal distribution as the sample size increases. So if we do not have a normal distribution, or know nothing about our distribution, the CLT tells us that the distribution of the sample mean (\(\bar{x}\)) becomes approximately normal as n (the sample size) increases. How large does n have to be? A general rule of thumb tells us that n ≥ 30. The Central Limit Theorem tells us that, regardless of the shape of our population, the sampling distribution of the sample mean will be approximately normal as the sample size increases.

Sampling Distribution of the Sample Proportion

The population proportion (\(p\)) is a parameter that is as commonly estimated as the mean. It is just as important to understand the distribution of the sample proportion as that of the sample mean. With proportions, the element either has the characteristic you are interested in or it does not. The sample proportion (\(\hat {p}\)) is calculated by $$ \hat {p} = \frac{x}{n} \label{sampleproption}$$ where \(x\) is the number of elements in your sample with the characteristic and \(n\) is the sample size.
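The Central Limit Theorem claims above, including the \(\sigma/\sqrt{n}\) standard error, are easy to see in a short simulation. This sketch of my own uses a deliberately right-skewed (exponential) population with \(\mu = \sigma = 1\), far from normal:

```python
import random
import statistics

random.seed(1)
n, reps = 40, 5000

# Population: exponential with mean 1 and standard deviation 1.
sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(reps)
]

# The mean of the sample means sits near mu = 1, and their spread sits near
# the standard error sigma / sqrt(n) = 1 / sqrt(40), roughly 0.158.
print(round(statistics.fmean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))
```

A histogram of `sample_means` would already look bell-shaped at n = 40, even though the population itself is strongly skewed.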
Example \(\PageIndex{1}\): sample proportion

You are studying the number of cavity trees in the Monongahela National Forest for wildlife habitat. You have a sample size of n = 950 trees and, of those trees, x = 238 trees with cavities. Calculate the sample proportion.

A naturally formed tree hollow at the base of the tree. Image used with permission (CC BY 2.0; Lauren "Lolly" Weinhold).

Solution

This is a simple application of Equation \ref{sampleproption}: $$\hat {p} = \frac {238}{950} =0.25 \nonumber$$ The distribution of the sample proportion has a mean of $$\mu_{\hat{p}} = p$$ and a standard deviation of $$\sigma_{\hat {p}} = \sqrt {\frac {p(1-p)}{n}}.$$ The sample proportion is approximately normally distributed if \(n\) is very large and \(\hat{p}\) is not close to 0 or 1. We can also use the following relationship to assess normality when the parameter being estimated is \(p\), the population proportion: $$n\hat {p} (1- \hat {p}) \ge 10$$
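The point estimate, its estimated standard error, and the normality check just described can be wrapped up in a few lines. This is a sketch of my own, not a library routine:

```python
import math

def sample_proportion(x, n):
    """p-hat = x / n, its estimated standard error, and the
    n * p-hat * (1 - p-hat) >= 10 normality check."""
    p_hat = x / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    normal_ok = n * p_hat * (1 - p_hat) >= 10
    return p_hat, se, normal_ok

# The cavity-tree example: 238 of 950 sampled trees had cavities.
p_hat, se, normal_ok = sample_proportion(238, 950)
print(round(p_hat, 2), round(se, 4), normal_ok)  # 0.25 0.0141 True
```

With 238 successes in 950 trees the normality condition is comfortably satisfied, so the normal approximation used in the following sections applies.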
Inferences about Two Population Proportions

We can apply the same methods we just learned with means to our two-sample proportion problems. We have two populations with two samples, and we want to compare the population proportions. Is the proportion of lakes in New York with invasive species different from the proportion of lakes in Michigan with invasive species? Is the proportion of construction companies using certified lumber greater in the northeast than in the southeast? A test of two population proportions is very similar to a test of two means, except that the parameter of interest is now \(p\) instead of \(\mu\). With a one-sample proportion test, we used \(\hat p =\frac {x}{n}\) as the point estimate of \(p\). We expect that \(\hat p\) will be close to \(p\). With a test of two proportions, we will have two \(\hat p\)'s, and we expect that \((\hat p_1 - \hat p_2)\) will be close to \((p_1 - p_2)\). The test statistic accounts for both samples. With a one-sample proportion test, the test statistic is $$z = \frac {\hat p - p}{\sqrt {\frac {p(1-p)}{n}}}$$ and it has an approximate standard normal distribution. For a two-sample proportion test, we would expect the test statistic to be $$z=\frac {(\hat {p_1} -\hat {p_2})-(p_1-p_2)}{\sqrt {\frac {p_1(1-p_1)}{n_1}+\frac {p_2(1-p_2)}{n_2}}}$$ HOWEVER, the null hypothesis will be that \(p_1 = p_2\). Because \(H_0\) is assumed to be true, the test assumes that \(p_1 = p_2\), so their common value equals some \(p\), a single population proportion. We must compute a pooled estimate of \(p\) (it is unknown) using our sample data.
$$\bar p = \frac {x_1+x_2}{n_1+n_2}$$ The test statistic then takes the form $$z=\frac {(\hat {p_1} -\hat {p_2})-(p_1-p_2)}{\sqrt {\frac {\bar p(1-\bar p)}{n_1}+\frac {\bar p(1-\bar p)}{n_2}}}$$ The hypothesis test follows the same steps that we have seen in previous sections:

State the null and alternative hypotheses
State the level of significance and determine the critical value
Compute the test statistic
Compare the critical value and the test statistic and state a conclusion

The assumptions that we set for a one-sample proportion test still hold true for both samples. Both must be independent random samples satisfying the following statements:

\(n p(1 - p) \ge 10\) for each sample
Each sample size is no more than 5% of the population size.

We can again use the same three pairs of null and alternative hypotheses. Notice that we are working with population proportions, so the parameter is \(p\). Table 5. Null and alternative hypotheses. The critical value comes from the standard normal table and depends on the alternative hypothesis (is the question one- or two-sided?). As usual, you must state a conclusion. You must always answer the question that is asked in the alternative hypothesis.

Example \(\PageIndex{1}\): A researcher believes that a greater proportion of construction companies in the northeast are using certified lumber in home construction projects compared to companies in the southeast. She collected a random sample of 173 companies in the southeast and found that 86 used at least 30% certified lumber. She collected another random sample of 115 companies from the northeast and found that 68 used at least 30% certified lumber. Test the researcher’s claim that a greater proportion of companies in the northeast use at least 30% certified lumber compared to the southeast. α = 0.05.
Southeast: \(n_1 = 173\), \(x_1 = 86\)
Northeast: \(n_2 = 115\), \(x_2 = 68\)

Solution

Write the null and alternative hypotheses:
\(H_0: p_1 = p_2\) or \(p_1 – p_2 = 0\)
\(H_1: p_1 < p_2\)

The critical value comes from the standard normal table. It is a one-sided test, so alpha is all in the left tail. The critical value is -1.645. Compute the point estimates: $$\hat {p_1} = \frac {86}{173}=0.497$$ $$\hat {p_2} = \frac {68}{115} = 0.591$$ Now compute \(\bar p\): $$\bar p = \frac {x_1+x_2}{n_1+n_2} = \frac {86+68}{173+115} = 0.535$$ The test statistic is $$z=\frac {(\hat {p_1} -\hat {p_2})-(p_1-p_2)}{\sqrt {\frac {\bar p(1-\bar p)}{n_1}+\frac {\bar p(1-\bar p)}{n_2}}} = \frac {(0.497-0.591)-0}{\sqrt {\frac {0.535(1-0.535)}{173}+\frac {0.535(1-0.535)}{115}}} = -1.57$$ Now compare the critical value to the test statistic and state a conclusion. Figure 3. A comparison of the critical value and the test statistic. We fail to reject the null hypothesis. There is not enough evidence to support the claim that a greater proportion of companies in the northeast use at least 30% certified lumber compared to companies in the southeast.

Using the P-Value Approach

We can also answer this question using the p-value approach. The p-value is the area associated with the test statistic. This is a left-tailed problem with a test statistic of -1.57, so the p-value is the area to the left of -1.57. Look up the area associated with the Z-score -1.57 in the standard normal table. The p-value is 0.0582. The hatched area (p-value) is greater than the 5% level of significance (red area). We fail to reject the null hypothesis. There is not enough statistical evidence to support the claim that a greater proportion of companies in the northeast use at least 30% certified lumber compared to companies in the southeast. Figure 4. Comparison of p-value and the level of significance.
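The pooled test statistic and p-value computed by hand above can be reproduced with the standard library alone. The helper names below are my own:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-sample z statistic for H0: p1 = p2."""
    p_bar = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_bar * (1 - p_bar) * (1 / n1 + 1 / n2))
    return (x1 / n1 - x2 / n2) / se

# Certified-lumber example: southeast 86/173, northeast 68/115.
z = two_prop_z(86, 173, 68, 115)
print(round(z, 2))              # -1.57
print(round(normal_cdf(z), 4))  # left-tail p-value, about 0.058
```

Since -1.57 does not fall below the critical value -1.645 (equivalently, 0.058 > 0.05), the computation confirms the fail-to-reject conclusion.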
Construct and Interpret a Confidence Interval about the Difference of Two Proportions

Just like a two-sample t-test about the means, we can answer this question by constructing a confidence interval about the difference of the proportions. The point estimate is \(\hat {p_1} - \hat {p_2}\). The standard error is \(\sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}} \) and the critical value \(z_{\alpha/2}\) comes from the standard normal table. The confidence interval takes the form of the point estimate ± the margin of error. $$(\hat {p_1}- \hat {p_2}) \pm z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1} + \frac {\hat {p_2}(1-\hat {p_2})}{n_2}}$$ We will use the same three steps to construct a confidence interval about the difference of the proportions. Notice the estimate of the standard error of the differences. We do not rely on the pooled estimate of \(p\) when constructing confidence intervals to estimate the difference in proportions. This is because we are not making any assumptions regarding the equality of \(p_1\) and \(p_2\), as we did in the hypothesis test.

1) critical value \(z_{\alpha/2}\)
2) \(E = z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}\)
3) \((\hat {p_1}-\hat {p_2}) \pm E\)

Let’s revisit Ex. 6, but this time we will construct a confidence interval about the difference between the two proportions.

Example \(\PageIndex{2}\): The researcher claims that a greater proportion of companies in the northeast use at least 30% certified lumber compared to companies in the southeast. We can test this claim by constructing a 90% confidence interval about the difference of the proportions.
1) critical value \(z_{\alpha/2}= 1.645\)
2) \(E = z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}=1.645\sqrt {\frac {0.497(1-0.497)}{173}+\frac {0.591(1-0.591)}{115}}=0.098\)
3) \((\hat {p_1}-\hat {p_2}) \pm E= (0.497-0.591) ± 0.098\)

The 90% confidence interval about the difference of the proportions is (-0.192, 0.004). BUT, this doesn’t answer the question the researcher asked. We must use one of the three interpretations seen in the previous section. In this problem, the confidence interval contains zero. Therefore we conclude that there is no significant difference between the proportions of companies using certified lumber in the northeast and in the southeast.

Example \(\PageIndex{3}\): A hydrologist is studying the use of Best Management Plans (BMP) in managed forest stands to protect riparian zones. He collects information from 62 stands that had a management plan by a forester and finds that 47 stands had correctly implemented BMPs to protect the riparian zones. He collected information from 58 stands that had no management plan and found that 26 of them had correctly implemented BMPs for riparian zones. Do these data suggest that there is a significant difference in the proportion of stands with and without management plans that had correct BMPs for riparian zones? α = 0.05.

Plan: \(x_1 = 47\), \(n_1 = 62\)
No Plan: \(x_2 = 26\), \(n_2 = 58\)

Let’s answer this question both ways, by first using a hypothesis test and then by constructing a confidence interval about the difference of the proportions.
\(H_0: p_1 = p_2\) or \(p_1 – p_2 = 0\)
\(H_1: p_1 \ne p_2\)
Critical value: ±1.96
Test statistic: $$z=\frac {(\hat {p_1}-\hat {p_2})-(p_1 - p_2)}{\sqrt {\frac {\bar p (1- \bar p)}{n_1}+\frac {\bar p(1-\bar p)}{n_2}}}= \frac {(0.758-0.448)-0}{\sqrt {\frac {0.608(1-0.608)}{62}+\frac {0.608(1-0.608)}{58}}}=3.48$$ The test statistic is greater than 1.96 and falls in the rejection zone.
There is enough evidence to support the claim that there is a significant difference in the proportion of correctly implemented BMPs with and without management plans. Now compute the p-value and compare it to the level of significance. The p-value is two times the area under the curve to the right of 3.48. Look for the area (in the standard normal table) associated with a Z-score of 3.48. The area to the right of 3.48 is 1 – 0.9997 = 0.0003. The p-value is 2 x 0.0003 = 0.0006. The p-value is less than 0.05. We will reject the null hypothesis and support the claim that the proportions are different. Now, answer this question using a confidence interval.

1) critical value \(z_{\alpha/2}= 1.96\)
2) \(E = z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}=1.96\sqrt {\frac {0.758(1-0.758)}{62}+\frac {0.448(1-0.448)}{58}}=0.1666\)
3) \((\hat {p_1}-\hat {p_2}) \pm E = (0.758-0.448) \pm 0.1666\)

The 95% confidence interval about the difference of the proportions is (0.143, 0.477). The confidence interval contains all positive values, telling you that there is a significant difference between the proportions AND the first group (BMPs used with management plans) is significantly greater than the second group (BMPs with no plans). This confidence interval estimates the difference in proportions. For this problem, we can say that correctly implemented BMPs occur in a greater proportion (by 14.3 to 47.7 percentage points) in stands with a management plan compared to stands without one.

Software Solutions

Minitab
Test and CI for Two Proportions
Sample X N Sample p
1 47 62 0.758065
2 26 58 0.448276
Difference = p (1) – p (2)
Estimate for difference: 0.309789
95% CI for difference: (0.143223, 0.476355)
Test for difference = 0 (vs. not = 0): Z = 3.47 p-value = 0.001
Fisher’s exact test: p-value = 0.001
The p-value equals 0.001, which tells us to reject the null hypothesis.
There is a significant difference in the proportion of correctly implemented BMPs with and without management plans. The confidence interval for the difference in proportions, (0.143223, 0.476355), is also given, which allows us to estimate the size of the difference.

Excel
Excel does not have a built-in procedure for analyzing data from proportions.
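Since Excel offers no built-in routine, the unpooled confidence interval described above is simple to compute directly. A sketch of my own; note that working from the exact fractions reproduces Minitab's upper bound of 0.476355 rather than the hand calculation's rounded 0.477:

```python
import math

def two_prop_ci(x1, n1, x2, n2, z_crit):
    """Confidence interval for p1 - p2; the standard error is unpooled."""
    p1, p2 = x1 / n1, x2 / n2
    margin = z_crit * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) - margin, (p1 - p2) + margin

# 95% interval for the BMP example: 47/62 with a plan, 26/58 without.
lo, hi = two_prop_ci(47, 62, 26, 58, 1.96)
print(round(lo, 3), round(hi, 3))  # 0.143 0.476
```

Because the whole interval is positive, it supports the same conclusion as the hypothesis test: the with-plan proportion is significantly greater.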
Various weak theories of arithmetic have been partially motivated by a concern with numbers (or functions/proofs) that are feasible. This concern is sometimes connected to an interest in strictly finitistic approaches to arithmetic. While the precise account of feasibility varies across these systems, the general idea is that the theory should prove that 0 is feasible if $n$ is feasible then so is its successor $S(n)$ the feasible numbers are in some sense 'bounded' There are various ways of making this last statement precise: e.g. we can see it as a statement of the form $\exists xy \neg\exists z (z= \exp(x,y))$ stating that some fast-growing function is not total (or, as in the case of $I\Delta_{0}$, it may suffice to know that exponentiation -- and, by Parikh's Theorem, any function with superpolynomial growth -- is not provably total, so that the above is at least consistent with $I\Delta_{0}$). A different approach is to give some explicit upper bound on feasibility, and require the theory to prove $\forall x (\log_{2}\log_{2}x<10)$, as in Sazonov's $\texttt{FEAS}$ system. My question: is there any model-theoretic, or more broadly 'semantic', account of feasible numbers? Preferably, an account that would be (1) helpful in providing a clear mathematical picture of the structure of feasible numbers and/or (2) acceptable by the strict finitist's own lights? A word on the two desiderata: in the above systems, characterisations of feasibility are rather implicit, as well as very sensitive to the underlying language and proof systems. Moreover, models of those theories (when they exist) seem to fail both (1) and (2). For instance, models of $I\Delta_{0}$ where $\exp$ is not total (say, obtained via cuts of nonstandard models of PA) are not, presumably, objects to be taken seriously by the strict finitist as `concrete' objects, be it only due to their size. 
In addition, they hardly seem to be good models of 'intuitively feasible' numbers: their domains are basically given by (possibly nonstandard) integers bounded above by a power of some infinite nonstandard integer. The link to feasibility, or counting, or smallness, is very unclear, and it does not help build a mental picture consistent with the strict finitist's motivations. Sazonov's theory is downright inconsistent in the classical sense (i.e. if we allow unbounded proof length), so it admits no (classical) models. So: is there a serious mathematical account of 'feasibility' of this kind? Some additional remarks: Many logicians, like Gaifman, suggest a connection with vagueness, as the feasible numbers can be seen as forming a vague set. But do we really need to resort to vagueness to provide semantics for feasible numbers? One possibility is to attempt an account in modal terms, where we imagine a Kripke frame whose states are finite sets of integers representing 'the numbers we've counted to so far', and whose accessibility relations represent something like reaching further numbers by applying 'feasible' functions to the current (finite) domain. Of course, the Kripke frame would have an infinite domain, but one could at least argue that it models feasibility in a way that gets things right 'locally', in providing an intuitive mental picture of the process of constructing numbers. But it is difficult to see how any construction of this sort could account in any way for the role played by particular notation systems or induction axioms (bounded induction). I understand that most strict finitists are not concerned with giving a semantic account of arithmetic; some (like Nelson) are explicit formalists, and regard 'semantics' as an unnecessary, or perhaps even misleading, distraction. At the very least, the idea seems to be that feasibility depends on the notational system used.
This makes good sense from a constructivist perspective; feasible numbers are not a finished collection that our formal theory describes; instead, the theory describes the rules that we can employ to 'construct' numbers. Nonetheless, there may be some intrinsic interest in the question of whether an elegant 'semantic' mathematical account of feasible numbers exists, or can be provided at all.
The global attractor for a class of extensible beams with nonlocal weak damping

Department of Mathematics, Nanjing University, Nanjing, 210093, China

The paper concerns the extensible beam equation with nonlocal weak damping
$$u_{tt}+\Delta^2 u-m(\|\nabla u\|^2)\Delta u +\| u_t\|^{p}u_t+f(u) = h \quad \text{in}\ \Omega\times\mathbb{R}^{+},\ p\geq 0,$$
posed on a domain $\Omega\subset\mathbb{R}^{n}$, with nonlocal coefficient $m(\|\nabla u\|^2)$ and nonlinearity $f(u)$.

Keywords: Extensible beam equation, nonlocal weak damping, global attractor, subcritical growth exponent.

Mathematics Subject Classification: Primary: 35B40, 35B41; Secondary: 37L30.

Citation: Chunxiang Zhao, Chunyan Zhao, Chengkui Zhong. The global attractor for a class of extensible beams with nonlocal weak damping. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2019197
I have been struggling with this problem for days: $L = \{a^nba^nba^nb \mid n \in \Bbb N\}$. I have to give a context-sensitive grammar for this language. One possible grammar is: \begin{align} S&\rightarrow Tb &(1)\\ T&\rightarrow AXY &(2)\\ T&\rightarrow ATXY &(3)\\ YX&\rightarrow YZ &(4)\\ YZ&\rightarrow WZ &(5)\\ WZ&\rightarrow WY &(6)\\ WY &\rightarrow XY &(7)\\ AX &\rightarrow AbA_X &(8)\\ A_XX&\rightarrow A_XA_X &(9)\\ A_XY&\rightarrow A_XbA_Y &(10)\\ A_YY&\rightarrow A_YA_Y &(11)\\ A&\rightarrow a &(12)\\ A_X&\rightarrow a &(13)\\ A_Y&\rightarrow a &(14) \end{align} We can generate $A^n(XY)^n$ using Rules (1) to (3). Rules (4) to (7) are used to change $YX$ to $XY$, so we can generate $A^nX^nY^n$. Finally, using Rules (8) to (14) we can generate $a^nba^nba^nb$. Note we needn't worry that, in a pattern $YX$, $Y$ yields $A_Y$ (or $bA_Y$) before we exchange $X$ and $Y$: if that happened, there would be no rule left to eliminate the $X$ in that pattern. Lemma 1: The non-contracting rule $XY\rightarrow YX$ can be rewritten as context-sensitive rules. Proof: If that rule is the only rule in the grammar where $Y$ appears on its left-hand side, we can replace $XY\rightarrow YX$ by the following three context-sensitive rules, $XY\rightarrow NY$, $NY\rightarrow NX$, and $NX\rightarrow YX$, where $N$ is a new non-terminal. We will not need the case where $Y$ also appears on the left-hand side of other rules. Lemma 2: The non-contracting rule $XY\rightarrow aX$ can be rewritten as context-sensitive rules. Proof: The same as above. Because of these lemmas, we will include rules like $XY\rightarrow YX$ or $XY\rightarrow aX$ in our context-sensitive grammar with the understanding that each of them represents three context-sensitive rules.
The outline of the idea behind the grammar is to let the non-terminal $T_1$ "travel" from the left-hand side of ${A_1}^n{A_2}^n{A_3}^n$ all the way to the right-hand side, transforming each $A_1$, $A_2$, and $A_3$ into $a$ along the way, as well as updating itself to $T_2$ and then $T_3$ at the appropriate moments so as to separate the phases definitively. Here is the full strategy in plain words. $S$ becomes $T_1A$. $A$ is blown up to ${A_1}^n(A_2A_3)^n$ by the rules $A\rightarrow A_1A(A_2A_3)\mid A_1(A_2A_3)$. Note that "(" and ")" are only used to indicate grouping; they are neither terminals nor non-terminals. $A_3A_2$ is transformed to $A_2A_3$ repeatedly so that $(A_2A_3)^{n}$ becomes ${A_2}^n{A_3}^n$. $T_1A_1$ is transformed to $aT_1$ repeatedly so that $T_1{A_1}^n$ becomes $a^nT_1$. $T_1A_2$ becomes $bT_2A_2$. $T_2A_2$ is transformed to $aT_2$ repeatedly so that $T_2{A_2}^n$ becomes $a^nT_2$. $T_2A_3$ becomes $bT_3A_3$. $T_3A_3$ is transformed to $aT_3$ repeatedly so that $T_3{A_3}^n$ becomes $a^nT_3$. $T_3$ is changed to $b$. Here is the full strategy in terms of a formal derivation. $$\begin{aligned} S &\Rightarrow T_1A\\ &\Rightarrow^* T_1A_1^n(A_2A_3)^n\\ &\Rightarrow^*T_1{A_1}^n{A_2}^n{A_3}^n\\ &\Rightarrow^*a^nT_1{A_2}^n{A_3}^n\\ &\Rightarrow^*a^nbT_2{A_2}^n{A_3}^n\\ &\Rightarrow^*a^nba^nT_2{A_3}^n\\ &\Rightarrow^*a^nba^nbT_3{A_3}^n\\ &\Rightarrow^*a^nba^nba^nT_3\\ &\Rightarrow a^nba^nba^nb \end{aligned}$$ Here is the context-sensitive grammar, where each of rule (3), rule (4), rule (6), and rule (8) stands for three context-sensitive rules as given by the lemmas above. In case $\Bbb N$ is understood to include $0$, we should add the rule $S\rightarrow bbb$.
\begin{align} S&\rightarrow T_1A &(1)\\ A&\rightarrow A_1AA_2A_3 \mid A_1A_2A_3 &(2)\\ A_3A_2&\rightarrow A_2A_3 &(3)\\ T_1A_1&\rightarrow aT_1 &(4)\\ T_1A_2&\rightarrow bT_2A_2 &(5)\\ T_2A_2 &\rightarrow aT_2 &(6)\\ T_2A_3 &\rightarrow bT_3A_3 &(7)\\ T_3A_3 &\rightarrow aT_3 &(8)\\ T_3&\rightarrow b &(9)\\ \end{align} Exercise 1. Explain why the grammar cannot generate any string that is not of the form $a^nba^nba^nb$. Exercise 2. Write a grammar for $\{a^nb^{2n}a^{3n} \mid n \in \Bbb N\}$. Exercise 3. Write a grammar for $\{a^{n+n^2} \mid n \in \Bbb N\}$. I like the feature-grammar notation: each term carries a set of assigned features that must be matched inside a rule. The rule would be just: S[a_count = n] -> a{n}b a{n}b a{n}b. Compare it to the notation above with 10 rules. While matching the feature rule, the parser matches the number of a's and assigns the value to the S.a_count field. Don't forget that, in practice, a parser is a Turing-complete program. Furthermore, arithmetic expressions are possible: S[a_count = n] -> a{n}b{2*n}c{3*n}. Exercise 3 is not possible with this notation; it would be something like S[a_count = m] -> a{m} : m == n + n*n, n in N, so an equation must be solved here.
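As a quick sanity check on the target language itself (a small helper written for this discussion, not part of the grammar), strings produced by hand-simulating either grammar can be tested mechanically. This covers $n \ge 1$; the extra rule $S\rightarrow bbb$ covers $n = 0$:

```python
import re

def in_L(w):
    # L = { a^n b a^n b a^n b : n >= 1 }: three equal runs of a's, each followed by b
    m = re.fullmatch(r"(a+)b(a+)b(a+)b", w)
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))
```

For example, the $n = 1$ derivation yields $ababab$, which the checker accepts, while a string with unequal runs is rejected.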
Latest revision as of 10:34, 13 September 2016

Siril processing tutorial

* [[Siril:Tutorial_import|Convert your images in the FITS format Siril uses (image import)]]
* [[Siril:Tutorial_sequence|Work on a sequence of converted images]]
* [[Siril:Tutorial_preprocessing|Pre-processing images]]
* [[Siril:Tutorial_manual_registration|Registration (Global star alignment)]]
* → '''Stacking'''

==Stacking==

The final step to do with Siril is to stack the images. Go to the "stacking" tab and indicate whether you want to stack all images, only the selected images, or the best images according to the FWHM value computed previously. Siril proposes several algorithms for the stacking computation.

[[File:Siril stacking result.png|700px]]
Sum Stacking

This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images. Because of the lack of normalisation, this method should only be used for planetary processing.

Average Stacking With Rejection

Percentile Clipping: this is a one-step rejection algorithm, ideal for small sets of data (up to 6 images). Sigma Clipping: this is an iterative algorithm which rejects pixels whose distance from the median is larger than two given bounds in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]). Median Sigma Clipping: this is the same algorithm, except that the rejected pixels are replaced by the median value of the stack. Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method, but it uses an algorithm based on Huber's work [1] [2]. Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ([math]y=ax+b[/math]) to the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations. These algorithms are very effective at removing satellite/plane tracks.

Median Stacking

This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account the shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math].

Pixel Maximum Stacking

This algorithm is mainly used to construct long-exposure star-trail images. Each pixel of the image is replaced by the pixel at the same coordinates in the incoming frame if its intensity is greater.
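To make the rejection idea concrete, here is an illustrative Python sketch of average stacking with iterative sigma clipping on a single pixel stack. This is a simplified model written for this tutorial, not Siril's actual implementation (Siril also normalizes frames and works per channel):

```python
import statistics

def sigma_clip_average(stack, sigma_low=4.0, sigma_high=3.0, iterations=5):
    """Average the pixel values after iteratively rejecting outliers."""
    data = list(stack)
    for _ in range(iterations):
        med = statistics.median(data)
        sd = statistics.pstdev(data)
        if sd == 0:
            break
        kept = [v for v in data
                if -sigma_low * sd <= v - med <= sigma_high * sd]
        if not kept or len(kept) == len(data):
            break  # nothing was rejected (or everything would be): stop iterating
        data = kept
    return sum(data) / len(data)

# a stack of 12 frames for one pixel; one frame is hit by a satellite track
pixel_stack = [10, 11, 9, 10, 10, 12, 9, 11, 10, 10, 9, 1000]
```

With the tutorial's values [math]\sigma_{low}=4[/math] and [math]\sigma_{high}=3[/math], the bright outlier is rejected and the average of the remaining eleven values (about 10.1) is returned.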
Pixel Minimum Stacking

This algorithm is mainly used for cropping a sequence by removing black borders. Each pixel of the image is replaced by the pixel at the same coordinates in the incoming frame if its intensity is lower.

In the case of the NGC7635 sequence, we first used the "Winsorized Sigma Clipping" algorithm in the "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=3[/math]). The output console then gives the following result:

14:33:06: Pixel rejection in channel #0: 0.181% - 1.184%
14:33:06: Pixel rejection in channel #1: 0.151% - 1.176%
14:33:06: Pixel rejection in channel #2: 0.111% - 1.118%
14:33:06: Integration of 12 images:
14:33:06: Pixel combination ......... average
14:33:06: Normalization ............. additive + scaling
14:33:06: Pixel rejection ........... Winsorized sigma clipping
14:33:06: Rejection parameters ...... low=4.000 high=3.000
14:33:07: Saving FITS: file NGC7635.fit, 3 layer(s), 4290x2856 pixels
14:33:07: Execution time: 9.98 s.
14:33:07: Background noise value (channel: #0): 9.538 (1.455e-04)
14:33:07: Background noise value (channel: #1): 5.839 (8.909e-05)
14:33:07: Background noise value (channel: #2): 5.552 (8.471e-05)

After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust the levels if you want to see it better, or use the different display modes. In our example the file is the stacking result of all the files, i.e., 12 files. The images above picture the result in Siril using the Auto-Stretch rendering mode. Note the improvement of the signal-to-noise ratio with respect to the result given for one frame in the previous step (take a look at the sigma value). The increase in SNR is [math]21/5.1 \approx 4.12[/math], compared with the theoretical [math]\sqrt{12} \approx 3.46[/math]; you can try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math].
Now you can start processing the image: cropping, background extraction (to remove the gradient), and other processes to enhance your image. To see the processes available in Siril, please visit this page. Here is an example of what you can get with Siril:

[1] Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley
[2] Juan Conejero, ImageIntegration, PixInsight Tutorial
I was reading through a proof that no group of order $400$ is simple, which can be found here: https://math.stackexchange.com/a/79644/169389 Here is an outline for a solution. First of all, $|G| = 400 = 2^4 \cdot 5^2$. By Sylow's theorem we know that the number of Sylow $5$-subgroups must be a divisor of $2^4$ and that it is $1$ modulo $5$. Thus it is either $1$ or $2^4$. If there is only one Sylow $5$-subgroup, it must be normal. For the other case, suppose first that the intersections of different Sylow $5$-subgroups are always trivial. By counting elements you can conclude that $G$ has exactly one Sylow $2$-subgroup, which is then normal. If we have Sylow $5$-subgroups $P$ and $Q$ such that $P \cap Q \neq \{1\}$, then $|P \cap Q| = 5$. Therefore $P \cap Q$ is normal in $P$ and $Q$, and thus is normal in the subgroup $\langle P, Q \rangle$ generated by $P$ and $Q$. Finally, show that either $\langle P, Q \rangle$ is normal in $G$ or equals $G$. I am trying to get to grips with general problems like this and feel that this is one of the best explanations, but I still feel that this argument goes too quickly and I need some clarification on a few things. First problem: In the second paragraph we consider $n_5$ (the number of Sylow $5$-subgroups) $=2^4$ and suppose that the intersections of different Sylow $5$-subgroups are trivial. When he says "by counting the elements", I guess he means counting the elements of the Sylow $5$-subgroups, whose pairwise intersections are trivial by assumption; how can we then conclude that $n_2=1$? Second problem: I am wondering why $|P \cap Q| = 5$ and not $25$. I think it may be because $P$ and $Q$ are distinct: the intersection could only have order $25$ if the two groups of order $25$ were the same, so for distinct ones it has to be $5$. If this could be clarified, it would be greatly appreciated. Third problem: How to show that $\langle P, Q \rangle$ is normal in $G$ or equals $G$.
How did a commenter discern that the possible orders for $\langle P, Q \rangle$ are $50$, $100$, and $200$? Why not $40$ or $25$?
This source (PDF) gives the closed form for vomma (or volga, i.e. the second derivative of the price w.r.t. volatility) of the Black-Scholes option pricing model as: $$S_{0}e^{-qT}\sqrt{T}\frac{1}{\sqrt{2\pi}}e^{-\frac{d_{1}^{2}}{2}}\frac{d_{1}d_{2}}{\sigma}$$ where $$d_{1} = \frac{\ln(S_{0}/K)+(r-q)T + \sigma^{2}/2T}{\sigma\sqrt{T}}$$ and $$d_{2} = \frac{\ln(S_{0}/K)+(r-q)T - \sigma^{2}/2T}{\sigma\sqrt{T}}$$ Two questions: Is this correct? Please provide an additional source and/or a proof. What is $q$? (It is not defined in the referenced document.) Edit: I think there's a missing set of parentheses around $\sigma^{2}/2$ in the formulas for $d_{1}$ and $d_{2}$. E.g. $d_{1}$ should be $$d_{1} = \frac{\ln(S_{0}/K)+(r-q)T + (\sigma^{2}/2)T}{\sigma\sqrt{T}}$$
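On both questions: in the usual Black-Scholes conventions, $q$ is the continuous dividend yield, and the formula can be sanity-checked numerically by differentiating the call price twice in $\sigma$. The following sketch is my own check (not from the referenced PDF), comparing the closed form against a central finite difference:

```python
import math

def _d1_d2(S0, K, r, q, sigma, T):
    d1 = (math.log(S0 / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return d1, d1 - sigma * math.sqrt(T)

def bs_call(S0, K, r, q, sigma, T):
    d1, d2 = _d1_d2(S0, K, r, q, sigma, T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S0 * math.exp(-q * T) * N(d1) - K * math.exp(-r * T) * N(d2)

def vomma_closed(S0, K, r, q, sigma, T):
    # the closed form quoted above, with the parentheses fix (sigma^2/2)*T
    d1, d2 = _d1_d2(S0, K, r, q, sigma, T)
    pdf = math.exp(-0.5 * d1**2) / math.sqrt(2.0 * math.pi)
    return S0 * math.exp(-q * T) * math.sqrt(T) * pdf * d1 * d2 / sigma

def vomma_fd(S0, K, r, q, sigma, T, h=1e-4):
    # second central difference of the price in sigma
    return (bs_call(S0, K, r, q, sigma + h, T)
            - 2.0 * bs_call(S0, K, r, q, sigma, T)
            + bs_call(S0, K, r, q, sigma - h, T)) / h**2
```

For example, with $S_0=100$, $K=110$, $r=0.05$, $q=0.02$, $\sigma=0.2$, $T=1$ the two values agree to several decimal places, which supports the closed form.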
Question: In the system shown in the figure below, a {eq}m = 13.5\ kg {/eq} mass is released from rest and falls, causing the uniform {eq}M = 11.5\ kg {/eq} cylinder of diameter {eq}27.5\ cm {/eq} to turn about a frictionless axle through its center. How far will the mass have to descend to give the cylinder {eq}225\ J {/eq} of kinetic energy?

Conservation of Energy: Energy can neither be created nor destroyed; it can only change from one form into another. There are many forms of energy, such as kinetic energy, potential energy, spring potential energy, etc. For instance, if an apple is dropped from a table, the potential energy stored in the apple before falling is converted into kinetic energy, which is maximal at the lowest point, where the potential energy is zero. Hence the potential energy is not destroyed; it is just converted into kinetic energy.

Answer and Explanation: Given: Mass of the object released is {eq}m=13.5\ kg {/eq} Mass of the cylinder is {eq}M=11.5\ kg {/eq} The diameter of the cylinder is {eq}D=27.5\ cm {/eq} Kinetic energy to be gained by the cylinder is {eq}K.E=225\ J {/eq} When the mass is released, the potential energy stored in the mass is converted into the kinetic energy of the mass and the rotational kinetic energy of the cylinder.
Now from the conservation of energy: {eq}P.E_i=K.E_f+R.E\\ mgh=\frac{1}{2}mv^2+\frac{1}{2}I\omega ^2 {/eq} Here the height descended by the mass is {eq}h {/eq}, the moment of inertia of the cylinder is {eq}I=\frac{1}{2}MR^2 {/eq}, and the angular velocity of the cylinder is {eq}\omega =\frac{v}{R} {/eq} (the rope does not slip on the cylinder). For the rotational kinetic energy of the cylinder: {eq}R.E=\frac{1}{2}I\omega^2\\ R.E=\frac{1}{2}\times \frac{1}{2}MR^2\times \frac{v^2}{R^2}\\ R.E=\frac{1}{4}Mv^2 {/eq} Since the rotational kinetic energy of the cylinder has to be equal to 225 J: {eq}225=\frac{1}{4}Mv^2\\ 225=\frac{1}{4}\times (11.5)v^2\\ 225=2.875\ v^2\\ v^2=78.26\\ v=8.85\ m/s {/eq} Thus, the speed at which the mass is descending is 8.85 m/s. Now for the height descended by the mass: {eq}mgh=\frac{1}{2}mv^2+R.E\\ (13.5)\times 9.81\times h=\frac{1}{2}(13.5)\times 78.26+225\\ 132.44\ h=528.26+225\\ 132.44\ h=753.26\\ h=5.69\ m {/eq} Thus, the height through which the mass descends is 5.69 m.
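Note that {eq}\frac{1}{2}I\omega^2 {/eq} with {eq}I=\frac{1}{2}MR^2 {/eq} and {eq}\omega=v/R {/eq} works out to {eq}\frac{1}{4}Mv^2 {/eq}; the radius cancels. The arithmetic can be double-checked with a few lines of Python (a verification sketch, not part of the original solution):

```python
m, M = 13.5, 11.5          # kg: falling mass, cylinder mass
KE_cyl = 225.0             # J: required rotational kinetic energy of the cylinder
g = 9.81                   # m/s^2

# KE_cyl = (1/4) M v^2, from I = (1/2) M R^2 and omega = v / R
v2 = KE_cyl / (0.25 * M)
v = v2 ** 0.5

# energy conservation: m g h = (1/2) m v^2 + KE_cyl
h = (0.5 * m * v2 + KE_cyl) / (m * g)
```

This gives v ≈ 8.85 m/s and h ≈ 5.69 m.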
As far as I know, the Hamiltonian formulation is even more general than the Lagrangian one, in the sense that you may not be able to find a Lagrangian description for a particular system which can nonetheless be treated in a Hamiltonian framework. Remember how the Hamiltonian formalism is introduced: we define generalized momenta $p_k=\partial L/\partial \dot q_k$ and notice that \begin{equation}\frac{\partial L}{\partial \dot {q_k}}=\frac{\partial T}{\partial \dot{q_k}}=\frac{\partial}{\partial \dot{q_k}}\left(\frac{1}{2}a_{rs}(q,t)\dot{q_r}\dot{q_s}+b_r(q,t)\dot{q_r}+c(q,t)\right)=a_{ks}\dot{q_s}+b_k,\end{equation} where there is a sum over the indices $r,s$ and where we have decomposed the kinetic energy as the sum of a term quadratic in the generalized velocities, a linear one, and a velocity-independent one (as can always be done, given holonomic constraints). The symmetric matrix $\{a_{ks}\}$ is invertible, so $\dot{q_s}=\phi_s(q,p,t)$. All this to say that Lagrange's equations can always be put in normal form: \begin{equation}\dot{q_s}=\phi_s(q,p,t)\end{equation} \begin{equation}\dot{p_s}=\frac{\partial L}{\partial q_s}\end{equation} We can define the Hamiltonian $H(q,p,t)$ via the usual Legendre transformation and derive Hamilton's equations of motion. Once we have developed the Hamiltonian formalism, we can forget how we got there and treat the $q$'s and the $p$'s as independent variables. Is it possible to get back to Lagrange's equations and prove the two formalisms are equivalent? Yes, but under one condition: given the Hamiltonian and Hamilton's equations, it must be possible to express the $\dot{q}$'s as functions of the canonical coordinates. If it is possible, define $L=p_k\,\partial H/\partial p_k-H$, where it is understood that $L$ is then thought of as a function of $(q,\dot q,t)$. From here it can be proven that Lagrange's equations must also hold.
So, no, not all mechanical systems have a Lagrangian description, since you may start from a Hamiltonian and find out that the relations that give the $\dot q$'s in terms of $(q,p)$ are not invertible. The Hamiltonian can be a very general function, not necessarily decomposable into a kinetic and a potential term. By the way, the total derivative of $H$ is equal to its partial derivative with respect to time; so $H$ is the energy only if $H=H(q,p)$, that is, if the constraints do not depend on time. This post imported from StackExchange Physics at 2015-07-29 19:14 (UTC), posted by SE-user quark1245
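As a standard textbook illustration of the inversion condition (my example, not from the original answer): for $H(q,p)=\frac{p^2}{2m}+V(q)$, Hamilton's equation $\dot q=\partial H/\partial p=p/m$ is invertible, $p=m\dot q$, and the inverse Legendre transform recovers the familiar Lagrangian:

```latex
L(q,\dot q) \;=\; \left. p\,\frac{\partial H}{\partial p} - H \,\right|_{p=m\dot q}
            \;=\; m\dot q^2 - \left(\frac{m\dot q^2}{2} + V(q)\right)
            \;=\; \frac{1}{2}m\dot q^2 - V(q).
```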
For fixed $m = 0, 1, 2, \ldots$ let $$f_m(k) = \prod_{j=1}^{m}(k+j).$$ Some examples of $f_m(k)$: $$f_0(k) = 1, \quad f_1(k) = (k+1), \quad f_2(k) = (k+1)(k+2).$$ Then $s_m(n)$ is defined as follows: $$s_m(n) = \sin\left(\frac{t}{2}\right)\sum_{k=0}^nf_m(k)\sin(k+0.5)t,\qquad t\in[0,\pi].$$ Equivalently, $s_m(n)$ can be defined as: $$s_m(n) = \sum_{j=0}^n\frac{(-4)^j}{(2j+1)!}\left(\sum_{k=j}^n\frac{f_m(k)(2k+1)(k+j)!}{(k-j)!}\right)x^{2j+2},\qquad x\in[0,1].$$ I want to prove $$|s_m(n)| \le f_m(n) \quad \text{for all } x \text{ or } t.$$ I am sure the inequality holds, but I am unable to prove it. I used MATLAB and verified the inequality for some values of $m$ and $n$, as presented below: \begin{array}{ccccccccc} n & \max(s_0(n)) & f_0(n) & \max(s_1(n)) & f_1(n) & \max(s_2(n))& f_2(n) & \max(s_3(n)) & f_3(n)\\ 0 & 1.00 & 1 & 1.00 & 1 & 2.00 & 2 & 6.00 & 6 \\ 1 & 1.00 & 1 & 1.53 & 2 & 4.17 & 6 & 18.00 & 24 \\ 2 & 1.00 & 1 & 2.07 & 3 & 8.00 & 12 & 42.00 & 60 \\ 3 & 1.00 & 1 & 2.60 & 4 & 12.46 & 20 & 78.30 & 120 \\ 4 & 1.00 & 1 & 3.13 & 5 & 18.03 & 30 & 132.00 & 210 \end{array} Any help will be greatly appreciated. PS: Please refer to this question. I asked for an inductive proof so that I could use the induction steps in the above inequality, but I did not get one.
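For what it's worth, the numerical experiment is easy to reproduce in pure Python (my own grid check over $t\in[0,\pi]$, mirroring the MATLAB runs; this is evidence, not a proof):

```python
import math

def f(m, k):
    # f_m(k) = (k+1)(k+2)...(k+m); the empty product (m = 0) is 1
    out = 1
    for j in range(1, m + 1):
        out *= k + j
    return out

def s(m, n, t):
    # first definition of s_m(n), as a function of t
    return math.sin(t / 2) * sum(f(m, k) * math.sin((k + 0.5) * t)
                                 for k in range(n + 1))

def bound_holds(m, n, steps=2000):
    # check |s_m(n)| <= f_m(n) on a fine grid of t in [0, pi]
    b = f(m, n)
    return all(abs(s(m, n, i * math.pi / steps)) <= b + 1e-9
               for i in range(steps + 1))
```

Running `bound_holds` for the $(m, n)$ pairs in the table above returns True in every case, consistent with the tabulated maxima.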
For the following wave equation $$\frac{\partial ^2 p}{\partial x^2} + \frac{\partial ^2 p}{\partial y^2} = A\frac{\partial ^2 p}{\partial t^2} + B\frac{\partial p}{\partial t},$$ is there a way to show that there are boundary conditions at or near positive and negative infinity, both for non-zero $B$ and for $B=0$, and for $\{A,B\}$ rational numbers? I believe that this should follow from Sommerfeld's radiation condition, and should perhaps be similar to the conditions for the ordinary wave equation. What are these boundary conditions? Ideally, I think the boundary conditions should involve both time and spatial derivatives. By "positive and negative infinity" I mean that I am interested in what happens when $x \to \pm \infty $ and $y \to \pm \infty$. I have been working on a problem where I would like to computationally solve the wave equation with boundary conditions that approximate infinity. So I suppose that this would be an imposed compatibility condition.
Proving $$(P^T P^T) \Lambda P P \equiv \Lambda$$ where $P$ is an orthogonal matrix and $\Lambda$ is a diagonal matrix. All matrices have dimensions $n \times n$. This is the last step of the proof shown in $\chi^2$ for dependent Gaussian distributions. It is known that all diagonal elements satisfy $\lambda_i \geq 0$. The product of orthogonal matrices is another orthogonal matrix. Proof: $$ P \cdot P^T = I\\ Q := P \cdot P\\ P^{-1} = P^T\\ PP \cdot (PP)^T = PP \cdot P^T P^T = P I P^T = P \cdot P^T = I $$ So $Q$ is orthogonal as well. How can I now prove that $Q \Lambda Q^T = \Lambda$? For a full-rank $\Lambda$ with equal diagonal elements (and zeros elsewhere) this can be proven: $Q \Lambda Q^T = Q (\lambda I) Q^T = \lambda Q Q^T = \lambda I = \Lambda$. How can I prove this for the general case with differing diagonal elements?
Back to Continuous Optimization

Constrained optimization considers the problem of optimizing an objective function subject to constraints on the variables. In general terms, constrained optimization problems have the form \[ \begin{array}{lllll} \mbox{minimize} & f(x) & & & \\ \mbox{subject to} & c_i(x) & = & 0 & \forall i \in \mathcal{E} \\ & c_i(x) & \leq & 0 & \forall i \in \mathcal{I} \end{array} \] where \(f\) and the functions \(c_i(x) \,\) are all smooth, real-valued functions on a subset of \(R^n \,\), and \(\mathcal{E}\) and \(\mathcal{I}\) are index sets for equality and inequality constraints, respectively. The feasible set is the set of points \(x\) that satisfy the constraints. Constrained optimization covers a large number of subfields, including many important special cases for which specialized algorithms are available. Bound Constrained Optimization: the only constraints are lower and upper bounds on the variables. Linear Programming: the objective function \(f\) and all of the constraints \(c_i\) are linear functions. Quadratic Programming: the objective function \(f\) is quadratic and the constraints \(c_i\) are linear functions. Semidefinite Programming: the objective function \(f\) is linear and the feasible set is the intersection of the cone of positive semidefinite matrices with an affine space. Nonlinear Programming: at least some of the constraints \(c_i\) are nonlinear functions. Semi-infinite Programming: there are infinitely many variables or infinitely many constraints (but not both).
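As a minimal concrete instance (an invented toy example, not from this page): minimize \(f(x)=x_1^2+x_2^2\) subject to one equality constraint \(x_1+x_2-1=0\) and one inequality constraint \(x_1-0.3\le 0\). Here the feasible set is a ray on a line, so projection onto it has a closed form, and projected gradient descent solves the problem:

```python
# feasible set: { (t, 1 - t) : t <= 0.3 }
def project(z):
    t = (z[0] - z[1] + 1.0) / 2.0   # closest point on the line x1 + x2 = 1
    t = min(t, 0.3)                 # enforce x1 <= 0.3 along the line
    return (t, 1.0 - t)

def solve(steps=200, lr=0.25):
    x = project((0.0, 0.0))
    for _ in range(steps):
        g = (2.0 * x[0], 2.0 * x[1])                       # gradient of f
        x = project((x[0] - lr * g[0], x[1] - lr * g[1]))  # descend, then project
    return x
```

The iterates converge to \((0.3, 0.7)\): the equality constraint is active by construction, and the inequality constraint turns out to be active at the optimum.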
Proper time

The proper time \(τ\) is the time measured by an observer \(O\) (which can just be a particle) who “stands still” in space relative to a coordinate system. For example, suppose that the origin of an \(x\), \(y\), \(z\) rectangular coordinate system is at a tree on Earth's surface. For convenience, we'll call this coordinate system the \(x^i\) coordinate system, where \(i=1,2,3\), such that \(x^1\) represents the "\(x\) coordinate," \(x^2\) represents the "\(y\) coordinate," and \(x^3\) represents the "\(z\) coordinate." If I am standing still on the surface of the Earth one meter away from the tree, with the coordinate system \(x^i\) fixed at the tree, I will measure the proper time in this reference frame. Any other observer \(O'\) who is moving at some velocity relative to the tree will measure some time \(t'\); according to the definition of proper time, they will not be measuring the proper time \(τ\).

Time Dilation

In one of Einstein’s original thought experiments, he imagined an observer \(O'\) riding in a train which was moving past another observer \(O\) standing still on the side of the tracks. The train was moving at a velocity \(\vec{v}\) relative to \(O\). Two reference frames \(R'\) and \(R\) (which should be thought of as coordinate systems moving through space) are attached to \(O'\) and \(O\), respectively. The train is moving along only the \(x\)-axis in the \(R\)-frame; in the \(R'\)-frame no part of the train is moving through space along the \(x'\)-, \(y'\)- or \(z'\)-axes, and the train is at rest. Right at the moment when \(O'\) is at \(x=0\), both observers' clocks are synchronized and \(t'=t=0\). At this moment a light pulse is emitted from a light source \(S'\). This light pulse travels upwards along the vertical, bounces off of the mirror, and then arrives back at \(S'\).
Let the distance between \(S'\) and the mirror be \(d\). In the \(R'\)-frame the light pulse travels along the \(y\)-axis at a constant speed \(c\). Let the time interval for the light pulse to travel from \(S'\) to the mirror and then back to \(S'\) (in the \(R'\)-frame) be \(Δt'\). Then the amount of time necessary for the light pulse to travel from \(S'\) to the mirror (the “half-way distance”) in the \(R'\)-frame is \(\frac{Δt'}{2}\). Since the speed of the light pulse is constant relative to \(R'\), we know from kinematics that \(d=c\frac{Δt'}{2}\) and that $$Δt'=\frac{2d}{c}.$$ We shall now see that the constancy of the speed of light with respect to both reference frames, combined with the fact that the light pulse must travel through a greater distance with respect to the \(R\)-frame, leads to \(O\) measuring a longer time interval \(Δt\) between event 1 (when the light pulse is emitted) and event 2 (when the light pulse arrives back at \(S'\)). If \(Δt\) is the time interval measured in the \(R\)-frame for the light pulse to travel from \(S'\) to the mirror and back to \(S'\), then the time interval measured in that frame for the light pulse to go from \(S'\) to the mirror is \(\frac{Δt}{2}\). By the time the light pulse reaches the mirror, the train (and the mirror) will have moved a distance \(v\frac{Δt}{2}\) in the \(R\)-frame. Thus, in the \(R\)-frame, the light pulse must have traveled a horizontal distance \(v\frac{Δt}{2}\) and a vertical distance \(d\) in this time interval. Using the Pythagorean Theorem, the total distance the light pulse traveled is related to the horizontal and vertical distances by \(x^2=(v\frac{Δt}{2})^2+d^2\), where \(x\) is the total distance between \(S'\) and the mirror. The speed of light is a constant \(c=3×10^8\frac{m}{s}\) relative to \(R\) (as it is for any frame), and because this speed is constant it follows (from kinematics) that \(x=c\frac{Δt}{2}\) and \((c\frac{Δt}{2})^2=(v\frac{Δt}{2})^2+d^2\).
If we solve for \(Δt\) we get $$Δt=\frac{2d}{\sqrt{c^2-v^2}}=\frac{2d}{c\sqrt{1-\frac{v^2}{c^2}}}.$$ Since \(Δt'=\frac{2d}{c}\) we have $$Δt=\frac{Δt'}{\sqrt{1-\frac{v^2}{c^2}}}.$$ Let's define a term called the Lorentz factor as $$γ≡\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}.\tag{1}$$ In the \(R'\)-frame events 1 and 2 occur at the same point in space (since their \((x', y', z')\) coordinates are identical). Any observer who measures the time interval between two events in a frame in which those two events occur at the same point in space is said to be measuring the proper time interval \(Δτ\) between those two events. We see that \(Δt'=Δτ\) and that $$Δt=γΔτ.\tag{2}$$ Because \(γ\) is always greater than one, it follows that \(Δt\) is always greater than \(Δτ\). Fundamentally, this means that whenever an observer measures the time interval between two events that do not occur at the same spatial point in his frame, the measured time interval \(Δt\) will always be greater than \(Δτ\). The factor \(γ\) (which can be calculated using Equation (1)) represents how much longer \(Δt\) is than \(Δτ\), based on the relative speed of the two reference frames. (It is important to mention that this effect only becomes important when \(γ\) deviates significantly from one, which happens, roughly speaking, only when \(v>0.01c\). When \(v>0.9c\), the Lorentz factor begins to blow up very rapidly.) If \(O'\) started walking, “carrying” his coordinate system (which is attached to him) around with him, he would be moving at a speed \(v\) relative to another frame corresponding to someone “standing still” on the train.
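A short numerical illustration (mine, not part of the original article) makes the size of the effect concrete:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    # gamma = 1 / sqrt(1 - v^2 / c^2), Equation (1)
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

slow = lorentz_factor(0.01 * C)   # barely above 1: time dilation negligible
fast = lorentz_factor(0.9 * C)    # about 2.29: moving clocks run at under half speed
```

So a proper-time interval \(Δτ = 1\,s\) on the train corresponds to \(Δt = γΔτ ≈ 2.29\,s\) for the trackside observer when \(v = 0.9c\), while at \(v = 0.01c\) the correction is only about five parts in a hundred thousand.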
If \(O'\) was carrying a clock, each tick would occur at the same coordinate point in the \(R'\)-frame and \(O'\) would therefore be measuring the proper time \(Δτ\); the person standing still on the train, however, would see each hand of the clock tick at a different coordinate point in his coordinate system, and he would therefore be measuring the time \(Δt\). To the person standing still, each hand of the clock would take a slightly longer time interval \(Δt\) to move from one position to another; to him, \(O'\)'s clock would appear to be running a little slow. But since walking speed is much slower than \(0.01c\), the effect of time dilation is negligible. Oftentimes, for practical purposes, if \(v<0.01c\), then we can assume that \(Δt=Δτ\). Only when \(v>0.01c\) does \(γ\) start to deviate significantly from one. On the most fundamental level, photons are emitted from the atoms composing the light source \(S'\). Those photons then travel through space, bounce off the atoms composing the mirror, and arrive back at the atoms composing \(S'\). All chemical and biological processes are, at the most fundamental level, due to atoms interacting with other atoms via photons of light and electromagnetic radiation. The fact that it takes a longer time \(Δt\) for any two atoms to interact with one another via a photon of light/radiation going from one atom to another means that it takes longer for chemical, and therefore biological, interactions to occur. Therefore, \(O\) in the \(R\)-frame will see all physical processes on the train (indeed, a relativistic train going very fast) take a longer time to happen, and the march of time for all events occurring on the train will progress more slowly in his reference frame. This article is licensed under a CC BY-NC-SA 4.0 license.
Editor’s Note: This article is authored by Valentin Fadeev, a physics PhD candidate at The Open University, UK. You can reach him via Twitter at @valfadeev. Despite their undeniable and enormous role in applications, the methods of calculus are often seen by students, present and past, as a form of mathematical art removed from everyday life. Evaluating integrals may seem a sport for the initiated. Extremal problems are mostly demonstrated on stripped-down textbook examples. Coming across a use case in an unusual and easily understandable context is rare. We present one such use case here and hope the reader finds it instructive. We begin with some background from maritime logistics. The term “bulk cargo” denotes the type of cargo, such as coal, ores, fertilizers, grain and the like, carried by merchant vessels “in bulk”, that is, put directly in the holds and not stuffed in boxes or any other containers. In some cases, while waiting to be picked up by vessels or other modes of transportation, such cargoes are stored in stacks piled up by belt conveyors or other handling machinery. We consider here a simplified problem of calculating the optimal dimensions of one such stack given its total volume. As the criteria of optimality we pick the ground area occupied by the stack and the work against gravity required to form it. We shall approximate the stack by a geometrical body in the shape of an obelisk of height H, having a rectangle L_2\times B at its base and 4 facets inclined towards the centre at equal angles, thus forming an upper horizontal edge of length L_1 (see Figure 1).
The volume of the stack can be calculated as follows: \begin{aligned}V& =\int_0^HS(x)\,\mathrm{d}x, & (1)\end{aligned} where the x-axis is directed downwards and S(x) is the area of the horizontal cross-section as a function of x, given explicitly as \begin{aligned}S(x)&=a(x)\cdot b(x)\\&=\left(L_{1}+\frac{L_{2}-L_{1}}{H}x\right)\cdot\frac{Bx}{H}\\&=B\left(L_{1}\frac{x}{H}+\left(L_{2}-L_{1}\right)\left(\frac{x}{H}\right)^{2}\right).&(2)\end{aligned} Substituting (2) in (1) and integrating, we find \begin{aligned}V&=BH\int_0^1\left(L_1\frac{x}{H}+(L_2-L_1)\left(\frac{x}{H}\right)^2\right)\,\mathrm{d}\left(\frac{x}{H}\right)\\&=\frac{HB}{6}(2L_2+L_1). &(3)\end{aligned} Now we shall determine the amount of work (energy spent) performed against the force of gravity that is required to form the stack. Assume that the density of the cargo is \gamma\; \mathrm{kg}/\mathrm{m}^3. The work \mathrm{d}A required to lift a layer of thickness \mathrm{d}x to height H-x is given as follows: \mathrm{d}A=\gamma g\,\mathrm{d}V\cdot(H-x)=\gamma gB\left(L_1\frac{x(H-x)}{H}+\frac{(L_2-L_1)x^2(H-x)}{H^2}\right)\,\mathrm{d}x, where g\approx 9.81 \,\mathrm{m} / \mathrm{s}^2 is the acceleration due to gravity. Thus, the total work is \begin{aligned}A&=\gamma gB\int_0^H\left(L_1\frac{x(H-x)}{H}+\frac{(L_2-L_1)x^2(H-x)}{H^2}\right)\,\mathrm{d}x\\&=\frac{\gamma gBH^2}{12}(L_1+L_2). & (4)\end{aligned} Now we can ask the following question: if the volume V is given, what should the base length L_{2} and width B be so that the stack occupies the smallest possible area? First, we simplify our parametrization of the problem by introducing the angle of natural slope \chi. This is a term from soil mechanics denoting the angle that an unattached slope of the material forms with the horizontal surface at mechanical equilibrium. It depends on the properties of the material, such as density and viscosity, and can be looked up in special tables.
In our model \begin{aligned}\frac{2H}{B}=\tan{\chi}.\end{aligned} We can, therefore, express everything in terms of L\equiv L_2, B and \chi: \begin{aligned}H=\frac{1}{2}B\tan{\chi}, &\qquad L_1=L_2-\frac{2H}{\tan{\chi}}=L-B.& (5)\end{aligned} Then the volume (3) can be rewritten as follows: \begin{aligned}V&=\frac{B^2\tan{\chi}}{12}(3L-B).&(6)\end{aligned} Solving (6) for L we obtain \begin{aligned}L&=\frac{4V}{B^2\tan{\chi}}+\frac{B}{3}.&(7)\end{aligned} The base area S can then be expressed as a function of B: S(B)=LB=\frac{4V}{B\tan{\chi}}+\frac{B^2}{3}, whereby taking the derivative with respect to B and solving S'(B) = 0 we find the stationary point B_0 \begin{aligned}B_0&=\sqrt[3]{\frac{6V}{\tan{\chi}}},& (8)\end{aligned} which happens to be a point of minimum for S. Using (8), L_0 is found as follows: \begin{aligned}L_0&=\frac{4V}{B_0^2\tan{\chi}}+\frac{B_0}{3}=\frac{2B_0}{3}+\frac{B_0}{3}=B_0=\sqrt[3]{\frac{6V}{\tan{\chi}}},&(9)\end{aligned} so the optimal base is a square; moreover, since L_1=L_0-B_0=0, the area-optimal stack degenerates into a pyramid. The minimal area is \begin{aligned}S_0=S(B_0)&=L_0B_0=\left(\frac{6V}{\tan{\chi}}\right)^{2/3}\approx 3.3\left(\frac{V}{\tan{\chi}}\right)^{2/3}.&(10)\end{aligned} Next we shall determine the values of L, B which minimize the amount of work required to form the stack. Rewriting (4) using (5) and (7) we obtain A=\frac{\gamma g \tan^2{\chi}}{144}\left(\frac{24VB}{\tan{\chi}}-B^4\right). Differentiating with respect to B we find the stationary point to be given precisely by (8) again! Thus, given the volume and the natural slope, (8) and (9) give the stationary configuration both for the ground area and for the energy cost. Finally, we can use the above results to determine the largest admissible volume of a single stack given a constraint on the ground pressure, q\,\mathrm{kg}/\mathrm{m}^2. Given V, S and \gamma, the safety condition is expressed by the inequality \frac{V\gamma}{S} < q. Solving for V and using (10) we derive \begin{aligned}V&<\frac{qS_0}{\gamma}\approx\frac{q}{\gamma}\,3.3\left(\frac{V}{\tan{\chi}}\right)^{2/3},\\V&<\frac{36\,q^3}{\gamma^3\tan^2{\chi}}.\end{aligned} The model can be modified to account for the fact that in practice the edges of the stack can develop surfaces that look like half-cones. In this case the reader will find, following steps similar to the above, that the total area and the work required to form the stack are given by \begin{aligned}S(B) = \frac{4V}{B\tan{\chi}} + \frac{\pi}{12}B^2, &\qquad A(B) = \frac{\gamma g \tan^2{\chi}}{144}\left(\frac{24VB}{\tan{\chi}}-\frac{3\pi}{4}B^4\right),\end{aligned} respectively. The values B_{S} and B_{A} at which the above expressions are stationary are no longer the same: \begin{aligned}B_{S}=\sqrt[3]{3}B_{A}, &\qquad B_{A} = 2\sqrt[3]{\frac{V}{\pi\tan{\chi}}}.\end{aligned} We have thus determined the maximum allowed volume, given the safety constraint and the properties of the material, assuming the optimal dimensions of the stack. In conclusion, we have considered a simple but non-trivial application of the very basic tools of calculus to a problem that has important engineering applications. However, the main point is that calculus is ubiquitous and sometimes delivers useful and elegant results in places one would least expect it to.
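The stationary point (8) can likewise be checked numerically; the volume and slope angle below are arbitrary illustration values:

```python
import math

V, chi = 500.0, math.radians(40)  # arbitrary volume (m^3) and slope angle
t = math.tan(chi)

def S(B):
    # Base area as a function of width B, from (7): S = L*B
    return 4 * V / (B * t) + B ** 2 / 3

B0 = (6 * V / t) ** (1 / 3)  # the stationary point (8)

h = 1e-6
dS = (S(B0 + h) - S(B0 - h)) / (2 * h)  # numerical derivative at B0
assert abs(dS) < 1e-4                   # S'(B0) is (numerically) zero
assert S(B0) < S(0.9 * B0) and S(B0) < S(1.1 * B0)  # and it is a minimum
```

The same check with nearby values of B confirms that (8) is a genuine minimum of the base area.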
Everyone has solved some version of a linear system in either high school or college mathematics. If you’ve been keeping up with some of my other posts on algebra, you know that I’m about to either take something familiar away, or twist it into a different form. This time is no different; we’re going to change the field we operate over, and solve a basic linear system in a Galois field called GF(4). Linear Systems Let’s start with an example of a regular, simple linear system in two variables:\begin{aligned}2x+y&=3\\x+2y&=3\end{aligned} The goal here is to figure out all the possible (x,y) pairs that satisfy both of these equations at the same time. Both high school and college algebra teach several different methods to solve these types of equations. We can use substitution: solving perhaps the top equation for y to express y in terms of x, then substituting that expression into the second equation to get an equation entirely in x. From there, we know how to solve for x. When we get a numeric answer, we can plug it back into our expression for y and get our full solution. Let’s solve the above system this way just to refresh. First, solve the top equation for y: y = 3-2x Great. Now substitute the right-hand side of this equation wherever we see y in the second equation:x + 2(3-2x) = 3 Now, solve this equation for x to get x=1. Substitute this back and get y = 3-2 = 1. Matrix form – a different way to express We can actually express this equation in matrix form. This will look like A\mathbf{x} = \mathbf{v}, where \mathbf{x} and \mathbf{v} are vectors. The coefficients of the system of equations form the matrix A. The variables we want to solve for are the vector \mathbf{x}, and the vector \mathbf{v} represents the right-hand sides of the equations. Our system in matrix form is \begin{bmatrix}2&1\\1&2\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}3\\3\end{bmatrix} Multiplying the left-hand side out will return the original form we saw earlier. This is simply a different way to express the same system of equations.
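As a quick check that the matrix form encodes the same equations, we can multiply A by the solution (1, 1) found by substitution; a minimal sketch in plain Python:

```python
# The system 2x + y = 3, x + 2y = 3 in matrix form: A [x, y]^T = v
A = [[2, 1], [1, 2]]
v = [3, 3]
solution = [1, 1]  # the (x, y) we found by substitution

def matvec(M, u):
    # multiply a 2x2 matrix by a length-2 vector
    return [M[0][0] * u[0] + M[0][1] * u[1],
            M[1][0] * u[0] + M[1][1] * u[1]]

# Multiplying out the left-hand side recovers the right-hand side
assert matvec(A, solution) == v
```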
We typically like to use matrix form because arithmetic and algebra on matrices yield many useful properties and methods for solving systems much larger than our example here. 1 Solving linear systems in matrix form There are several different methods for solving linear systems once we have them in matrix form. Linear algebra courses will discuss a technique called row reduction, which is incredibly useful. We won’t discuss that one in this post. We’re going to use a really handy rule called Cramer’s rule. Basically, if we take a linear system with as many equations as we have variables, and a unique solution exists 2, then we actually have an explicit formula that gives us the numerical answer for each variable in our system. We’ll quickly discuss how to compute a determinant of a 2\times 2 matrix so we can use Cramer’s rule. 3 Computing a 2\times 2 determinant The entries of a matrix A are written as a_{ij}, where i tells you the row it’s in, and j tells you the column. A general 2\times 2 matrix looks like this:A = \begin{bmatrix}a_{11} & a_{12}\\a_{21}&a_{22}\end{bmatrix} The determinant of a 2\times 2 matrix is a number, and is calculated as 4\text{det}(A) = a_{11}\cdot a_{22} - a_{12}\cdot a_{21} Back to our linear system, \text{det}(A) = 2\cdot 2 - 1\cdot 1 = 4-1=3. Back to Cramer’s rule Now, the two variables we need to solve for are x and y. We put them in that order for a reason. The right-hand side was given as the vector \mathbf{v} =\begin{bmatrix}3\\3\end{bmatrix}. We’re going to substitute this column into either the first or second column of A when we’re computing the answer for x and y, respectively. Let’s say we’re focusing on x, the first variable. Put that \mathbf{v} where the first column of A was.
Then we’ll get a new matrix A_{x} = \begin{bmatrix}3 & 1\\ 3 & 2\end{bmatrix} Let’s do the same for y, only this time we’ll put \mathbf{v} in for the 2nd column, since it’s our 2nd variable. A_{y} = \begin{bmatrix}2 & 3\\1&3\end{bmatrix} Now, Cramer’s rule tells us that x = \text{det}(A)^{-1}\cdot\text{det}(A_{x}) \text{ and } y =\text{det}(A)^{-1}\cdot\text{det}(A_{y}) Nice and easy. We already know how to compute the determinant of a 2\times 2 matrix, so we’re basically done. Now, I used very particular notation. I wrote \text{det}(A)^{-1} instead of \frac{1}{\text{det}(A)} on purpose. I wanted to indicate that it is the multiplicative inverse of the number \text{det}(A). We have to be careful when discussing multiplication and division, because division as an operation doesn’t always exist. This will be important when I twist around addition and multiplication soon. Using our newfound Cramer’s rule, we can solve our equation this way. The multiplicative inverse of 3 is the fraction 1/3, because multiplying these two together gives us 1, the multiplicative identity of the real numbers. x = \frac{1}{3}\cdot \text{det}(A_{x}) = \frac{1}{3}\cdot 3 = 1 y = \frac{1}{3}\cdot\text{det}(A_{y}) = \frac{1}{3}\cdot 3 = 1 And we have our answer again, just obtained in a different way. GF(4): our first sighting of a Galois field We’ve seen groups before in several posts. We know that a group is a set of things combined with an operation that satisfies certain properties. I’m no longer satisfied with having just one operation on a set. I want two operations – I want addition and multiplication. 5 Now, we’ll call these two operations + and \cdot. I’m going to jump ahead a bit, past building groups up into rings and then fields, and just define a field: A field is a set F together with two operations + and \cdot that satisfy the following properties: (1) (F,+) is an abelian group. 6 (2) F is closed under multiplication.
7 (3) The nonzero 8 elements of F form an abelian group under \cdot (4) We get the distributive law: (a+b)\cdot c = (a\cdot c) + (b\cdot c) We just added an operation and some requirements to make sure nothing too weird happens. We can create addition and multiplication tables just like we did here. We’re going to take a look at an example of a very specific field called a Galois field. This is simply a finite field with q elements, if it exists. 9 We denote a Galois field with q elements \text{GF}(q). The field \text{GF}(4) has addition and multiplication tables that look like this. 10\begin{array}{l|rrrr}+&0&1&2&3\\0&0&1&2&3\\1&1&0&3&2\\2&2&3&0&1\\3&3&2&1&0\end{array}\qquad\begin{array}{l|rrrr}\cdot&0&1&2&3\\0&0&0&0&0\\1&0&1&2&3\\2&0&2&3&1\\3&0&3&1&2\end{array} Let’s note that the additive identity is indeed the “number” 0. 11 The multiplicative identity is the “number” 1. 12 Now, spend some time getting comfortable with these tables and let’s answer a few questions: (1) What’s the additive inverse of each element? For each of 0, 1, 2, and 3, which element returns the additive identity 0 when we add it? 0 is clearly its own additive inverse: 0+0 = 0. For 1, which element added to 1 returns 0? Looking at the addition table, we see that 1+1 = 0. Thus, 1 is its own additive inverse. Another way to write it is that -1 = 1, where -x denotes the additive inverse of the element x and not the number -1 multiplied by x. Continuing this exercise, we find that each element of GF(4) is its own additive inverse. (2) What’s the multiplicative inverse of each element?
We’ll repeat the same exercise as in (1), except using the multiplication table. The multiplicative identity is 1, so to find the multiplicative inverse of each element x, we find another element y such that x\cdot y = y\cdot x = 1. We can read this off our nice multiplication table: 0 has no multiplicative inverse. Uh oh! Is GF(4) broken? No. Recall part (3) of the field definition above. The elements excluding 0 must form an abelian group under multiplication. That means it’s totally ok for 0 times everything to return 0, and for 0 to have no multiplicative inverse. 13 1 is its own multiplicative inverse. Makes sense. It’s the multiplicative identity. The multiplicative inverse of 2 is 3 from the table, so the multiplicative inverse of 3 is 2. 2\cdot 3 = 3\cdot 2 = 1. This is fine. We’re defining it this way, and we are allowed to do that, provided anything we define fits the definition of a field. We aren’t in the real numbers anymore, friends. The symbols look like real numbers, but they no longer act like real numbers. When we are in GF(4), this is how these numbers behave under addition and multiplication as we define it for GF(4). Solving our earlier system, but now in GF(4) Let’s return to our earlier system of equations that we solved using Cramer’s rule. The equations look the same, but will we get the same solution if we solve it over GF(4), with our new addition and multiplication tables? Here’s our equation again in matrix form.\begin{bmatrix}2&1\\1&2\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}3\\3\end{bmatrix} A_{x} and A_{y} are also the same. A_{x}=\begin{bmatrix}3&1\\3&2\end{bmatrix}~\qquad~A_{y}=\begin{bmatrix}2&3\\1&3\end{bmatrix} To apply Cramer’s rule, we need the determinant of A, A_{x}, and A_{y} under the rules of arithmetic in GF(4). The method of computing a 2\times 2 determinant has not changed, but our answers will.
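The two tables, and the inverse-hunting exercise we just did, can be transcribed directly into code; a small sketch in Python, with the tables copied verbatim from above:

```python
# The GF(4) tables above: ADD[a][b] is a + b, MUL[a][b] is a * b
ADD = [[0, 1, 2, 3],
       [1, 0, 3, 2],
       [2, 3, 0, 1],
       [3, 2, 1, 0]]
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

# (1) every element is its own additive inverse
assert all(ADD[a][a] == 0 for a in range(4))
# (2) multiplicative inverses: 1 is its own, and 2 and 3 invert each other
assert MUL[1][1] == 1 and MUL[2][3] == 1 and MUL[3][2] == 1
# ...while 0 times anything is 0, so 0 has no multiplicative inverse
assert all(MUL[0][a] == 0 for a in range(4))
```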
Compute \text{det}(A), \text{det}(A_{x}), and \text{det}(A_{y}): \text{det}(A)=2\cdot 2-1\cdot 1 Be careful here, and let’s use our addition and multiplication tables for reference. 2\cdot 2 = 3 now, and 1\cdot 1 = 1. So\text{det}(A)=2\cdot 2-1\cdot 1=3-1 We need to read carefully again here too. a-b in abstract algebra is a plus the additive inverse of b, which isn’t the number -b here. Remember, in GF(4) we only get 4 elements: 0, 1, 2, and 3. No negative numbers here. They don’t exist. Check above to note that the additive inverse of 1 is 1, so 3-1=3+1=2, where 3+1 can be read from the addition table for GF(4). Pause. Let that sink in. We’ve twisted arithmetic a bit from what you’re used to with real numbers. There were some properties you took for granted, like the existence of negative numbers. The only tools we have are those two group tables above. That tells us everything we need to know to do arithmetic over GF(4). Let’s do this again and find the determinants of A_{x} and A_{y}, respectively, in the same way we did \text{det}(A): \text{det}(A_{x})=3\cdot 2-3\cdot 1=1-3=1+3=2 (Remember that 1-3 is 1 plus the additive inverse of 3, which is 3, so 1-3=1+3)\text{det}(A_{y})=2\cdot 3-3\cdot 1=1-3=1+3=2 Again, remember that 1-3 is 1 plus the additive inverse of 3. 3 is its own additive inverse, so 1-3=1+3=2. Find the solution We’re almost done; we have all the pieces. Cramer’s rule tells us thatx=\text{det}(A)^{-1}\cdot\text{det}(A_{x})\text{ and }y =\text{det}(A)^{-1}\cdot\text{det}(A_{y}) See, there was a reason I wrote it the way I did. There are no fractions in GF(4); fractions are the multiplicative inverses of real numbers. I wrote it in the more general way so we could see that. The multiplicative inverse of \text{det}(A)=2 can be read off the multiplication table for GF(4): \text{det}(A)^{-1}=3, because 2\cdot 3 gives us the multiplicative identity 1.
Finally then, we’re down to simple multiplication to get our new solutions in GF(4):x=\text{det}(A)^{-1}\cdot\text{det}(A_{x})=3\cdot 2=1 y=\text{det}(A)^{-1}\cdot\text{det}(A_{y})=3\cdot 2=1 Conclusion Notice that this solution is the same as when we lived in our comfortable world of real numbers, but this is a total coincidence. The equations were the same, the numbers involved were the same, but we changed what addition and multiplication did by moving to a new field called GF(4). The purpose of this exercise was to get used to the idea of arithmetic in a new space, and to see what an example of a Galois field looks like. Explaining how to generate these Galois fields in general and defining their addition and multiplication tables will get a bit involved; we’ll tackle these soon. For now, it’s important just to let go of our tightly held arithmetic notions that are really special properties of real numbers. Systems of equations can yield different solutions when we move to a new world. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License Footnotes I doubt anyone wants to solve a linear system in 100 variables by substitution, for example. A linear system may have one of three possible options: no solution, one solution, or infinitely many solutions. Because this isn’t really a post on linear algebra per se, we’re going to safely assume that anything I throw at you in this post will have a unique solution. Matrix theory and determinants deserve their own study for sure. The point of this post isn’t really to dive into those. Right now, determinants are just computational tools for us. It’s ok to just use a tool and get used to it before we dive into how it works. The theory of exactly what a determinant is and does can get quite deep and require a bit of legwork to get there. I’m sidestepping it on purpose, because it’s not illustrative for this post. 
Addition and multiplication are just the things we’re going to name our two operations. They don’t have to mean exactly the same thing as addition and multiplication on real numbers, as we’ll see shortly. A group where a+b = b+a The product of two elements doesn’t leave the set F 0 is the identity element for the operation + Fun fact: Galois fields only exist if the number of elements is a prime or a power of a prime. So there is no Galois field with 6 elements. Right now, just work with them as given. There is a way to generate the arithmetic over Galois fields in general, but we’ll begin tackling that in later posts. We just want to get used to the idea that numbers don’t add and multiply the same way anymore. They’re just symbols now. 2+3 in GF(4) doesn’t have a tactile interpretation like “2 things and three more things”. When you add it to anything, you get that anything back. Again, multiplying it by anything gives the anything back. We did this on purpose. If we didn’t have this definition, the real numbers wouldn’t be a field either. When generalizing mathematics, we don’t want to break what we already have.
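For readers who like to experiment, the whole GF(4) computation in the post can be sketched in a few lines of Python, with the two tables doing all of the arithmetic:

```python
# GF(4) arithmetic straight from the post's tables
ADD = [[0, 1, 2, 3], [1, 0, 3, 2], [2, 3, 0, 1], [3, 2, 1, 0]]
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]

def sub(a, b):
    # a - b means a plus the additive inverse of b; in GF(4) each element
    # is its own additive inverse, so a - b is just a + b
    return ADD[a][b]

def inv(a):
    # multiplicative inverse, looked up in the table
    return next(b for b in range(4) if MUL[a][b] == 1)

def det2(M):
    # the same 2x2 determinant formula, with GF(4) arithmetic
    return sub(MUL[M[0][0]][M[1][1]], MUL[M[0][1]][M[1][0]])

A  = [[2, 1], [1, 2]]
Ax = [[3, 1], [3, 2]]   # v substituted into column 1
Ay = [[2, 3], [1, 3]]   # v substituted into column 2

d = det2(A)                 # 2*2 - 1*1 = 3 + 1 = 2
x = MUL[inv(d)][det2(Ax)]   # Cramer's rule: det(A)^{-1} * det(Ax)
y = MUL[inv(d)][det2(Ay)]
assert (d, x, y) == (2, 1, 1)
```

Swapping the tables for ordinary integer arithmetic reproduces the real-number solution, which makes the contrast between the two worlds easy to play with.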
Research Open Access Published: The stable equilibrium of a system of piecewise linear difference equations Advances in Difference Equations volume 2017, Article number: 67 (2017) Abstract In this article we consider the global behavior of the system of first order piecewise linear difference equations \(x_{n+1} = \vert x_{n}\vert - y _{n} +b\) and \(y_{n+1} = x_{n} - \vert y_{n}\vert - d\), where the parameters b and d are any positive real numbers. We show that for any initial condition in \(\mathbf{R}^{2}\) the solution to the system is eventually the equilibrium, \((2b + d, b)\). Moreover, the solutions of the system will reach the equilibrium within six iterations. Introduction In applications, difference equations usually describe the evolution of a certain phenomenon over the course of time. In mathematics, a difference equation produces a sequence of numbers where each term of the sequence is defined as a function of the preceding terms. For the convenience of the reader we supply the following definitions. See [1, 2]. A system of difference equations of the first order is a system of the form \(x_{n+1} = f(x_{n}, y_{n})\), \(y_{n+1} = g(x_{n}, y_{n})\), \(n = 0, 1, \ldots\), (1) where f and g are continuous functions which map \(\mathbf{R}^{2}\) into \(\mathbf{R}\). A solution of the system of difference equations (1) is a sequence \(\{(x_{n},y_{n})\}_{n = 0}^{\infty}\) which satisfies the system for all \(n \geq0\). If we prescribe an initial condition \((x_{0}, y_{0}) \in \mathbf{R}^{2}\), then the solution \(\{(x_{n},y_{n})\}_{n = 0}^{\infty}\) of the system of difference equations (1) exists for all \(n\geq0\) and is uniquely determined by the initial condition \((x_{0}, y_{0})\). A solution of the system of difference equations (1) which is constant for all \(n\geq0\) is called an equilibrium solution. Known methods to determine the local asymptotic stability and global stability are not easily applied to piecewise systems.
This is why two of the most famous and enigmatic systems of difference equations are piecewise: the Lozi map, where the initial condition \((x_{0},y_{0}) \in\mathbf{R}^{2}\) and the parameters \(a,b \in\mathbf{R}\), and the Gingerbreadman map, where the initial condition \((x_{0},y_{0}) \in\mathbf{R}^{2}\). See [3–5] for more information regarding the Lozi map and Gingerbreadman map. In the last 30 years there has been progress in determining the local behavior of such systems but only limited progress in determining the global behavior. See [1, 6]. Ladas and Grove developed the following family of 81 piecewise linear systems: where the initial condition \((x_{0}, y_{0}) \in \mathbf{R}^{2}\) and the parameters a, b, c, and \(d \in\{-1,0,1\}\), in the hope of creating prototypes that will help us understand the global behavior of more complicated systems such as the Lozi map and the Gingerbreadman map. See [7–9]. In 2013, Lapierre found in [8] that the solutions of the following system of piecewise linear difference equations: are eventually the unique equilibrium for every initial condition \((x_{0}, y_{0}) \in\mathbf{R}^{2}\). In this paper we extend the results by examining a generalization of System (3), that is, \(x_{n+1} = \vert x_{n}\vert - y_{n} + b\) and \(y_{n+1} = x_{n} - \vert y_{n}\vert - d\), (4) where the initial condition \((x_{0}, y_{0}) \in\mathbf{R}^{2}\) and the parameters b and d are any positive real numbers. Main results Set The proof of the theorem below uses the result from the four lemmas that follow. They show that if \((x_{0}, y_{0})\in \mathbf{R}^{2}\) then \((x_{1}, y_{1})\) is an element of Condition (1). Theorem 1 Let \(\{(x_{n}, y_{n})\}_{n=0}^{\infty}\) be the solution of System (4) with \((x_{0}, y_{0}) \in\mathbf{R}^{2}\) and \(b,d \in(0, \infty)\). Then \(\{(x_{n}, y_{n})\}_{n=6}^{\infty}\) is the equilibrium \((2b + d, b)\). Proof Suppose \((x_{0}, y_{0})\in \mathbf{R}^{2}\). First we will show that \((x_{2}, y_{2})\) is an element of Condition (2), that is, Then Therefore \((x_{2},y_{2})\) is an element of Condition (2), as required.
Next, we will show that \((x_{3}, y_{3})\) is an element of Condition (3), that is Since \((x_{2}, y_{2})\) is an element of Condition (2), we have and we have Then Therefore \((x_{3},y_{3})\) is an element of Condition (3), as required. Next, we will show that \((x_{4}, y_{4})\) is an element of Condition (4), that is Since \((x_{3}, y_{3})\) is an element of Condition (3) and \(x_{4}=\vert x_{3}\vert -y_{3}+b\) and \(y_{4}= x_{3} - \vert y_{3}\vert -d\), we see that \(x_{4} \geq0\) and \(y_{4} \geq0\). Also since \((x_{3}, y_{3})\) is an element of Condition (3), we have and so Note that Then Therefore \((x_{4},y_{4})\) is an element of Condition (4), as required. Next, we will show that \((x_{5}, y_{5})\) is an element of Condition (5), that is Since \((x_{4}, y_{4})\) is an element of Condition (4) and \(x_{5}=\vert x_{4}\vert -y_{4}+b\) and \(y_{5}= x_{4} - \vert y_{4}\vert -d\), we see that \(x_{5} \geq0\) and \(y_{5} \geq0\). Consider Therefore \((x_{5},y_{5})\) is an element of Condition (5), as required. Finally, it is easy to show by direct computations that \((x_{6}, y_{6}) = (2b + d, b)\). This completes the proof of the theorem. □ The following four lemmas will show that if \((x_{0}, y_{0})\in \mathbf{R}^{2}\) then \((x_{1}, y_{1})\) is an element of Condition (1). Set and recall that Lemma 1 Let \(\{ (x_{n},y_{n})\}_{n=0}^{\infty}\) be a solution of System (4) with \((x_{0}, y_{0})\) in \(\mathcal{Q}_{1}\). Then \((x_{1}, y_{1})\) is an element of Condition (1). Proof Suppose \((x_{0}, y_{0}) \in\mathcal{Q}_{1}\); then \(x_{0} \geq0\) and \(y_{0} \geq0\). Thus Case 1 Suppose further \(x_{0} \geq y_{0} +d \). We have \(x_{1} = x_{0} - y_{0} + b > 0\) and \(y_{1} = x_{0} - y_{0} - d \geq0\). Note that and Hence \((x_{1},y_{1})\) is an element of Condition (1) and Case 1 is complete. Case 2 Suppose \(x_{0} < y_{0} +d \) but \(x_{0} +b \geq y_{0} \). We have \(x_{1} = x_{0} - y_{0} + b \geq0\) and \(y_{1} = x_{0} - y_{0} - d < 0\).
Note that and Case 2A Suppose further \(2x_{0}-2y_{0}+b-2d \geq0\). Then Since \(y_{1} = x_{0} - y_{0} - d < 0\), we have \(2x_{0}-2y_{0}-2d - b < 0\). Also note that \(\vert y_{1}\vert - y_{1} + 2b > 0\), so Case 2A is complete. Case 2B Suppose \(2x_{0}-2y_{0}+b-2d < 0\). Then Hence \((x_{1},y_{1})\) is an element of Condition (1) and Case 2 is complete. Case 3 Finally suppose \(x_{0} < y_{0} +d \) and \(x_{0} +b < y_{0} \). We have \(x_{1} = x_{0} - y_{0} + b < 0\) and \(y_{1} = x_{0} - y_{0} - d < 0\). Note that and Since \(x_{0} + b < y_{0}\), we have \(y_{0} > x_{0}\). Thus \(4y_{0} - 4x_{0} > 0\). We note that \(-b < 0\). Then Hence \((x_{1},y_{1})\) is an element of Condition (1) and Case 3 is complete. □ Lemma 2 Let \(\{ (x_{n},y_{n})\}_{n=0}^{\infty}\) be a solution of System (4) with \((x_{0}, y_{0})\) in \(\mathcal{Q}_{2}\). Then \((x_{1}, y_{1})\) is an element of Condition (1). Proof Suppose \((x_{0}, y_{0}) \in\mathcal{Q}_{2}\); then \(x_{0} \leq0\) and \(y_{0} \geq0\). Thus Case 1 Suppose further \(-x_{0} + b < y_{0}\). We have \(x_{1} = - x_{0} - y_{0} + b < 0\) and \(y_{1}= x_{0} - y_{0} - d < 0\). Note that and Since \(y_{0} \geq0\) and \(-b < 0\), we see that \((x_{1}, y_{1})\) is an element of Condition (1) and so Case 1 is complete. Case 2 Suppose \(-x_{0} + b \geq y_{0}\). We have \(x_{1} = - x_{0} - y_{0} + b \geq0\) and \(y_{1} = x_{0} - y_{0} - d < 0\). Note that and Case 2A Suppose further \(b \geq2y_{0}+d\). Then Since \(y_{1} = x_{0} - y_{0} - d < 0\), we have \(2x_{0}-2y_{0}- b -3d< 0\). Hence \((x_{1}, y_{1})\) is an element of Condition (1). Case 2A is complete. Case 2B Finally suppose \(b < 2y_{0}+d\). Then Since \(x_{1} = -x_{0} - y_{0} + b \geq0\), we have \(2x_{0}+2y_{0}-3b < 0\). Hence \((x_{1}, y_{1})\) is an element of Condition (1) and the proof of Lemma 2 is complete. □ Lemma 3 Let \(\{ (x_{n},y_{n})\}_{n=0}^{\infty}\) be a solution of System (4) with \((x_{0}, y_{0})\) in \(\mathcal{Q}_{3}\).
Then \((x_{1}, y_{1})\) is an element of Condition (1). Proof Suppose \((x_{0}, y_{0}) \in\mathcal{Q}_{3}\) then \(x_{0} \leq0\) and \(y_{0} \leq0\). Thus Then and Case 1 Suppose \(b-2d \geq0\). Then Hence \((x_{1}, y_{1})\) is an element of Condition (1) and Case 1 is complete. Case 2 Suppose further \(b-2d < 0\). Then Since \(-2x_{0} - 2y_{0} + 2b >0\), we have \(2x_{0} + 2y_{0} - 3b < 0\). Hence \((x_{1}, y_{1})\) is an element of Condition (1) and the proof to Lemma 3 is complete. □ Lemma 4 Let \(\{ (x_{n},y_{n})\}_{n=0}^{\infty}\) be a solution of System (4) with \((x_{0}, y_{0})\) in \(\mathcal{Q}_{4}\). Then \((x_{1}, y_{1})\) is an element of Condition (1). Proof Suppose \((x_{0}, y_{0}) \in\mathcal{Q}_{4}\) then \(x_{0} \geq0\) and \(y_{0} \leq0\). Thus Case 1 Suppose further \(y_{1} = x_{0} + y_{0} - d \geq0\). Then and Hence \((x_{1}, y_{1})\) is an element of Condition (1) and Case 1 is complete. Case 2 Suppose \(y_{1} = x_{0} + y_{0} - d < 0\). Then and Case 2A Suppose further \(2x_{0}+b-2d \geq0\). Then Since \(2x_{0}+b-2d \geq0\), \(b > -2x_{0}\). Thus \(-2x_{0} -2y_{0} +2b +d > 0\). Since \(y_{1} = x_{0} + y_{0} - d < 0\), we have \(2x_{0} + 2y_{0} -3d - b <0\). Hence \((x_{1}, y_{1})\) is an element of Condition (1) and Case 2A is complete. Case 2B Now suppose \(2x_{0}+b-2d < 0\). Then Since \(y_{0} \leq0\) and \(b > 0\), we see that \((x_{1}, y_{1})\) is an element of Condition (1) and the proof of Lemma 4 is complete. □ Discussion and conclusion In this paper we showed that for any initial value \((x_{0}, y_{0}) \in R^{2}\) we have the following sequence: In addition, if we begin with an initial condition that is an element of Condition ( N) for \(N \in\{1,2,3,4,5\}\), then it requires \(6-N\) iterations to reach the equilibrium point. The generalized system of piecewise linear difference equations examined in this paper was created as a prototype to understand the global behavior of more complicated systems. 
We believe that this paper contributes broadly to the overall understanding of systems whose global behavior still remains unknown. References 1. Grove, EA, Ladas, G: Periodicities in Nonlinear Difference Equations. Chapman Hall, New York (2005) 2. Kocic, VL, Ladas, G: Global Behavior of Nonlinear Difference Equations of Higher Order with Applications. Kluwer Academic, Boston (1993) 3. Barnsley, MF, Devaney, RL, Mandelbrot, BB, Peitgen, HO, Saupe, D, Voss, RF: The Science of Fractal Images. Springer, New York (1991) 4. Devaney, RL: A piecewise linear model of the zones of instability of an area-preserving map. Physica 10D, 387-393 (1984) 5. Lozi, R: Un attracteur etrange du type attracteur de Henon. J. Phys. (Paris) 39, 9-10 (1978) 6. Botella-Soler, V, Castelo, JM, Oteo, JA, Ros, J: Bifurcations in the Lozi map. J. Phys. A, Math. Theor. 44, 1-17 (2011) 7. Grove, EA, Lapierre, E, Tikjha, W: On the global behavior of \(x_{n+1} = \vert x_{n}\vert - y_{n} -1 \) and \(y_{n+1} = x_{n} + \vert y_{n}\vert \). CUBO 14, 125-166 (2012) 8. Lapierre, EG: On the global behavior of some systems of difference equation. Doctoral dissertation, University of Rhode Island (2013) 9. Tikjha, W, Lapierre, EG, Lenbury, Y: On the global character of the system of piecewise linear difference equations \(x_{n+1} = \vert x_{n}\vert - y_{n} - 1\) and \(y_{n+1} = x_{n} - \vert y_{n}\vert \). Adv. Differ. Equ. 2010(2010) Acknowledgements This work was supported by the Thailand Research Fund [MRG5980053], National Research Council of Thailand and Pibulsongkram Rajabhat University. The first author is supported by the Centre of Excellence in Mathematics, CHE, Thailand. Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
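Theorem 1 also invites a quick informal numerical probe (an editorial sketch with arbitrary random parameters, not part of the paper): iterate System (4) from random initial conditions with random positive b and d, and confirm the orbit lands on \((2b + d, b)\) within six steps.

```python
import random

def step(x, y, b, d):
    # System (4): x_{n+1} = |x_n| - y_n + b,  y_{n+1} = x_n - |y_n| - d
    return abs(x) - y + b, x - abs(y) - d

random.seed(0)
for _ in range(1000):
    b = random.uniform(0.01, 10.0)
    d = random.uniform(0.01, 10.0)
    x, y = random.uniform(-100.0, 100.0), random.uniform(-100.0, 100.0)
    for _ in range(6):
        x, y = step(x, y, b, d)
    # by Theorem 1 the orbit is at the equilibrium (2b + d, b) by n = 6
    assert abs(x - (2 * b + d)) < 1e-9 and abs(y - b) < 1e-9
```

Every sampled trajectory reaches the equilibrium within six iterations, consistent with the theorem.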
This question already has an answer here: Sorting functions by asymptotic growth (6 answers) You can use the fact that $\log(n) = o(n^k)$ for any constant $k > 0$, $k \in \mathbb{R}$; in your case $k = \frac{1}{2}$. Hence, $\sqrt{n}$ is not $O(2\log(n))$.
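A quick numerical illustration of why that works (Python; natural log is used here, though the base only changes a constant factor):

```python
import math

# sqrt(n) / (2 log n) grows without bound, so sqrt(n) is not O(2 log n)
ratios = [math.sqrt(n) / (2 * math.log(n)) for n in (10, 10**3, 10**6, 10**12)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))  # strictly increasing
assert ratios[-1] > 10_000                             # already huge by n = 10^12
```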
$A'$ denotes the set of limit points of $A$ and $\partial A$ is the boundary of $A$. $A^{o}$ denotes the set of interior points of $A$. Show that $\overline{A} \subseteq A^{o} \cup \partial A$. Suppose $x \in \overline{A}$; then $x \in A \cup A'$. Suppose $x \not\in A$ (what if $x\in A$?); then $x\in A'$. $\implies$ for all open sets $U$ containing $x$, $U$ contains another point of $A$ not equal to $x$. $\implies U \cap A \neq \varnothing$ and $U \cap (X\setminus A) \neq \varnothing$. $\implies x \in \partial A$. This proof is not complete; what if $x\in A$? Can someone help me out?
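For completeness, one standard way to handle the remaining case, sketched in the same notation:

```latex
% Remaining case: suppose x \in A.
Suppose $x \in A$. If $x \in A^{o}$, then $x \in A^{o} \cup \partial A$ and we
are done. Otherwise $x \notin A^{o}$, so no open set $U$ containing $x$
satisfies $U \subseteq A$; hence every open $U \ni x$ meets $X \setminus A$.
Since $x \in A$, every such $U$ also meets $A$ (at $x$ itself). Thus
$U \cap A \neq \varnothing$ and $U \cap (X \setminus A) \neq \varnothing$ for
every open $U \ni x$, i.e.\ $x \in \partial A$. Combined with the case
$x \notin A$ above, this gives $\overline{A} \subseteq A^{o} \cup \partial A$.
```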
If you could travel to the center of the Earth (or any planet), would you be weightless there? Correct. If you split the earth up into spherical shells, then the gravity from the shells "above" you cancels out, and you only feel the shells "below" you. When you are in the middle there is nothing "below" you. {I am using some simplistic terms, but I don't want to break out surface integrals and radial flux equations} Edit: Although the inside of a shell has zero gravitational force classically, relativistically there is still gravitational time dilation there even though the force vanishes. At the perfect center of a solid body the forces balance out, and the equilibrium is stable: a small displacement in any direction produces a net restoring force pulling you back toward the center. The simplest way to think about it is that there is mass all around you in the center of the Earth so you get an equal gravitational "pull" from all directions. The pulls cancel out so you get no acceleration. If one assumes constant density for the Earth (which isn't strictly speaking true but it is close enough for this illustration) the gravitational acceleration drops linearly from 1g at the surface to 0 at the center of the Earth. So you'd get a zero if you stepped on a scale at the center of the Earth. The more complicated explanation is that acceleration due to gravity is the derivative of the gravitational potential. This potential is a minimum at the center of the Earth and grows quadratically up to the surface. It then continues to increase at a lower rate. Since the potential at the exact center is flat (like the bottom of a valley), its derivative, which is a measure of the rate of change, is zero, and there is no acceleration. Interestingly, even though you would be weightless there, the effects of gravity are highest at the center of the Earth. You get more gravitational time dilation, for example, than you do at the surface.
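A small sketch of the constant-density model described above (the uniform density is the same simplifying assumption the answer makes):

```python
# Constant-density model: inside the Earth only the mass below radius r
# pulls, so g grows linearly with r; outside, the usual inverse square.
R = 6.371e6        # Earth's mean radius in metres
g_surface = 9.81   # surface gravity, m/s^2

def g(r):
    if r <= R:
        return g_surface * r / R        # linear inside
    return g_surface * (R / r) ** 2     # inverse-square outside

assert g(0) == 0.0                              # weightless at the center
assert abs(g(R) - g_surface) < 1e-12            # matches at the surface
assert abs(g(R / 2) - g_surface / 2) < 1e-12    # halfway down, half the pull
```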
I like answers that appeal to symmetry, so I answer this one with a question: if you were at the center, which way would you fall? That tells us you could stay floating there. In the following, the term "charge" refers either to mass or to electric charge, and the term "Inverse Square Law" refers either to Newton's Gravitational Law or to Coulomb's Law, respectively. SECTION 1 A. The Inverse Square Law for Spheres with uniform surface charge density Proposition A: Let a sphere of radius $\:\rm{R}\:$ have uniform surface charge density $\:\rho_{s}\:$ and an empty interior. Then: (a1) the force exerted upon a point charge $\:\xi\:$ in the interior or on the surface of the sphere, as in Fig. 01, is zero (it cancels out). In terms of potentials, the whole sphere (surface + interior) is an equipotential region. (a2) the force exerted upon a point charge $\:\xi\:$ in the exterior of the sphere, as in Fig. 03, is equal to the force exerted by a point particle at the center of the sphere with charge equal to its total surface charge $\:\Xi_{s}=\rho_{s}\cdot4\pi{\rm{R}}^{2}\:$. In terms of potentials, the potential outside the sphere equals that created by its total surface charge $\:\Xi_{s}\:$ concentrated at its center. An intermediate conclusion in the proof of this Proposition is that the magnitude of the force exerted by the "cup" AKBMA of the sphere on the point charge $\:\xi\:$ in Fig. 02 is proportional to $\:\sin^{2}\left(\omega/2\right)\:$, where $\:\omega\:$ is the angle under which the line segment $\:b\:$ (the segment between the charge $\:\xi\:$ and the center of the sphere) is seen from any point of the circular edge AMBA of the cup.
More exactly, this force has magnitude: \begin{equation} \vert \mathbf{f}_{AKBMA}\vert=k \cdot \dfrac{\Xi_{s}\cdot \xi}{{\rm{b}}^{2}}\sin^{2}\left(\dfrac{\omega}{2}\right)=\left(k \cdot \dfrac{4\pi\rho_{s}\xi{\rm{R}}^{2}}{{\rm{b}}^{2}}\right)\sin^{2}\left(\dfrac{\omega}{2}\right)=constant\cdot \sin^{2}\left(\dfrac{\omega}{2}\right) \tag{A-01} \end{equation} But this force is cancelled by the force exerted by the "cup" CLDNC of the sphere, which is equal in magnitude but opposite in direction: \begin{equation} \mathbf{f}_{CLDNC}=\;-\;\mathbf{f}_{AKBMA} \tag{A-02} \end{equation} So, if we remove these two "cups", the force doesn't change. But if we enlarge the "cup" AKBMA by moving its cyclic edge AMBA to the left, this edge will eventually coincide with the cyclic edge CNDC of the left cup CLDNC. Removing the two cups is then the same as removing the whole sphere, leaving the net force unchanged, that is, zero. Also, in Fig. 02 we have \begin{equation} \mathbf{f}_{AMBDNCA}=\;\mathbf{0} \tag{A-03} \end{equation} B. The Inverse Square Law for Spheres with uniform volume charge density Proposition B: Let a sphere of radius $\:\rm{R}\:$ have uniform volume charge density $\:\rho_{v}\:$. Then: (b1) the force exerted upon a point charge $\:\xi\:$ in the interior of the sphere, located at a radial distance $\:\rm{r}\:$ from its center, is equal, according to Proposition A, to that exerted by the total volume charge of a sphere of radius $\:\rm{r}\:$, $\:\Xi_{v}\left(\rm{r}\right)=\rho_{v}\cdot \dfrac{4}{3}\pi{\rm{r}}^{3}\:$, concentrated at the center.
The magnitude of this force is: \begin{equation} \vert f_{inside} \vert =k \cdot \dfrac{\Xi_{v}\left(\rm{r}\right)\cdot \xi}{{\rm{r}}^{2}}=k \cdot \dfrac{\rho_{v}4\pi\xi{\rm{r}}^{3}}{3{\rm{r}}^{2}}=constant \cdot \rm{r}\:,\quad \rm{r}\le \rm{R} \tag{B-01} \end{equation} (b2) the force exerted upon a point charge $\:\xi\:$ in the exterior of the sphere, at a radial distance $\:\rm{r}\:$ from its center, is equal, according to Proposition A, to that exerted by the total volume charge of a sphere of radius $\:\rm{R}\:$, $\:\Xi_{v}\left(\rm{R}\right)=\rho_{v}\cdot \dfrac{4}{3}\pi{\rm{R}}^{3}\:$, concentrated at the center. The magnitude of this force is: \begin{equation} \vert f_{outside} \vert =k \cdot \dfrac{\Xi_{v}\left(\rm{R}\right)\cdot \xi}{{\rm{r}}^{2}}=k \cdot \dfrac{\rho_{v}4\pi\xi{\rm{R}}^{3}}{3{\rm{r}}^{2}}=constant \cdot \rm{r}^{-2}\:,\quad \rm{r}>\rm{R} \tag{B-02} \end{equation} SECTION 2 Suppose that the Earth is a perfect sphere with uniform volume mass density. Then: Proposition C: (c1) A body located at the center of the Earth is weightless. (c2) Imagine a tunnel of small cross section running along a whole diameter, so passing through the center of the Earth. A body placed in the tunnel at a radial distance $\:{\rm{r}}_{0}\:$ from the center will execute a simple rectilinear harmonic oscillation centered at the center of the Earth, since in the case of gravity the force is always attractive toward the center and, according to equation (B-01), proportional in magnitude to the distance from this center of attraction. You would not be weightless at the center of the Earth. In other words, the Earth does not follow a geodesic. Let me explain. The Earth is not spherical; it is an oblate spheroid. The acceleration of a uniform non-spherical body in a spherical gravitational field does not follow an inverse square law. The acceleration of the center of mass does not equal the acceleration at the center of mass.
An accelerometer fixed at the center of the Earth would read approx 1.75 pgal (1.75e-14 m/$\mathrm{s^2}$), not zero.
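Proposition C's oscillation can also be quantified. Under the uniform-density assumption the interior force per unit mass is g(r) = g_surface · r/R, which is simple harmonic motion with angular frequency ω = sqrt(g_surface/R); a quick sketch with standard Earth values:

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
M = 5.972e24    # kg
R = 6.371e6     # m

g_surface = G * M / R**2            # surface gravity, ~9.8 m/s^2
omega = math.sqrt(g_surface / R)    # SHM angular frequency in the tunnel
T = 2 * math.pi / omega             # full oscillation period, seconds

# T is about 84 minutes, independent of the starting distance r0
print(T / 60)
```

The period is the same however far down the tunnel the body is released, the usual signature of simple harmonic motion.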
You are given a square matrix of width \$\ge2\$, containing square numbers \$\ge1\$. Your task is to make all square numbers 'explode' until all of them have disappeared. You must print or return the final matrix. More specifically:
Look for the highest square \$x^2\$ in the matrix.
Look for its smallest adjacent neighbor \$n\$ (either horizontally or vertically, and without wrapping around).
Replace \$x^2\$ with \$x\$ and replace \$n\$ with \$n\times x\$.
Repeat the process from step 1 until there's no square left in the matrix.
Example Input matrix: $$\begin{pmatrix} 625 & 36\\ 196 & 324 \end{pmatrix}$$ The highest square \$625\$ explodes into two parts of \$\sqrt{625}=25\$ and merges with its smallest neighbor \$36\$, which becomes \$36\times 25=900\$: $$\begin{pmatrix} 25 & 900\\ 196 & 324 \end{pmatrix}$$ The highest square \$900\$ explodes and merges with its smallest neighbor \$25\$: $$\begin{pmatrix} 750 & 30\\ 196 & 324 \end{pmatrix}$$ The highest square \$324\$ explodes and merges with its smallest neighbor \$30\$: $$\begin{pmatrix} 750 & 540\\ 196 & 18 \end{pmatrix}$$ The only remaining square \$196\$ explodes and merges with its smallest neighbor \$18\$: $$\begin{pmatrix} 750 & 540\\ 14 & 252 \end{pmatrix}$$ There's no square anymore, so we're done. Rules The input matrix is guaranteed to have the following properties:
at each step, the highest square will always be unique
at each step, the smallest neighbor of the highest square will always be unique
the sequence will not repeat forever
The initial matrix may contain \$1\$'s, but you do not have to worry about making \$1\$ explode, as it will never be the highest or the only remaining square.
I/O can be processed in any reasonable format. This is code-golf. Test cases
Input : [[16,9],[4,25]]
Output: [[24,6],[20,5]]
Input : [[9,4],[1,25]]
Output: [[3,12],[5,5]]
Input : [[625,36],[196,324]]
Output: [[750,540],[14,252]]
Input : [[1,9,49],[1,4,1],[36,25,1]]
Output: [[3,6,7],[6,2,7],[6,5,5]]
Input : [[81,4,64],[16,361,64],[169,289,400]]
Output: [[3,5472,8],[624,323,1280],[13,17,20]]
Input : [[36,100,1],[49,144,256],[25,49,81]]
Output: [[6,80,2],[42,120,192],[175,21,189]]
Input : [[256,169,9,225],[36,121,144,81],[9,121,9,36],[400,361,100,9]]
Output: [[384,13,135,15],[24,1573,108,54],[180,11,108,6],[380,209,10,90]]
Input : [[9,361,784,144,484],[121,441,625,49,25],[256,100,36,81,529],[49,4,64,324,16],[25,1,841,196,9]]
Output: [[171,19,700,4032,22],[11,210,525,7,550],[176,60,6,63,23],[140,112,1152,162,368],[5,29,29,14,126]]
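For reference, the four steps above can be implemented directly; a straightforward, ungolfed Python sketch:

```python
import math

def explode(m):
    """Repeatedly explode the highest square until none remain (mutates and returns m)."""
    rows, cols = len(m), len(m[0])

    def is_sq(v):
        r = math.isqrt(v)
        return v > 1 and r * r == v   # per the rules, 1 never needs to explode

    while True:
        sqs = [(m[i][j], i, j) for i in range(rows) for j in range(cols) if is_sq(m[i][j])]
        if not sqs:
            return m
        v, i, j = max(sqs)                       # highest square (unique by the rules)
        x = math.isqrt(v)
        nbrs = [(m[a][b], a, b)
                for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if 0 <= a < rows and 0 <= b < cols]   # no wrap-around
        n, a, b = min(nbrs)                      # smallest neighbor (unique by the rules)
        m[i][j] = x
        m[a][b] = n * x

print(explode([[625, 36], [196, 324]]))   # [[750, 540], [14, 252]]
```

This reproduces the worked example and the test cases above.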
Rewriting the assumptions in a more formal form: $\{pencils\}\cap\{pens\}\neq\emptyset$ $(pen)\Rightarrow(not\,\,eraser)$ $(sharpener)\Rightarrow(eraser)$ Now let's look at the two possible conclusions. Conclusion 2 is unclearly phrased. If it means "the statement that all pencils are sharpeners is untrue", then it does follow from the assumptions, by the following argument: assume all pencils are sharpeners; then by assumption 3, all pencils are erasers; so by assumption 1, some pens are erasers, which contradicts assumption 2. If it means "no pencils are sharpeners", then it doesn't necessarily follow, because some non-pen pencils could still be sharpeners without contradicting the three given assumptions. As for conclusion 1, it too doesn't necessarily follow from the given assumptions: it would be fine for some eraser to be a pencil, just as long as it's not also a pen. For an example of a scenario in which conclusion 1 and the second interpretation of conclusion 2 fail while all three assumptions hold, see the following Venn diagram: As you can see, in this model some pencils are pens, no pens are erasers, and all sharpeners are erasers, but some pencils are sharpeners and some erasers are pencils.
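The countermodel can also be verified mechanically. A small sketch (the conclusion wordings, "no erasers are pencils" and "no pencils are sharpeners", are my reading of the discussion above, and the two objects are a hypothetical model, not the Venn diagram itself):

```python
# Two objects suffice: a pencil-pen, and a pencil that is a sharpener and an eraser.
objects = [{"pencil", "pen"}, {"pencil", "sharpener", "eraser"}]

a1 = any({"pencil", "pen"} <= o for o in objects)                  # some pencils are pens
a2 = all("eraser" not in o for o in objects if "pen" in o)         # no pen is an eraser
a3 = all("eraser" in o for o in objects if "sharpener" in o)       # every sharpener is an eraser
c1 = all("pencil" not in o for o in objects if "eraser" in o)      # "no erasers are pencils"
c2 = all("sharpener" not in o for o in objects if "pencil" in o)   # "no pencils are sharpeners"

# All three assumptions hold, yet both conclusions fail:
print(a1, a2, a3, c1, c2)   # True True True False False
```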
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence)/CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the Large Hadron Collider (LHC) at CERN. Some of the most recent RD50 results on silicon detectors are reported in this paper, with special reference to: (i) the progress in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52 Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.)/CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131 Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Geneva). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{−3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221 Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci.
51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062 Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders, because these new devices offer superior radiation hardness compared to present-day silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772 Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys.
II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$. The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116 Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345 First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors.
After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342 Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of the interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n$^+$–p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
I am having an issue with TikZ. Here is the code:
\begin{tikzpicture}[scale=0.6]
% Variables
\def\mu{0.1}
\def\R{10}
% Celestial bodies
\draw [thick, fill=yellow] (0,0) circle (1);
\draw [thick, fill=cyan] (\R,0) circle (0.25);
% Lagrangian points
\node at (\R*{1-{\mu/3}^{1/3}},0) {\color{orange}{\huge$\bullet$}}; %L1
\node at (\R*{1+{\mu/3}^{1/3}},0) {\color{orange}{\huge$\bullet$}}; %L2
\node at (-\R*{1+5/12*\mu},0) {\color{orange}{\huge$\bullet$}}; %L3
\node at (\R*{1/2*{1-2*\mu}},\R*sqrt(3)/2) {\color{orange}{\huge$\bullet$}}; %L4
\node at (\R*{1/2*{1-2*\mu}},-\R*sqrt(3)/2) {\color{orange}{\huge$\bullet$}}; %L5
\end{tikzpicture}
Here is the error I get:
! Missing number, treated as zero.
<to be read again>
{
l.87 \node at (\R*{1-{\mu/3}^{1/3}},0)
{\color{orange}{\huge$\bullet$}}; %L1
A number should have been here; I inserted `0'.
I see this is a common error with TikZ but I can't find out why it doesn't work. It is probably a silly mistake; can someone tell me what's wrong?
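A likely cause: pgfmath, which parses these coordinate expressions, uses parentheses for grouping and does not accept brace groups like {1-{\mu/3}^{1/3}} inside arithmetic, hence the "Missing number" error. A sketch of a fix (I have renamed \mu to \massratio, since redefining the Greek-letter macro is fragile, and wrapped each coordinate component in braces so its inner parentheses do not confuse the coordinate parser):
\begin{tikzpicture}[scale=0.6]
\def\massratio{0.1}
\def\R{10}
\draw [thick, fill=yellow] (0,0) circle (1);
\draw [thick, fill=cyan] (\R,0) circle (0.25);
\node at ({\R*(1-(\massratio/3)^(1/3))},0) {\color{orange}{\huge$\bullet$}}; %L1
\node at ({\R*(1+(\massratio/3)^(1/3))},0) {\color{orange}{\huge$\bullet$}}; %L2
\node at ({-\R*(1+5/12*\massratio)},0) {\color{orange}{\huge$\bullet$}}; %L3
\node at ({\R*0.5*(1-2*\massratio)},{\R*sqrt(3)/2}) {\color{orange}{\huge$\bullet$}}; %L4
\node at ({\R*0.5*(1-2*\massratio)},{-\R*sqrt(3)/2}) {\color{orange}{\huge$\bullet$}}; %L5
\end{tikzpicture}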
In the classic crypto textbook "Introduction to Modern Cryptography" by Jonathan Katz and Yehuda Lindell, there is a definition of indistinguishable encryption in the presence of an eavesdropper: for every probabilistic polynomial-time adversary A there is a negligible function negl(n) such that $\Pr[PrivK_{A,\Pi}=1] \leq \frac{1}{2} + negl(n)$ where PrivK is the indistinguishability experiment, and for the purpose of this question we only need to know that the experiment outputs 1 iff the adversary makes the correct guess. My doubts are as follows. Consider a sequence of probabilistic polynomial-time adversaries $\{A_i\}_{i\geq1}$ whose advantage in the indistinguishability experiment is bounded by the following sequence of negligible functions: $\Pr[PrivK_{A_i,\Pi}=1] - \frac{1}{2} \leq negl_i(n) = \frac{1}{(1+1/i)^n}$ Clearly it is necessary for the above conditions to hold for an indistinguishable encryption. But is it a correct model/condition for real-world applications? For example, in practice we typically choose a sufficiently large n and set up some encryption scheme. However, there is always some adversary $A_i$ whose bound $negl_i(n)$ is close to one, so it could win the experiment with probability close to one. So what's wrong?
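To make the concern concrete, here is a numerical sketch of the family $negl_i(n) = (1+1/i)^{-n}$ from the question: each member is negligible in $n$, yet for any fixed $n$ the bounds approach 1 as $i$ grows, which is exactly the tension the question is asking about:

```python
def bound(i, n):
    """The bound negl_i(n) = 1 / (1 + 1/i)^n from the question."""
    return 1.0 / (1.0 + 1.0 / i) ** n

# For a fixed adversary (fixed i), the bound vanishes rapidly as n grows:
print(bound(2, 128))      # ~3e-23
# For a fixed security parameter n, the bound tends to 1 as i grows:
print(bound(10**6, 128))  # ~0.99987
```

This is why the asymptotic definition quantifies over each (fixed) adversary separately, as a function of n, rather than over all adversaries simultaneously at a fixed n.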
Wenhua Gao Articles written in Proceedings – Mathematical Sciences Volume 124 Issue 2 May 2014 pp 193-203 Let $L=-\Delta +V$ be a Schrödinger operator, where $\Delta$ is the Laplacian on $\mathbb{R}^n$, while the nonnegative potential $V$ belongs to the reverse Hölder class. In this paper, we will show that the Marcinkiewicz integral associated with the Schrödinger operator is bounded on $BMO_L$, and from $H^1_L(\mathbb{R}^n)$ to $L^1(\mathbb{R}^n)$. Volume 129 Issue 5 November 2019 Article ID 0074 Research Article Let $L=-\Delta+V$ be a Schrödinger operator, where $\Delta$ is the Laplacian operator on $\mathbb{R}^{d}$, while the nonnegative potential $V$ belongs to the reverse Hölder class $B_{q}(q\geq1)$. In this paper, we will show that Marcinkiewicz integrals associated with the Schrödinger operator are bounded from ${\rm BMO}_{L}$ to ${\rm BLO}_{L}$, when $V\in B_{d}$.
While reading through John D. Anderson Jr.'s derivation of minimum induced drag, I thought of a cool application of the calculus of variations in one of the equations to deduce the required condition. The equation that determines the downwash at a point is: $$w(y_0) = -\frac{1}{4\pi }\int^{b/2}_{-b/2} \frac{(\mathrm{d}\Gamma/\mathrm{d}y)}{y_0 - y}\mathrm{d}y = \int^{b/2}_{-b/2} \mathcal{L}(\Gamma,\Gamma',y)\;\mathrm{d}y$$ This effectively implies that the downwash can be expressed as a functional of $\Gamma$, i.e. $w\left[\Gamma(y)\right]$, and one can find the functional derivative to find the extremal point. There also exists a constraint on this system: the total lift across the span must be constant: $$ L = \rho_{\infty} V_{\infty}\int^{b/2}_{-b/2} \Gamma(y)\;\mathrm{d}y = \int^{b/2}_{-b/2} \mathcal{G}(\Gamma,\Gamma',y)\;\mathrm{d}y$$ The Euler-Lagrange equations thus take the following form: $$ \frac{\partial{\mathcal{L}}}{\partial{\Gamma}} - \frac{\mathrm{d}}{\mathrm{d}y}\left(\frac{\partial{\mathcal{L}}}{\partial{\Gamma'}}\right) + \lambda\left[\frac{\partial{\mathcal{G}}}{\partial{\Gamma}} - \frac{\mathrm{d}}{\mathrm{d}y}\left(\frac{\partial{\mathcal{G}}}{\partial{\Gamma'}}\right)\right]= 0 $$ Substituting the expressions: $$ -\frac{1}{4\pi (y_0 - y)^2} + \rho_{\infty}V_{\infty}\lambda = 0$$ This doesn't contain any useful information about the downwash. Let's try something else.
Trying to minimise the induced drag formula directly as given by Anderson: $$ C_{D,i} = \frac{2}{V_{\infty}S}\int^{b/2}_{-b/2} \Gamma(x)\alpha_{i}(x)\;\mathrm{d}x = \frac{1}{2\pi V_{\infty}^2 S}\int^{b/2}_{-b/2}\int^{b/2}_{-b/2}\frac{\Gamma(x)\Gamma'(y)}{x - y}\;\mathrm{d}y\;\mathrm{d}x $$ Getting rid of the constants and performing a variation on the coefficient of induced drag, we get: $$ \delta C_{D,i} = \int^{b/2}_{-b/2}\int^{b/2}_{-b/2}\left(\delta\Gamma(x)\frac{\Gamma'(y)}{x - y} + \delta\Gamma'(y)\frac{\Gamma(x)}{x - y}\right) \;\mathrm{d}y\;\mathrm{d}x $$ Performing integration by parts on the second expression, keeping in mind that the boundary term vanishes because the circulation at the endpoints (the boundary conditions of this problem) is zero: $$ = \int^{b/2}_{-b/2}\int^{b/2}_{-b/2}\delta\Gamma(x)\frac{\Gamma'(y)}{x - y}\;\mathrm{d}y\;\mathrm{d}x - \int^{b/2}_{-b/2}\int^{b/2}_{-b/2}\delta\Gamma(y)\cdot\frac{\mathrm{d}}{\mathrm{d}y}\left(\frac{\Gamma(x)}{x-y}\right)\;\mathrm{d}y\;\mathrm{d}x$$ A little rearranging provides the more useful form: $$ = \int^{b/2}_{-b/2}\int^{b/2}_{-b/2}\delta\Gamma(x)\frac{\Gamma'(y)}{x - y}\;\mathrm{d}y\;\mathrm{d}x + \int^{b/2}_{-b/2}\delta\Gamma(y)\cdot\frac{\mathrm{d}}{\mathrm{d}y}\int^{b/2}_{-b/2}\frac{\Gamma(x)}{y-x}\;\mathrm{d}x\;\mathrm{d}y$$ A change of variables $y-x = q$ is required to evaluate the last integral: $$ \frac{\mathrm{d}}{\mathrm{d}y}\int^{b/2}_{-b/2}\frac{\Gamma(x)}{y-x}\;\mathrm{d}x = \frac{\mathrm{d}}{\mathrm{d}y}\int^{y-b/2}_{y+b/2}\frac{\Gamma(y-q)}{q}\;\mathrm{d}q$$ Feynman's favourite trick, differentiating under the integral sign: $$ \require{cancel} \frac{\mathrm{d}}{\mathrm{d}y}\int^{y-b/2}_{y+b/2}\frac{\Gamma(y-q)}{q}\;\mathrm{d}q = \cancel{\frac{\Gamma(b/2)}{y-b/2}} - \cancel{\frac{\Gamma(-b/2)}{y+b/2}} + \int^{y-b/2}_{y+b/2}\frac{\partial}{\partial y}\frac{\Gamma(y-q)}{q}\;\mathrm{d}q $$ Mapping the variables back: $$ \int^{y-b/2}_{y+b/2}\frac{\partial}{\partial y}\frac{\Gamma(y-q)}{q}\;\mathrm{d}q = \int^{b/2}_{-b/2}\frac{\Gamma'(x)}{y-x}\;\mathrm{d}x$$ Substituting this into the original expression: $$ = \int^{b/2}_{-b/2}\int^{b/2}_{-b/2}\delta\Gamma(x)\frac{\Gamma'(y)}{x - y}\;\mathrm{d}y\;\mathrm{d}x + \int^{b/2}_{-b/2}\int^{b/2}_{-b/2}\delta\Gamma(y)\frac{\Gamma'(x)}{y - x}\;\mathrm{d}x\;\mathrm{d}y $$ Switching the variables of integration in the second expression, we get: $$\delta C_{D,i} = 2\int^{b/2}_{-b/2}\int^{b/2}_{-b/2}\delta\Gamma(x)\frac{\Gamma'(y)}{x - y}\;\mathrm{d}y\;\mathrm{d}x $$ Reintroducing the constants and combining this with the constraint, $\delta C_{D,i} - \lambda\delta L = 0 $ becomes: $$ \int^{b/2}_{-b/2}\delta\Gamma(x)\left[\int^{b/2}_{-b/2}\frac{2\Gamma'(y)}{x - y}\;\mathrm{d}y - 2\pi\lambda\right]\mathrm{d}x = 0 $$ Since $\delta\Gamma(x)$ is an arbitrary variation, the bracket must vanish, which results in: $$ \int^{b/2}_{-b/2}\frac{\Gamma'(y)}{x - y}\;\mathrm{d}y\ = \pi\lambda $$ The first term is the integral from the downwash expression at the beginning of the post, indicating that the downwash across the lifting line for minimum induced drag is constant: $$ w = -\frac{\lambda}{4} = w_0 $$ The same result as seen in Anderson, more rigorously!
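As a cross-check against Anderson's own solution (this closing step is quoted from the standard lifting-line result, not derived above): the constant-downwash condition is satisfied by the elliptic circulation distribution $$\Gamma(y) = \Gamma_{0}\sqrt{1-\left(\frac{2y}{b}\right)^{2}}, \qquad w = -\frac{\Gamma_{0}}{2b},$$ so the variational extremum found here is the familiar elliptic lift distribution of minimum induced drag.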
Let me set up the notation I am using. $(abc,de)$ denotes the standard Young tableau whose first row is $abc$ and whose second row is $de$. Each Young tableau corresponds to a Young symmetriser, and I use the convention that for the Young tableau $\lambda$ the Young symmetriser is given by $P_{\lambda} = N\,a\cdot b$, where $a = \sum_{\sigma \in \text{row group}} \sigma$ and $b = \sum_{\sigma \in \text{column group}} (\operatorname{sgn} \sigma)\, \sigma$. $N$ is a normaliser chosen so that $P_{\lambda}$ is idempotent ($P_{\lambda}^2 = P_{\lambda}$). The symmetric group acts on an abstract tensor by shuffling its indices. I know that the following identity is true: $\mathbb{1} = \sum_i P_{\lambda_i}$ Here the $\lambda_i$ are all the standard Young tableaux on a given number of boxes. Applying this identity to any abstract tensor gives the same tensor back, and if this abstract tensor is completely arbitrary, i.e. has no symmetries in its indices, this decomposition yields components living in each irrep. OK. My question is about tensor products, where we use the Littlewood-Richardson rule to find one of the basis tensors representing each irreducible space. However, I am interested in more than just treating the tensor product as a vector space and finding the disjoint union of irreducible subspaces that each Young tableau corresponds to (where acting with the Young symmetriser yields one of the basis tensors). I would like to find the components of the given tensor in each irreducible space. Let me give an example. Suppose I have a rank-3 tensor that is antisymmetric in its first 2 indices. I can treat this as the tensor product $(a,b) \otimes c$ (here $ab$ is a column as per my notation). Applying the Littlewood-Richardson rule I get $(ac,b)$ and $(a,b,c)$. Using the normalised Young symmetriser I defined above, I act on the tensor $T^{ab|c}$ (here the bar is used to group antisymmetric indices).
I get: $T_1^{abc}= P_{ac,b} T^{ab|c} = \frac{2}{3}(T^{ab|c} + T^{cb|a})$ (1) $T_2^{abc}=P_{a,b,c} T^{ab|c} = \frac{1}{3}(T^{ab|c} + T^{bc|a} + T^{ca|b})$ (2) Now $T_1 + T_2 \neq T$, as is clear. However, when I project each component using the Young symmetriser associated with the factors of the tensor product, in this case $P_{a,b}$, I get $T$ back. That is, $T = P_{a,b}(T_1 + T_2)$. Explicitly, $P_{a,b} T_1^{abc} = \frac{2}{3} T^{ab|c} + \frac{1}{3}(T^{cb|a} - T^{ca|b})$, therefore $P_{a,b}(T_1^{abc} + T_2^{abc}) = \frac{2}{3} T^{ab|c} + \frac{1}{3}(T^{cb|a} - T^{ca|b})+ \frac{1}{3}(T^{ab|c} + T^{bc|a} + T^{ca|b}) = T^{ab|c}$. Is this relationship true in general? That is, suppose I have a tensor product which I write as $T^{\lambda_1|\lambda_2}$, where $\lambda_1,\lambda_2$ denote the factors belonging to irreps corresponding to the standard tableaux. Is the following relationship true, and how do I prove it? $T^{\lambda_1|\lambda_2} = P_{\lambda_1}P_{\lambda_2}\times$(irreducible components we get by using the Young symmetrisers generated by the LR rule and acting on $T^{\lambda_1|\lambda_2}$). In short, I want to know whether the component of this tensor product living in the irreducible rep characterised by the tableau $\nu$ generated by the LR rule is $\mathbf{P_{\lambda_1}P_{\lambda_2} T_{\nu}}$?
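The rank-3 example can be checked numerically. A sketch in NumPy (my convention: permutations act by transposing array axes; `P_acb`, `P_abc`, `P_ab` are the normalised symmetrisers $P_{(ac,b)}$, $P_{(a,b,c)}$, $P_{(a,b)}$ from the question):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d, d))
T = A - A.transpose(1, 0, 2)        # antisymmetric in the first two slots

def swap(X, i, j):
    p = [0, 1, 2]
    p[i], p[j] = p[j], p[i]
    return X.transpose(p)

def P_acb(X):
    """Young symmetriser of (ac,b): antisymmetrise the column {a,b},
    then symmetrise the row {a,c}; normaliser N = 1/3."""
    Y = X - swap(X, 0, 1)
    return (Y + swap(Y, 0, 2)) / 3

def P_abc(X):
    """Young symmetriser of the single column (a,b,c): the total
    antisymmetriser, with normaliser N = 1/6."""
    total = np.zeros_like(X)
    for p in permutations(range(3)):
        sgn = (-1) ** sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
        total = total + sgn * X.transpose(p)
    return total / 6

def P_ab(X):
    """Symmetriser of the column (a,b): antisymmetrise the first two slots, N = 1/2."""
    return (X - swap(X, 0, 1)) / 2

T1, T2 = P_acb(T), P_abc(T)
print(np.allclose(T1 + T2, T))        # False: the pieces alone do not sum to T
print(np.allclose(P_ab(T1 + T2), T))  # True: projecting back recovers T
```

This confirms the explicit computation in the question for the $(a,b)\otimes c$ case; it does not, of course, prove the general claim.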
Computer networks are something most of us take for granted; speed, reliability, and availability are expectations. In fact, network problems tend to make us very angry, whether it's dropped packets (yielding jittery Skype calls), congestion (that huge game download eating all the bandwidth), or simply a network outage. There's an awful lot going on underneath the hood of all devices on a network to load that webpage or download that song for you. Much of the reliability in networking depends on maintaining good Quality of Service (QoS) policies, which involve buffering and queue management. Networks aren't unlike the roads we travel on; traffic isn't steady, and congestion at interfaces (or intersections on the road) happens. How do we handle that? We'll explore some basic networking principles in order to uncover some interesting mathematics governing buffering and queue management at congested network interfaces. Update: Mr. Fred Baker reached out to me with a few corrections regarding my interchangeable use of queue and buffer. I've inserted his comments into the "Buffer" section. Incidentally, Mr. Baker was the inventor of the WRED (Weighted Random Early Detection) algorithm mentioned as an extension below. Networking Terms First, we need to get a few definitions out of the way. One of the main references I'm using is a book called Computer Networking Problems and Solutions: An Innovative Approach to Building Modern, Resilient Networks, by Russ White and Ethan Banks 1. TCP (Transmission Control Protocol) TCP is the protocol that controls the flow of information across a network connection between two hosts. It runs on top of IP (Internet Protocol), which is responsible for the actual transport of data and multiplexing (the ability for multiple entities to communicate over the shared network) 2 . TCP and IP work together to get packets of data from one place to another.
For example, TCP/IP worked together to bring you this page, from the server where it is hosted to the computer, phone, or tablet you're reading on. How fast should packets be transmitted? If packets are transmitted too fast, the receiver may not be able to keep up, and packets may get dropped and transmission data lost. Packet loss manifests as jittery video in a video call, or perhaps a temporary loss of audio in a voice call. If packets are transmitted too slowly, there is a lag, or the transmission just isn't as efficient as it could be. The goal is to strive for the fastest transmission we can have without packet loss. TCP uses a windowing algorithm with a changing window size to constantly react to changing network conditions. When packets are all being transmitted successfully in sequence, the window widens and the transmitter is allowed to send a larger amount of data before the receiver is required to acknowledge receipt. The window increases until a packet is lost, at which point the window size sharply decreases. Then we start the cycle again, slowly ramping up the window size (and hence the amount of data transmitted before acknowledgement) until we experience another packet loss. Buffers How many times have you had to wait for something to buffer when watching Netflix? A buffer is created at network interfaces to handle congestion. Think about an on-ramp to a freeway with the traffic lights on at rush hour, controlling the rate at which cars enter a crowded interstate. If we let all the cars on at the rate they want to go during rush hour, the interstate traffic would be even worse. So we buffer them at on-ramps, controlling the flow at the congested interface. What happens when a buffer gets full? In basic queuing theory, we typically get around this by assuming a queue (buffer in this case) has infinite capacity. For the most part, we know that's not true. 3 If the buffer is full, the most recent packets into the buffer will get turned away, or dropped.
There's just no room for them. (Network engineers call this tail drop.) (Editor's note: Here I'll insert the comments Mr. Fred Baker sent to me regarding my mistaken conflation of buffer and queue. I chose not to change the article itself, but rather insert his comments of correction, for the sake of full transparency.) From Mr. Baker: A buffer is a container, much like a prescription bottle is a container. If a buffer "forms" when data arrives, the corollary would be a prescription bottle coming into being when a pharmacist attempted to put pills into it. It doesn't work that way. A buffer is a section of memory in which messages are stored, and it has a maximum size. When the number of messages or number of bytes exceeds the maximum, one can't put more messages into it. The organization of the buffer is usually some form of queuing system, as simple as a single FIFO queue (common) or as complex as a hierarchy of queues with different methodologies. Each queue has some service discipline, which may be "work conserving", meaning that it rattles data through as quickly as it can, or may not be "work conserving", meaning that it passes data through at some slower rate. A well-known example of a non-work-conserving system is called Virtual Clock, published by Lixia Zhang in SIGCOMM 1990 (IIRC). A queue has a minimum depth (zero), while the buffer containing it has a maximum depth, and individual queues in the buffer may have maximum depths smaller than the buffer's depth or service disciplines (such as RED) that moderate queue depth in some other way (RED and WRED interact with TCP, moderating the TCP window and as a result the amount of data the session keeps in flight at any given time). The Differentiated Services Architecture (RFCs 2474 and 2475) looks at quite a few other aspects of service and queue management as well. Tail Drop and TCP Synchronization Not all packets are created equal.
Some packets, when delayed in a buffer, lose their purpose for existence. VoIP calls are the perfect example here. VoIP requires packets to be delivered, and delivered on time. A delayed packet is useless to the end user; the conversation has moved on. This means that a stale packet at the front of the queue can be useless, while the tail-dropped packets (the most recent bits of your Skype conversation) are the ones actually needed. Passive Queuing Messes up TCP Simply letting a buffer get full and drop the tail packets is passive queue management. The problem with this goes back to how we explained TCP's functionality above. If a packet gets dropped, TCP shrinks the window size and decreases the amount of data allowed per transmission before the receiver has to acknowledge receipt of packets, effectively throttling traffic. Throttle traffic enough, and we can empty the buffer so packets flow normally without congestion. But then as TCP ramps the window size up again, our buffer gets full, resulting in tail drop, and we start that whole cycle again. The end result is a bandwidth oscillation wherein the poor network goes from highly congested to empty to congested again, because all the TCP traffic is synchronized. Random Early Detection "Get to the math please!" So how can we avoid this TCP synchronization phenomenon? We can take what might seem to be a counterintuitive approach and never let the buffer get full. How? We drop packets on purpose, with some probability that depends on the queue length inside the buffer. The Random Early Detection (RED) algorithm provides a way to randomly select packets to drop in order to prevent a full buffer and the resulting tail drop. Dropping random packets also desynchronizes different TCP streams, since some packet sequences will have their windows decreased upon packet drop, while those streams whose packets are not selected for drop maintain or increase their window size.
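The window dynamics that RED exploits (grow steadily on success, cut sharply on loss) can be sketched in a few lines. This is a toy additive-increase/multiplicative-decrease model, not a faithful TCP implementation; the names, the loop, and the `capacity` value are all made up for illustration.

```python
# Toy sketch of TCP-style congestion window dynamics (AIMD):
# grow the window while transmissions succeed, halve it on a loss.

def update_window(cwnd, loss, increase=1, decrease_factor=0.5, floor=1):
    """Return the next congestion window size (in segments)."""
    if loss:
        # Multiplicative decrease: back off sharply after a lost packet.
        return max(floor, int(cwnd * decrease_factor))
    # Additive increase: keep probing for more bandwidth while all is well.
    return cwnd + increase

# Simulate: a loss occurs whenever the window exceeds a (hidden) link capacity.
capacity = 20
cwnd = 1
history = []
for _ in range(60):
    loss = cwnd > capacity
    cwnd = update_window(cwnd, loss)
    history.append(cwnd)

print(max(history), min(history[10:]))
```

Plotting `history` gives the familiar sawtooth: the window repeatedly climbs just past the link capacity, suffers a loss, and is cut in half, which is exactly the oscillation that synchronizes across flows under passive tail drop.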
RED [5] calculates a probability for marking a packet for drop based on the current average queue length, computed as an exponentially weighted moving average. (1) Calculate the new average queue size Let \bar{Q}_{n} be the average queue size at discrete time n, and let Q be the current queue length. Then \bar{Q}_{n} = (1-w)\bar{Q}_{n-1} + wQ Here, w is a weight we get to choose to decide how much weight we want to give to the current queue length, typically w\ll 1. If w is chosen too small, then RED will react too slowly to current congestion. If w is too large, then RED is sensitive to noise. Recommendations for choices of w vary from 0.001 to 0.07 [3,4]. (2) Set minimum and maximum thresholds Next, we set minimum and maximum thresholds for the tolerance of \bar{Q}_{n}. We'll call these T_{\min} and T_{\max}. These thresholds will depend on network capabilities. (3) Calculate the probability of marking an incoming packet for drop When each packet comes in, we need a way to calculate the probability of dropping it or letting it join the buffer. The original RED first sets a maximum possible drop probability we'll call p_{\max}, and then calculates the drop probability p_{d} in two stages: first, computing an intermediate value p_{a} that grows linearly with the average queue length \bar{Q}_{n}, p_{a} = p_{\max}\frac{\bar{Q}_{n}-T_{\min}}{T_{\max}-T_{\min}} and second, computing the final probability based on the number of packets since the last one that was actually marked (we'll call this c) and p_{a} above: p_{d} = \frac{p_{a}}{1-c\cdot p_{a}} So why do that second stage? Since we're ultimately choosing randomly whether to mark a packet for drop or not, it's possible we don't mark several packets in a row even if p_{a} is high. The idea of the second stage is to increase the probability of actually marking a packet as the number not marked increases. The purpose of this is to ensure that our interface doesn't wait too long before marking a packet.
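Steps (1)-(3) can be sketched as follows. The threshold, weight, and probability values below are illustrative choices for the sketch, not recommendations from the RED papers.

```python
# Illustrative sketch of the RED computations described above:
# EWMA average queue length, then the two-stage drop probability.
import random

W = 0.002               # EWMA weight w
T_MIN, T_MAX = 5, 15    # queue-length thresholds
P_MAX = 0.1             # maximum drop probability p_max

def update_average(avg_q, current_q, w=W):
    """Step (1): exponentially weighted moving average of queue length."""
    return (1 - w) * avg_q + w * current_q

def drop_probability(avg_q, count):
    """Step (3): probability of marking the incoming packet for drop.

    `count` is the number of packets accepted since the last marked one.
    """
    if avg_q < T_MIN:
        return 0.0          # no congestion: never drop
    if avg_q >= T_MAX:
        return 1.0          # severe congestion: always drop
    p_a = P_MAX * (avg_q - T_MIN) / (T_MAX - T_MIN)
    if count * p_a >= 1:
        return 1.0          # un-marked run so long that a mark is forced
    return p_a / (1 - count * p_a)

def should_drop(avg_q, count, rng=random):
    return rng.random() < drop_probability(avg_q, count)

print(drop_probability(4, 0))    # below T_min
print(drop_probability(10, 0))   # halfway between thresholds
print(drop_probability(10, 5))   # same queue, longer un-marked run
print(drop_probability(20, 0))   # above T_max
```

The guard on `count * p_a` mirrors what happens in RED itself: as the un-marked run grows, p_d climbs toward 1, a packet is eventually marked, and the counter resets.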
Putting it all together We only invoke Step (3), calculating the probability of marking a packet, if our average queue length \bar{Q}_{n} is inside our boundaries T_{\min} and T_{\max}. If \bar{Q}_{n} < T_{\min}, then there's no congestion, so we don't need to drop anything at all. The traffic lights at the interstate on-ramp aren't turned on in the middle of the night when traffic is light. Now, if \bar{Q}_{n} > T_{\max}, then we're really congested, and we mark the incoming packet. Period. This means we have to clear out the buffer ASAP. Conclusion and Future Stuff Other Variants RED has another variation, WRED (Weighted Random Early Detection), which takes into account the class of the arriving packet. Some packets really are more important than others. When you're in the middle of a VoIP call, those packets are way more important than perhaps some email coming in, because a slight delay of a few milliseconds in email delivery isn't noticed, whereas a few milliseconds of packet delay causes jitter in your video. WRED deals with classed traffic but is basically the same as the RED explored here. Things we can change about RED Notice above that the function used to calculate p_{a} was linear in the average queue length \bar{Q}_{n}. Why linear? Well, for one, when RED was first created in 1993, it was easier. Absent further information, simple is best, and linear is simple. There are other functions to calculate the packet drop probability p_{d} that are nonlinear. We'll explore one of those papers next [2], which takes us into the notion of orthogonal polynomials. We can also discuss the fact that the average queue length was computed by a weighted moving average. Other works out there have looked at the impact of a weighted moving average [1] and other drop functions on the performance of the RED algorithm [2]. Where else can we look? Analysis of queues is a huge field.
Since everything about network traffic (and general queues as well) is based on random variables, the mathematical study of queuing theory is a rich environment. We can view traffic as discrete random processes, like a birth-death process. We can assume the process is stationary, or we can look into studying traffic the way we study fluid flow, typically using differential equations. As networks get more and more complicated, we need this more sophisticated (and hopefully elegant) mathematics to help us understand traffic flow. We never really discussed in this article how to set the thresholds T_{\min} and T_{\max}. Those require a good model and understanding of the particular type of traffic flow in a specific network. Good understanding of queuing behavior yields good threshold design, which yields good queue management schemes, which ultimately yields a better user experience. Not just for networks, but any kind of traffic. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
[1] Domanska, J., Domanski, A., Augustyn, D.R.: The Impact of the Modified Weighted Moving Average on the Performance of the RED Mechanism. CN 2011. CCIS, vol. 160, pp. 27-44
[2] Augustyn, D.R., Domanski, A., Domanska, J.: Active Queue Management with Non-Linear Packet Dropping Function. 6th International Conference on Performance Modelling and Evaluation of Heterogeneous Networks (2010)
[3] Floyd, S.: Discussions of Setting Parameters, http://www.icir.org/floyd/RED-parameters.txt (1997)
[4] Zheng, B., Atiquzzaman, M.: A Framework to Determine the Optimal Weight Parameter of RED in Next-Generation Internet Routers. The University of Dayton, Department of Electrical and Computer Engineering, Tech. Rep., 2000
[5] Floyd, S., Jacobson, V.: Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Transactions on Networking 1(4) (1993)
Footnotes
1. I give a good recommendation for this book. Currently I'm about 300 pages in.
The book is aimed at the newbie to networking. I wouldn't call it particularly deep or rigorous, but honestly, that's not its point. There are so many resources on computer networking that it's impossible to know where to start learning just the basics. This book does an excellent job of familiarizing the reader with networking terms, protocols, general issues, and solutions, and integrates it all together to give a great overview of the field.
2. The excellent analogy for multiplexing given in CNPS is air. When multiple people in a room are having simultaneous conversations, they share a medium (air) to do it. Communication gets garbled if everyone shouts across the room, and there's no way to organize the flow of conversations so thoughts get where they're supposed to go.
3. There are mathematical ways to study finite-capacity queues. We'll get to these.
4. Remember, all of these things we're calculating are probabilities. Even a high probability doesn't guarantee that an event will occur.
Summary: Proteins are one of the major macromolecules present in all biological organisms. They take part in virtually every process within biological systems, such as catalyzing biochemical reactions, transporting and storing chemical compounds, signaling and translating the information from other proteins, maintaining the structures of biological components (e.g. cells, tissues), converting chemical energy into mechanical energy causing muscular movement, and generating immune responses to harmful foreign bodies within the organism. The function that a protein assumes depends on its structure. Therefore, protein structure determination is of utmost importance for drug design and protein design studies. Envelope glycoprotein GP120. GP120 is embedded in the surface of the HIV envelope. It attaches to the CD4 receptors of T helper cells, a type of white blood cell, facilitating entry of the HIV virus into the host cell. Fusion inhibitor drugs prevent GP120 from attaching itself to the CD4 receptors of the T helper cells. X-Ray Crystallography and Nuclear Magnetic Resonance (NMR) spectroscopy are the major experimental techniques for 3D protein structure determination. X-Ray Crystallography has been the more commonly used technique, and obtaining protein structure information with it is a routine, highly automated procedure. Yet, it requires crystallization of the protein, which can take months. On the other hand, NMR Spectroscopy allows the study of a protein under nearly physiological conditions. However, it is hard to automate NMR Spectroscopy experiments. An important bottleneck in NMR protein structure determination is the assignment of NMR spectrum peaks to the underlying atoms. This bottleneck can be represented as an assignment problem with the help of a homologous protein (a protein with a structure similar to the protein under experimentation), which is called the Structure-Based Assignment (SBA) problem.
1D NMR spectrum. The peaks are easily observable in a 1D NMR spectrum. In this case study, we present two formulations from [1] and [2] for this problem and an interactive demo that solves the NMR SBA problem with the preferred formulation and solver. The interactive demo uses the GAMS model representations for the formulations presented in the following sections. The NMR SBA problem in [1], [2] and [3] is constructed by the Nuclear Vector Replacement (NVR) framework. The goal is to find a mapping between the set of peaks and the set of amino acids that minimizes the total mapping cost. Each peak-amino acid matching has an assignment probability, which is then converted into an assignment cost. The available assignments are restricted by the Nuclear Overhauser Effect constraints, which make the problem considerably harder to solve. In the NMR SBA problem, each peak pair has a binary relation called an NOE relation, i.e. for any given two peaks, they either have an NOE relation or not. The amino acids also have a similar binary relation, i.e. for any given two amino acids, the distance between the (amide) protons of the amino acids is either less than a threshold value (NTH) or not. The NOE constraints imply that for any given pair of peak - amino acid assignments (e.g. \(p_i \rightarrow a_i\) and \(p_j \rightarrow a_j\)), if \(p_i \) and \(p_j \) have an NOE relation, then the distance between the protons of the amino acids that are assigned to those peaks, \(a_i\) and \(a_j\), must be less than the threshold value. In the NOE constraints illustration figure, peaks 1 and 2 have an edge in between, implying they have an NOE relation, and they are mapped to amino acids 1 and 2, respectively. Amino acids 1 and 2 also have an edge in between, implying their distance from each other is under the threshold value. Hence, assigning peaks 1 and 2 to amino acids 1 and 2, respectively, is feasible. On the other hand, peaks 1 and 3 also have an NOE relation.
However, the amino acids that are assigned to them, amino acids 1 and 4, do not have an edge in between, which means the distance between their amide protons is more than the threshold value. Thus, the assignments of peak 1 to amino acid 1 and peak 3 to amino acid 4 cause infeasibility. NOE constraints illustration. If there is an edge between two peaks, the amino acids that are assigned to them should have an edge between them. Binary Integer Programming Formulation The NMR SBA problem can be formulated as a Binary Integer Program (BIP) as follows: Parameters \(P\) = set of peaks \(A\) = set of amino acids \(NOE(i)\) = set of peaks that have an NOE relation with peak \(i\), \(\forall i \in P\) \(N\) = number of peaks to be assigned \(NTH\) = distance threshold for an NOE relation \(c_{ij}\) = cost of assigning peak \(i\) to amino acid \(j\), \(\forall i \in P\), \(\forall j \in A\) \(d_{kl}\) = distance between amide protons of amino acids \(k\) and \(l\), \(\forall k, l \in A\) \(b_{kl} = \left\{ \begin{array}{ll} 1 & \mbox{if \(d_{kl}\) < \(NTH\), \(\forall k, l \in A\)} \\ 0 & \mbox{otherwise} \end{array} \right. \) Decision Variables \(x_{ij} = \left\{ \begin{array}{ll} 1 & \mbox{if peak \(i\) is assigned to amino acid \(j\), \(\forall i \in P\), \(\forall j \in A\)} \\ 0 & \mbox{otherwise} \end{array} \right. \) Model Minimize \( \sum_{i \in P} \sum_{j \in A} c_{ij} x_{ij} \) subject to: \( \sum_{i \in P} x_{ij} \leq 1, \quad \forall j \in A\) \( \sum_{j \in A} x_{ij} \leq 1, \quad \forall i \in P\) \( \sum_{i \in P} \sum_{j \in A} x_{ij} = N\) \( x_{ij}+x_{kl} \leq b_{jl}+1, \quad \forall i \in P, \forall k \in NOE(i), \forall j, l \in A\) \( x_{ij} \in \mathcal{B}, \quad \forall i \in P, \forall j \in A\) The first two constraints ensure that each amino acid is assigned to at most one NMR peak and, similarly, each peak is assigned to at most one amino acid.
The third constraint determines the number of peak-amino acid assignments. Although \(N\) is usually equal to the number of peaks, in rare cases, mapping all of the peaks could be infeasible. In such cases, \(N\) allows us to obtain a partial solution. The next set of constraints are the NOE constraints. Finally, the last constraints restrict the variables to binary values. Mixed Integer Nonlinear Programming Formulation We can also model this problem as a Mixed Integer Nonlinear Program (MINLP). In this formulation, instead of having a large number of constraints for NOE violations, we add a penalty term to the objective function for each violation. We choose the penalty term \(p\) as the maximum finite assignment cost, so that any solution with an NOE violation will be less favorable. Parameters \(BD(j)=\{l \in A \ | \ d_{jl} \geq NTH \} \) Decision Variables \(x_{ij} = \left\{ \begin{array}{ll} 1 & \mbox{if peak \(i\) is assigned to amino acid \(j\), \(\forall i \in P\), \(\forall j \in A\)} \\ 0 & \mbox{otherwise} \end{array} \right. \) Model Minimize \( \sum_{i \in P} \sum_{j \in A} c_{ij} x_{ij}+ \sum_{i \in P} \sum_{j \in A} \sum_{k \in NOE(i)} \sum_{l \in BD(j)} px_{ij}x_{kl} \) subject to: \( \sum_{i \in P}x_{ij} \leq 1, \quad \forall j \in A \) \( \sum_{j \in A} x_{ij} \leq 1, \quad \forall i \in P \) \( \sum_{i \in P}\sum_{j \in A} x_{ij} = N \) \( x_{ij} \in \mathcal{B}, \quad \forall i \in P, \forall j \in A\) In the model above, the objective function minimizes the total score associated with the assignment of NMR peaks to amino acids plus the additional score (penalty) resulting from NOE relation violations. The first and second sets of constraints guarantee that each amino acid is assigned to at most one NMR peak and each NMR peak is assigned to at most one amino acid.
Similarly, we determine the number of peak-amino acid assignments by the third constraint, and finally, the last constraints make sure that all the variables are binary.
[1] Apaydın, M.S., et al. 2010. NVR-BIP: Nuclear vector replacement using binary integer programming for NMR structure-based assignments. The Computer Journal.
[2] Cavuslar, G., Catay, B., Apaydin, M.S. 2011. A Tabu Search Approach for the NMR Protein Structure-Based Assignment Problem. Working Paper/Technical Report, Sabanci University. ID: SU_FENS_2011/0001
[3] Apaydin, M.S., Conitzer, V., Donald, B.R. 2008. Structure-based protein NMR assignments using native structural ensembles. Journal of Biomolecular NMR, 40(4):263-276.
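To make the NOE constraints of the two formulations above concrete, here is a brute-force sketch on a tiny fabricated instance (3 peaks, 4 amino acids). The costs, NOE graph, and proximity matrix below are made up for illustration; they are not data from [1] or [2].

```python
# Brute-force search over injective peak -> amino acid assignments,
# keeping only those that satisfy the NOE constraints, then taking
# the cheapest. Purely illustrative: real instances need the BIP/MINLP.
from itertools import permutations

P = [0, 1, 2]                        # peaks
A = [0, 1, 2, 3]                     # amino acids
NOE = {0: {1, 2}, 1: {0}, 2: {0}}    # symmetric NOE relations on peaks
# b[k][l] = 1 if the amide protons of amino acids k and l are within NTH
b = [[1, 1, 0, 0],
     [1, 1, 1, 0],
     [0, 1, 1, 1],
     [0, 0, 1, 1]]
# c[i][j] = cost of assigning peak i to amino acid j
c = [[1, 2, 9, 9],
     [2, 1, 9, 9],
     [9, 9, 1, 2]]

def feasible(assign):
    """assign[i] = amino acid given to peak i; check every NOE pair."""
    return all(b[assign[i]][assign[k]] == 1
               for i in P for k in NOE[i])

best = min((sum(c[i][a[i]] for i in P), a)
           for a in permutations(A, len(P)) if feasible(a))
print(best)
```

Note that the unconstrained cost minimum (peak i to amino acid i, total cost 3) is NOE-infeasible here, so the constrained optimum is a costlier assignment. Real instances are far too large for enumeration, which is why the integer-programming formulations and solvers are needed.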
First I have to say I asked this question on Physics SE but afterwards somebody advised me to ask it here. Do I have to remove it from SE? I'm trying to get the solution of the Cahn-Hilliard equation in 1D with a certain mass $C$. We have two components, and let's assume we have the relation $c_1+c_2=1$. Hence we take only the variable $c=c_1$. The total energy with the Lagrange parameter $\tilde{\mu}$ (which is a sort of non-local chemical potential) reads: $$ F[c(\mathbf{r})]=\int \{f(c(\mathbf{r}))+\frac{\epsilon^2}{2} (\nabla c)^2 \}d\Omega -\tilde{\mu}\int (c(\mathbf{r}) -C) d\Omega $$ In one dimension: $$ \frac{\delta F}{\delta c}=0\implies \frac{df}{dc}-\tilde{\mu}-\epsilon^2 \frac{d^2c}{dx^2}=0$$ Multiplying by $dc/dx$ and integrating leads to: $$\frac{\epsilon}{\sqrt{2}}\frac{dc}{\sqrt{f-\tilde{\mu}(c-C)}}=dx $$ Symmetry imposes $$c'(0)=0\implies f(c(0))-\tilde{\mu}(c(0)-C)=0 $$ At infinity, we also have $c'(\infty)=0 \; ;\;c(\infty)=-1$ (or $0$ depending on the potential you're using). This equation is solvable for the classical Cahn-Hilliard equation with $f-\tilde{\mu}(c-C)=(c^2-c_0^2)^2$. The classical way is to get $x(c)$ and then invert it. You find a $\tanh$ solution. But this solution does not respect the symmetry condition $c'(0)=0$ (granted, you can make it very, very close to $0$ by manually building a solution out of $\tanh$ functions... but I'm looking for an exact solution of the equation). This means it only gives the profile of an interface between two semi-infinite media. What I don't understand is how to get a profile respecting the symmetry condition, meaning one with a nucleus/aggregate of one phase inside the other phase: a phase of finite size (for example $c=1$) embedded in the other phase ($c=-1$).
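For reference, the quadrature mentioned above can be carried out explicitly for the quartic well; this is the standard kink computation, included only to make the obstruction concrete:

```latex
% With f - \tilde{\mu}(c - C) = (c^2 - c_0^2)^2 and |c| < c_0,
% the square root is c_0^2 - c^2, so
\frac{\epsilon}{\sqrt{2}} \int \frac{dc}{c_0^2 - c^2} = \int dx
\;\Longrightarrow\;
\frac{\epsilon}{\sqrt{2}\, c_0}\,\operatorname{artanh}\!\left(\frac{c}{c_0}\right) = x - x_0
\;\Longrightarrow\;
c(x) = c_0 \tanh\!\left(\frac{\sqrt{2}\, c_0\,(x - x_0)}{\epsilon}\right).
```

Here $c'(x) = \frac{\sqrt{2}\,c_0^2}{\epsilon}\,\mathrm{sech}^2\!\big(\sqrt{2}\,c_0(x-x_0)/\epsilon\big) > 0$ everywhere, so $c'$ vanishes only as $x \to \pm\infty$ and never at a finite center, which is exactly why this profile describes a single interface between semi-infinite media rather than a finite nucleus.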
I'm wondering whether my problem is overconstrained, since the equation $\frac{\epsilon}{\sqrt{2}}\frac{dc}{\sqrt{f-\tilde{\mu}(c-C)}}=dx $ admits only one new constant while there are 3 constraints: $c'(0)=c'(\pm \infty)=0$ and $\int_{\mathbb{R}}c \, dx=C$ (about this one I have a doubt, since $C$ enters the potential). Could you help, please? I'm also surprised I didn't find any literature about this problem. REMARK: I was wondering whether something was missing in the equations. But actually no, since the dynamical equation used in simulations is $\partial_t c = \nabla\cdot(M(c)\nabla(f'(c)-\tilde{\mu}-\epsilon^2\Delta c))$, so it's logical that the static picture is given by $f'(c)-\tilde{\mu}-\epsilon^2\Delta c=0$. However, it could be that the system is indeed overconstrained and there is no stable solution. Fortunately the $\tanh$ function provides a landscape that is "quasi-stable" (very, very slowly unstable) in the sense that beyond the width of the interface it's as if we had a semi-infinite domain, since we are very close to one, and that's why we use this model in simulations. What do you think about it? If this proposition were right, what could be a formalism with which we could build a solution for a finite domain?
For the full text of the paper, including all proofs and supplementary lemmata, click to download thesis-ch-2 Abstract Editor's note: This paper comprises the second chapter of the PhD dissertation by Rachel Traylor. Cha and Lee defined a mathematical notion of server performance by measuring efficiency \psi, defined as the long-run average number of jobs completed per unit time. The service time distribution heavily influences the shape of the server efficiency as a function of a constant arrival rate \lambda. Various classes of distributions are studied in order to find sufficient conditions for the existence of a single maximum. The existence of a maximum allows for simple binary control policies to handle traffic and optimize performance. Introduction, Motivation, and Background Cha and Lee [1] studied the reliability of a single server under a constant stress workload, and also defined a notion of server efficiency \psi for a given intensity \lambda(t) as the long-run average number of jobs completed per unit time, as a way to measure server performance. With the number of jobs completed denoted M, the efficiency is defined as \psi := \lim\limits_{t \to \infty}\frac{E[M(t)]}{t} Upon breakdown and rebooting, the server is assumed to be 'as good as new', in that performance of the server does not degrade during subsequent reboots. In addition, the model assumes the arrival process after reboot, denoted \{N^{*}(t), t \geq 0\}, is a nonhomogeneous Poisson process with the same intensity function \lambda(t) as before, and that \{N^{*}(t), t \geq 0\} is independent of the arrival process before reboot. In a practical setting, this model assumes no 'bottlenecking' of arrivals occurs in the queue during server downtime that would cause an initial flood to the rebooted server. In addition, the reboot time is assumed to follow a continuous distribution H(t) with expected value \nu.
This process is a renewal reward process, with reward \{R_{n}\} = \{M_{n}\}, the number of jobs completed. The length of a renewal cycle is Y_{n} + H_{n}, where Y_{n} is the length of time the server was operational, and H_{n} is the time to reboot after a server crash. Then, by [2],\psi = \frac{E[M]}{E[Y]+ \nu} where M is the number of jobs completed in a particular renewal cycle, \nu is the mean time to reboot of the server, and Y is the operational time in a particular renewal cycle. Then, using the definition of \psi, the following closed form of the efficiency of a server under all assumptions of Cha and Lee's model is derived. Theorem 1 (Server Efficiency under Cha/Lee). Suppose \{N(t), t \geq 0\} is a nonhomogeneous Poisson process with intensity \lambda(t)\geq 0. Then the efficiency is given by \begin{aligned}\psi&=\frac{1}{\int_{0}^{\infty}S_{Y}(t)dt + \nu}\int_{0}^{\infty}\left[\exp\left(-\int_{0}^{t}r_{0}(x)dx-\int_{0}^{t}\lambda(x)dx + a(t) + b(t)\right)\right.\\&\qquad\qquad\left.\times\left(r_{0}(t)a(t)+\eta a(t)b(t) \right)\right]dt\end{aligned}where a(t) = \int_{0}^{t}e^{-\eta v}g_{W}(v)m(t-v)dv, b(t) = \int_{0}^{t}e^{-\eta(t-r)}\bar{G}_{W}(t-r)\lambda(r)dr, \bar{G}_{W}(x) = 1-\int_{0}^{x}g_{W}(s)ds, and m(x) = \int_{0}^{x}\lambda(s)ds. Numerical Example and Control Policies As an illustrative example, Cha and Lee considered the case when \lambda(t) \equiv \lambda, r_{0}(t) \equiv r_{0} = 0.2, \eta = 0.01, \nu = 1, and g_{W}(w) = we^{-w^{2}/2} (the PDF of the Rayleigh distribution). As shown in Figure 1, there exists a \lambda^{*} such that \psi(\lambda) is maximized. Thus one may implement the obvious optimal control policy for server control to avoid server overload: (1) If the real-time arrival rate \lambda < \lambda^{*}, do not interfere with arrivals. (2) If \lambda \geq \lambda^{*}, facilitate some appropriate measure of interference. Examples of interference for a web server in particular include rejection of incoming requests or possible re-routing.
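The two-branch policy above can be sketched as a simple thinning rule. The rejection probability 1 - λ*/λ is the classical Poisson-thinning choice that pulls the admitted rate back to λ*; the numeric values of `lam` and `lam_star` below are made up for illustration.

```python
# Sketch of the binary admission-control policy: below lambda* admit
# everything; at or above it, reject each arrival independently with
# probability 1 - lambda*/lambda so the admitted rate is ~ lambda*.
import random

def admit(arrival_rate, lambda_star, rng=random):
    """Return True if this arrival should be admitted."""
    if arrival_rate < lambda_star:
        return True                      # no interference needed
    return rng.random() < lambda_star / arrival_rate

rng = random.Random(42)
lam, lam_star = 8.0, 2.0
n = 100_000
admitted = sum(admit(lam, lam_star, rng) for _ in range(n))
print(admitted / n)   # close to lambda*/lambda = 0.25
```

By the thinning property of Poisson processes, the admitted arrivals again form a Poisson process, now of rate roughly λ* instead of λ, which keeps the server operating near its efficiency peak.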
Cha and Lee give an interference policy of rejection with probability 1-\frac{\lambda^{*}}{\lambda}. The Rayleigh distribution used in Figure 1 has applications in physics, typically when the magnitude of a vector is related to its directional components. It is a special case of the Weibull distribution, which is widely used in survival analysis, failure analysis, weather forecasting, and communications. These distributions are not typically used to model service times. The exponential distribution is the most common, due to its memoryless property, followed by the Erlang and uniform distributions. The efficiency \psi under the Rayleigh distribution example in Figure 1 shows the existence of a 0 < \lambda^{*} < \infty such that \psi(\lambda) is maximized at \lambda^{*}. This useful feature of \psi(\lambda) in this case allows for the implementation of the simple control policy for arrivals, given above, to prevent server overload. Numerical simulations under a variety of possible distribution classes, including convex, concave, exponential, uniform, and Erlang, suggest that the mathematical properties of \psi are heavily influenced by the choice and characteristics of the service time distribution g_{W}(w). In particular, it is of interest to seek sufficient conditions on g_{W}(w) that will guarantee the existence of a \lambda^{*} that maximizes \psi. This is done for the uniform, compact support, and Erlang classes. Furthermore, it is shown that under certain conditions, not only does the server efficiency lack a maximum, but \psi increases without bound. This is not representative of real server behavior, and is thus of mathematical and practical interest to note for further modeling. Efficiency of a Server under Uniform Service Life Distribution Suppose \lambda(x) \equiv \lambda, and suppose r_{0}(x) \equiv r_{0} = \max_{x \in (0,\infty)}r_{0}(x).
The efficiency \psi is given for constant \eta and \lambda here: \psi(\lambda) = \frac{1}{\int_{0}^{\infty}S_{Y}(t)dt + \nu}\left[\int_{0}^{\infty}\exp\left(-r_{0}t-\lambda t + a(t)+b(t)\right)(r_{0}+b(t))a(t)dt\right] where S_{Y}(t) is the survival function of the server, a(t) = \int_{0}^{t}e^{-\eta v}g(v)(t-v)dv, b(t) = \int_{0}^{t}e^{-\eta(t-r)}\bar{G}(t-r)dr, g(v) is the pdf of the service time distribution, and \bar{G}(x) = 1-\int_{0}^{x}g(s)ds. The following theorem gives sufficient conditions on the uniform distribution and \eta that guarantee the existence of a finite maximum efficiency. Theorem 2 (Efficiency under Uniform Service Distribution). Suppose the service life distribution is given by Uniform(c,d) for some 0<c<d, and let \phi(-\eta) denote the moment generating function of this uniform distribution evaluated at -\eta. If the standard deviation \sigma of the service life W satisfies \sigma > \frac{ce^{-c\eta}}{\sqrt{12}\phi(-\eta)(1+\eta(c+d))+c\eta - 1}, then \psi(\lambda) has a maximum on (0,\infty). Numerical simulations suggest that \psi increases without bound for c=0, d>1. The following lemma proves this fact. Lemma. Suppose the service life distribution is given by Uniform(0,d), with d>1. Then \psi increases without bound. This is worth discussing here. It makes no sense whatsoever for the efficiency of a server to increase forever as the arrival rate increases. So what's happening here? Notice that if the uniform distribution includes 0 as an endpoint, then service times arbitrarily close to 0 occur with positive probability. This is impossible in reality. A small service time is possible, but it's still strictly greater than 0 and, in practice, bounded away from it. What we see here is that using distributions with positive density at 0 causes issues in the efficiency function; it's not the fault of the definition of efficiency, but rather a consequence of using distributions that cannot mirror reality.
The next section explores this further with a broader class of distributions.
Survey Calibration Introduction Calibration is a widely used technique in survey sampling. Suppose \(m\) sampling units in a survey have been assigned initial weights \(d_i\) for \(i = 1,\ldots,m\), and furthermore, there are \(n\) auxiliary variables whose values in the sample are known. Calibration seeks to improve the initial weights \(d_i\) by finding new weights \(w_i\) that incorporate this auxiliary information while perturbing the initial weights as little as possible, i.e., the ratio \(g_i = w_i/d_i\) must be close to one. Such reweighting improves the precision of estimates (Chapter 7, Lumley (2010)). Let \(X \in {\mathbf R}^{m \times n}\) be the matrix of survey samples, with each column corresponding to an auxiliary variable. Reweighting can be expressed as the optimization problem (see Davies, Gillard, and Zhigljavsky (2016)): \[ \begin{array}{ll} \mbox{minimize} & \sum_{i=1}^m d_i\phi(g_i) \\ \mbox{subject to} & A^Tg = r \end{array} \] with respect to \(g \in {\mathbf R}^m\), where \(\phi:{\mathbf R} \rightarrow {\mathbf R}\) is a strictly convex function with \(\phi(1) = 0\), \(r \in {\mathbf R}^n\) are the known population totals of the auxiliary variables, and \(A \in {\mathbf R}^{m \times n}\) is related to \(X\) by \(A_{ij} = d_iX_{ij}\) for \(i = 1,\ldots,m\) and \(j = 1,\ldots,n\). Raking A common calibration technique is raking, which uses the penalty function \(\phi(g_i) = g_i\log(g_i) - g_i + 1\) as the calibration metric. We illustrate with the California Academic Performance Index data in the survey package (Lumley (2018)), which also supplies facilities for calibration via the function calibrate. Both the population dataset (apipop) and a simple random sample of \(m = 200\) (apisrs) are provided. Suppose that we wish to reweight the observations in the sample using known totals for two variables from the population: stype, the school type (elementary, middle or high), and sch.wide, whether the school met the yearly target or not.
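Independent of any particular package, raking has a classical algorithmic reading as iterative proportional fitting (IPF): cycle through the auxiliary variables, scaling the weights so each calibrated total matches its target in turn. Here is a language-agnostic sketch on a tiny fabricated data set (the post itself uses R with survey and CVXR; the weights, indicator matrix, and targets below are made up).

```python
# Iterative proportional fitting (raking) on 0/1 auxiliary variables.
def rake(d, X, r, iters=200):
    """d: initial weights; X: 0/1 auxiliary matrix (m x n); r: target totals.
    Returns calibrated weights w with sum_i w_i * X[i][j] ~= r[j]."""
    m, n = len(X), len(X[0])
    w = list(d)
    for _ in range(iters):
        for j in range(n):
            # Scale every unit in auxiliary class j so its total hits r[j].
            total = sum(w[i] * X[i][j] for i in range(m))
            factor = r[j] / total
            for i in range(m):
                if X[i][j]:
                    w[i] *= factor
    return w

# Two overlapping binary auxiliaries over four units.
X = [[1, 0], [1, 1], [0, 1], [0, 0]]
d = [10.0, 10.0, 10.0, 10.0]
r = [25.0, 18.0]
w = rake(d, X, r)
totals = [sum(w[i] * X[i][j] for i in range(4)) for j in range(2)]
print([round(t, 6) for t in totals])
```

Classically, the fixed point of this iteration is the minimizer of \(\sum_i d_i\phi(g_i)\) under \(A^Tg = r\) for the raking penalty \(\phi(g) = g\log(g) - g + 1\), which is why the multiplicative updates and the convex program in this post agree.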
This reweighting would make the sample more representative of the general population. The code below estimates the weights using survey::calibrate.

data(api)
design_api <- svydesign(id = ~dnum, weights = ~pw, data = apisrs)
formula <- ~stype + sch.wide
T <- apply(model.matrix(object = formula, data = apipop), 2, sum)
cal_api <- calibrate(design_api, formula, population = T, calfun = cal.raking)
w_survey <- weights(cal_api)

The CVXR formulation follows.

di <- apisrs$pw
X <- model.matrix(object = formula, data = apisrs)
A <- di * X
n <- nrow(apisrs)
g <- Variable(n)
constraints <- list(t(A) %*% g == T)
## Raking
Phi_R <- Minimize(sum(di * (-entr(g) - g + 1)))
p <- Problem(Phi_R, constraints)
res <- solve(p)
w_cvxr <- di * res$getValue(g)

We compare the results below in a table, which shows them to be identical.

## Using functions in the *unechoed* preamble of this document...
build_table(d1 = build_df(apisrs, "Survey", w_survey),
            d2 = build_df(apisrs, "CVXR", w_cvxr),
            title = "Calibration weights from Raking")

stype sch.wide Survey wts. Frequency CVXR wts. Frequency
E No 28.911 15 28.911 15
E Yes 31.396 127 31.396 127
H No 29.003 13 29.003 13
H Yes 31.497 12 31.497 12
M No 29.033 9 29.033 9
M Yes 31.529 24 31.529 24

Other Calibration Metrics Two other penalty functions commonly used are:
Quadratic \[ \phi^{Q}(g) = \frac{1}{2}(g-1)^2; \]
Logit \[ \phi^{L}(g; l, u) = \frac{1}{C}\biggl[ (g-l)\log\left(\frac{g-l}{1-l}\right) + (u-g)\log\left(\frac{u-g}{u-1}\right) \biggr] \mbox{ for } C = \frac{u-l}{(1-l)(u-1)}. \]
It is again easy to incorporate these in our example and compare to survey results. Quadratic The survey function for this calibration is invoked as cal.linear.
## Quadratic
Phi_Q <- Minimize(sum_squares(g - 1) / 2)
p <- Problem(Phi_Q, constraints)
res <- solve(p, solver = "SCS")
w_cvxr_q <- di * res$getValue(g)
w_survey_q <- weights(calibrate(design_api, formula, population = T, calfun = cal.linear))

Note the use of the SCS solver above; the default ECOS solver produces a different number of unique weights, for reasons we have not fully investigated yet. (Such differences are not unheard of among solvers!)

stype  sch.wide  Survey wts.  Frequency  CVXR wts.  Frequency
E      No        28.907       15         28.907     15
E      Yes       31.397       127        31.397     127
H      No        29.005       13         29.005     13
H      Yes       31.495       12         31.495     12
M      No        29.037       9          29.037     9
M      Yes       31.528       24         31.528     24

Logistic

Finally, the logistic, which requires bounds \(l\) and \(u\) on the coefficients; we use \(l = 0.9\) and \(u = 1.1\).

u <- 1.10; l <- 0.90
w_survey_l <- weights(calibrate(design_api, formula, population = T, calfun = cal.linear, bounds = c(l, u)))
Phi_L <- Minimize(sum(-entr((g - l) / (u - l)) - entr((u - g) / (u - l))))
p <- Problem(Phi_L, c(constraints, list(l <= g, g <= u)))
res <- solve(p)
w_cvxr_l <- di * res$getValue(g)

stype  sch.wide  Survey wts.  Frequency  CVXR wts.  Frequency
E      No        28.907       15         28.929     15
E      Yes       31.397       127        31.394     127
H      No        29.005       13         28.995     13
H      Yes       31.495       12         31.505     12
M      No        29.037       9          29.014     9
M      Yes       31.528       24         31.536     24

Further Metrics

Following examples in survey::calibrate, we can try a few other metrics. First, the Hellinger distance.

hellinger <- make.calfun(Fm1 = function(u, bounds) ((1 - u / 2)^-2) - 1,
                         dF = function(u, bounds) (1 - u / 2)^-3,
                         name = "Hellinger distance")
w_survey_h <- weights(calibrate(design_api, formula, population = T, calfun = hellinger))
Phi_h <- Minimize(sum((1 - g / 2)^(-2)))
p <- Problem(Phi_h, constraints)
res <- solve(p)
w_cvxr_h <- di * res$getValue(g)

stype  sch.wide  Survey wts.  Frequency  CVXR wts.  Frequency
E      No        28.913       15         28.890     15
E      Yes       31.396       127        31.399     127
H      No        29.002       13         29.011     13
H      Yes       31.498       12         31.488     12
M      No        29.031       9          29.056     9
M      Yes       31.530       24         31.521     24

Next, the derivative of the inverse hyperbolic sine.

w_survey_s <- weights(calibrate(design_api, formula, population = T, calfun = cal.sinh, bounds = c(l, u)))
Phi_s <- Minimize(sum(0.5 * (exp(g) + exp(-g))))
p <- Problem(Phi_s, c(constraints, list(l <= g, g <= u)))
res <- solve(p)
w_cvxr_s <- di * res$getValue(g)

stype  sch.wide  Survey wts.  Frequency  CVXR wts.  Frequency
E      No        28.911       15         28.904     15
E      Yes       31.396       127        31.397     127
H      No        29.003       13         29.006     13
H      Yes       31.497       12         31.494     12
M      No        29.033       9          29.041     9
M      Yes       31.529       24         31.526     24

Session Info

sessionInfo()
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin18.5.0 (64-bit)
## Running under: macOS Mojave 10.14.5
##
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] grid      stats     graphics  grDevices datasets  utils     methods
## [8] base
##
## other attached packages:
## [1] survey_3.36       survival_2.44-1.1 Matrix_1.2-17     dplyr_0.8.1
## [5] kableExtra_1.1.0  CVXR_0.99-6
##
## loaded via a namespace (and not attached):
##  [1] tidyselect_0.2.5  xfun_0.7          purrr_0.3.2
##  [4] mitools_2.4       splines_3.6.0     lattice_0.20-38
##  [7] colorspace_1.4-1  htmltools_0.3.6   viridisLite_0.3.0
## [10] yaml_2.2.0        gmp_0.5-13.5      rlang_0.3.4
## [13] R.oo_1.22.0       pillar_1.4.1      glue_1.3.1
## [16] Rmpfr_0.7-2       DBI_1.0.0         R.utils_2.8.0
## [19] bit64_0.9-7       scs_1.2-3         stringr_1.4.0
## [22] munsell_0.5.0     blogdown_0.12.1   rvest_0.3.4
## [25] R.methodsS3_1.7.1 evaluate_0.14     knitr_1.23
## [28] highr_0.8         Rcpp_1.0.1        readr_1.3.1
## [31] scales_1.0.0      webshot_0.5.1     bit_1.1-14
## [34] hms_0.4.2         digest_0.6.19     stringi_1.4.3
## [37] bookdown_0.11     ECOSolveR_0.5.2   tools_3.6.0
## [40] magrittr_1.5      tibble_2.1.2      crayon_1.3.4
## [43] pkgconfig_2.0.2   MASS_7.3-51.4     xml2_1.2.0
## [46] assertthat_0.2.1  rmarkdown_1.13    httr_1.4.0
## [49] rstudioapi_0.10   R6_2.4.0          compiler_3.6.0

References

Davies, G., J. Gillard, and A. Zhigljavsky. 2016. "Comparative Study of Different Penalty Functions and Algorithms in Survey Calibration." In Advances in Stochastic and Deterministic Global Optimization, 87–127. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-29975-4_6.

Lumley, Thomas. 2018. "Survey: Analysis of Complex Survey Samples."

Lumley, Thomas S. 2010. Complex Surveys: A Guide to Analysis Using R. Wiley Publishing.
I'm just confused about why the one-halves are in the bounds for this equation. Since we are letting tau approach infinity, the bounds just approach negative infinity and infinity respectively. If anyone can give an explanation of why they put 1/2, I would appreciate it.

The bounds are not actually important mathematically, but they help with meaning. You could have written \$P=\lim_{\tau\to\infty}\frac{1}{2\tau}\int_{-\tau}^\tau p(t)dt\$ and gotten the same results. However, consider all of the different power functions:

Finite bound: \$P=\frac{1}{\tau}\int_{T}^{T+\tau} p(t)dt\$

One-sided infinite: \$P=\lim_{\tau\to\infty}\frac{1}{\tau}\int_{0}^\tau p(t)dt\$

Two-sided infinite: \$P=\lim_{\tau\to\infty}\frac{1}{\tau}\int_{\frac{-\tau}{2}}^\frac{\tau}{2} p(t)dt\$

We see that \$\tau\$ has the same meaning of "period", or size of the integration window, in all of these cases. This is convenient when looking for parallels between the different equations. If we integrated the two-sided infinite equation from \$-\tau\$ to \$\tau\$, it would obscure the connection between these different forms. You'll also see this happen later with many transforms, like the Fourier transform. We could change the way we define things to avoid some annoying \$2\pi\$ terms that arise, but that would then obscure the connection between the Fourier transform of some data and its original form.
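As a quick numerical sanity check (a sketch added here, not part of the original answer), the one-sided and two-sided windows do give the same average power for large \$\tau\$. Here we use \$p(t) = \cos^2(t)\$, whose average power is 1/2:

```python
import numpy as np

def avg_power(p, a, b, n=200001):
    """Average of p over [a, b] via the trapezoidal rule."""
    t = np.linspace(a, b, n)
    return np.trapz(p(t), t) / (b - a)

p = lambda t: np.cos(t) ** 2  # average power is 1/2
tau = 1000.0

one_sided = avg_power(p, 0.0, tau)           # (1/tau) * integral over [0, tau]
two_sided = avg_power(p, -tau / 2, tau / 2)  # (1/tau) * integral over [-tau/2, tau/2]

print(one_sided, two_sided)  # both approach 0.5 as tau grows
```

Both windows agree to within O(1/τ), which is the point of the answer: the choice of bounds only changes the meaning of τ, not the limit.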
How to prove conservation of electric charge using Noether's first theorem according to classical (non-quantum) mechanics? I know the proof based on the Klein–Gordon field, but that derivation uses quantum mechanics in particular.

By the word classical we will mean $\hbar=0$, and we will use the conventions of Ref. 1. The Lagrangian density for Maxwell theory with various matter content is$^1$ $${\cal L} ~=~{\cal L}_{\rm Maxwell} + {\cal L}_{\rm matter} ,\tag{1} $$ $${\cal L}_{\rm Maxwell}~=~ -\frac{1}{4}F_{\mu\nu}F^{\mu\nu},\tag{2}$$ $$ {\cal L}_{\rm matter}~=~{\cal L}_{\rm matter}^{\rm QED}+{\cal L}_{\rm matter}^{\rm scalar QED} + \ldots,\tag{3} $$ $$ {\cal L}_{\rm matter}^{\rm QED} ~:=~ \overline{\Psi}( i\gamma^{\mu} D_{\mu}-m)\Psi ,\tag{4} $$ $$ {\cal L}_{\rm matter}^{\rm scalar QED}~:=~ -(D_{\mu}\phi)^{\dagger} D^{\mu}\phi -m^2\phi^{\dagger}\phi -\frac{\lambda}{4} (\phi^{\dagger}\phi)^2,\tag{5} $$ with covariant derivative $$ D_{\mu}~=~d_{\mu}-ieA_{\mu}, \tag{6} $$ and with Minkowski sign convention (-,+,+,+). (Here we are too lazy to denote various matter masses $m$ and charges $e$ differently.) The matter equations of motion (eom) are $$ ( i\gamma^{\mu} D_{\mu}-m)\Psi ~\stackrel{m}{\approx}~0, \qquad D_{\mu}D^{\mu}\phi~\stackrel{m}{\approx}~m^2\phi+\frac{\lambda}{2} \phi^{\dagger}\phi^2, \qquad \ldots.\tag{7}$$ (The $\stackrel{m}{\approx}$ symbol means equality modulo matter eom, i.e. an on-shell equality.) The infinitesimal global off-shell gauge transformation is $$ \delta A_{\mu} ~=~0, \qquad \delta\Psi~=~-i\epsilon \Psi, \qquad \delta\overline{\Psi}~=~i\epsilon \overline{\Psi}, $$ $$ \delta\phi~=~-i\epsilon \phi,\qquad \delta\phi^{\dagger}~=~i\epsilon \phi^{\dagger}, \qquad \ldots, \qquad\delta {\cal L} ~=~0,\tag{8} $$ where the infinitesimal parameter $\epsilon$ does not depend on $x$. The Noether current is the electric $4$-current$^2$ $$ j^{\mu}~=~e\overline{\Psi}\gamma^{\mu}\Psi - ie\{\phi^{\dagger} D^{\mu}\phi-(D^{\mu}\phi)^{\dagger}\phi\}+\ldots.
\tag{9}$$ $$ d_{\mu}j^{\mu}~\stackrel{m}{\approx}~0.\tag{10}$$ Hence the electric charge $$ Q~=~\int\! d^3x~ j^0\tag{11}$$ is conserved on-shell.

References:

M. Srednicki, QFT.

--

$^1$ Note that the matter Lagrangian density ${\cal L}_{\rm matter}$ may depend on the gauge field $A_{\mu}$.

$^2$ Interestingly, the electric $4$-current $j^{\mu}$ depends on the gauge potential $A_{\mu}$ in the case of scalar QED matter.

$^3$ Note that the above proof of the continuity equation (10) via Noether's first theorem (as OP requested) never uses Maxwell's equations.
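To make the Noether step explicit for the Dirac term alone (a standard textbook computation, spelled out here for illustration rather than taken from the answer above), the current follows from the canonical formula applied to the transformation (8):

```latex
% Canonical Noether current for the Dirac part of (4),
% using \delta\Psi = -i\epsilon\Psi from (8):
j^{\mu}_{\rm Noether}
  \;=\; \frac{\partial {\cal L}}{\partial(\partial_{\mu}\Psi)}\,
        \frac{\delta\Psi}{\epsilon}
  \;=\; \bigl(i\,\overline{\Psi}\gamma^{\mu}\bigr)\,(-i\,\Psi)
  \;=\; \overline{\Psi}\gamma^{\mu}\Psi ,
\qquad
j^{\mu} \;=\; e\, j^{\mu}_{\rm Noether},
```

in agreement with the Dirac piece of eq. (9); the scalar QED piece follows the same way from $\delta\phi = -i\epsilon\phi$.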
Question: A {eq}0.40-kg {/eq} cart with charge {eq}2.5\times 10^{-5}\ C {/eq} starts at rest on a horizontal frictionless surface {eq}0.4\ m {/eq} from a fixed object with charge {eq}+2.0\times 10^{-4}\ C {/eq}. When the cart is released, it moves away from the fixed object.

A) How fast is the cart moving when it is very far (infinity) from the fixed charge?

B) How fast is the cart moving when it is {eq}2.0\ m {/eq} from the fixed object?

Conservation of Energy: The Conservation of Energy Principle states that in the absence of external forces acting on a system, the total energy of the system will remain constant. For this problem, the electric potential energy of the charged particles will be converted into the kinetic energy of the particle that moves.

Answer and Explanation:

Given:

{eq}m_c = 0.40 \ kg {/eq} Mass of the cart

{eq}q_c = 2.5 \times 10^{-5} \ C {/eq} Charge of the cart

{eq}d = 0.4 \ m {/eq} Distance between the cart and the fixed object

{eq}q_o = 2.0 \times 10^{-4} \ C {/eq} Charge of the fixed object

{eq}k = 9.0 \times 10^{9} \frac {Nm^2}{C^2} {/eq}

Part A) Since the cart and the fixed object are both stationary initially, the kinetic energy of the system is zero. This means that the total energy of the system initially is due to the electric potential energy.
{eq}PE_I = \frac {k q_c q_o}{d} = \frac {9.0 \times 10^{9} \frac {Nm^2}{C^2} * 2.5 \times 10^{-5} \ C * 2.0 \times 10^{-4} \ C}{0.4 \ m} = 112.5 \ J {/eq}

When the cart has moved infinitely far from the fixed object,

{eq}PE_F = \frac {9.0 \times 10^{9} \frac {Nm^2}{C^2} * 2.5 \times 10^{-5} \ C * 2.0 \times 10^{-4} \ C}{\infty} = 0 \ J {/eq}

Using conservation of energy,

{eq}PE_I = PE_F + KE {/eq}

{eq}112.5 \ J = 0 + \frac {1}{2} m_c v^2 {/eq}

{eq}v = \sqrt {\frac {2 * 112.5 \ J}{0.40 \ kg}} = 23.71708245 \approx \boxed {24 \frac {m}{s}} {/eq}

Part B) Using the same principle as in Part A,

{eq}PE_I = PE_F + KE {/eq}

{eq}112.5 \ J = \frac {9.0 \times 10^{9} \frac {Nm^2}{C^2} * 2.5 \times 10^{-5} \ C * 2.0 \times 10^{-4} \ C}{2 \ m} + \frac {1}{2} (0.40 \ kg) v^2 {/eq}

{eq}112.5 \ J = 22.5 \ J + \frac {1}{2} (0.40) v^2 {/eq}

{eq}v = \sqrt {\frac {2(112.5 \ J - 22.5 \ J)}{0.40}} = 21.21320344 \approx \boxed {21 \frac {m}{s}} {/eq}
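The arithmetic above can be checked with a few lines of Python (a verification sketch, not part of the original solution):

```python
import math

# Given values from the problem statement
k   = 9.0e9    # Coulomb constant, N m^2 / C^2
m_c = 0.40     # cart mass, kg
q_c = 2.5e-5   # cart charge, C
q_o = 2.0e-4   # fixed object charge, C
d0  = 0.4      # initial separation, m

PE_i = k * q_c * q_o / d0              # initial potential energy, J

# Part A: all the potential energy converts to kinetic energy at infinity
v_inf = math.sqrt(2 * PE_i / m_c)

# Part B: some potential energy remains at d = 2.0 m
PE_f = k * q_c * q_o / 2.0
v_2m = math.sqrt(2 * (PE_i - PE_f) / m_c)

print(PE_i, v_inf, v_2m)  # 112.5 J, ~23.7 m/s, ~21.2 m/s
```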
$\lambda$Prolog is a logic programming language based on a much richer logic than Prolog. In particular, the formulas that constitute its language are (higher-order) hereditary Harrop formulas. Horn clauses are a pallid fragment of that. The enabling concept for $\lambda$Prolog is the notion of a uniform proof, together with a switch to an intuitionistic perspective on the logic. That last paper introduces the notion of an abstract logic programming language based on the notion of uniform proof and shows that classical first-order and higher-order Horn clauses form an abstract logic programming language. Basically, a uniform proof is one where the rules used can be given an operational interpretation based on goal-directed search. This isn't the actual definition of a uniform proof, but the motivation behind the actual definition. An abstract logic programming language is one where uniform proofs suffice to prove any formula in the language with respect to a given notion of provability that is contained in the usual classical one. Even the class of higher-order hereditary Harrop formulas doesn't allow $P \to Q_1\lor Q_2$ as a program clause. (It is allowable as a goal.) Why not? Obviously, $P, P\to Q_1\lor Q_2 \vdash Q_1\lor Q_2$; that is, if we know $P$ holds, and we know $P \to Q_1 \lor Q_2$ holds, then we can prove $Q_1 \lor Q_2$. However, this proof succeeds without ever specifying which of $Q_1$ or $Q_2$ holds. Indeed, neither $P, P\to Q_1\lor Q_2 \vdash Q_1$ nor $P, P\to Q_1\lor Q_2 \vdash Q_2$ is true. Formally, the proof of $Q_1\lor Q_2$ from $P$ and $P\to Q_1\lor Q_2$ is not a uniform proof. Informally, the operational interpretation we'd like to give to proving $Q_1 \lor Q_2$ is that we search for a proof of $Q_1$ and a proof of $Q_2$, and if either search succeeds we've established $Q_1 \lor Q_2$. Obviously this search approach will fail in the above example, since neither $Q_1$ nor $Q_2$ can be individually established.
Finally, why do we need to "retreat" to intuitionistic logic in the hereditary Harrop formula case (we don't for Horn clauses)? This is because $P\lor(P\to Q)$ is a goal in the class of hereditary Harrop formulas, and it's classically true. However, if we again apply our operational interpretation, the search would proceed as follows: First try to prove $P$; this fails. Next, try to prove $P\to Q$, which reduces to proving $Q$ given $P$, which fails. $P\lor(P\to Q)$ isn't true intuitionistically, since when $Q$ is false this is just the law of the excluded middle. Of course, it requires a more involved proof (which is in the paper) to show that there isn't a similar failure with respect to intuitionistic logic for the class of higher-order hereditary Harrop formulas. Since 1991, the notion of an abstract logic programming language has been extended. The most notable extension is the generalization of uniform proofs to focusing proofs, and the idea has been applied primarily in the context of linear logics. In many ways disjunction and negation are more tractable in linear logic. However, the idea has also been applied to Disjunctive Logic Programming. The idea was (first?) applied in Uniform Proofs and Disjunctive Logic Programming, but it didn't provide any guidance on how to handle quantifier elimination. That is, the idealized interpreter of an abstract logic programming language assumes you can "magically" guess the term to use in existential quantifier elimination. Normally this is accomplished with unification, but unification interacts with proof search. For example, higher-order unification generates extra backtracking, and the universal quantifiers in hereditary Harrop formulas lead to eigenvariables. Unification interacts with disjunction as well.
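The operational reading of goal-directed search can be made concrete with a toy prover (an illustrative Python sketch of ours, not an actual λProlog implementation; the fragment is simplified to atoms, implication, and disjunction, with only atomic assumptions):

```python
# Toy goal-directed ("uniform") proof search over a tiny propositional
# fragment: atoms are strings; compound goals are ("or", A, B) and
# ("imp", A, B). Only atomic hypotheses are tracked, which is enough to
# reproduce the P ∨ (P → Q) example from the text.

def prove(goal, facts=frozenset()):
    if isinstance(goal, str):            # atomic goal: look it up
        return goal in facts
    op, a, b = goal
    if op == "or":                       # try each disjunct in turn
        return prove(a, facts) or prove(b, facts)
    if op == "imp":                      # assume a (if atomic), prove b
        new_facts = facts | {a} if isinstance(a, str) else facts
        return prove(b, new_facts)
    raise ValueError(f"unknown connective: {op}")

P, Q = "P", "Q"
print(prove(("imp", P, ("or", Q, P))))   # True: assume P, then Q fails but P succeeds
print(prove(("or", P, ("imp", P, Q))))   # False: goal-directed search fails,
                                         # matching the intuitionistic verdict
```

The second query is exactly the classically-true-but-search-unprovable goal discussed above: first `P` fails outright, then `P → Q` fails because `Q` is not derivable from `P` alone.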
The later paper A New Abstract Logic Programming Language and its Quantifier Elimination Method for Disjunctive Logic Programming gives a story not just for the abstract logic programming part of Disjunctive Logic Programming, but also how unification needs to be modified in this context. To my knowledge, DLV seems to be the only serious implementation of any form of disjunctive logic programming.
Let $t$ and $s$ be words. We will say that two words are "completely different" if for all $1\leqslant i\leqslant |t|$ the $i$-th letter in $t$ is different from the $i$-th letter in $s$. Prove that the language $\mathcal{L}=\{ts \mid t,s\in \{0,1\}^*, |t|=|s|, t,s \text{ completely different} \}$ is not a context-free language.

Attempt: Applying the pumping lemma for context-free languages: Suppose that $\mathcal L$ is context-free; then for any word $z=uxvyw$ of length at least $n$:

$(1)\,\,\,|xvy|\leqslant n$

$(2)\,\,\,|xy|\geqslant 1$

$(3)\,\,\,ux^ivy^iw \in \mathcal L\,\,\,\,\,\,\,\,\,i\geqslant 0$

Now, let's choose the word $\color{blue}{z=0^n1^n}$; it is obvious that $|z|\geqslant n$, so we can use $(1)-(3)$:

$z=0^{\alpha}0^{\beta}0^{\gamma}0^{\lambda}1^n$

So $\alpha+\beta+\gamma+\lambda=n$. I am stuck here.

EDIT: After using @Renato's answer: Consider $z=0^p1^p0^p1^p0^p1^p\in \mathcal{L}$. Since $|z|>p$, there are $u,v,w,x,y$ such that $z=uvwxy$, $|vwx|\leqslant p$, $|vx|>0$, and $uv^iwx^iy\in \mathcal{L}$. $vwx$ must straddle the midpoint of $z$; there are four possibilities:

$vwx$ is in the $0^p$ part.

$vwx$ is in the $1^p$ part.

$vwx$ is in the $1^p0^p$ part.

$vwx$ is in the $0^p1^p$ part.

Thus it is not of the form that we want: for $i=2$, $z\notin \mathcal{L}$.
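One helpful observation (added here as an illustration, not part of the original question): over the alphabet $\{0,1\}$, "completely different" forces $s$ to be the bitwise complement of $t$, so membership in $\mathcal{L}$ is easy to check programmatically:

```python
def in_L(w: str) -> bool:
    """Over {0,1}, w = ts with |t| = |s| and t, s completely different
    iff the second half of w is the bitwise complement of the first."""
    n = len(w)
    if n % 2 != 0:
        return False
    half = n // 2
    return all(w[i] != w[half + i] for i in range(half))

print(in_L("0011"))  # True:  t = "00", s = "11"
print(in_L("0110"))  # True:  t = "01", s = "10"
print(in_L("0101"))  # False: letters at matching positions agree
```

In particular the chosen word $0^n1^n$ is in $\mathcal{L}$ (the second half complements the first), which is what makes it a legitimate candidate for pumping.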
Back to Unconstrained Optimization.

For unconstrained optimization problems in which the exact Hessian matrix is not available, the same algorithms can be used with the Hessian matrix replaced by an approximation. One method for approximating the Hessian matrix is to use difference approximations. Difference approximation methods exploit the fact that each column of the Hessian can be approximated by taking the difference between two instances of the gradient vector evaluated at two nearby points. For sparse Hessians, often many columns of the Hessian can be approximated with a single gradient evaluation by choosing the evaluation points judiciously.

If forward differences are used, then the i th column of the Hessian matrix is replaced by \[\frac{\nabla f(x_k + h_i e_i) - \nabla f(x_k)} {h_i}\] for some suitable choice of difference parameter \(h_i\). Here, \(e_i\) is the vector with \(1\) in the i th position and zeros elsewhere. Similarly, if central differences are used, the i th column is replaced by \[\frac{\nabla f(x_k + h_i e_i) - \nabla f(x_k - h_i e_i)} {2h_i}.\] An appropriate choice of the difference parameter \(h_i\) can be difficult. Rounding errors overwhelm the calculation if \(h_i\) is too small, while truncation errors dominate if \(h_i\) is too large. Newton codes rely on forward differences, since they often yield sufficient accuracy for reasonable values of \(h_i\). Central differences are more accurate, but they require twice the work (\(2n\) gradient evaluations against \(n\) evaluations). Variants of Newton's method for problems with a large number of variables cannot use the above techniques to approximate the Hessian matrix, because the cost of \(n\) gradient evaluations is prohibitive. For problems with a sparse Hessian matrix, however, it is possible to use specialized techniques based on graph coloring that allow difference approximations to the Hessian matrix to be computed efficiently.
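The forward-difference formula can be sketched in a few lines (a generic NumPy illustration of ours, not code from any particular Newton solver):

```python
import numpy as np

def hessian_forward_diff(grad, x, h=1e-6):
    """Approximate the Hessian column by column using forward
    differences of the gradient: H[:, i] ~ (g(x + h*e_i) - g(x)) / h.
    Costs n extra gradient evaluations for an n-dimensional x."""
    n = x.size
    g0 = grad(x)
    H = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        H[:, i] = (grad(x + e) - g0) / h
    return 0.5 * (H + H.T)  # symmetrize; the exact Hessian is symmetric

# Check on f(x) = 0.5 x^T A x, whose gradient is A x and Hessian is A.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
grad = lambda x: A @ x
H = hessian_forward_diff(grad, np.array([0.3, -0.7]))
print(np.allclose(H, A, atol=1e-4))  # True
```

Because the test gradient is linear, the forward difference is exact up to rounding; for general functions the error trades off truncation against rounding exactly as described above.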
For example, if the Hessian matrix has bandwidth \(2b+1\), then only \(b+1\) gradient evaluations are required. VE08 is designed to solve optimization problems with a large number of variables where \(f\) is a partially separable function; that is, \(f\) can be written in the form \[f(x) = \sum_{i=1}^m f_i(x),\] where each function \(f_i: \mathbb{R}^n \rightarrow \mathbb{R}\) has an invariant subspace whose dimension is large relative to the number of variables \(n\). This is the case, in particular, if \(f_i\) depends only on a small number (typically, fewer than ten) of the components of \(x\). Functions with sparse Hessian matrices are partially separable. Indeed, most functions that arise in large-scale problems are partially separable. An advantage of algorithms designed for these problems is that techniques for approximating a dense Hessian matrix (for example, forward differences) can be used to approximate the nontrivial part of the element Hessian matrix \(\nabla^2f_i(x)\). Approximations to \(\nabla^2f(x)\) can be obtained by summing the approximations to \(\nabla^2f_i(x)\).
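A minimal sketch of the element-wise idea (our own Python illustration, not VE08's algorithm): approximate each small element Hessian by forward differences on its own variables and sum the results into the full matrix. For \(f(x) = \sum_i (x_i - x_{i+1})^2\), each \(f_i\) touches only two components:

```python
import numpy as np

def element_hessian_fd(grad_i, idx, x, h=1e-6):
    """Forward-difference Hessian of one element function over its own
    small set of variables idx; grad_i maps sub-vector to sub-gradient."""
    k = len(idx)
    g0 = grad_i(x[idx])
    Hi = np.empty((k, k))
    for j in range(k):
        e = np.zeros(k)
        e[j] = h
        Hi[:, j] = (grad_i(x[idx] + e) - g0) / h
    return 0.5 * (Hi + Hi.T)

n = 4
x = np.array([0.5, -1.0, 2.0, 0.25])
H = np.zeros((n, n))
for i in range(n - 1):
    idx = np.array([i, i + 1])
    # Element f_i(u) = (u0 - u1)^2 with sub-gradient below; each element
    # Hessian is only 2x2, so approximating it is cheap.
    grad_i = lambda u: np.array([2 * (u[0] - u[1]), -2 * (u[0] - u[1])])
    H[np.ix_(idx, idx)] += element_hessian_fd(grad_i, idx, x)

# Exact Hessian of sum_i (x_i - x_{i+1})^2 is tridiagonal:
exact = np.array([[ 2, -2,  0,  0],
                  [-2,  4, -2,  0],
                  [ 0, -2,  4, -2],
                  [ 0,  0, -2,  2]], dtype=float)
print(np.allclose(H, exact, atol=1e-4))  # True
```

Summing the cheap element approximations reproduces the full sparse Hessian without ever differencing in all \(n\) coordinate directions.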
TTP12-044 Neutralino dark matter and the Fermi gamma-ray lines

Motivated by recent claims of lines in the Fermi gamma-ray spectrum, we critically examine means of enhancing neutralino annihilation into neutral gauge bosons. The signal can be boosted while remaining consistent with continuum photon constraints if a new singlet-like pseudoscalar is present. We consider singlet extensions of the MSSM, focusing on the NMSSM, where a `well-tempered' neutralino can explain the lines while remaining consistent with current constraints. We adopt a complementary numerical and analytic approach throughout in order to gain intuition for the underlying physics. The scenario requires a rich spectrum of light neutralinos and charginos leading to characteristic phenomenological signatures at the LHC whose properties we explore. Future direct detection prospects are excellent, with sizeable spin-dependent and spin-independent cross-sections.

Guillaume Chalons, Matthew J. Dolan, Christopher McCabe

TTP12-036 Vacuum stability in the SM and the three-loop $\beta$-function for the Higgs self-interaction

In this article the stability of the Standard Model (SM) vacuum in the presence of radiative corrections and for a Higgs boson with a mass in the vicinity of $125$ GeV is discussed. The central piece in this discussion will be the Higgs self-interaction $\lambda$ and its evolution with the energy scale of a given physical process. This is described by the $\beta$-function, to which we recently computed analytically the dominant three-loop contributions. These are mainly the QCD and top-Yukawa corrections as well as the contributions from the Higgs self-interaction itself.
We will see that for a Higgs boson with a mass of about $125$ GeV the question whether the SM vacuum is stable, and therefore whether the SM could be valid up to the Planck scale, cannot be answered with certainty due to large experimental uncertainties, mainly in the top quark mass.

TTP12-035 The Dimensional Recurrence and Analyticity Method for Multicomponent Master Integrals: Using Unitarity Cuts to Construct Homogeneous Solutions

We consider the application of the DRA method to the case of several master integrals in a given sector. We establish a connection between the homogeneous part of dimensional recurrence and maximal unitarity cuts of the corresponding integrals: a maximally cut master integral appears to be a solution of the homogeneous part of the dimensional recurrence relation. This observation allows us to make a necessary step of the DRA method, the construction of the general solution of the homogeneous equation, which, in this case, is a coupled system of difference equations.

TTP12-021 The relation between the QED charge renormalized in MSbar and on-shell schemes at four loops, the QED on-shell beta-function at five loops and asymptotic contributions to the muon anomaly

In this paper we compute the four-loop corrections to the QED photon self-energy $\Pi(Q^2)$ in the two limits of $q = 0$ and $Q^2 \to\infty$. These results are used to explicitly construct the conversion relations between the QED charge renormalized in the on-shell (OS) and $\MSbar$ schemes. Using these relations and the results of Baikov et al.
\cite{Baikov:2012zm} we construct the momentum-dependent part of $\Pi(Q^2,m,\alpha)$ at large $Q^2$ at five loops in both the $\MSbar$ and OS schemes. As a direct consequence we arrive at the full result for the QED $\beta$-function in the OS scheme at five loops. These results are applied, in turn, to analytically evaluate a class of asymptotic contributions to the muon anomaly at five and six loops.

P. A. Baikov, K. G. Chetyrkin, J. H. Kühn and C. Sturm, Nucl. Phys. B867 (2013) 182-202. 25 pages, 6 figures; v2: final published version.

The classical Lagrangian of chromodynamics, its quantization in the perturbation theory framework, and renormalization form the subject of these lectures. Symmetries of the theory are discussed. The dependence of the coupling constant \alpha_s on the renormalization scale \mu is considered in detail.

TTP12-011 Gamma-ray lines constraints in the NMSSM

We present the computation of the loop-induced self-annihilation of dark matter particles into two photons in the framework of the NMSSM. This process is a theoretically clean observable with a "smoking-gun" signature but is experimentally very challenging to detect. The rates were computed with the help of the SloopS program, an automatic code initially designed for the evaluation of processes at the one-loop level in the MSSM. We focused on a light neutralino scenario and discuss how the signal can be enhanced in the NMSSM with respect to the MSSM and then compared with the present limits given by the dedicated search of the FERMI-LAT satellite on the monochromatic gamma lines.
Dear Uncle Colin, I have two points and I want to construct a circle of a given radius that passes through them. Is it possible? -- Every Underspecified Circle Lives Its Dream

Hi, EUCLID, and thanks for your message! There are three possible answers to this, depending on the size… Read More →

Some charitable suggestions

Hello! I'm going to be a bit more personal than usual with this post; it's Boxing Day and in the spirit of goodwill to all, I wanted to highlight some of my favourite charities. As you know, the Flying Colours Maths blog is absolutely free of charge,… Read More →

Dear Uncle Colin, I have a binomial expansion of $(1+x)^\frac{1}{2}$ and need to approximate $\sqrt{5}$. Apparently you need to substitute in $x=\frac{1}{4}$, but I'd have thought $x=4$ was a more obvious choice. What gives? -- Roots Are Dangerous If Understood Sloppily

Hi RADIUS, and thanks for your message! That does… Read More →

Reports are filtering in to Flying Colours Towers about the mock exams recently taken by year 11s. Words like 'bloodbath' and 'disaster' feature prominently (my students, I should add, have acquitted themselves well and are in line to be mentioned in dispatches for bravery.) There's a reason the new-style papers… Read More →

Dear Uncle Colin, How do you multiply big numbers like $2158 \times 1812$? I try to do it using the column method or the grid, but I always make mistakes. -- A Desperately Desired Error Reduction

Hi, ADDER, and thanks for your message! I've been playing with something midway between… Read More →

As I happened to be in London last week, I took an afternoon to visit the Science Museum and, especially, the Winton Gallery exhibit on Mathematics. Maths! In the Science Museum! What a treat! Or so I hoped.
The Winton Gallery: Mathematics

There's a bit in Surely You're Joking, Mr… Read More →

In this month's Christmassy episode of Wrong, But Useful, @reflectivemaths and I discuss: Number of the podcast: 9. Shout outs to: @aap03102 (Chris Smith) for his maths newsletter; @mathistopheles (Thomas Oléron Evans) and @fryrsquared (Hannah Fry) for sending me a copy of their book; @chalkdustmag for the lovely card; @fryrsquared… Read More →

Dear Uncle Colin, My teacher suggested that if you factorise a quadratic and add the brackets, you get the derivative. I am now too frightened to sleep. -- Quadratic Understanding Is Not Easy

Hi, QUINE, and thanks for your message! To take a simple example, $x^2 + 5x + 6… Read More →

My dear friend @ajk44 pointed me at a puzzle on the excellent Nrich site, and I enjoyed it enough to share my solution. (If you don't want it spoiled, don't read beyond the blockquote.)

Four jewellers had respectively 8 rubies, 10 sapphires, 100 pearls and 5 diamonds. Each gave one… Read More →
I am reading CLRS on perfect hashing. We compute $$ \mathbb{E}\left[\sum_{j=0}^{m-1}{n_j\choose{2}}\right] $$ where $m$ is the number of slots in the hash table, and $n_j$ is the number of keys in slot $j$. I don't understand why we can directly conclude that $$ \mathbb{E}\left[\sum_{j=0}^{m-1}{n_j\choose{2}}\right]\leq{n\choose{2}}\frac{1}{m}. $$ I understand that since $h$ is randomly chosen from a universal hash function family, $\Pr{(h(x_i)=h(x_j))}\leq{\frac{1}{m}}$ for all $i\neq{j}$. I don't understand why we can use the total number of pairs (the combination part) directly, because if $h(x_i)=h(x_j)$ and $h(x_j)=h(x_k)$, then we have $h(x_i)=h(x_k)$ immediately, rather than with probability $\frac{1}{m}$. Can someone help me out? Thanks!
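For what it's worth, the step in question only needs linearity of expectation, which holds regardless of any dependence between the collision events; that is exactly why the transitivity of collisions does not matter. A sketch:

```latex
% Each unordered pair {i,k} hashing to the same slot contributes exactly
% once to \sum_j \binom{n_j}{2}, so the two quantities are equal:
\sum_{j=0}^{m-1}\binom{n_j}{2}
  \;=\; \sum_{1\le i<k\le n} \mathbf{1}\!\left[h(x_i)=h(x_k)\right].
% Taking expectations and using linearity (no independence needed):
\mathbb{E}\!\left[\sum_{j=0}^{m-1}\binom{n_j}{2}\right]
  \;=\; \sum_{1\le i<k\le n}\Pr\!\left[h(x_i)=h(x_k)\right]
  \;\le\; \binom{n}{2}\,\frac{1}{m}.
```

The indicator variables are indeed correlated, but linearity of expectation never asks them to be independent; only the per-pair bound $\Pr[h(x_i)=h(x_k)]\le 1/m$ is used.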
TTP13-039 Implications of an R-Scan for Charm Physics

The impact of improved measurements of the cross section for electron-positron annihilation into hadrons in the charm threshold region is discussed. Two aspects are studied in detail: i)~A significant reduction of the experimental error of the electronic width of the narrow resonances $J/\psi$ and $\psi'$ and of the continuum cross section from the open charm threshold up to 4.6~GeV will lead to a correspondingly improved determination of the charmed quark mass. ii)~A high-luminosity measurement with 24~pb${}^{-1}$ at three points and with a spacing of 2~MeV around $\sqrt{s}=3511\,{\rm MeV}$ may allow the observation of the direct, resonant production of $\chi_{c1}$.

TTP13-015 Weak Interactions in Top-Quark Pair Production at Hadron Colliders: An Update

Weak corrections for top-quark pair production at hadron colliders are revisited. Predictions for collider energies of 8 TeV, adapted to the present LHC run, and for 14 TeV, presumably relevant for the next round of LHC experiments, are presented. Kinematic regions with large momentum transfer are identified, where the corrections become large and may lead to strong distortions of differential distributions, thus mimicking anomalous top quark couplings. As a complementary case we investigate the threshold region, corresponding to configurations with small relative velocity between top and antitop quark, which is particularly sensitive to the top-quark Yukawa coupling. We demonstrate that nontrivial upper limits on this coupling are well within reach of ongoing experiments.

TTP13-012 Resummation of non-global logarithms at finite Nc

In the context of inter-jet energy flow, we present the first quantitative result of the resummation of non-global logarithms at finite N_c.
This is achieved by refining Weigert's approach, in which the problem is reduced to the simulation of associated Langevin dynamics in the space of Wilson lines. We find that, in e+e- annihilation, the exact result is rather close to the result previously obtained in the large-N_c mean field approximation. However, we observe enormous event-by-event fluctuations in the Langevin process which may have significant consequences in hadron collisions.

TTP13-009 Dimension 7 operators in the b → s transition

We extend the low-energy effective field theory relevant for b to s transitions up to operators of mass-dimension 7 and compute the associated anomalous-dimension matrix. We then compare our findings to the known results for dimension 6 operators and derive a solution for the renormalization group equations involving operators of dimension 7. We finally apply our analysis to a particularly simple case where the Standard Model is extended by an electroweak-magnetic operator and consider limits on this scenario from the decays Bs to mu+ mu- and B to K nu nubar.

TTP13-008 beta-function for the Higgs self-interaction in the Standard Model at three-loop level

We analytically compute the QCD, electroweak, Higgs and third-generation Yukawa contributions to the $\beta$-function for the Higgs self-coupling as well as for the Higgs mass parameter in the unbroken phase of the Standard Model at three-loop level.

TTP13-006 On the $\mathcal O(\alpha_s^2)$ corrections to $b \rightarrow X_u e \bar{\nu}$ inclusive decays

We present $O(\alpha_s^2)$ QCD corrections to the fully-differential decay rate of a $b$-quark into inclusive semileptonic charmless final states.
Our calculation provides genuine two-loop QCD corrections, beyond the Brodsky-Lepage-Mackenzie (BLM) approximation, to any infra-red safe partonic observable that can be probed in $b \to X_u e \bar \nu$ decays. Kinematic cuts that closely match those used in experiments can be fully accounted for. To illustrate these points, we compute the non-BLM corrections to moments of the hadronic invariant mass and the hadronic energy with cuts on the lepton energy and the hadronic invariant mass. Our results remove one of the sources of theoretical uncertainty that affect the extraction of the CKM matrix element $|V_{ub}|$ from charmless inclusive B-decays.

TTP13-005 $\mathcal O(\alpha_s^2)$ corrections to fully-differential top quark decays
We describe a calculation of the fully-differential decay rate of a top quark to a massless $b$-quark and a lepton pair at next-to-next-to-leading order in perturbative QCD. Technical details of the calculation are discussed and selected results for kinematic distributions are shown.

TTP13-003 OPE of the pseudoscalar gluonium correlator in massless QCD to three-loop order
In this paper analytical results are presented for higher-order corrections to coefficient functions of the operator product expansion (OPE) for the correlator of two pseudoscalar gluonium operators \tilde{O}_1=G^{\mu \nu}\tilde{G}_{\mu \nu}. The Wilson coefficient in front of the scalar gluon condensate operator O_1=-1/4 G^{\mu \nu}G_{\mu \nu} is given at three-loop accuracy. The leading coefficient C_0 in front of the unity operator O_0=1 was calculated up to three-loop order some time ago and has been checked independently in this work.
It is interesting to see that the coefficient C_1 in the pseudoscalar case is finite, whereas contact terms appear in C_0 in this case, and in both coefficients C_0 and C_1 in the cases of the scalar gluonium correlator and the energy-momentum tensor correlator. For the corresponding renormalization-group-invariant Wilson coefficients, which are also constructed, the results are partially extended to four-loop accuracy. All results are given in the MSbar scheme at zero temperature.
Some identities on r-central factorial numbers and r-central Bell polynomials

Research, Open Access. Advances in Difference Equations, volume 2019, Article number: 245 (2019)

Abstract In this paper, we introduce the extended r-central factorial numbers of the second and first kinds and the extended r-central Bell polynomials, as extended versions and central analogues of some previously introduced numbers and polynomials. Then we study various properties and identities related to these numbers and polynomials and also their connections.

Introduction For \(n\in \mathbb{N}\cup \{ 0\}\), as is well known, the central factorials \(x^{[n]}\) are defined by

\[x^{[0]}=1,\qquad x^{[n]}=x \Bigl(x+\frac{n}{2}-1 \Bigr) \Bigl(x+\frac{n}{2}-2 \Bigr)\cdots \Bigl(x-\frac{n}{2}+1 \Bigr)\quad (n\geq 1). \tag{1}\]

It is also well known that the central factorial numbers of the second kind \(T(n,k)\) are defined by

\[x^{n}=\sum_{k=0}^{n}T(n,k)\,x^{[k]}, \tag{2}\]

where n is a nonnegative integer. From (2), we can derive the generating function for \(T(n,k)\) (\(0 \leq k\leq n\)) as follows:

\[\frac{1}{k!} \bigl(e^{\frac{t}{2}}-e^{-\frac{t}{2}} \bigr)^{k}=\sum_{n=k}^{\infty}T(n,k)\frac{t^{n}}{n!}. \tag{3}\]

Recently, Kim and Kim [10] considered the central Bell polynomials given by

\[e^{x (e^{\frac{t}{2}}-e^{-\frac{t}{2}} )}=\sum_{n=0}^{\infty}B_{n}^{(c)}(x)\frac{t^{n}}{n!}. \tag{4}\]

When \(x=1\), \(B_{n}^{(c)}=B_{n}^{(c)}(1)\) are called the central Bell numbers. From (4), we can find the Dobinski-like formula for \(B_{n}^{(c)}(x)\):

\[B_{n}^{(c)}(x)=\sum_{l=0}^{\infty}\sum_{j=0}^{\infty}\frac{(-1)^{j}}{l!\,j!} \Bigl(\frac{l-j}{2} \Bigr)^{n}x^{l+j}. \tag{5}\]

The Stirling numbers of the second kind are defined by

\[x^{n}=\sum_{k=0}^{n}S_{2}(n,k)(x)_{k}, \tag{6}\]

where \((x)_{0}=1\), \((x)_{n}=x(x-1)(x-2)\cdots (x-n+1)\) (\(n\geq 1\)). In this paper, we introduce the extended r-central factorial numbers of the second and first kinds and the extended r-central Bell polynomials, and study various properties and identities related to these numbers and polynomials and their connections.
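As a quick concrete illustration of \(T(n,k)\) and identity (2), the sketch below computes the central factorial numbers of the second kind from the standard explicit formula \(T(n,k)=\frac{1}{k!}\sum_{j=0}^{k}(-1)^{j}\binom{k}{j}(\frac{k}{2}-j)^{n}\) (a well-known closed form assumed here, not quoted from this paper) and verifies the defining identity exactly with rational arithmetic:

```python
# Sketch: central factorial numbers of the second kind T(n, k) from the
# standard explicit formula (assumed, analogous to the Stirling-number one),
# plus an exact check of the identity x^n = sum_k T(n,k) x^[k].
from fractions import Fraction
from math import comb, factorial

def T(n, k):
    """Central factorial number of the second kind, as an exact rational."""
    return sum(Fraction((-1) ** j * comb(k, j)) * (Fraction(k, 2) - j) ** n
               for j in range(k + 1)) / factorial(k)

def central_factorial(x, m):
    """x^[m] = x (x + m/2 - 1)(x + m/2 - 2) ... (x - m/2 + 1), with x^[0] = 1."""
    if m == 0:
        return Fraction(1)
    r = Fraction(x)
    for j in range(1, m):
        r *= Fraction(x) + Fraction(m, 2) - j
    return r

# Defining identity x^n = sum_k T(n,k) x^[k], checked at sample points:
for n in range(8):
    for x in range(-3, 4):
        assert Fraction(x) ** n == sum(T(n, k) * central_factorial(x, k)
                                       for k in range(n + 1))

print(T(4, 2), T(6, 4))  # → 1 5
```

The small values agree with the hand computation x⁴ = x^[4] + x^[2] and x⁶ = x^[6] + 5x^[4] + x^[2].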
The extended r-central factorial numbers of the second kind are an extended version of the central factorial numbers of the second kind and also a ‘central analogue’ of the r-Stirling numbers of the second kind; the extended r-central Bell polynomials are an extended version of the central Bell polynomials and also a central analogue of r-Bell polynomials; the extended r-central factorial numbers of the first kind are an extended version of the central factorial numbers of the first kind and a central analogue of the (unsigned) r-Stirling numbers of the first kind. All of these numbers and polynomials were studied before (see [1, 5, 7, 8, 10, 12]). Extended r-central factorial numbers of the second kind and extended r-central Bell polynomials Comparing the coefficients on both sides of (8), we have For any nonnegative integer r, we introduce the extended r-central factorial numbers of the second kind given by Remark 1 In [11], the extended central factorial numbers of the second kind were defined as Note that these numbers are different from the extended r-central factorial numbers of the second kind defined in (10). Therefore, by comparing the coefficients on both sides of (10), the following identity holds. Theorem 1 For \(n,k,r\in \mathbb{N}\cup \{0\}\) with \(n\geq k\), we have Next, we write \(e^{(r+x)t}\) as follows: On the other hand, \(e^{(r+x)t}\) can be written as Theorem 2 For \(n\geq 0\), we have In view of (4), we may now introduce the extended r-central Bell polynomials associated with the extended r-central factorial numbers of the second kind given by Remark 2 In [11], the extended central Bell polynomials were defined as Observe here that these polynomials are different from the extended r-central Bell polynomials in (14). From (14), we note that By the comparison of the coefficients on both sides of (15), we can establish the following theorem. 
Theorem 3 For \(n\geq 0\), we have that Next, we observe that By using the central difference operator δ, which is defined by \(\delta f(x)=f (x+\frac{1}{2} )-f (x-\frac{1}{2} )\), we can show that Therefore, by (20), we obtain the following theorem. Theorem 4 For \(n,k\geq 0\), we have From (14), we have Therefore, by comparing the coefficients on both sides of (22), we get the following identity. Theorem 5 For \(n\geq 0\), we have By (14), it can be checked that Therefore, by comparing the coefficients on both sides of (23), we establish the following theorem. Theorem 6 For \(n\geq 0\), we have Now, we observe that On the other hand, it can be seen that Theorem 7 For \(m,n,k\geq 0\) with \(n\geq m+k\), we have It is known that the generating function of the central factorials is given by \(\sum_{n=0}^{\infty}x^{[n]}\frac{t^{n}}{n!}= (\frac{t}{2}+\sqrt{1+\frac{t^{2}}{4}} )^{2x}\). If we let \(f(t)=2 \log (\frac{t}{2}+\sqrt{1+\frac{t^{2}}{4}} )\), then we can easily show that Alternatively, the term \(e^{(x+r)t}\) is also represented by Theorem 8 For \(n\geq 0\), we have the following identity: Extended r-central factorial numbers of the first kind Throughout this section, we assume that r is any real number. The (unsigned) r-Stirling numbers of the first kind \(S_{1,r}(n+r,k+r)\) are defined by Then Further, we also have The central factorial numbers of the first kind \(t (n,k )\) are defined by On the other hand, we also have Let us define the extended r-central factorial numbers of the first kind as Then we want to derive the generating function of the extended r-central factorial numbers of the first kind. In addition, we also have Finally, we want to show a recurrence relation for the extended r-central factorial numbers of the first kind. This verifies the following theorem.
Theorem 9 For any integers n, k with \(n-1 \geq k \geq 0\), we have the following recurrence relation: Conclusions and discussion In recent years, quite a number of old and new special numbers and polynomials have attracted the attention of many researchers and have been studied by means of generating functions, combinatorial methods, umbral calculus, differential equations, p-adic integrals, p-adic q-integrals, special functions, complex analysis, and so on. In this paper, we introduced the extended r-central factorial numbers of the second and first kinds and the extended r-central Bell polynomials, and studied various properties and identities related to these numbers and polynomials and their connections. This study was done by making use of generating function techniques. The extended r-central factorial numbers of the second kind are an extended version of the central factorial numbers of the second kind and also a ‘central analogue’ of the r-Stirling numbers of the second kind; the extended r-central Bell polynomials are an extended version of the central Bell polynomials and also a central analogue of r-Bell polynomials; the extended r-central factorial numbers of the first kind are an extended version of the central factorial numbers of the first kind and a central analogue of the (unsigned) r-Stirling numbers of the first kind. All of these numbers and polynomials were studied before (see [7, 8, 10, 12]). As one of our next projects, we would like to find some interesting applications of the numbers and polynomials introduced in this paper. References 1. Araci, S., Duran, U., Acikgoz, M.: On weighted q-Daehee polynomials with their applications. Indag. Math. 30(2), 365–374 (2019) 2. Carlitz, L.: Some remarks on the Bell numbers. Fibonacci Q. 18(1), 66–73 (1980) 3. Carlitz, L., Riordan, J.: The divided central differences of zero. Can. J. Math. 15, 94–100 (1963) 4. Gould, H.-W., Quaintance, J.: Implications of Spivey’s Bell number formula. J. Integer Seq.
11(3), Article 08.3.7 (2008) 5. He, Y., Pan, J.: Some recursion formulae for the number of derangements and Bell numbers. J. Math. Res. Appl. 36(1), 15–22 (2016) 6. Kim, D.S., Kim, T.: Some identities of Bell polynomials. Sci. China Math. 58(10), 2095–2104 (2015) 7. Kim, D.S., Kwon, J., Dolgy, D.V., Kim, T.: On central Fubini polynomials associated with central factorial numbers of the second kind. Proc. Jangjeon Math. Soc. 21(4), 589–598 (2018) 8. Kim, T.: A note on central factorial numbers. Proc. Jangjeon Math. Soc. 21(4), 575–588 (2018) 9. Kim, T., Kim, D.S.: Degenerate central Bell numbers and polynomials. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113(3), 2507–2513 (2019) 10. Kim, T., Kim, D.S.: A note on central Bell numbers and polynomials. Russ. J. Math. Phys. (2019, to appear) 11. Kim, T., Kim, D.S., Jang, G.-W., Kwon, J.: Extended central factorial polynomials of the second kind. Adv. Differ. Equ. 2019, 24 (2019). https://doi.org/10.1186/s13662-019-1963-1 12. Kim, T., Kim, D.S., Kwon, H.-I., Kwon, J.: Umbral calculus approach to r-Stirling numbers of the second kind and r-Bell polynomials. J. Comput. Anal. Appl. 27(1), 173–188 (2019) 13. Kim, T., Yao, Y., Kim, D.S., Jang, G.-W.: Degenerate r-Stirling numbers and r-Bell polynomials. Russ. J. Math. Phys. 25(1), 44–58 (2018) 14. Pyo, S.-S.: Degenerate Cauchy numbers and polynomials of the fourth kind. Adv. Stud. Contemp. Math. (Kyungshang) 28(1), 127–138 (2018) 15. Roman, S.: The Umbral Calculus. Pure and Applied Mathematics, vol. 111. Academic Press, New York (1984) 16. Simsek, Y.: Identities and relations related to combinatorial numbers and polynomials. Proc. Jangjeon Math. Soc. 20(1), 127–135 (2017) 17. Simsek, Y.: Identities on the Changhee numbers and Apostol-type Daehee polynomials. Adv. Stud. Contemp. Math. (Kyungshang) 27(2), 199–212 (2017) 18. Zhang, W.: Some identities involving the Euler and the central factorial numbers. Fibonacci Q. 
36(2), 154–157 (1998) Availability of data and materials Not applicable. Funding This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1C1C1003869). Ethics declarations Competing interests The authors declare that they have no competing interests.
Statistics is the study of the collection, analysis, interpretation, presentation, and organization of data. In other words, it is the mathematical discipline of collecting and summarizing data. According to the Merriam-Webster dictionary, statistics is defined as “classified facts representing the conditions of a people in a state – especially the facts that can be stated in numbers or any other tabular or classified arrangement”. According to the statistician Sir Arthur Lyon Bowley, statistics is defined as “Numerical statements of facts in any department of inquiry placed in relation to each other”. Mathematical Statistics Mathematical statistics is the application of Mathematics to Statistics, which was originally conceived as the science of the state — the collection and analysis of facts about a country: its economy, military, population, and so forth. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations and measure-theoretic probability theory. Scope Statistics is used in many sectors such as psychology, geology, sociology, weather forecasting, probability and much more. The goal of statistics is to gain understanding from data; because it focuses on applications, it is distinctively considered a mathematical science. Methods The methods involve collecting, summarizing, analyzing, and interpreting variable numerical data. Some of these methods are listed below. Data collection Data summarization Statistical analysis Data Data is a collection of facts, such as numbers, words, measurements, or observations. Types of Data Qualitative data: descriptive data. Example: She can run fast; he is thin. Quantitative data: numerical information. Example: An octopus is an eight-legged creature. Types of quantitative data Discrete data: has particular fixed values and can be counted. Continuous data: is not fixed but has a range of values and can be measured.
Representation of Data Bar Graph A bar graph represents grouped data with rectangular bars whose lengths are proportional to the values that they represent. The bars can be plotted vertically or horizontally. Pie Chart A type of graph in which a circle is divided into sectors that each represent a proportion of the whole. Line Graph A line graph is a series of data points, called ‘markers’, connected by straight line segments. Pictograph A pictorial symbol for a word or phrase, i.e. showing data with the help of pictures, such as apples, bananas and cherries, where each picture stands for a certain number of items. Histogram A diagram consisting of rectangles whose area is proportional to the frequency of a variable and whose width is equal to the class interval. Frequency Distribution The frequency of a data value is often represented by “f”. A frequency table is constructed by arranging collected data values in ascending order of magnitude with their corresponding frequencies. Formulas used Sample Mean (\(\bar{x}\)) \(\bar{x} = \frac{\sum x}{n}\) Population Mean (\(\mu\)) \(\mu = \frac{\sum x}{N}\) Sample Standard Deviation (s) \(s = \sqrt{\frac{\sum (x-\bar{x})^{2} }{n-1}}\) Population Standard Deviation (\(\sigma\)) \(\sigma = \sqrt{\frac{\sum (x-\mu )^{2}}{N}}\) Sample Variance (\(s^{2}\)) \(s^{2} = \frac{\sum (x_{i}-\bar{x})^{2}}{n-1}\) Population Variance (\(\sigma ^{2}\)) \(\sigma ^{2} = \frac{\sum (x_{i} - \mu)^{2}}{N}\) Range (R) R = largest data value – smallest data value Application Some of the applications of statistics are given below: Applied statistics, theoretical statistics and mathematical statistics Machine learning and data mining Statistics in society Statistical computing Statistics applied to Mathematics or the arts Hope this detailed discussion and these formulas on statistics will help you to solve problems easier and faster. Learn more Maths concepts at BYJU’S with the help of interactive videos.
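The formulas above can be checked with a few lines of Python; the small dataset here is made up purely for illustration:

```python
# Sketch: sample mean, sample variance/standard deviation and range,
# computed directly from the formulas above (illustrative data).
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(data)

sample_mean = sum(data) / n                                    # x̄ = Σx / n
sample_var = sum((x - sample_mean) ** 2 for x in data) / (n - 1)
sample_std = math.sqrt(sample_var)                             # s
data_range = max(data) - min(data)                             # R

# Treating the same values as an entire population gives σ instead of s:
pop_std = statistics.pstdev(data)

print(sample_mean, round(sample_std, 3), pop_std, data_range)
# → 5.0 2.138 2.0 7
```

Note the divisor: n − 1 for a sample (Bessel's correction) versus N for a full population, which is why s and σ differ on the same numbers.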
Coupling Heat Transfer with Subsurface Porous Media Flow In the second part of our Geothermal Energy series, we focus on the coupled heat transport and subsurface flow processes that determine the thermal development of the subsurface due to geothermal heat production. The described processes are demonstrated in an example model of a hydrothermal doublet system. Deep Geothermal Energy: The Big Uncertain Potential One of the greatest challenges in geothermal energy production is minimizing the prospecting risk. How can you be sure that the desired production site is appropriate for, let’s say, 30 years of heat extraction? Usually, only very little information is available about the local subsurface properties and it is typically afflicted with large uncertainties. Over the last decades, numerical models became an important tool to estimate risks by performing parametric studies within reasonable ranges of uncertainty. Today, I will give a brief introduction to the mathematical description of the coupled subsurface flow and heat transport problem that needs to be solved in many geothermal applications. I will also show you how to use COMSOL software as an appropriate tool for studying and forecasting the performance of (hydro-) geothermal systems. Governing Equations in Hydrothermal Systems The heat transport in the subsurface is described by the heat transport equation: (\rho C_p)_{eq} \frac{\partial T}{\partial t} + \rho C_p \mathbf{u} \cdot \nabla T = \nabla \cdot (k_{eq} \nabla T) + Q \quad (1) Heat is balanced by conduction and convection processes and can be generated or lost via the source term, Q. A special feature of the Heat Transfer in Porous Media interface is the implemented Geothermal Heating feature, represented as a domain condition: Q_{geo}. There is also another feature that makes the life of a geothermal energy modeler a little easier. It’s possible to implement an averaged representation of the thermal parameters, composed from the rock matrix and the groundwater using the matrix volume fraction, \theta, as a weighting factor.
You may choose between volume and power law averaging for several immobile solids and fluids. In the case of volume averaging, the volumetric heat capacity in the heat transport equation becomes: (\rho C_p)_{eq} = \theta \rho_s C_{p,s} + (1-\theta) \rho C_p \quad (2) and the thermal conductivity becomes: k_{eq} = \theta k_s + (1-\theta) k \quad (3) Solving the heat transport properly requires incorporating the flow field. Generally, there can be various situations in the subsurface requiring different approaches to describe the flow mathematically. If the focus is on the micro scale and you want to resolve the flow in the pore space, you need to solve the creeping flow or Stokes flow equations. In partially saturated zones, you would solve Richards’ equation, as is often done in studies concerning environmental pollution (see our past Simulating Pesticide Runoff, the Effects of Aldicarb blog post, for instance). However, the fully-saturated and mainly pressure-driven flows in deep geothermal strata are sufficiently described by Darcy’s law: \mathbf{u} = -\frac{\kappa}{\mu} \nabla p \quad (4) where the velocity field, \mathbf{u}, depends on the permeability, \kappa, and the fluid’s dynamic viscosity, \mu, and is driven by the pressure gradient, \nabla p. Darcy’s law is then combined with the continuity equation: \frac{\partial}{\partial t}(\rho \epsilon_p) + \nabla \cdot (\rho \mathbf{u}) = Q_m \quad (5) If your scenario concerns long geothermal time scales, the time dependence due to storage effects in the flow is negligible. Therefore, the first term on the left-hand side of the equation above vanishes because the density, \rho, and the porosity, \epsilon_p, can be assumed to be constant. Usually, the temperature dependencies of the hydraulic properties are negligible. Thus, the (stationary) flow equations are independent of the (time-dependent) heat transfer equations. In some cases, especially if the number of degrees of freedom is large, it can make sense to utilize this independence by splitting the problem into one stationary and one time-dependent study step. Fracture Flow and Poroelasticity Fracture flow may locally dominate the flow regime in geothermal systems, such as in karst aquifer systems.
The Subsurface Flow Module offers the Fracture Flow interface for a 2D representation of the Darcy flow field in fractures and cracks. Hydrothermal heat extraction systems usually consist of one or more injection and production wells. Those are in many cases realized as separate boreholes, but the modern approach is to create one (or more) multilateral wells. There are even tactics that consist of single boreholes with separate injection and production zones. Note that artificial pressure changes due to water injection and extraction can influence the structure of the porous medium and produce hydraulic fracturing. To take these effects into account, you can perform poroelastic analyses, but we will not consider these here. COMSOL Model of a Hydrothermal Application: A Geothermal Doublet It is easy to set up a COMSOL Multiphysics model that features long time predictions for a hydro-geothermal application. The model region contains three geologic layers with different thermal and hydraulic properties in a box whose edges are roughly 500 m long. The box represents a section of a geothermal production site that is bounded by a large fault zone. The layer elevations are interpolation functions from an external data set. The concerned aquifer is fully saturated and confined on top and bottom by aquitards (impermeable beds). The temperature distribution is generally a factor of uncertainty, but a good guess is to assume a geothermal gradient of 0.03 [°C/m], leading to an initial temperature distribution T_0(z) = 10 [°C] – z·0.03 [°C/m]. Hydrothermal doublet system in a layered subsurface domain, bounded by a fault zone. The edge is about 500 meters long. The left borehole is the injection well; the production well is on the right. The lateral distance between the wells is about 120 meters. COMSOL Multiphysics creates a mesh that is perfectly fine for this approach, except for one detail — the mesh on the wells is refined to resolve the expected high gradients in that area.
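Before looking at the simulation results, the basic relations above can be sanity-checked in a few lines of plain Python (not COMSOL). All parameter values below are illustrative assumptions, not data from the actual model:

```python
# Sketch: volume averaging (Eqs. 2-3), the Darcy velocity (Eq. 4) and the
# assumed initial temperature profile T_0(z). Parameter values are
# illustrative assumptions only.

theta = 0.9                                # solid (matrix) volume fraction
rho_s, cp_s, k_s = 2600.0, 900.0, 3.0      # rock matrix: kg/m^3, J/(kg K), W/(m K)
rho_f, cp_f, k_f = 1000.0, 4200.0, 0.6     # groundwater

rhoCp_eq = theta * rho_s * cp_s + (1 - theta) * rho_f * cp_f  # Eq. (2)
k_eq = theta * k_s + (1 - theta) * k_f                        # Eq. (3)

kappa, mu = 1e-13, 1e-3     # permeability [m^2], dynamic viscosity [Pa s]
dp_dx = -1e3                # pressure gradient [Pa/m]
u = -(kappa / mu) * dp_dx   # Darcy velocity [m/s], Eq. (4)

def T0(z):
    """Initial temperature [degC] at elevation z [m] (z negative downward)."""
    return 10.0 - 0.03 * z

print(rhoCp_eq, k_eq, u, T0(-2000.0))
```

Even this toy calculation shows the typical orders of magnitude: Darcy velocities of around 10⁻⁷ m/s, and temperatures of roughly 70 °C at 2 km depth for the assumed gradient.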
Now, let’s crank the heat up! Geothermal groundwater is pumped (produced) through the production well on the right at a rate of 50 [l/s]. The well is implemented as a cylinder that was cut out of the geometry to allow inlet and outlet boundary conditions for the flow. The extracted water is, after using it for heat or power generation, re-injected by the left well at the same rate, but with a lower temperature (in this case 5 [°C]). The resulting flow field and temperature distribution after 30 years of heat production are displayed below: Result after 30 years of heat production: Hydraulic connection between the production and injection zones and temperature distribution along the flow paths. Note that only the injection and production zones of the boreholes are considered. The rest of the boreholes are not implemented, in order to reduce the meshing effort. The model is a suitable tool for estimating the development of a geothermal site under varied conditions. For example, how is the production temperature affected by the lateral distance of the wells? Is it worthwhile to reach a large spread or is a moderate distance sufficient? This can be studied by performing a parametric study by varying the well distance: Flow paths and temperature distribution between the wells for different lateral distances. The graph shows the production temperature after reaching stationary conditions as a function of the lateral distance. With this model, different borehole systems can easily be realized just by changing the positions of the injection/production cylinders. For example, here are the results of a single-borehole system: Results of a single-borehole approach after 30 years of heat production. The vertical distance between the injection (up) and production (down) zones is 130 meters. So far, we have only looked at aquifers without ambient groundwater movement. What happens if there is a hydraulic gradient that leads to groundwater flow? 
The following figure shows the same situation as the figure above, except that now there is a hydraulic head gradient of \nabla H = 0.01 [m/m], leading to a superposed flow field: Single borehole after 30 years of heat production and overlapping groundwater flow due to a horizontal pressure gradient. Other Posts in This Series Modeling Geothermal Processes with COMSOL Software Geothermal Energy: Using the Earth to Heat and Cool Buildings Further Reading Download the Geothermal Doublet tutorial Explore the Subsurface Flow Module Related papers and posters presented at the COMSOL Conference: Hydrodynamic and Thermal Modeling in a Deep Geothermal Aquifer, Faulted Sedimentary Basin, France Simulation of Deep Geothermal Heat Production Full Coupling of Flow, Thermal and Mechanical Effects in COMSOL Multiphysics® for Simulation of Enhanced Geothermal Reservoirs Multiphysics Between Deep Geothermal Water Cycle, Surface Heat Exchanger Cycle and Geothermal Power Plant Cycle Modelling Reservoir Stimulation in Enhanced Geothermal Systems
Difference between revisions of "Probability Seminar" (→March 28, Shamgar Gurevitch UW-Madison) (→February 21, Diane Holcomb, KTH) Line 39: Line 39: + + + == <span style="color:red"> Wednesday, February 27 at 1:10pm</span> [http://www.math.purdue.edu/~peterson/ Jon Peterson], [http://www.math.purdue.edu/ Purdue] == == <span style="color:red"> Wednesday, February 27 at 1:10pm</span> [http://www.math.purdue.edu/~peterson/ Jon Peterson], [http://www.math.purdue.edu/ Purdue] == Revision as of 20:43, 12 February 2019 Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu January 31, Oanh Nguyen, Princeton Title: Survival and extinction of epidemics on random graphs with general degrees Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly. 
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University Title: When particle systems meet PDEs Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems. Title: Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2. February 14, Timo Seppäläinen, UW-Madison Title: Geometry of the corner growth model Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah). February 21, Diane Holcomb, KTH Title: On the centered maximum of the Sine beta process Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process.
The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette. Wednesday, February 27 at 1:10pm Jon Peterson, Purdue March 7, TBA March 14, TBA March 21, Spring Break, No seminar March 28, Shamgar Gurevitch UW-Madison Title: Harmonic Analysis on GLn over finite fields, and Random Walks Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
In the paper we show that graded bundles (cf. [2]), which are a particular kind of graded manifold (cf. [3]), can be ‘fully linearised’ or ‘polarised’. That is, given any graded bundle of degree k, we can associate with it in a functorial way a k-fold vector bundle – we call this the full linearisation functor. In the paper [1], we fully characterise this functor. Hopefully, this notion will prove fruitful in applications, as k-fold vector bundles are nice objects that have various equivalent descriptions. Graded Bundles Graded bundles are particular examples of polynomial bundles: that is, we have a fibre bundle whose typical fibres are \(\mathbb{R}^{N}\) and whose admissible changes of local coordinates are polynomial. A little more specifically, a graded bundle \(F\) is a polynomial bundle for which the base coordinates are assigned a weight of zero, while the fibre coordinates are assigned a weight in \(\mathbb{N} \setminus \{0\}\). Moreover, we require that admissible changes of local coordinates respect the weight. The degree of a graded bundle is the highest weight that we assign to the fibre coordinates. Any graded bundle admits a series of affine fibrations \(F = F_k \rightarrow F_{k-1} \rightarrow \cdots \rightarrow F_{1} \rightarrow F_{0} =M\), which is locally given by projecting out the higher weight coordinates. For example, a graded bundle of degree 2 admits local coordinates \((x, y ,z)\) of weight 0, 1, and 2 respectively. Changes of coordinates are then, ‘symbolically’, \(x’ = x'(x)\), \(y’ = y T(x)\), \(z’ = z G(x) + \frac{1}{2} y y H(x)\), which clearly preserve the weight. We then have a series of fibrations \(F_2 \rightarrow F_1 \rightarrow M\), given (locally) by \((x,y,z) \mapsto (x,y) \mapsto (x)\). Linearisation The basic idea of the full linearisation is quite simple – I won’t go into details here. Recall the notion of polarisation of a homogeneous polynomial.
The idea is that one adjoins new variables in order to produce a multi-linear form from a homogeneous polynomial. The original polynomial can be recovered by examining the diagonal. As graded bundles are polynomial bundles whose changes of local coordinates respect the weight, we too can apply this idea to fully linearise a graded bundle. That is, we can enlarge the manifold by including more and more coordinates in the correct way so as to linearise the changes of coordinates. In this way we obtain a k-fold vector bundle that contains the original graded bundle, which we take to be of degree k. So, how do we decide on these extra coordinates? The method is to differentiate, reduce and project. That is, we should apply the tangent functor as many times as is needed and then look for a substructure thereof. So, let us look at the degree 2 case, which is simple enough to see what is going on. In particular, we only need to differentiate once, but you can quickly convince yourself that for higher degrees we just repeat the procedure. The tangent bundle \( T F_2\) – which we consider as a double graded bundle – admits local coordinates \((\underbrace{x}_{(0,0)}, \; \underbrace{y}_{(1,0)} ,\; \underbrace{z}_{(2,0)}, \; \underbrace{\dot{x}}_{(0,1)}, \; \underbrace{\dot{y}}_{(1,1)} ,\; \underbrace{\dot{z}}_{(2,1)})\). The changes of coordinates for the ‘dotted’ coordinates are inherited from the changes of coordinates on \(F_2\): \(\dot{x}’ = \dot{x}\frac{\partial x’}{\partial x}\), \( \dot{y}’ = \dot{y}T(x) + y \dot{x} \frac{\partial T}{\partial x}\), \(\dot{z}’ = \dot{z}G(x) + z \dot{x}\frac{\partial G}{\partial x} + y \dot{y}H(x) + \frac{1}{2}y y \dot{x}\frac{\partial H}{\partial x}\). Thus we have differentiated. Clearly we can restrict to the vertical bundle while still respecting the assignment of weights – one inherited from \(F_2\), and the other coming from the vector bundle structure of a tangent bundle.
In fact, what we need to do is shift the first weight by minus the second weight. Technically, this means that we are no longer dealing with graded bundles: the coordinate \(\dot{x}\) will be of bi-weight (-1,1). However, the amazing thing here is that we can set this coordinate to zero – as we should do when looking at the vertical bundle – and remain in the category of graded bundles. That is, not only is setting \(\dot{x}=0\) well defined, as you can see from the coordinate transformations, but it also keeps us in the right category. We have performed a reduction of the (shifted) tangent bundle. Thus we arrive at a double graded bundle \(VF_2\), which admits local coordinates \((\underbrace{x}_{(0,0)}, \; \underbrace{y}_{(1,0)}, \; \underbrace{z}_{(2,0)}, \; \underbrace{\dot{y}}_{(0,1)}, \; \underbrace{\dot{z}}_{(1,1)})\), and the obvious admissible changes thereof. Now, observe that \(z\), of bi-weight (2,0), is the coordinate with the highest first component of the bi-weight. Thus, as we have the structure of a graded bundle, we can project to a graded bundle of one lower degree, \(\pi : VF_2 \rightarrow l(F_2)\). The resulting double vector bundle is what we will call the linearisation of \(F_2\). So we have constructed a manifold with coordinates \((\underbrace{x}_{(0,0)}, \; \underbrace{y}_{(1,0)}, \; \underbrace{\dot{y}}_{(0,1)}, \; \underbrace{\dot{z}}_{(1,1)})\), with changes of coordinates \(x' = x'(x)\), \(y' = y T(x)\), \( \dot{y}' = \dot{y}T(x)\), \(\dot{z}' = \dot{z}G(x) + y \dot{y}H(x)\). Then, by comparison with the changes of local coordinates on \(F_2\), you see that we have a canonical embedding of the original graded bundle in its linearisation as a 'diagonal', \(\iota : F_2 \rightarrow l(F_2)\), by setting \(\dot{y} = y\) and \(\dot{z} = 2 z\). References [1] Andrew James Bruce, Janusz Grabowski and Mikołaj Rotkiewicz, Polarisation of Graded Bundles, SIGMA 12 (2016), 106, 30 pages.
[2] Janusz Grabowski and Mikołaj Rotkiewicz, Graded bundles and homogeneity structures, J. Geom. Phys. 62 (2012), 21-36. [3] Th. Th. Voronov, Graded manifolds and Drinfeld doubles for Lie bialgebroids, in Quantization, Poisson Brackets and Beyond (Manchester, 2001), Contemp. Math., Vol. 315, Amer. Math. Soc., Providence, RI, 2002, 131-168.
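As a closing sanity check of the diagonal embedding \(\iota : F_2 \rightarrow l(F_2)\) described above, one can verify numerically that setting \(\dot{y}=y\) and \(\dot{z}=2z\) is compatible with the two transformation laws. The transition functions T, G, H below are arbitrary stand-ins of my own choosing:

```python
# One-dimensional numeric illustration of the diagonal embedding
# iota: F_2 -> l(F_2), (x, y, z) |-> (x, y, ydot = y, zdot = 2z).
# T, G, H are stand-ins for the transition functions (my own choices).
def T(x): return 3.0 + x
def G(x): return 2.0 - x
def H(x): return 1.0 + 2.0 * x

def change_F2(x, y, z):
    """Coordinate change on the degree-2 graded bundle F_2."""
    return (x, y * T(x), z * G(x) + 0.5 * y * y * H(x))

def change_lF2(x, y, ydot, zdot):
    """Induced linear coordinate change on the linearisation l(F_2)."""
    return (x, y * T(x), ydot * T(x), zdot * G(x) + y * ydot * H(x))

# On the diagonal ydot = y, zdot = 2z the linear law gives zdot' = 2z',
# so the embedding is well defined in every coordinate chart.
x, y, z = 0.7, 1.3, -0.4
_, y1, z1 = change_F2(x, y, z)
_, _, ydot2, zdot2 = change_lF2(x, y, y, 2.0 * z)
assert abs(ydot2 - y1) < 1e-12 and abs(zdot2 - 2.0 * z1) < 1e-12
```

The factor 2 in \(\dot{z} = 2z\) is exactly what absorbs the \(\frac{1}{2}\) in the quadratic transformation law.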
Designing context-sensitive grammars (with productions $\alpha X \gamma \to \alpha \beta \gamma$ where $\beta \neq \varepsilon$) is no fun at all. It is slightly more convenient to construct monotone grammars (with productions $\alpha \to \beta$ where $|\alpha| \le |\beta|$); there is a standard construction from one to the other. But even then the construction is full of tiny details that have to be observed. It is relatively easy to construct a grammar for $\{ a^ib^jc^{ij} \mid i,j\ge 0 \}$. First generate strings of the form $LB^jA^i$, just as in a context-free grammar. Add the rule $BA \to ABC$, so that if every $A$ moves over every $B$ we generate $ij$ $C$'s. But now we have $C$'s in between, so let the $B$'s move over the $C$'s whenever needed. Then we have to rewrite $A,B,C$ into $a,b,c$ respectively, provided they are in the right order. This can be done using the boundary symbol $L$ and the productions $LA \to La$, $aA \to aa$, $aB\to ab$, etcetera, like a finite state automaton that accepts $a^*b^*c^*$. Now what about the $L$? We used it to mark the start of the string, and according to the rules of monotone grammars it cannot be deleted. Well, let the $L$ double as one of the $a$'s. But then we have to carefully match the $C$'s with the new design: thus start by generating the context-free language $aB^jA^iC^j$. Done? Not yet. We now always have at least one $a$, i.e., $i>0$, so the cases with $i=0$ have to be added separately. The problem with such a construction is its correctness: productions can be applied in any order, not only the one order you had in mind during its construction.
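Given that warning, the only reassuring option is an exhaustive check of small cases. The sketch below is my own encoding of the shuffling idea (with the original boundary symbol $L$, before the redesign), not the author's exact grammar; note that in this encoding an extra production $CA \to AC$ appears to be necessary, since a trailing $A$ can otherwise get stuck behind freshly produced $C$'s:

```python
from collections import deque
from itertools import product

# My own encoding of the sketch above, not the author's exact grammar.
RULES = [
    ("BA", "ABC"),  # an A moves left over a B, depositing a C
    ("CB", "BC"),   # a B moves left over a C
    ("CA", "AC"),   # an A moves left over a C (needed in this encoding)
    ("LA", "La"), ("aA", "aa"),                # lowercase the A-block
    ("LB", "Lb"), ("aB", "ab"), ("bB", "bb"),  # then the B-block
    ("bC", "bc"), ("cC", "cc"),                # then the C-block
]

def derivable(start, target):
    """BFS over sentential forms; monotone rules never shrink a string."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for lhs, rhs in RULES:
            i = s.find(lhs)
            while i != -1:
                t = s[:i] + rhs + s[i + len(lhs):]
                if len(t) <= len(target) and t not in seen:
                    seen.add(t)
                    queue.append(t)
                i = s.find(lhs, i + 1)
    return False

# Exhaustively verify small cases of L a^i b^j c^(ij):
for i, j in product(range(3), repeat=2):
    start = "L" + "B" * j + "A" * i
    target = "L" + "a" * i + "b" * j + "c" * (i * j)
    assert derivable(start, target), (i, j)
```

The breadth-first search applies the productions in every possible order, which is precisely the point: it catches derivations the designer never considered.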
Let $f:V\rightarrow\mathbb{Z}_k$ be a vertex labeling of a hypergraph $H=(V,E)$. This labeling induces an edge labeling of $H$ defined by $f(e)=\sum_{v\in e}f(v)$, where the sum is taken modulo $k$. We say that $f$ is $k$-cordial if for all $a, b \in \mathbb{Z}_k$ the number of vertices with label $a$ differs by at most $1$ from the number of vertices with label $b$, and the analogous condition also holds for the labels of edges. If $H$ admits a $k$-cordial labeling then $H$ is called $k$-cordial. The existence of $k$-cordial labelings has been investigated for graphs for decades. Hovey (1991) conjectured that every tree $T$ is $k$-cordial for every $k\ge 2$. Cichacz, Görlich and Tuza (2013) were the first to investigate the analogous problem for hypertrees, that is, connected hypergraphs without cycles. The main results of their work are that every $k$-uniform hypertree is $k$-cordial for every $k\ge 2$ and that every hypertree with an odd number $n$ of vertices or an odd number $m$ of edges is $2$-cordial. Moreover, they conjectured […] Section: Graph Theory Edge-connectivity is a classic measure of the reliability of a network in the presence of edge failures. The $k$-restricted edge-connectivity is one of the refined indicators of the fault tolerance of large networks. Matching preclusion and conditional matching preclusion are two important measures of the robustness of networks in edge fault scenarios. In this paper, we show that the DCell network $D_{k,n}$ is super-$\lambda$ for $k\geq2$ and $n\geq2$, super-$\lambda_2$ for $k\geq3$ and $n\geq2$, or $k=2$ and $n=2$, and super-$\lambda_3$ for $k\geq4$ and $n\geq3$. Moreover, as an application of $k$-restricted edge-connectivity, we study the matching preclusion number and conditional matching preclusion number, and characterize the corresponding optimal solutions of $D_{k,n}$. In particular, we have shown that $D_{1,n}$ is isomorphic to the $(n,k)$-star graph $S_{n+1,2}$ for $n\geq2$.
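The balance condition in the definition of $k$-cordiality above is straightforward to check by brute force. The following is a minimal sketch of mine (vertices $0,\dots,n-1$, labels in $\mathbb{Z}_k$), not code from any of the papers:

```python
from collections import Counter
from itertools import product

# Brute-force check of the k-cordial condition (my own sketch).
# An edge is any tuple of vertices; labels is a sequence indexed by vertex.
def is_k_cordial_labeling(edges, labels, k):
    def balanced(values):
        c = Counter(values)
        counts = [c.get(a, 0) for a in range(k)]
        return max(counts) - min(counts) <= 1
    # induced edge labels: sum of vertex labels modulo k
    edge_labels = [sum(labels[v] for v in e) % k for e in edges]
    return balanced(labels) and balanced(edge_labels)

# A path on 3 vertices (a tiny 2-uniform hypertree) is 2-cordial:
edges = [(0, 1), (1, 2)]
assert any(is_k_cordial_labeling(edges, f, 2)
           for f in product(range(2), repeat=3))
```

Enumerating all $k^n$ labelings like this is of course only feasible for very small hypergraphs, which is why the structural results quoted above are needed.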
Section: Graph Theory Ear decompositions of graphs are a standard concept related to several major problems in graph theory, like the Traveling Salesman Problem. For example, the Hamiltonian Cycle Problem, which is notoriously NP-complete, is equivalent to deciding whether a given graph admits an ear decomposition in which all ears except one are trivial (i.e. of length 1). On the other hand, a famous result of Lovász states that deciding whether a graph admits an ear decomposition with all ears of odd length can be done in polynomial time. In this paper, we study the complexity of deciding whether a graph admits an ear decomposition with prescribed ear lengths. We prove that deciding whether a graph admits an ear decomposition with all ears of length at most $\ell$ is polynomial-time solvable for every fixed positive integer $\ell$. On the other hand, deciding whether a graph admits an ear decomposition without ears of length in $F$ is NP-complete for any finite set $F$ of positive integers. We also prove that, for any k ≥ […] Section: Graph Theory In the geodetic convexity, a set of vertices $S$ of a graph $G$ is $\textit{convex}$ if all vertices belonging to any shortest path between two vertices of $S$ lie in $S$. The cardinality $con(G)$ of a maximum proper convex set $S$ of $G$ is the $\textit{convexity number}$ of $G$. The $\textit{complementary prism}$ $G\overline{G}$ of a graph $G$ arises from the disjoint union of the graph $G$ and $\overline{G}$ by adding the edges of a perfect matching between the corresponding vertices of $G$ and $\overline{G}$. In this work, we prove that the decision problem related to the convexity number is NP-complete even restricted to complementary prisms, we determine $con(G\overline{G})$ when $G$ is disconnected or $G$ is a cograph, and we present a lower bound when $diam(G) \neq 3$. Section: Graph Theory A graph $G$ is almost hypohamiltonian (a.h.)
if $G$ is non-hamiltonian, there exists a vertex $w$ in $G$ such that $G - w$ is non-hamiltonian, and $G - v$ is hamiltonian for every vertex $v \ne w$ in $G$. The second author asked in [J. Graph Theory 79 (2015) 63--81] for all orders for which a.h. graphs exist. Here we solve this problem. To this end, we present a specialised algorithm which generates complete sets of a.h. graphs for various orders. Furthermore, we show that the smallest cubic a.h. graphs have order 26. We provide a lower bound for the order of the smallest planar a.h. graph and improve the upper bound for the order of the smallest planar a.h. graph containing a cubic vertex. We also determine the smallest planar a.h. graphs of girth 5, both in the general and cubic case. Finally, we extend a result of Steffen on snarks and improve two bounds on longest paths and longest cycles in polyhedral graphs due to Jooyandeh, McKay, Östergård, Pettersson, and the […] Section: Graph Theory The \emph{matching preclusion number} of a graph is the minimum number of edges whose deletion results in a graph that has neither perfect matchings nor almost perfect matchings. As a generalization, Liu and Liu recently introduced the concept of fractional matching preclusion number. The \emph{fractional matching preclusion number} of $G$ is the minimum number of edges whose deletion leaves the resulting graph without a fractional perfect matching. The \emph{fractional strong matching preclusion number} of $G$ is the minimum number of vertices and edges whose deletion leaves the resulting graph without a fractional perfect matching. In this paper, we obtain the fractional matching preclusion number and the fractional strong matching preclusion number for generalized augmented cubes. In addition, all the optimal fractional strong matching preclusion sets of these graphs are categorized.
Section: Distributed Computing and Networking For integers $k\ge 2$ and $\ell\ge 0$, a $k$-uniform hypergraph is called a loose path of length $\ell$, and denoted by $P_\ell^{(k)}$, if it consists of $\ell $ edges $e_1,\dots,e_\ell$ such that $|e_i\cap e_j|=1$ if $|i-j|=1$ and $e_i\cap e_j=\emptyset$ if $|i-j|\ge2$. In other words, each pair of consecutive edges intersects on a single vertex, while all other pairs are disjoint. Let $R(P_\ell^{(k)};r)$ be the minimum integer $n$ such that every $r$-edge-coloring of the complete $k$-uniform hypergraph $K_n^{(k)}$ yields a monochromatic copy of $P_\ell^{(k)}$. In this paper we are mostly interested in constructive upper bounds on $R(P_\ell^{(k)};r)$, meaning that at the cost of possibly enlarging the order of the complete hypergraph, we would like to efficiently find a monochromatic copy of $P_\ell^{(k)}$ in every coloring. In particular, we show that there is a constant $c>0$ such that for all $k\ge 2$, $\ell\ge3$, $2\le r\le k-1$, and $n\ge k(\ell+1)r(1+\ln(r))$, there is […] Section: Graph Theory A centroid node in a tree is a node for which the sum of the distances to all other nodes attains its minimum, or equivalently a node with the property that none of its branches contains more than half of the other nodes. We generalise some known results regarding the behaviour of centroid nodes in random recursive trees (due to Moon) to the class of very simple increasing trees, which also includes the families of plane-oriented and $d$-ary increasing trees. In particular, we derive limits of distributions and moments for the depth and label of the centroid node nearest to the root, as well as for the size of the subtree rooted at this node. Section: Combinatorics The satisfiability problem is known to be $\mathbf{NP}$-complete in general and for many restricted cases. One way to restrict instances of $k$-SAT is to limit the number of times a variable can occur.
It was shown that for an instance of 4-SAT with the property that every variable appears in exactly 4 clauses (2 times negated and 2 times not negated), determining whether there is an assignment for the variables such that every clause contains exactly two true variables and two false variables is $\mathbf{NP}$-complete. In this work, we show that deciding the satisfiability of 3-SAT with the property that every variable appears in exactly four clauses (two times negated and two times not negated), and each clause contains at least two distinct variables, is $\mathbf{NP}$-complete. We call this problem $(2/2/3)$-SAT. For an $r$-regular graph $G = (V,E)$ with $r\geq 3$, it was asked in [Discrete Appl. Math., 160(15):2142--2146, 2012] to determine whether for a given independent set $T […] Section: Graph Theory We consider the constrained graph alignment problem, which has applications in biological network analysis. Given two input graphs $G_1=(V_1,E_1), G_2=(V_2,E_2)$, a pair of vertex mappings induces an {\it edge conservation} if the vertex pairs are adjacent in their respective graphs. The goal is to provide a one-to-one mapping between the vertices of the input graphs in order to maximize edge conservation. However, the allowed mappings are restricted since each vertex from $V_1$ (resp. $V_2$) is allowed to be mapped to at most $m_1$ (resp. $m_2$) specified vertices in $V_2$ (resp. $V_1$). Most of the results in this paper deal with the case $m_2=1$, which has attracted the most attention in the related literature. We formulate the problem as a maximum independent set problem in a related {\em conflict graph} and investigate structural properties of this graph in terms of forbidden subgraphs.
We are interested, in particular, in excluding certain wheels, fans, cliques or claws (all […] Section: Discrete Algorithms The problem of determining the number of "flooding operations" required to make a given coloured graph monochromatic in the one-player combinatorial game Flood-It has been studied extensively from an algorithmic point of view, but basic questions about the maximum number of moves that might be required in the worst case remain unanswered. We begin a systematic investigation of such questions, with the goal of determining, for a given graph, the maximum number of moves that may be required, taken over all possible colourings. We give several upper and lower bounds on this quantity for arbitrary graphs and show that all of the bounds are tight for trees; we also investigate how much the upper bounds can be improved if we restrict our attention to graphs with higher edge-density. Section: Graph Theory We show that a one-way quantum one-counter automaton with zero error is more powerful than its probabilistic counterpart on promise problems. Then, we obtain a similar separation result between a Las Vegas one-way probabilistic one-counter automaton and a one-way deterministic one-counter automaton. We also obtain new results on classical counter automata regarding language recognition. It was conjectured that one-way probabilistic one blind-counter automata cannot recognize the Kleene closure of the equality language [A. Yakaryilmaz: Superiority of one-way and realtime quantum machines. RAIRO - Theor. Inf. and Applic. 46(4): 615-641 (2012)]. We show that this conjecture is false, and also show several separation results for blind/non-blind counter automata. Section: Automata, Logic and Semantics Whitney's theorem states that every 3-connected planar graph is uniquely embeddable on the sphere. On the other hand, such a graph may have many inequivalent embeddings on another surface.
We shall characterize the structures of a $3$-connected $3$-regular planar graph $G$ embedded on the projective plane, the torus and the Klein bottle, and give a one-to-one correspondence between inequivalent embeddings of $G$ on each surface and some subgraphs of the dual of $G$ embedded on the sphere. These results enable us to give explicit bounds for the number of inequivalent embeddings of $G$ on each surface, and propose effective algorithms for enumerating and counting these embeddings. Section: Graph Theory After fixing a canonical ordering (or labeling) of the elements of a finite poset, one can associate each linear extension of the poset with a permutation. Some recent papers consider specific families of posets and ask how many linear extensions give rise to permutations that avoid certain patterns. We build off of two of these papers. We first consider pattern avoidance in $k$-ary heaps, where we obtain a general result that proves a conjecture of Levin, Pudwell, Riehl, and Sandberg in a special case. We then prove some conjectures that Anderson, Egge, Riehl, Ryan, Steinke, and Vaughan made about pattern-avoiding linear extensions of rectangular posets. Section: Combinatorics
In section VII, The Capacity of a channel in the presence of white thermal noise, of his 1949 paper Communication in the presence of noise, C. E. Shannon says that for a signal with average power P, the total number of reasonably distinguishable amplitudes in the presence of white noise with average power N is given by $$K\,\sqrt{\frac{P+N}{N}},$$ where K is a small constant in the neighborhood of unity depending on how the phrase "reasonably well" is interpreted. I wonder where the above formula comes from and whether it has any deeper theoretical background, since to me it seems like a simple metric: the Rx signal's amplitude has a standard deviation of $$\sigma_\text{Rx}=\sqrt{P+N},$$ the noise's amplitude standard deviation is $$\sigma_\text{noise} = \sqrt{N},$$ so the above formula gives the factor by which the Rx signal's amplitude is, on average, larger than the noise amplitude that corrupts that signal. But why not compare the Tx signal's amplitudes to the noise amplitudes?
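The premise used in the question, that the received amplitude has standard deviation $\sqrt{P+N}$ when signal and noise are independent, is easy to confirm numerically; the powers below are arbitrary illustrative values:

```python
import numpy as np

# Check numerically that independent signal (power P) plus noise (power N)
# has amplitude standard deviation sqrt(P + N); P, N are arbitrary choices.
rng = np.random.default_rng(1)
P, N, n = 4.0, 1.0, 1_000_000
tx = rng.normal(0.0, np.sqrt(P), n)      # transmitted signal, power P
noise = rng.normal(0.0, np.sqrt(N), n)   # white noise, power N
rx = tx + noise
assert abs(rx.std() - np.sqrt(P + N)) < 0.01

# Shannon's count of reasonably distinguishable amplitudes (with K ~ 1):
levels = np.sqrt((P + N) / N)
print(levels)  # about 2.236 for P = 4, N = 1
```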
In a circuit with just a resistor and a capacitor, I'm trying to figure out which voltage is being referred to as lagging the current (which is the same current throughout the entire series AC circuit). The voltage which leads or lags is the same voltage referred to by $$I(t)=C\frac{dV(t)}{dt},$$ which comes from the derivative of $$V(t) = \frac{Q(t)}{C} = \frac{1}{C}\int_{t_0}^t I(\tau) \mathrm{d}\tau + V(t_0).$$ Therefore the voltage which lags is the voltage drop across the capacitor, because the charges above are added to the plates of the capacitor, so the voltage refers to it. In an AC circuit, the voltage source is forced to alternate as a cosine wave, and the phase difference between the source current driven by $$V_0\cos(\omega t)\tag{1}$$ and the voltage which I'm asking about comes from: $$I = C \frac{dV}{dt} = -\omega {C}{V_\text{0}}\sin(\omega t),\tag{2}$$ which is the same as $$I = {I_\text{0}}{\cos({\omega t} + {90^\circ})}.$$ The voltage used in this formula was the source voltage, not the voltage drop across the capacitor, which is what defines capacitance. The voltage across the capacitor is not instantaneous; in fact it rises exponentially toward the applied voltage, as shown in a constant DC voltage circuit, where: $$V_0 = v_\text{resistor}(t) + v_\text{capacitor}(t) = i(t)R + \frac{1}{C}\int_{t_0}^t i(\tau) \mathrm{d}\tau.$$ Taking the derivative: $$RC\frac{\mathrm{d}i(t)}{\mathrm{d}t} + i(t) = 0.$$ Solving the first-order equation: $$I(t) = \frac{V_0}{R} \cdot e^{\frac{-t}{\tau_0}}.$$ Assuming the voltage across the resistor is initially $V_0$, the voltage of the capacitor is: $$V(t) = V_0 \left( 1 - e^{\frac{-t}{\tau_0}}\right).$$ Thus I'm confused about where the $90^\circ$ voltage lag comes from. If it's because of the derivative of the source voltage, why is formula 2 even applicable to the source voltage? Second question: what is the formula for the voltage reached by the capacitor in an AC circuit? It appears as if it is the source max voltage, but I don't believe/understand that.
Here is an identical derivation solved using a sine source voltage: $$I_C=I_{max}\sin(\omega t +90^\circ).$$ In the above derivation, the source voltage is again mixed with the formula for the voltage stored across a capacitor, or I'm to believe the maximum source voltage is somehow reached on the exponential approach to the voltage on a capacitor during a cycle. Could someone either explain why the source voltage is used as if it were the capacitor voltage (or the analogous, reversed case for an inductor), or refer me to a source that explains it?
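For what it's worth, the 90° relation itself can be checked numerically for an ideal capacitor connected directly across the source (so the capacitor voltage equals the source voltage); the component values below are arbitrary:

```python
import numpy as np

# Numeric check (my own, not from the question): for an ideal capacitor
# placed directly across the source, v(t) = V0 cos(wt), the current
# i = C dv/dt leads that voltage by exactly 90 degrees.
V0, w, C = 1.0, 2 * np.pi * 50.0, 1e-6   # arbitrary illustrative values
t = np.linspace(0.0, 0.1, 200_001)
v = V0 * np.cos(w * t)
i = C * np.gradient(v, t)                # numerical i = C dv/dt

# i should equal w*C*V0 * cos(wt + 90 deg) = -w*C*V0 * sin(wt):
i_expected = -w * C * V0 * np.sin(w * t)
assert np.allclose(i[1:-1], i_expected[1:-1], atol=1e-3 * w * C * V0)
```

Note this sketch assumes no series resistor; with a resistor in series the capacitor voltage is no longer the source voltage and the phase shift is between 0° and 90°.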
Difference between revisions of "Gay-Berne model"

:<math>\frac{\chi'}{\alpha'^{2}}=1- {\left(\frac{\epsilon_{ee}}{\epsilon_{ss}}\right)} ^{\frac{1}{\mu}}.</math>

Revision as of 12:42, 13 June 2012

A modification of the Gay-Berne potential has recently been proposed that is said to result in a 10-20% improvement in computational speed, as well as accuracy [2].

Phase diagram

Main article: Phase diagram of the Gay-Berne model

References

1. J. G. Gay and B. J. Berne "Modification of the overlap potential to mimic a linear site–site potential", Journal of Chemical Physics 74 pp. 3316-3319 (1981)
2. Rasmus A. X. Persson "Note: Modification of the Gay-Berne potential for improved accuracy and speed", Journal of Chemical Physics 136 226101 (2012)

Related reading

R. Berardi, C. Fava and C. Zannoni "A generalized Gay-Berne intermolecular potential for biaxial particles", Chemical Physics Letters 236 pp. 462-468 (1995)
Douglas J. Cleaver, Christopher M. Care, Michael P. Allen, and Maureen P. Neal "Extension and generalization of the Gay-Berne potential", Physical Review E 54 pp. 559-567 (1996)
Roberto Berardi, Carlo Fava, Claudio Zannoni "A Gay–Berne potential for dissimilar biaxial particles", Chemical Physics Letters 297 pp. 8-14 (1998)
[[File:Siril stacking screen.png]]
[[File:Siril stacking result.png|700px]]
[[File:Siril inal_result.png|700px]]
Latest revision as of 10:34, 13 September 2016 Siril processing tutorial Convert your images in the FITS format Siril uses (image import) Work on a sequence of converted images Pre-processing images Registration (Global star alignment) → Stacking Stacking The final step to do with Siril is to stack the images. Go to the "stacking" tab and indicate whether you want to stack all images, only the selected images, or the best images according to the previously computed FWHM values. Siril proposes several algorithms for the stacking computation. Sum Stacking This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images. Because of the lack of normalisation, this method should only be used for planetary processing. Average Stacking With Rejection Percentile Clipping: this is a one-step rejection algorithm, ideal for small sets of data (up to 6 images). Sigma Clipping: this is an iterative algorithm which rejects pixels whose distance from the median is farther than the two given values in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]). Median Sigma Clipping: this is the same algorithm, except that the rejected pixels are replaced by the median value of the stack. Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method, but it uses an algorithm based on Huber's work [1] [2]. Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ([math]y=ax+b[/math]) to the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations. These rejection algorithms are very efficient at removing satellite/plane tracks.
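For intuition, the iterative sigma-clipping idea can be sketched for a single pixel stack as follows (an illustration only, not Siril's actual implementation):

```python
import numpy as np

# Minimal sketch of iterative sigma clipping for one pixel stack
# (illustrative only; Siril's actual implementation differs).
def sigma_clip_mean(stack, s_low=4.0, s_high=3.0, max_iter=10):
    """Iteratively reject values farther than s_low/s_high sigmas from the median."""
    data = np.asarray(stack, dtype=float)
    for _ in range(max_iter):
        med, sig = np.median(data), data.std()
        keep = (data >= med - s_low * sig) & (data <= med + s_high * sig)
        if keep.all():
            break
        data = data[keep]
    return data.mean()

# Eleven consistent pixel values plus one satellite-track outlier:
pixels = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 100, 5000]
assert abs(sigma_clip_mean(pixels) - 100.0) < 2.0
```

The outlier is rejected on the first pass, after which the remaining values all fall within the sigma bounds and the clean average is returned.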
Median Stacking This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math]. Pixel Maximum Stacking This algorithm is mainly used to construct long-exposure star-trail images. Pixels of the image are replaced by pixels at the same coordinates if their intensity is greater. Pixel Minimum Stacking This algorithm is mainly used for cropping a sequence by removing black borders. Pixels of the image are replaced by pixels at the same coordinates if their intensity is lower. In the case of the NGC7635 sequence, we first used the "Winsorized Sigma Clipping" algorithm in the "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=3[/math]). The output console thus gives the following result:

14:33:06: Pixel rejection in channel #0: 0.181% - 1.184%
14:33:06: Pixel rejection in channel #1: 0.151% - 1.176%
14:33:06: Pixel rejection in channel #2: 0.111% - 1.118%
14:33:06: Integration of 12 images:
14:33:06: Pixel combination ......... average
14:33:06: Normalization ............. additive + scaling
14:33:06: Pixel rejection ........... Winsorized sigma clipping
14:33:06: Rejection parameters ...... low=4.000 high=3.000
14:33:07: Saving FITS: file NGC7635.fit, 3 layer(s), 4290x2856 pixels
14:33:07: Execution time: 9.98 s.
14:33:07: Background noise value (channel: #0): 9.538 (1.455e-04)
14:33:07: Background noise value (channel: #1): 5.839 (8.909e-05)
14:33:07: Background noise value (channel: #2): 5.552 (8.471e-05)

After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust the levels if you want to see it better, or use a different display mode.
In our example the file is the stacked result of all files, i.e., 12 files. The images above show the result in Siril using the Auto-Stretch rendering mode. Note the improvement of the signal-to-noise ratio compared with the result given for one frame in the previous step (take a look at the sigma value). The increase in SNR is [math]21/5.1 = 4.11 \approx \sqrt{12} = 3.46[/math], and you should try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math]. Now the processing of the image should start, with crop, background extraction (to remove the gradient), and some other processes to enhance your image. To see the processes available in Siril please visit this page. Here is an example of what you can get with Siril: References 1. Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley 2. Juan Conejero, ImageIntegration, PixInsight Tutorial
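The [math]\sqrt{N}[/math] scaling quoted above can be reproduced with a toy simulation (my own illustration, unrelated to Siril's code):

```python
import numpy as np

# Toy demonstration of the sqrt(N) SNR gain from averaging N frames.
rng = np.random.default_rng(0)
signal, sigma, N = 100.0, 10.0, 12
frames = signal + rng.normal(0.0, sigma, size=(N, 256, 256))  # N noisy "frames"

stacked = frames.mean(axis=0)
snr_single = signal / frames[0].std()
snr_stacked = signal / stacked.std()
ratio = snr_stacked / snr_single
# ratio should be close to sqrt(12), i.e. about 3.46
assert 0.9 * np.sqrt(N) < ratio < 1.1 * np.sqrt(N)
```

Real data fall short of this ideal (the measured 4.11 vs. 3.46 above also reflects rejection and normalisation), but the square-root trend is what stacking exploits.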