You know the geometric series $$\sum_{n=0}^{\infty}r^n=\frac{1}{1-r},\quad |r|<1\tag{1}$$ Since (1) is a power series converging for $|r|<1$, you can differentiate it term by term with respect to $r$ inside the region of convergence. Equating the derivatives of both sides of (1) gives $$\sum_{n=0}^{\infty}nr^{n-1}=\sum_{n=1}^{\infty}nr^{n-1}=\sum_{n=0}^{\infty}(n+1)r^n=\frac{1}{(1-r)^2},\quad |r|<1\tag{2}$$ With $r=az^{-1}$ you get the following $\mathcal{Z}$-transform relation: $$(n+1)a^nu[n]\Longleftrightarrow \frac{1}{(1-az^{-1})^2}=\frac{z^2}{(z-a)^2}\tag{3}$$ where $u[n]$ is the unit step sequence. With the time-shifting property of the $\mathcal{Z}$-transform you get the following pair: $$(n-1)a^{n-2}u[n-2]\Longleftrightarrow \frac{1}{(z-a)^2}\tag{4}$$ With (4) you immediately get $$\frac{5}{(z-2)^2}\Longleftrightarrow 5(n-1)2^{n-2}u[n-2]$$ Of course, all of this is only valid inside the region of convergence, i.e. for $|z|>a=2$.
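If you want a quick numerical sanity check of pair (4), the series can be summed directly at a test point inside the region of convergence. A minimal sketch (the test point, the truncation length, and the use of numpy are arbitrary choices of mine, not part of the derivation):

import numpy as np

# check 5/(z-2)^2  <-->  5(n-1)2^(n-2)u[n-2] at one point with |z| > 2
a = 2.0
z = 3.0 + 1.0j                     # |z| ~ 3.16 > 2, inside the ROC
n = np.arange(200)                 # geometric decay makes 200 terms plenty
x = 5.0 * (n - 1) * a**(n - 2) * (n >= 2)   # the time-domain sequence
series = np.sum(x * z**(-n))       # truncated Z-transform sum
closed_form = 5.0 / (z - a)**2
print(abs(series - closed_form))   # tiny, on the order of 1e-13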
Answer $$x=.13,2.37$$ Work Step by Step Using the quadratic formula with $a=16$, $b=-40$, and $c=5$, we obtain: $$x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}$$ $$x=\frac{40\pm \sqrt{(-40)^2-4(16)(5)}}{2(16)}$$ $$x=.13,2.37$$
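A quick numerical cross-check, assuming (as the substituted coefficients indicate) that the quadratic being solved is $16x^2-40x+5=0$:

import numpy as np
# roots of 16x^2 - 40x + 5 = 0, the equation inferred from the work above
print(np.roots([16, -40, 5]))   # approximately 2.368 and 0.132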
Background: I have seen lots of people asking whether multiplication and pseudo-random sequences can be approximated by a NN without specifying whether the inputs and outputs are bounded or not, and people have answered them (with lots of upvotes) based on conventional NN knowledge, without taking that fact into consideration. TL;DR: Can a Neural Network approximate an unbounded function, and how well, if it is trained on a subset of the number line and the test inputs lie significantly outside that subset? Can a Neural Network do regression for an unbounded function? To me it is impossible if the output function is sigmoid, since the Fourier series ($\star$), the best approximation (the basis of all signal decomposition and reconstruction schemes) of a function, demands that Dirichlet's conditions be satisfied, one of which more or less states that the function should be absolutely integrable ($\int_{-\infty}^{\infty}|f(x)|\,dx < \infty$). The sigmoid can be more or less thought of in terms of a sinusoidal function, as its value is bounded like a sinusoid. Now, if the output function used is ReLU, then the output is unbounded. But still it is just some linear combination of weights passed through some non-linear functions in the previous layers, which at best might be linearly unbounded if the previous layers are ReLU. So the question stands: even though a Neural Net can approximate an unbounded linear function, can it approximate an unbounded polynomial function or an exponential function? $\star$ Although the regression problem might seem more suited to a Fourier Transform analogy than to Fourier Series, I have used the FS analogy based on the fact that the FT output is a continuous function, as opposed to FS (in NN regression we are adding the outputs of several nodes, similar to what we do in FS, where $n_{\text{nodes}} \ll \infty$).
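To make the question concrete, here is the kind of experiment I have in mind, as a rough sketch only (the architecture, training range, and use of scikit-learn are arbitrary choices of mine):

import numpy as np
from sklearn.neural_network import MLPRegressor

# fit an MLP to x^2 on [-2, 2], then query far outside the training range
rng = np.random.default_rng(0)
x_train = rng.uniform(-2.0, 2.0, size=(2000, 1))
y_train = x_train.ravel() ** 2

net = MLPRegressor(hidden_layer_sizes=(64, 64), activation='relu',
                   max_iter=2000, random_state=0).fit(x_train, y_train)

for x in (1.0, 3.0, 10.0):          # only 1.0 lies inside the training range
    print(x, net.predict([[x]])[0], x ** 2)
# a ReLU net is piecewise linear, so far outside [-2, 2] it extrapolates
# linearly and the error against x^2 grows without bound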
I was waiting in line at my local bank recently and spotted an urn filled with coins on a table in the corner of the lobby. When I got to the head of the line, I inquired about it and the clerk told me the urn was part of a contest among some of the bank employees. The objective of the contest was to guess the average mint year among all the coins in the urn without going over the true average. I explained to the teller that I had an unhealthy interest in these sorts of contests and then asked for one additional clarification about the rules: could the contestants pick up a few of the coins to inspect them before submitting a guess? The clerk shrugged and said she didn't see any problem with that. The contest piqued my interest because it was somewhat unusual. Most guessing contests involve estimating the number of jellybeans in a jar or the number of snakes on a plane. What intrigued me about this contest was that it involved estimating the mean, with the added twist that any guess over the true average would be disqualified. Although I couldn't participate in the bank contest directly, I decided to recreate my own contest and describe how I arrived at a guess. This post is mainly an excuse for me to expound about a resampling technique called the bootstrap. The bootstrap is an integral component of my data analysis toolbox. It provides a broadly-applicable method to manage uncertainty in complex statistical models. I find the bootstrap easiest to grok through pictures supplemented with words and math. This post attempts to conjoin all 3 methods to provide an overview of the technique, using the coin contest I encountered as an example. In many scenarios, it's too expensive or in some way infeasible to draw many random samples from a population in order to find an underlying sampling distribution for a statistic. The bootstrap method belongs to a family of resampling techniques that attempt to address problems of inference with limited data. The overarching idea is to use a single sample as a proxy to approximate the sampling distribution of a statistic, under the assumption that the sample is a model of the population. Once the sampling distribution has been approximated, it can be used for statistical inference, such as in my coin contest. To recreate the contest, I gathered all the spare coins I could find, 3128 in total, then placed them into a makeshift urn. From the urn, I picked up a sample of 55 coins and recorded the mint year of each coin. Here's the resulting empirical CDF of the mint years for the 55 coin sample. Let $\theta$ denote a population parameter, the population mean in this example, which is the average mint year among all the coins in the urn. Let $\omega \in \Omega$ denote a single random draw from the probability space $\Omega$ of coins in the urn. Let $A = \{ \omega_1, \omega_2, \dots, \omega_k \} \subset \Omega $ denote an event, which represents the sample of coins of size $k$, where $\omega_i$ is an atomic event. The realization of $A$ in my coin contest is shown in the above CDF, where $k=55$. In the first phase of the bootstrap, $n$ bootstrap samples, $X_1^{*}, X_2^{*}, \dots, X_n^{*}$, each of size $k$, are generated by sampling with replacement from $A$. This process is depicted below, where the event $A$ is represented as a histogram and each arrow shows a random variate drawn from $A$ and placed into a given bootstrap sample. In the next phase, a bootstrap statistic, $\widehat{\theta}{}_i^{*}$, is calculated for the $i$th bootstrap sample.
Since $A$ is a SRS from the urn, the bootstrap distribution is an approximation of the sampling distribution, provided $A$ is a reasonable approximation of the population of coins in the urn. The expected value of the bootstrap distribution is the mean of the bootstrap statistics: $$ \mathbb{E}[\widehat{\theta}{}^{*}] = \frac{1}{n}\sum_{i=1}^{n}\widehat{\theta}{}_i^{*} $$ The bias of the original estimator is $\mathbb{E}[\widehat{\theta}\,] - \theta$. If $A$ is a reasonable model of the population, the expected value of the bootstrap distribution can be used to estimate the bias via substitution: $$ bias \, (\widehat{\theta}) = \mathbb{E}[\widehat{\theta}\,] - \theta \approx \mathbb{E}[\widehat{\theta}{}^{*}] - \widehat{\theta} $$ Since the sample of coins I picked from the urn is small, I decided to bias-correct the estimator by subtracting the estimated bias from $\widehat{\theta}$. This choice can improve accuracy, but often at the expense of increased variance: the bias-variance tradeoff. Submitting the bias-corrected estimate itself as a guess is unwise, though. If the estimate turned out to be greater than $\theta$, I would be disqualified. To help safeguard against an over-estimate, the bootstrap can be used to construct confidence intervals (CIs) around the estimator. There are many different methods for generating CIs for bootstrap estimations. For this problem, I chose the bootstrap-t method because it's applicable to a wide range of problems and also straightforward to calculate, albeit computationally intensive. $$ t_i^{*} = \frac{\widehat{\theta}{}_i^{*} - \widehat{\theta}}{se(\widehat{\theta}{}_i^{*})} $$ Since the statistic here is the average mint year of the $i$th bootstrap sample, the standard error (se) is $s_i^{*}/\sqrt{k}$, where $s_i^{*}$ is the standard deviation of the $i$th bootstrap sample. The mean is a special case where the form of the standard error is known. For many statistics, however, the standard error must be approximated empirically using an additional round of bootstrapping. In these cases, the process begins anew by generating sub-bootstrap samples for each original bootstrap sample. For the $i$th bootstrap sample, sub-bootstrap samples, $X_{i,\,1}, X_{i,\,2}, \dots, X_{i,\,n}$, are generated, where $j$ in $X_{i,\,j}$ indexes a given sub-sample. In the illustration below, the gray sub-figure represents the first round of bootstrapping. For each bootstrap sample, an additional round of bootstrapping is performed, drawing from one of the original bootstrap samples, shown as a black box in the sub-figure. These sub-bootstraps provide a method to approximate the standard error of the original bootstrap distribution: the empirical standard deviation of the sub-bootstrap statistics approximates the standard error of the estimator $\widehat{\theta}{}_i^{*}$. Hence, the bootstrap-t statistic can be calculated for each bootstrap sample to generate the bootstrap distribution of $t^{*}$. From this distribution, confidence intervals for $\theta$ can then be constructed, where $t^{*}_{\gamma}$ denotes the $\gamma$ quantile of the $t^{*}$ bootstrap distribution: $$ [\widehat{\theta} - se(\widehat{\theta}{}^{*})\, t^{*}_{1 - \alpha \,/\, 2},\; \widehat{\theta} - se(\widehat{\theta}{}^{*})\, t^{*}_{\alpha\, /\, 2} ]$$ In my coin contest, I recorded the mint dates of the coins in my sample as well as the mint dates of all the coins in the urn. In a real scenario, this parameter, the average mint year in the urn, would be unknown. However, calculating the parameter retrospectively reveals insight into the performance of the technique.
Here's a plot of the performance of many bootstrap replicates as a function of sample size: The x-axis shows the sample size used in different bootstrap experiments and the y-axis shows mint years. The black horizontal line is the true average mint year of all coins in the urn. The slope of this line is zero because the true mint average in the urn is fixed, irrespective of sample size. The blue and red splines show the bias-corrected estimate and one-sided confidence band obtained by bootstrapping different sample sizes. The results of bootstrapping my original sample of 55 coins are shown in the plot as two points connected by a dotted line. The blue point represents my bias-corrected best estimate for $\theta$, while the red point is my submission guess for the contest. The true average mint year of coins in the urn was 1995.9; my corrected estimate was 1996.2; and my submission was 1997.8. My submission was off by 1.9 years and my estimate by 0.3 years. Given that only a single sample of small size was used for inference, the bootstrap produces fairly accurate results. The bootstrap is a powerful tool, but it shouldn't be applied indiscriminately. The technique can perform poorly when approximating the sampling distribution of a parameter on the boundary of the parameter space, for example, an extreme order statistic. In other cases, when assumptions of smoothness or bounded variance do not hold, the bootstrap can also exhibit poor performance. Two alternatives to the bootstrap are the m-out-of-n bootstrap and subsampling, both of which can sometimes exhibit better performance in situations where the naive bootstrap fails. Another problem arises not from the bootstrap directly, but from bootstrapping a sample that is a poor model of the population. This problem is apparent in the above plot of my coin estimation. When the sample size becomes too small, the estimate and confidence band can deviate wildly. An extreme case of this problem occurs when the sample size is a single value. Despite these issues, the bootstrap is a staple technique in my data analysis toolbox. I used a one-sided bootstrap-t function to obtain the results shown in the plots of this post.
Here's my function called boot_upper():

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
"""
Author: Seth Brown
Description: Bootstrap-t function
Date: 2013-08-17
Contact: www.drbunsen.org
Python 2.7.5
"""
from collections import namedtuple

import scipy as sp
from scipy.stats.mstats import mquantiles


def boot_upper(vec, fn, n_boots=2000, n_inboots=200, alpha=0.05,
               bias_corr=False):
    """ Non-parametric bootstrap-t with one-sided confidence interval

    vec: ndarray, original sample
    fn: function, statistic to bootstrap
    n_boots: int, number of outer bootstrap iterations
    n_inboots: int, number of inner bootstrap iterations
    alpha: float, alpha level
    bias_corr: bool, bias correction - defaults to False
    """
    sample_size = len(vec)
    boot_stats = sp.zeros(n_boots)
    emp_stds = boot_stats.copy()
    for i in xrange(n_boots):
        # outer bootstrap: resample the original sample with replacement
        sample_idxs = sp.random.randint(0, sample_size, size=sample_size)
        boot_sample = vec[sample_idxs]
        boot_stats[i] = fn(boot_sample)
        # inner bootstrap: estimate the standard error of the statistic
        # on this bootstrap sample by resampling it and recomputing fn
        inboot_stats = sp.zeros(n_inboots)
        for j in xrange(n_inboots):
            insample_idxs = sp.random.randint(0, sample_size,
                                              size=sample_size)
            inboot_sample = boot_sample[insample_idxs]
            inboot_stats[j] = fn(inboot_sample)
        emp_stds[i] = sp.std(inboot_stats, ddof=1)
    stat = fn(vec)
    t_stats = sp.true_divide((boot_stats - stat), emp_stds)
    t_l = mquantiles(t_stats, [alpha])[0]
    std_err = sp.std(boot_stats)
    exp_stat = sp.mean(boot_stats)    # E[theta*], the bootstrap expectation
    bias = exp_stat - stat
    ci_upper = stat - t_l * std_err
    if bias_corr is True:
        # bias-corrected estimate: subtract the estimated bias
        stat = stat - bias
    boot_temp = namedtuple('boot', 'stat, upper_ci, bias, std_err')
    return boot_temp(stat, ci_upper, bias, std_err)
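For completeness, a hypothetical call of the function (the mint years below are made up for illustration, not my actual sample; Python 2, matching the header above):

import scipy as sp

years = sp.array([1995, 1987, 2001, 1999, 1993, 2005, 1989, 1996])
result = boot_upper(years, sp.mean, n_boots=2000, n_inboots=200,
                    alpha=0.05, bias_corr=True)
print result.stat, result.upper_ci, result.bias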
40th SSC CGL level Question Set, topic Trigonometry 4

This is the 40th question set of 10 practice problems for the SSC CGL exam, and the 4th on the topic of Trigonometry. We repeat the method of taking the test. It is important to follow result-bearing methods even in a practice test environment.

Method of taking the test for getting the best results from the test:

Before starting, go through the Tutorial on Basic and rich concepts in Trigonometry and its applications, or any short but good material, to refresh your concepts if you so require.
Answer the questions in an undisturbed environment with no interruption, full concentration and an alarm set at 12 minutes.
When the time limit of 12 minutes is over, mark up to which question you have answered, but go on to complete the set.
At the end, refer to the answers given at the end to mark your score at 12 minutes. For every correct answer add 1 and for every incorrect answer deduct 0.25 (or whatever is the scoring pattern in the coming test). Write your score on top of the answer sheet with date and time.
Identify and analyze the problems that you couldn't do, to learn how to solve those problems.
Identify and analyze the problems that you solved incorrectly. Identify the reasons behind the errors. If it is because of your shortcoming in topic knowledge, improve it by referring to only that part of the concept from the best source you can get hold of. You might google it. If it is because of your method of answering, analyze and improve those aspects specifically.
Identify and analyze the problems that posed difficulties for you and delayed you. Analyze and learn how to solve the problems using basic concepts and relevant problem solving strategies and techniques.
Give a gap before you take a 10 problem practice test again.

Important: both mock tests and practice tests must be timed, analyzed, improving actions taken, and then repeated. With intelligent method, it is possible to reach the highest excellence level in performance.

Resources that should be useful for you

Before taking the test it is recommended that you refer to the tutorial above. You may also refer to the related resource 7 steps for sure success in SSC CGL tier 1 and tier 2 competitive tests, or to the section on SSC CGL, to access all the valuable student resources that we have created specifically for SSC CGL, but useful generally for any hard MCQ test. If you like, you may subscribe to get the latest content on competitive exams published to your mail as soon as we publish it.

Now set the stopwatch alarm and start taking this test. It is not difficult.

40th question set - 10 problems for SSC CGL exam: 4th on Trigonometry - test time 12 mins

Problem 1. If $cosec 39^\circ=p$, the value of $\displaystyle\frac{1}{cosec^2 51^\circ} + sin^2 39^\circ + tan^2 51^\circ - \displaystyle\frac{1}{sin^2 51^\circ sec^2 39^\circ}$ is,
$p^2 - 1$
$\sqrt{p^2 - 1}$
$1-p^2$
$\sqrt{1-p^2}$

Problem 2. If $sec \theta=x + \displaystyle\frac{1}{4x}$, where $(0^\circ \lt \theta \lt 90^\circ)$, then $sec \theta + tan \theta$ is,
$\displaystyle\frac{x}{2}$
$2x$
$\displaystyle\frac{2}{x}$
$x$

Problem 3. If $tan \theta=1$, then the value of $\displaystyle\frac{8sin \theta + 5cos \theta}{sin^3 \theta -2cos^3 \theta + 7cos \theta}$ is,
$2\displaystyle\frac{1}{2}$
$2$
$3$
$\displaystyle\frac{4}{5}$

Problem 4. If $7sin \theta = 24cos \theta$, where $0 \lt \theta \lt \displaystyle\frac{\pi}{2}$, then the value of $14tan \theta - 75cos \theta - 7sec \theta$ is,
1
3
2
4

Problem 5. The minimum value of $sin^2 \theta + cos^2 \theta + sec^2 \theta + cosec^2 \theta + tan^2 \theta + cot^2 \theta$ is equal to,
1
7
3
5

Problem 6.
In a right $\triangle ABC$ with the right angle at $\angle ABC$, if $AB=2\sqrt{6}$ and $AC - BC = 2$, then $sec A + tan A$ is,
$\displaystyle\frac{\sqrt{6}}{2}$
$2\sqrt{6}$
$\sqrt{6}$
$\displaystyle\frac{1}{\sqrt{6}}$

Problem 7. If $tan 2\theta . tan 4\theta = 1$, then the value of $tan 3\theta$ is,
$\sqrt {3}$
$0$
$\displaystyle\frac{1}{\sqrt{3}}$
$1$

Problem 8. If $sin \displaystyle\frac{\pi x}{2}=x^2 -2x +2$, then the value of $x$ is,
$0$
$-1$
$1$
none of these

Problem 9. If $2sin \theta + cos \theta = \displaystyle\frac{7}{3}$, then the value of $(tan^2 \theta - sec^2 \theta)$ is,
$0$
$\displaystyle\frac{7}{3}$
$\displaystyle\frac{3}{7}$
$-1$

Problem 10. If $(rcos \theta - \sqrt{3})^2 + (rsin \theta - 1)^2 = 0$, then the value of $\displaystyle\frac{rtan \theta + sec \theta}{rsec \theta + tan \theta}$ is,
$\displaystyle\frac{4}{5}$
$\displaystyle\frac{\sqrt{3}}{4}$
$\displaystyle\frac{\sqrt{5}}{4}$
$\displaystyle\frac{5}{4}$

The answers are given below, but you will find the detailed conceptual solutions to these questions in SSC CGL level Solution Set 40 on Trigonometry 4. You may also watch the video solutions in the two-part video. Part 1: Q1 to Q5. Part 2: Q6 to Q10.

Answers to the questions

Problem 1. Ans: Option a: $p^2 - 1$.
Problem 2. Ans: Option b: $2x$.
Problem 3. Ans: Option b: 2.
Problem 4. Ans: Option c: 2.
Problem 5. Ans: Option b: 7.
Problem 6. Ans: Option c: $\sqrt{6}$.
Problem 7. Ans: Option d: 1.
Problem 8. Ans: Option c: 1.
Problem 9. Ans: Option d: $-1$.
Problem 10. Ans: Option a: $\displaystyle\frac{4}{5}$.

Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra.

Tutorials on Trigonometry
General guidelines for success in SSC CGL
Efficient problem solving in Trigonometry

A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to MCQ type questions, and secondly, the same set of problem solving reasoning and techniques has been used for all efficient Trigonometry problem solving.

SSC CGL Tier II level question and solution sets on Trigonometry
SSC CGL level question and solution sets in Trigonometry
SSC CGL level Question Set 40 on Trigonometry 4
To cap off this post with a mathematical gemstone, below is the full computation of the automorphism group of the graded noncommutative ring mentioned on the previous page. Any automorphism of the noncommutative $\mathbb{C}$-algebra $$R_q:=\mathbb{C}\langle x,y\rangle/(xy-qyx)$$ that preserves the grading by degree can be thought of as an element of $\mathrm{GL}_2(\mathbb{C})$, since the degree-one generators $x$ and $y$ must map to linear combinations of $x$ and $y$, and these images determine the map. In other words, we can represent the automorphism that sends $x$ to $ax+by$ and $y$ to $cx+dy$ by the $2\times 2$ matrix $$\left(\begin{array}{cc} a & b \\ c & d\end{array}\right).$$ For a map of this form to extend to an automorphism, it is necessary and sufficient that the ideal generator $xy-qyx$ maps to a scalar multiple of itself. In this case we have $$\begin{align*} xy-qyx &\mapsto (ax+by)(cx+dy)-q(cx+dy)(ax+by) \\ &= (1-q)acx^2+(ad-qbc)xy+(bc-qad)yx+(1-q)bdy^2 \end{align*}$$ If $q=1$, this simplifies to $(ad-bc)(xy-yx)$, which lies in the desired ideal. Thus the group of degree-preserving automorphisms is all of $\mathrm{GL}_2(\mathbb{C})$ in this case. If $q\neq 1$, since the $x^2$ and $y^2$ terms must vanish, we have either $a=0$ or $c=0$, and either $b=0$ or $d=0$. If $a=0$ and $b=0$ simultaneously then the $xy$ coefficient would be $0$, and similarly for $c$ and $d$. Thus we must have either $a=d=0$ or $b=c=0$. In the case that $a=d=0$, the above simplifies to $bc(-qxy+yx)$, and in the case that $b=c=0$, it simplifies to $ad(xy-qyx)$. The latter is clearly a scalar multiple of $xy-qyx$, but the former is so only if $q=-1$. Hence if $q\neq \pm 1$, the automorphism group is isomorphic to $\mathbb{C}^\times \times \mathbb{C}^\times$, consisting of the matrices of the form $$\left(\begin{array}{cc} a & 0 \\ 0 & d\end{array}\right)$$ with $a,d\neq 0$. And if $q=-1$, we have a second copy of $\mathbb{C}^\times \times \mathbb{C}^\times$, from the matrices of the form $$\left(\begin{array}{cc} 0 & b \\ c & 0\end{array}\right)$$ with $b,c\neq 0$. The upshot is that because this ring is generated in degree $1$ with a single quadratic homogeneous relation, the higher degree $q$-numbers do not appear in the computation, and only $q=\pm 1$ are special values. A nice little gemstone!
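For those who like to double-check by machine, the displayed expansion can be reproduced with sympy's noncommutative symbols. A small sketch (sympy is of course an aside, not part of the argument):

from sympy import symbols, expand

a, b, c, d, q = symbols('a b c d q')            # commuting scalars
x, y = symbols('x y', commutative=False)        # noncommuting generators
image = expand((a*x + b*y)*(c*x + d*y) - q*(c*x + d*y)*(a*x + b*y))
print(image)
# collecting terms reproduces
# (1-q)ac x^2 + (ad-qbc) xy + (bc-qad) yx + (1-q)bd y^2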
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself!

Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't.

I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way! The most fundamental is this:

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$

The inverse image is a subset of \( X \). It is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches. The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\). Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer, because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\).
Here it is:

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$

The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek". The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part:

Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \).

Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\)

This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful.

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$

This is a subset of \(Y \).

Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \).

What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$ This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in a revolutionary paper. By now this observation is part of a big story that "explains" logic using category theory.

Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading.

Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"?

Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
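If you want to experiment, all three maps are easy to play with on finite sets. A little Python sketch (the function names are my own, chosen to echo the notation above):

def f_star(f, S):
    """Inverse image f*(S) = {x : f(x) in S}."""
    return {x for x in f if f[x] in S}

def f_shriek(f, S):
    """Image f_!(S): y such that y = f(x) for SOME x in S."""
    return {f[x] for x in S}

def f_lower_star(f, S):
    """f_*(S): y such that x in S for ALL x with f(x) = y."""
    return {y for y in f.values()
            if all(x in S for x in f if f[x] == y)}

f = {1: 'a', 2: 'a', 3: 'b'}        # a function {1,2,3} -> {'a','b'}
S = {1}
print(f_shriek(f, S))               # {'a'}: some preimage of 'a' is in S
print(f_lower_star(f, S))           # set(): 2 also maps to 'a', but 2 not in S
print(f_star(f, {'b'}))             # {3}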
Comments to the question contain several contributions to alternative approaches by Uri Bader. Since the latter seems to be reluctant to collect them into an answer, I decided to do the part I understand.

Approach 1. Suppose given $P$ with top and bottom; then $P\times\{1,...,n\}$ with top and bottom removed is the union of $B:=(P\setminus\{\text{bottom}\})\times\{1\}$, $M:=P\times\{2,...,n-1\}$ and $T:=(P\setminus\{\text{top}\})\times\{n\}$. The hypothesis is that $\{2,...,n-1\}$ is not empty, so all three are obviously contractible, with $B$ attached to $M$ along $(P\setminus\{\text{bottom}\})\times\{1,2\}$ and $M$ to $T$ along $(P\setminus\{\text{top}\})\times\{n-1,n\}$. If I understand Uri Bader's comment correctly, one just contracts $B$ and $T$ to $M$ separately. Sort of an illustration, with $P=\{\text{top},\text{bottom}\}$ and $n=4$: Another one, with $P=2\times2\times2$ and $n=3$:

Approach 2. This I turn upside down, since I am more used to it. For any element $a$ of a (bounded (finite)) lattice $L$ the embedding $[\text{bottom},a]\hookrightarrow L$ has a right adjoint $a\land\_$. Call an element $d$ of a lattice $L$ $\textit{dense}$ if $\forall x\in L\ (d\land x=\text{bottom})\Rightarrow(x=\text{bottom})$ holds. For any nontrivial ($\ne\text{top}$) such element the restriction of the adjoint $d\land\_$ to $L\setminus\{\text{top},\text{bottom}\}$ lands in $(\text{bottom},d]$, so the latter (which has top $d$, hence is contractible) becomes a deformation retract of $L\setminus\{\text{top},\text{bottom}\}$. Now invoking the comment by Richard Stanley - in case $L$ is distributive, it has no nontrivial dense elements if and only if it is a Boolean algebra. A natural question here is whether a characterization is known of those general (non-distributive) lattices which possess no nontrivial dense (nor codense) elements. Note that for non-distributive lattices the generality of this approach is somehow orthogonal to that in Tom Goodwillie's answer: the latter works for products of posets (not necessarily lattices) having tops and bottoms, while here one approaches lattices which might not decompose into products.

In this connection there was a very interesting comment by Dan Petersen which has been elucidated by Uri Bader, but unfortunately I do not know this area well enough to say anything definite. The way I understand the idea is to consider homotopy types arising from lattices constructed in the same way in different characteristics as sort of "$q$-analogues", that is, families of homotopy types depending on a "modular" parameter encoded in $q$. I don't really know what that means that I wrote.

Finally, it might be true that face lattices of polytopes with top and bottom removed actually have the same homotopy types as the polytopes themselves. Does anybody know anything about it? Just a couple of considerations: if this is the case then obviously the answer to my second question is that any homotopy type of a finite CW-complex may occur. Indeed, if I am not mistaken, the simplices of the second barycentric subdivision of any CW-complex already form a (topless) lattice. Note also that for a poset one possible version of the barycentric subdivision is formed by linearly ordered subposets, and maximal such are not altered if one removes top and bottom...
Let $R$ be a ring with identity (not necessarily commutative) and $R[x]$ be the ring of polynomials over $R$. We say that a ring $S$ is an extension of $R$ if there is a subring $\tilde{R}$ of $S$ isomorphic to $R$. Let $S$ be an extension of $R$, and $$\phi: R\to \tilde{R}\subset S$$ be a ring isomorphism. We say that a polynomial $f(x) = \sum\limits_{j\geq 0}f_jx^j\in R[x]$ has a root $\alpha\in S$ if $$\sum\limits_{j\geq 0}\phi(f_j)\alpha^j = 0.$$ In the case where $R$ is a commutative ring, every monic polynomial $f(x)\in R[x]$ has a root $[x]_f$ in the extension $S = R[x]/R[x]f(x)$ of $R$. In the case where $R$ is not commutative, the set $R[x]/R[x]f(x)$ is a left $R[x]$-module but not a ring, because the ideal $R[x]f(x)$ is not a two-sided ideal, but only a one-sided one. Also, in the non-commutative case there are examples where the two-sided ideal containing $f(x)$, that is the ideal $R[x]f(x)R[x]$, is equal to $R[x]$, and in this case $R[x]/R[x]f(x)R[x]$ is isomorphic to the zero ring. I want to prove that for every ring with identity $R$ and every monic polynomial $f(x)$ over $R$ there exists an extension $S$ of $R$ such that $f(x)$ has a root in $S$.
Hi, I'm Maria and I'm a $q$-analog addict. The theory of $q$-analogs is a little-known gem, and in this series of posts I'll explain why they're so awesome and addictive! So what is a $q$-analog? It is one of those rare mathematical terms whose definition doesn't really capture what it is about, but let's start with the definition anyway:

Definition: A $q$-analog of a statement or expression $P$ is a statement or expression $P_q$, depending on $q$, such that setting $q=1$ in $P_q$ results in $P$.

So, for instance, $2q+3q^2$ is a $q$-analog of $5$, because if we plug in $q=1$ we get $5$. Sometimes, if $P_q$ is not defined at $q=1$, we also say it's a $q$-analog if $P$ can be recovered by taking the limit as $q$ approaches $1$. For instance, the expression $$\frac{q^5-1}{q-1}$$ is another $q$-analog of $5$ - even though we get division by zero if we plug in $q=1$, we do have a well defined limit that we can calculate, for instance using L'Hospital's Rule: $$\lim_{q\to 1} \frac{q^5-1}{q-1}=\lim_{q\to 1} \frac{5q^4}{1} = 5.$$ Now of course, there is an unlimited supply of $q$-analogs of the number $5$, but certain $q$-analogs are more important than others. When mathematicians talk about $q$-analogs, they are usually referring to "good" or "useful" $q$-analogs, which doesn't have a widely accepted standard definition, but which I'll attempt to define here:

More Accurate Definition: An interesting $q$-analog of a statement or expression $P$ is a statement or expression $P_q$ depending on $q$ such that:
1. Setting $q=1$ or taking the limit as $q\to 1$ results in $P$,
2. $P_q$ can be expressed in terms of (possibly infinite) sums or products of rational functions of $q$ over some field,
3. $P_q$ gives us more refined information about something that $P$ describes, and
4. $P_q$ has $P$-like properties.

Because of Property 2, most people would agree that $5^q$ is not an interesting $q$-analog of $5$, because usually we're looking for polynomial-like things in $q$. On the other hand, $\frac{q^5-1}{q-1}$ is an excellent $q$-analog of $5$ for a number of reasons. It certainly satisfies Property 2. It can also be easily generalized to give a $q$-analog of any real number: we can define $$(a)_q=\frac{q^a-1}{q-1},$$ a $q$-analog of the number $a$. In addition, for positive integers $n$, the expression simplifies: $$(n)_q=\frac{q^n-1}{q-1}=1+q+q^2+\cdots+q^{n-1}.$$ So for instance, $(5)_q=1+q+q^2+q^3+q^4$, which is a natural $q$-analog of the basic fact that $5=1+1+1+1+1$. The powers of $q$ are just distinguishing each of our "counts" as we count to $5$. This polynomial also captures the fact that $5$ is prime, in a $q$-analog-y way: the polynomial $1+q+q^2+q^3+q^4$ cannot be factored into two smaller-degree polynomials with integer coefficients. So the $q$-number $(5)_q$ also satisfies Properties 3 and 4 above: it gives us more refined information about $5$-ness, by keeping track of the way we count to $5$, and behaves like $5$ in the sense that it can't be factored into smaller $q$-analogs of integers. But it doesn't stop there. Properties 3 and 4 can be satisfied in all sorts of ways, and this $q$-number is even more interesting than we might expect. It comes up in finite geometry, analytic number theory, representation theory, and combinatorics. So much awesome mathematics is involved in the study of $q$-analogs that I'll only cover one aspect of it today: $q$-analogs that appear in geometry over a finite field $\mathbb{F}_q$. Turn to the next page to see them!
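If you like to compute along, here is a tiny sketch of $(n)_q$ checking the two facts above (sympy is just my choice of tool, nothing more):

from sympy import symbols, factor

q = symbols('q')

def q_int(n):
    """(n)_q = 1 + q + ... + q^(n-1)."""
    return sum(q**k for k in range(n))

print(q_int(5))             # q**4 + q**3 + q**2 + q + 1
print(q_int(5).subs(q, 1))  # 5: setting q = 1 recovers the plain integer
print(factor(q_int(5)))     # unchanged: irreducible over the integers, like 5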
I need some help with understanding a part of this proof and also with writing it up correctly. Given $a_n\geq a_{n+1}\geq b_{n+1} \geq b_n$ with $a_1=a$ and $b_1=b$. I am also given that $$a_{n+1}=\frac{a_n+b_n}{2}$$ and $$b_{n+1}=\sqrt{a_nb_n}$$ I need to show that the sequences $\{a_n\}$ and $\{b_n\}$ converge and that they have the same limit. I am told to use the monotone convergence theorem to prove that both sequences converge, and I have the following proof: Notice that $\{a_n\}$ is monotonically decreasing while $\{b_n\}$ is monotonically increasing. Since $\{a_n\}$ is bounded above by $a_1$ and below by $b_1$, by the monotone convergence theorem $\{a_n\}$ has to converge. Similarly, notice that $\{b_n\}$ is bounded below by $b$ and above by $a$. By the monotone convergence theorem $\{b_n\}$ must converge as well. Next, I am told to show that $\{a_n\}$ and $\{b_n\}$ have the same limit. In other words, I must show that $a_n-b_n \to 0$ as $n$ tends to infinity. For this part, it seems to be the case that one can prove it by just showing that $a_{n+1} - b_{n+1} \leq \frac{1}{2} (a_n - b_n)$. And I know you can show this by using the definition of the arithmetic mean: $a_{n+1} - b_{n+1} \leq a_{n+1} - b_n = \frac{1}{2} (a_n - b_n)$. Why is that? It seems incomplete and not so obvious to me. An explanation here would help. Please help me edit my proof (what I have already) and clarify my understanding.
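A quick numeric sanity check of the setup, with arbitrary starting values, which also shows the gap at least halving at each step:

a, b = 4.0, 1.0                    # a_1 = a, b_1 = b, chosen arbitrarily
for n in range(1, 7):
    a, b = (a + b) / 2, (a * b) ** 0.5
    print(n + 1, a, b, a - b)      # a_n - b_n shrinks by at least half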
$\frac{1-(-$\frac{1}{2}^t$) }{$\frac{3}{2}$}$

For some reason it gives me an error, something about a Missing \endgroup. Be warned, I'm new to LaTeX.

You are using too many $ and hence forcing TeX to enter math mode when it is already in math mode. Hence remove all internal $. Also the superscript t should go outside the parentheses. Assuming that you are in display mode, I have added \displaystyle. If not, remove it and also \big.

\documentclass{article}
\usepackage{mathtools}
\begin{document}
$\displaystyle \frac{1-\big(-\frac{1}{2}\big)^t}{\frac{3}{2}}$
\end{document}

Remove the inner $. $\frac{1-(-\frac{1}{2}^t)}{\frac{3}{2}}$ should do the trick. The following is just a suggestion:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
$\dfrac{1-\left(-\frac{1}{2}^t\right)}{\frac{3}{2}}$
\end{document}
Let $X_1,\dots,X_n$ be complete vector fields on $\mathbb R^n$ and suppose that $(X_1(p),\dots,X_n(p))$ is a basis for all $p \in \mathbb R^n$. Question: Is it possible to choose a cube $C$ around the origin of $\mathbb R^n$ such that for every $p \in C$ there is a piecewise smooth curve $\alpha \subset C$ which connects $p$ with $0$, where the smooth parts of the curve are given by the flows of the vector fields $\pm X_1,\dots,\pm X_n$? (In other words: is it possible to travel from $0$ to $p$ following only the integral curves of the given vector fields within a bounded domain?) For $n=2$ this is pretty clear; w.l.o.g. $X_1=\partial/\partial x_1$, and the image of the line $x_1=0$ under the flow of $X_2$ fills all of $\mathbb R^2$, since the flow lines of $X_2$ intersect $x_1=0$ transversally. Now it is easy to find a $C$ and an $\alpha$ for any $p \in C$. But I cannot generalize this to arbitrary $n$. Edit: Instead of demanding that $C$ be a cube, one could also ask whether there is an open neighbourhood of the origin with the desired properties.
Defining parameters

Level: \( N \) = \( 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 3600.fq (of order \(60\) and degree \(16\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 3600 \)
Character field: \(\Q(\zeta_{60})\)
Newforms: \( 0 \)
Sturm bound: \(720\)
Trace bound: \(0\)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{1}(3600, [\chi])\).

                    Total   New   Old
Modular forms       64      64    0
Cusp forms          0       0     0
Eisenstein series   64      64    0

The following table gives the dimensions of subspaces with specified projective image type.

            \(D_n\)   \(A_4\)   \(S_4\)   \(A_5\)
Dimension   0         0         0         0
Reliance on mechanical procedures in solving problems

At the high school level we often find math problems solved in a long series of steps. We find this especially in problems of the type: prove that "some expression" = "some other expression". In school math terminology, the "some expression" is called the LHS (short form of Left Hand Side) and the "some other expression" the RHS (short form of Right Hand Side). Problems of this type occur abundantly in elementary Trigonometry, in proving identities. The usual long solutions follow a conventional approach of moving towards the solution from LHS to RHS (or initial state to goal state) through many steps, first expanding the LHS expression and then simplifying or consolidating the numerous expanded terms towards the form of the expression on the right hand side, that is, the RHS. This approach has two important disadvantages. First, not only does it take a considerable amount of time and effort, but because of the large number of steps, the chances of error are much higher. Second, this mechanical approach relies heavily on manipulation of terms. In fact, if students follow only this approach to solving problems, they may become used to mechanical, procedural thinking with low level mathematical constructs, suppressing their inherent creative and innovative out-of-the-box thinking abilities. Let us go through the process of solving a Trigonometric identity problem to appreciate the difference between the conventional approach and the problem solver's intelligent approach. The difference is always significant and measurable.

Problem example

Prove the identity: \begin{align} (cos\theta & – cosec\theta)^2 + (sin\theta – sec\theta)^2 \\ & = (1 – sec\theta{cosec\theta})^2 \end{align} First try to solve this problem yourself, and only then go ahead. It may very well be possible that you can solve it quickly in a few steps.

Conventional solution

Usually a conventional approach in this case will involve a long deduction process, as all the terms are squares of expressions. The square expressions are expanded, then the terms are collected together suitably so that further simplification is possible. After collecting the friendly terms together, the expressions are simplified and then transformed back towards the desired expression of the RHS. This is an abstract description of the generally followed conventional problem solving. Let us see how it is actually done in our problem example.
Taking the left hand side expression $(cos\theta – cosec\theta)^2 + (sin\theta – sec\theta)^2$ and expanding, we get the LHS as, \begin{align} & (cos\theta – cosec\theta)^2 + (sin\theta – sec\theta)^2 \\ & = (cos^2\theta - 2cos\theta{cosec\theta} + cosec^2\theta) \\ &\hspace{10mm} + (sin^2\theta - 2sin\theta{sec\theta} + sec^2\theta) \\ & = (sin^2\theta + cos^2\theta) \\ &\hspace{10mm} - 2\left(cos\theta{cosec\theta} + sin\theta{sec\theta}\right) \\ & \hspace{10mm} + (cosec^2\theta + sec^2\theta) \\ & = 1 - 2\left(\displaystyle\frac{cos\theta}{sin\theta} + \displaystyle\frac{sin\theta}{cos\theta}\right) \\ & \hspace{10mm} + \left(\displaystyle\frac{1}{sin^2\theta} + \displaystyle\frac{1}{cos^2\theta}\right) \\ & = 1 - 2\left(\displaystyle\frac{cos^2\theta + sin^2\theta}{sin\theta{cos\theta}}\right) \\ & \hspace{10mm} + \left(\displaystyle\frac{cos^2\theta + sin^2\theta}{sin^2\theta{cos^2\theta}}\right) \\ & = 1 - 2\left(\displaystyle\frac{1}{sin\theta{cos\theta}}\right) \\ & \hspace{10mm} + \left(\displaystyle\frac{1}{sin^2\theta{cos^2\theta}}\right) \\ & = 1 - 2sec\theta{cosec\theta} + (sec\theta{cosec\theta})^2 \\ & = (1 - sec\theta{cosec\theta})^2 \\ & = RHS \end{align}

Efficient solution in a few steps

Instead of following this conventional approach, the very first step you must take is to analyze the problem. In any problem solving, math or otherwise, this must be the first step. You must start by analyzing the problem statement. Without the first step of problem analysis, no efficient problem solving is possible. A corollary: in competitive exams, and also in a competitive work environment, the first step of problem analysis is crucial for success. The better and quicker you analyze a problem, the faster you will reach the desired solution.

Problem analysis

As you already have the goal state in the form of the expression on the RHS of the $"="$ symbol, your immediate reaction would be to examine how much similarity the RHS expression has with the LHS expression you start from.

Aside: Psychology and process of problem solving by End State Analysis: The desired goal to reach undoubtedly ranks highest in importance in your mind among all other information about the problem, as your natural tendency is to reach the goal state in the quickest possible time. This pre-eminence of the desired end state or goal state focuses your attention naturally on this end state when you know it. This is the case in proving identities. What would you look for in the end state? If it is a journey from one city to another, you study the distance to the destination from your starting point. You try to judge what kind of transportation along which path would take you to the destination in the shortest possible time, isn't it? We assume here the importance of an optimal journey, which is the case in any important problem solving. The same happens in this case. You judge the end state (or RHS expression) with respect to the initial given state (or LHS expression). If somehow you find significant similarities between the two, it will be easy for you to span the gap between the two states quickly. In all cases though there will be significant dissimilarity between the initial starting point and the desired end point. The similarity would invariably be there, but it would be hidden from casual inspection. This is where the ability of key information discovery plays its prime role in solving the problem.
More often than not, the ability to recognize a useful common pattern, even if hidden, results in key information discovery. If you don't know the desired goal state, you have to form possible goal states from the initial problem analysis. In most cases this similarity would certainly be there (as you can certainly transform the LHS into the RHS; you may not know how, but you should most probably find a crucial similarity between the two). But this similarity will always be hidden behind a barrier. Your job is to look through the barrier to discover the key information. This is a direct application of one of the most powerful problem solving resources that we are aware of - the End State Analysis Approach. If you want to know more, you can refer to it before you proceed further. We would strongly recommend that you follow this path. The reason is that we want you to find the key information for a quick solution before you go through our explanation. If you solve a problem elegantly yourself, your learning will be maximum. Take a pause, learn what End State Analysis is and how to use the concept, and solve this problem in a few steps yourself.

Key information discovery

Our goal is transforming each of the two terms in the LHS to the form of the expression in the RHS. Let us take the first term $(cos\theta - cosec\theta)^2$. Our attention obviously goes to the first term of the expression, and we are clear that to transform this term $cos\theta$ to 1, the value of the first term in the target end state expression, we must factor $cos\theta$ out of the brackets. In doing so, we need to multiply the second term by the inverse of $cos\theta$. That we do. But when the $cos\theta$ is factored out of the brackets it comes out squared. Thus for the first term we get, \begin{align} (cos\theta & - cosec\theta)^2 \\ & = cos^2\theta(1 - sec\theta{cosec\theta})^2 \end{align} Immediately you expect a similar convenient result to come out of the second expression. Indeed, that is exactly what happens. For the second term we get, \begin{align} (sin\theta & – sec\theta)^2 \\ & = sin^2\theta(1 - sec\theta{cosec\theta})^2 \end{align} Now you know the solution to be only a step away. We have the LHS as, \begin{align} & (cos\theta – cosec\theta)^2 + (sin\theta – sec\theta)^2 \\ & = cos^2\theta (1 – sec\theta{cosec\theta})^2 \\ & \hspace{10mm} + sin^2\theta(1 – sec\theta{cosec\theta})^2 \\ & = (sin^2\theta + cos^2\theta) (1 – sec\theta{cosec\theta})^2 \\ & = (1 – sec\theta{cosec\theta})^2\end{align} This is by far the quickest way to reach the solution, in just three steps, and is based fully on examining and using the similarity between the final state and the initial state (here the final state is given and is only one; you needed to find the way to reach the final state from the initial state). Let us review our reasoning process more closely.

Deductive reasoning

On comparison of the RHS and LHS we find both sides to consist of squares. We form our immediate conjecture: it is highly likely that the RHS expression, in exactly the same form, is hidden in each of the two squared expressions on the LHS. At the second stage we shift to comparing the RHS expression with each of the two terms on the LHS. We take up the first term. At the third stage, the first term of the RHS is 1 and the first term in the expression under focus is $cos\theta$. Without any hesitation, our deductive reasoning mechanism dictates that we transform this $cos\theta$ to 1. How? Easy, just by factoring it out of the brackets.
As soon as we do this, the puzzle is solved in our mind. The rest is routine. End note: This is not the only example of its type. You will find a very large number of problems, of this and other types, where you can invariably improve upon the time and effort of the conventional solution. Our recommendation: always think, is there any shorter, better way to the solution?

Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra.

Tutorials on Trigonometry
Efficient problem solving in Trigonometry
How to solve School math problems in a few simple steps, Trigonometry 1

A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to MCQ type questions, and secondly, the same set of problem solving reasoning and techniques is used for all efficient Trigonometry problem solving. On the other hand, any SSC CGL competitive test MCQ problem can also be suitably converted to a School Math problem. That's why the resources on Trigonometry should be useful for School students as well as SSC CGL test aspirants.
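As a final aside, the identity proved above can also be verified symbolically. A minimal sketch using sympy (purely a cross-check, not part of the method discussed):

from sympy import symbols, sin, cos, sec, csc, simplify

t = symbols('theta')
lhs = (cos(t) - csc(t))**2 + (sin(t) - sec(t))**2
rhs = (1 - sec(t)*csc(t))**2
print(simplify(lhs - rhs))   # expect 0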
I am currently working on a physics problem that turns into a non-linear boundary value problem. I need an efficient numerical solver that I can run on my laptop with an i5 dual core CPU. I am discretizing my 15 equations on an N x N x N cubic grid using fourth order finite difference derivatives. I am interested in using Newton-Krylov methods, since the Jacobian that is generated is sparse. My ultimate goal is to solve this system on a 128 x 128 x 128 cube, which means I will be solving almost 15 million equations for the same number of variables. I am already in the process of learning the SUNDIALS packages, but I recently found the PETSc library, which many people seem to praise for the quality of the code and its efficiency. PETSc also seems to have a large number of the preconditioners needed for Krylov methods. So I would like to ask how the two packages compare for serial computation, since I would like to run the solver on my laptop (at least for testing purposes on a somewhat coarser grid). I might eventually move to my university's HPC cluster to solve the system on a finer grid. Also, my intention is to use these packages on a long term basis. I am new to high performance computing, so kindly excuse my ignorance. I would really appreciate any suggestions. P.S. The equations I am working on come from non-Abelian gauge theories and are in tensor form. The individual component equations (15 of them) are long and complicated... so I actually generate these equations using Mathematica and then discretize them. I do not know how much this will help here, but the equations are as follows: $(D_\mu F^{\mu \nu})^a = g \epsilon_{abc} (D^\nu \phi)^b \phi^c$, and $D_\mu (D^\mu \phi)^a = - \lambda (\phi^b \phi^b -v^2) \phi^a$. Here Einstein's summation convention is implied, the indices $\mu$ and $\nu$ run from 0 to 3, and the index $a$ runs from 1 to 3. $D$ represents the gauge covariant derivative, and $\phi$ and $F$ represent the Higgs field and the gauge field strength. You can check this article (page 9) for more details: http://www-thphys.physics.ox.ac.uk/people/MaximeGabella/higgs.pdf
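For concreteness, here is a toy version of the kind of Newton-Krylov solve I mean, using scipy's newton_krylov only as a stand-in (my real residual comes from the gauge field equations above, not from this Poisson-type toy):

import numpy as np
from scipy.optimize import newton_krylov

N, h = 16, 1.0 / 16                  # small toy grid; mine goes up to 128^3
def residual(u):
    lap = -6.0 * u
    for ax in range(3):              # periodic 7-point Laplacian stencil
        lap += np.roll(u, 1, axis=ax) + np.roll(u, -1, axis=ax)
    return lap / h**2 - u**3 - u + 2.0   # nonlinear toy equation, u = 1 solves it

u = newton_krylov(residual, np.zeros((N, N, N)), method='lgmres')
print(np.abs(residual(u)).max())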
Family Size

Problem 1

Consider a couple beginning a family; in the interest of promoting a diverse family, they decide to have children until they have both sexes. How many children can they expect to have?

Hint

We have already solved a very similar problem estimating the number of (Bernoulli) trials until the first success. The idea there was to evaluate the probabilities of a sequence of trials having a certain length and then sum up an infinite series.

Solution of Problem 1

Assume $p$ is the probability of having a boy, $1-p$ that of having a girl. The sample space for our problem consists of just two kinds of sequences: $BB\ldots BG$ and $GG\ldots GB.$ For a sequence of length $n,$ the probabilities are $p^{n-1}(1-p)$ and $(1-p)^{n-1}p,$ respectively, $n\ge 2.$ The mathematical expectation is then $\begin{align}\displaystyle \sum_{n=2}^{\infty}&n(p^{n-1}(1-p) + (1-p)^{n-1}p) \\ &= p(1-p)\sum_{n=2}^{\infty}n(p^{n-2}+(1-p)^{n-2}) \\ &= p(1-p)\bigg(\frac{2-p}{(1-p)^2}+\frac{2-(1-p)}{(1-(1-p))^2}\bigg) \\ &= \frac{1-p+p^2}{p(1-p)}. \end{align}$ (For a way to obtain such sums in a closed form see a separate discussion.) This result sits well with the formula $E=\frac{1}{p}$ for the expected length of a sequence of trials until the first success. Since the series in the latter starts at $n=1,$ a term that is meaningless in the present problem and needs to be omitted, we would get $\displaystyle\bigg(\frac{1}{p}-p\bigg)+\bigg(\frac{1}{1-p}-(1-p)\bigg)=\frac{1-p+p^2}{p(1-p)},$ confirming our calculations. The graph of $\displaystyle f(p)=\frac{1-p+p^2}{p(1-p)}$ shows that the expectation is at a minimum when $p=1/2$ and goes to infinity as $p$ approaches either $0$ or $1.$ For $p=1/2,$ the expectation equals $3.$

Problem 2

Consider a couple beginning a family; at the outset they decide they will have children until they have a child of the same sex as the first one. How many children can they expect to have?

Solution of Problem 2

The sample space for this problem consists of sequences $GBB\ldots BG$ and $BGG\ldots GB$ that, for length $n\ge 2,$ come with probabilities $(1-p)^{2}p^{n-2}$ and $p^{2}(1-p)^{n-2},$ respectively. The expectation then is given by $\begin{align}\displaystyle E &= \sum_{n=2}^{\infty}n((1-p)^{2}p^{n-2}+p^{2}(1-p)^{n-2}) \\ &= \frac{(1-p)^2}{p^2}\sum_{n=2}^{\infty}np^{n}+\frac{p^2}{(1-p)^2}\sum_{n=2}^{\infty}n(1-p)^{n} \\ &= \frac{(1-p)^2}{p^2}S(p)+\frac{p^2}{(1-p)^2}S(1-p), \end{align}$ where $\displaystyle S(q)=\sum_{n=2}^{\infty}nq^n.$ I leave it to the reader to verify that $\displaystyle S(q)=\frac{q^{2}(2-q)}{(1-q)^2},$ so that the expectation in this case is given by $\displaystyle \begin{align} E &= \frac{(1-p)^2}{p^2}\frac{p^2(2-p)}{(1-p)^{2}}+\frac{p^2}{(1-p)^2}\frac{(1-p)^2(1+p)}{p^{2}}\\ &= (2-p) + (1+p) = 3. \end{align}$ Independent of $p\,!$ Which is a great surprise in its own right. But there is more. Note that the formula has been derived under an implicit assumption that $p$ is neither $0$ nor $1.$ But what if, say, $p=0?$ What if, by a fluke of fate, women stopped producing boys? Then obviously the only possible children component in a family would be two girls. In the other extreme $(p=1)$, all families would consist of two boys. Although neither case is plausible, I think that the circumstance merits being mentioned as a naturally occurring discontinuity.

A diversion: a Moscow street board promoting beds and sofas under the slogan "Foster increase of the nativity rate."

References

P. Nahin, Duelling Idiots and Other Probability Puzzlers, Princeton University Press, 2000, pp 55-57
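Both answers are easy to corroborate by simulation. A small sketch (the value of $p$ and the number of trials are arbitrary choices):

import random

def children_until_both_sexes(p):
    n, seen = 0, set()
    while len(seen) < 2:
        seen.add('B' if random.random() < p else 'G')
        n += 1
    return n

def children_until_match_first(p):
    first = 'B' if random.random() < p else 'G'
    n = 1
    while ('B' if random.random() < p else 'G') != first:
        n += 1
    return n + 1          # count the final, matching child

p, T = 0.4, 100000
print(sum(children_until_both_sexes(p) for _ in range(T)) / float(T))
# ~ (1 - p + p^2)/(p(1-p)) = 3.1666... for p = 0.4
print(sum(children_until_match_first(p) for _ in range(T)) / float(T))
# ~ 3, independent of p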
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector.
Centrality classes are determined via the energy ...
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself! Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't. I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way! The most fundamental is this: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The inverse image is a subset of \( X \). The inverse image is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches. The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\). Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\).
Here it is: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek". The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part: Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \). Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\) This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful. Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \). Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \). What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$ This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in a revolutionary paper. By now this observation is part of a big story that "explains" logic using category theory. Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading. Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
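Since all three maps are finite-set constructions, both adjunctions can be checked by brute force. Here is a small Python sketch (my own illustration; the particular function and sets are arbitrary choices, not from the lecture):

from itertools import chain, combinations

def preimage(f, X, S):          # f*(S) = {x in X : f(x) in S}
    return {x for x in X if f[x] in S}

def image(f, S):                # f_!(S) = {f(x) : x in S}
    return {f[x] for x in S}

def direct_image(f, X, Y, S):   # f_*(S) = {y : every preimage of y lies in S}
    return {y for y in Y if all(x in S for x in X if f[x] == y)}

def subsets(X):
    X = list(X)
    return chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))

X, Y = {1, 2, 3}, {'a', 'b'}
f = {1: 'a', 2: 'a', 3: 'b'}

for S in map(set, subsets(X)):
    for T in map(set, subsets(Y)):
        # f_! is left adjoint to f*:  f_!(S) <= T  iff  S <= f*(T)
        assert (image(f, S) <= T) == (S <= preimage(f, X, T))
        # f_* is right adjoint to f*: f*(T) <= S  iff  T <= f_*(S)  (Puzzle 23)
        assert (preimage(f, X, T) <= S) == (T <= direct_image(f, X, Y, S))
print("both adjunctions verified on all pairs (S, T)")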
Proving uniqueness You can prove that the elements are unique in $O(m)$ time and space by pre-sorting them and then giving a zero-knowledge proof that they are in sorted order. Details follow. Assume the elements of $\Sigma$ are integers in the range $[0,K-1]$, where $K$ is a constant chosen in advance and made public. Pick a large prime $p$ and a group element $g \in (\mathbb{Z}/p\mathbb{Z})^*$ of prime order $q$, such that $q > 2K$. The scheme is: First, sort the elements of $\Sigma$, so $\sigma_1 < \sigma_2 < \cdots < \sigma_m$. Next, commit to all the elements, using a discrete log based commitment scheme with generator $g$; for instance, you might use Pedersen commitments. Finally, prove that the elements are in sorted order, i.e., that $\sigma_i < \sigma_{i+1}$ holds for all $i$. You can prove they are in sorted order using a range proof for discrete logs: for all $i$, you show that $\sigma_i \in [0,K-1]$, and you show that $\sigma_{i+1} - \sigma_i \in [1,K-1]$ (again, considering the $\sigma_i$'s as integers). To prove that $\sigma_{i+1} - \sigma_i \in [1,K-1]$, it suffices to prove that $d_i = \sigma_{i+1} - \sigma_i \bmod q$ is in the range $[1,K-1]$: since you've proven that each $\sigma_i$ is in $[0,K-1]$, and since $q > 2K$, there can be no wrap-around modulo $q$. All that remains is to prove that each $d_i$ is in the specified range. One standard way to do a range proof is to express each $d_i$ in binary, i.e., $$d_i = \sum_j b_{i,j} 2^j.$$ Then you commit to all the $b_{i,j}$'s, use the homomorphic property of commitments to show that the $b_{i,j}$'s are consistent with the $d_i$'s (i.e., that the equation above holds), and show that $b_{i,j} \in \{0,1\}$ for each $i,j$. Of course, you can prove that the $d_i$'s were computed correctly by using the homomorphic property of discrete log-based commitment schemes: given the commitments $C(\sigma_{i+1})$ and $C(\sigma_i)$, anyone can compute a commitment $C(d_i)=C(\sigma_{i+1}-\sigma_i \bmod q)$ to $d_i$, even without knowing $\sigma_i,\sigma_{i+1}$. When using this method of range proofs together with the idea above, it will give you a valid proof that the elements $\sigma_1,\dots,\sigma_m$ are pairwise distinct. Proving it is a subset You can show that $\Sigma \subseteq \Psi$ using the techniques in the paper you mentioned.
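To illustrate just the homomorphic step, here is a toy Python sketch (not a secure instantiation; the tiny group parameters $p=23$, $q=11$ and the generators are assumptions chosen so the arithmetic is easy to follow by hand):

p, q = 23, 11                # toy safe prime p = 2q+1; real schemes use ~2048-bit p
g, h = 4, 9                  # elements of the order-q subgroup (assumed independent)

def commit(m, r):
    # Pedersen commitment C(m, r) = g^m * h^r mod p
    return (pow(g, m, p) * pow(h, r, p)) % p

m1, r1 = 7, 3
m2, r2 = 5, 8
C1, C2 = commit(m1, r1), commit(m2, r2)

# Homomorphic property: C1 / C2 is a commitment to m1 - m2 (exponents live mod q),
# computable by anyone who only sees C1 and C2.
C_diff = (C1 * pow(C2, -1, p)) % p
assert C_diff == commit((m1 - m2) % q, (r1 - r2) % q)
print("difference commitment verified")

The verifier never learns $m_1$ or $m_2$; the range proof described above is then run on the committed difference.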
Gradient blowup rate for a semilinear parabolic equation 1. College of Science, Xi'an Jiaotong University, Xi'an, 710049, China 2. Department of Mathematics, University of Notre Dame, Notre Dame, Indiana 46556 We study the semilinear parabolic equation $u_t = u_{xx} + x^m |u_x|^p$, $p > 0$, $m \geq 0$, for which the spatial derivative of solutions becomes unbounded in finite time while the solutions themselves remain bounded. We show that the spatial derivative of solutions is globally bounded in the case $p\leq m+2$, while blowup occurs at the boundary when $p>m+2$. The blowup rate is also found for some range of $p$. Mathematics Subject Classification: Primary: 35K55, 35B4. Citation: Zhengce Zhang, Bei Hu. Gradient blowup rate for a semilinear parabolic equation. Discrete & Continuous Dynamical Systems - A, 2010, 26 (2): 767-779. doi: 10.3934/dcds.2010.26.767
Background: I notice that there is quite a bit of similarity between the standard heat equation and the energy equation (from the Navier-Stokes equations). The heat equation is given as $$\frac{\partial\left(\rho h\right)}{\partial t} + \nabla\cdot\left(u\rho h \right) - \nabla\cdot\left(k\nabla T \right)=0$$ where $\rho,h,u,k,T$ are the density, total specific enthalpy, fluid velocity, thermal conductivity, and temperature, respectively. Note that for many applications, it suffices to write the total specific enthalpy as $h=c_pT$, where $c_p$ is the constant pressure specific heat capacity. On the other hand, the energy equation (from the Navier-Stokes system) is given as $$\frac{\partial\left(\rho E\right)}{\partial t}+\nabla\cdot\left(u\rho E\right)-\nabla\cdot(k\nabla T)+\nabla\cdot(\sigma u)=0$$ where $E,\sigma$ are the total energy and stress tensor, respectively. Here, the total energy $E$ is the sum of internal and kinetic energy, $E=e+K$, with $e=c_vT$ and $K=\frac{1}{2}|u|^2$. Since both equations are derived from the conservation of energy, it is no surprise that both equations share the following terms: rate of change of enthalpy (time derivative term), advection of internal heat (divergence term), and heat diffusion (Laplacian term). The energy equation has additional terms which are not found in the heat equation. These additional terms are specific to compressible fluid flow and account for the rate of change and advection of kinetic energy ($K$) and the stress contributions to energy ($\sigma$). In many situations, the kinetic energy and stress contributions are very small when the fluid flow is incompressible, and thus can be neglected. This results in the energy equation and heat equation being almost identical except for one subtle difference: A subtle difference The heat equation is formulated with the constant pressure specific heat capacity $c_p$ while the energy equation uses the constant volume heat capacity $c_v$. Yet, I assume that they are derived from the same principle of conservation of energy. If so, I would expect the two expressions to use the same specific heat capacity. This becomes more important when dealing with different types of fluid flows. I have seen models for incompressible flow where the energy equation is simply formulated in terms of the standard heat equation (as above) with the constant pressure heat capacity $c_p$. Since the temperature does not affect incompressible fluid flow, one can think of this as a "scalar transport" which depends on the resolved fluid velocity $u$. Application: Solidification & Melting In my particular case, I want to model phase change (solidification and melting). For incompressible flow, it suffices to rewrite the total specific enthalpy as the sum of internal and latent heat ($h=c_pT+h_L$) and substitute $h$ in the equation. For compressible flows, the equation is written in terms of the internal energy $c_vT$, not enthalpy. So, I'm not 100% sure that it would suffice to simply add the latent heat term as if it were a part of the total internal energy. The only way to know for sure is if I understand why the incompressible energy equation is written in terms of $c_p$ while the compressible energy equation is written in terms of $c_v$. My Question If the energy equation for incompressible flow is formulated in terms of the constant pressure heat capacity $c_p$, why isn't the energy equation for compressible flow also formulated in terms of the constant pressure heat capacity $c_p$?
Why is the heat capacity different for incompressible and compressible flow? What accounts for the difference? Furthermore, for my specific application, how do these differences affect the way latent heat effects should be added for solidification and melting of a compressible fluid? Can I simply add the latent heat to the internal energy term $e$? Or do I need to completely reformulate the compressible energy equation in terms of enthalpy in order to add latent heat effects?
shortest path between 2 points
This spec goes into the cost function design.
maintain a minimum distance to obstacles
Given a 2D occupancy grid, threshold the probability values to get an occupied/free cell representation of the environment. Then, expand each obstacle cell by the given minimum distance value. Once done, this spec goes into the cost function design.
prefer to stay away from walls - use the center of open spaces
Implement a distance-to-wall function that computes the minimum distance to the nearest wall for a given (x,y) cell. This distance-to-wall function can be pre-computed via dynamic programming before invoking the path planner. Once done, this spec goes into the cost function design.
prefer straight lines
This spec goes into the cost function design.
limit turn radius (> X cm)
This spec goes into the motion model design.
limit maximum change in turn radius (steering angle must be continuous)
This spec goes into the motion model design.
limit maximum acceleration
This spec goes into the motion model design. Do you mean linear or angular acceleration?
A vehicle motion model can be chosen as follows:
dx_dt = v*cos(\theta);
dy_dt = v*sin(\theta);
d\theta_dt = v*\kappa;
d\kappa_dt = \alpha;
dv_dt = a;
State X := (x, y, \theta, \kappa, v)
(x,y) : position [m], x_min <= x <= x_max, y_min <= y <= y_max
\theta : heading [rad], -pi <= \theta <= pi
\kappa : curvature [m^-1], \kappa_min <= \kappa <= \kappa_max
v : linear speed [m/s], v_min <= v <= v_max
Control U := (\alpha, a)
\alpha : curvature rate [(m*s)^-1], \alpha_min <= \alpha <= \alpha_max
a : linear acceleration [m/s^2], a_min <= a <= a_max
You can implement a graph search algorithm on the discretized 5D (x, y, \theta, \kappa, v) state space and 2D control space (\alpha, a). Your cost function can have the following inputs:
current state: x_cur, y_cur, \theta_cur, v_cur, \kappa_cur
next state: x_next, y_next, \theta_next, v_next, \kappa_next
current control action: \alpha_cur, a_cur
spec 1 (minimizing path length): cost_1 := norm(x_next - x_cur, y_next - y_cur);
spec 2 (maintaining a minimum distance to obstacles): cost_2 := inf if (x_cur, y_cur) is on an obstacle in the expanded environment, 0 otherwise;
spec 3 (staying away from walls): cost_3 := 1 / dist2wall(x_cur, y_cur), so that cells close to a wall are penalized;
spec 4 (preferring straight lines): cost_4 := abs(\kappa_cur);
You need to blend these cost terms in your cost function by using some coefficients (a sketch follows below). In summary, in order to solve your motion planning problem in a principled way with some optimality guarantees, the steps of your algorithm should be:
Expand the obstacles by the minimum distance value
Compute dist2wall values for the 2D environment
Run the graph search algorithm on the discretized 5D state and 2D control space
The bad news is that searching a 5D space is not practical! A good roboticist would use a multi-step approach for practical applications. That is, we can first solve the motion planning problem using a low-order motion model with state X = (x,y,\theta) and control U = (\kappa) to ensure the minimum turn radius constraint (bounded curvature).
dx_ds = cos(\theta);
dy_ds = sin(\theta);
d\theta_ds = \kappa;
State X := (x, y, \theta)
Control U := (\kappa)
Here the derivatives are defined with respect to the arclength parameter s, not time! Using this low-order motion model requires searching only a 3D state space, which can be done quickly by a graph search algorithm.
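A minimal Python sketch of such a blended cost; the weights and the helper names dist2wall and occupied_expanded are assumptions for illustration, not part of the original specs:

import math

W_LENGTH, W_WALL, W_STRAIGHT = 1.0, 0.5, 0.2   # blending coefficients (assumed)

def step_cost(cur, nxt, dist2wall, occupied_expanded):
    """cur/nxt: states (x, y, theta, kappa, v); returns the blended edge cost."""
    x0, y0, _, kappa0, _ = cur
    x1, y1, _, _, _ = nxt
    if occupied_expanded(x0, y0):                    # spec 2: expanded obstacles
        return math.inf
    cost = W_LENGTH * math.hypot(x1 - x0, y1 - y0)   # spec 1: path length
    cost += W_WALL / max(dist2wall(x0, y0), 1e-6)    # spec 3: penalize wall proximity
    cost += W_STRAIGHT * abs(kappa0)                 # spec 4: prefer straight lines
    return cost

Tuning the coefficients trades path length against clearance, exactly the relaxation discussed below.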
Once the graph search algorithm computes a geometric path by using the low-order motion model, we can smooth out the curvature and speed profile by considering the curvature rate and acceleration constraints. The catch is that we cannot guarantee feasibility anymore. Bad news from a theoretical point of view! That is, the motion planning problem has a solution per se, and we can compute the optimal solution if we search in the 5D state space. However, in the 2-step approach, the initial geometric path may be too close to obstacles, and then it cannot be smoothed out to generate a path satisfying the given curvature rate and acceleration constraints. To remedy this drawback, in practice, we usually relax the path length cost term a little bit, so that the initial path can be computed farther from obstacles. Then, the post-smoothing method usually works well once the initial path has enough clearance from obstacles. Another practical piece of advice: for the search algorithm, keep it simple, stupid (KISS)! Try the Dijkstra algorithm first (a minimal sketch follows below); if it can get the job done, implement the Dijkstra algorithm. Always keep track of the invariant properties of the search algorithm implementation for detecting bugs. When implementing a path planner, most of the time is spent on the cost function design, developing a good low-order motion model, and field tests. Just don't waste your time implementing a complicated search algorithm from papers. A good path planner is one whose output you can easily interpret using your intuition.
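For reference, the KISS baseline can be as small as the following Python sketch of Dijkstra's algorithm over an abstract graph; neighbors and edge_cost would come from the discretized state space and the blended cost above (my own sketch, not from the answer):

import heapq

def dijkstra(start, goal, neighbors, edge_cost):
    """neighbors(node) -> iterable of nodes; edge_cost(a, b) -> float >= 0."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                              # stale queue entry, skip
        for nxt in neighbors(node):
            nd = d + edge_cost(node, nxt)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))
    if goal not in dist:
        return None, float("inf")                 # goal unreachable
    path, node = [], goal                         # reconstruct the path
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]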
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons (D$^{0}$, D$^{+}$, D$^{*+}$ and D$_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\rm T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
I've asked some other questions before about Rijndael's S-boxes, and step by step I'm coming to an understanding; but those steps often guide me to new questions. I wrote some lines of code to understand how these S-boxes work, implementing $S(z) = f(g(z))$ where $g(z)$ is the transformation by multiplicative inverse in a polynomial field and $f(z)$ is the affine mapping, understanding a word as an element in a polynomial ring following the expression \begin{equation} \begin{aligned} b(z) &= \mu(z) \cdot a(z) + \nu(z) \\ &= (z^4+z^3+z^2+z+1) \cdot a(z) + (z^6+z^5+z+1) \;\bmod{(z^8+1)} \end{aligned} \end{equation} In the "AES proposal" (version 2, from 1999), section 7.2 explains that $\mu(z)$ was chosen from the set of polynomials coprime with $z^8+1$ as the one with the simplest description. But what can be understood as a "simple description"? The condition to select $\nu(z)$ looks much clearer (no fixed points and no opposite fixed points, isn't it?). If I did it well, there are 129 coprime polynomials from which to select one (and its inverse), and I cannot see what makes this one special. $\mu(z)$ has 5 ones and its inverse $\mu^{-1}(z)$ has 3, and there are other pair candidates with a relation 3-3 or 5-5 of 1s. Another possible description of simplicity was a palindromic representation, but that is not the case either. They are not the shortest or longest ones, to highlight them over the others. What makes them special enough to be chosen? PS: In the mentioned "AES proposal" the $\mu(z)$ and $\nu(z)$ are not the ones above, and neither are the ones shown in section 3.4.1 of "The Design of Rijndael"; but with the two shown here I've reproduced the official S-boxes and the tables of section C of the mentioned book. Am I wrong? Update 20150320, based on the comments received: Based on the definition of the "simplicity" criterion used to select $\mu(z)$, it cannot be evaluated what makes $z^4+z^3+z^2+z+1$ special. The only known requirement on this polynomial is that it must have a multiplicative inverse in the ring defined by $z^8+1$: there are 129 invertible elements. The other parameter $\nu(z)$ has an evaluable selection criterion: it must not produce any fixed point, nor any opposite fixed point. Checking this, the number of polynomial pairs in the ring that satisfy it is 21717. A big number. In a narrower view, fixing $\mu(z)$, the number of possible $\nu(z)$ that satisfy the second criterion is 223. Even though Rijndael is usually implemented by storing the S-boxes, so that calculation times look not very relevant, perhaps they are. The boxplots represent the average calculation time for each of the candidate pairs: on the left using the modular product, and on the right using the MDS matrix suggested in the Rijndael documents. Apart from the fact that the matrix method is not better in general, the times for the official pair are represented by coloured dots. There are very many better candidates. I made a second figure, as a branch that assumes that the official $\mu(z)$ has something special I cannot see. Here the data set is the 223 different $\nu(z)$, having fixed $\mu(z)=z^4+z^3+z^2+z+1$. Again, be careful with the scales on the y axis. The calculation times for the official pair have also been drawn as coloured points. And here the official $\nu(z)$ doesn't show any special timing property. The code for this test is in a Rijndael project on github. From a command line it can be repeated by calling $ python Polynomials.py --test-ring=8, and the basic things made with R are in a Polynomials subdirectory.
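For readers who want to reproduce this, here is a minimal Python sketch of the construction described in the question (my own; it assumes, as stated above, that the affine layer equals multiplication by $\mu(z)$ modulo $z^8+1$ plus $\nu(z)$, with the byte constants 0x1F and 0x63 encoding $\mu$ and $\nu$; the spot-checked S-box values are the well-known ones):

def ring_mul(a, b):
    # Multiply two GF(2) polynomials (bytes) modulo z^8 + 1.
    r = 0
    for i in range(8):
        if (a >> i) & 1:
            r ^= b << i
    return ((r >> 8) ^ r) & 0xFF   # z^8 == 1 in this ring, so fold the high byte back

MU, NU = 0x1F, 0x63                # z^4+z^3+z^2+z+1 and z^6+z^5+z+1

def affine(a):
    return ring_mul(MU, a) ^ NU

def gf_mul(a, b):
    # Multiply in GF(2^8) modulo the Rijndael polynomial z^8+z^4+z^3+z+1 (0x11B).
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def gf_inv(a):
    if a == 0:
        return 0                   # the S-box maps 0 via the convention g(0) := 0
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

sbox = [affine(gf_inv(a)) for a in range(256)]
assert sbox[0x00] == 0x63 and sbox[0x01] == 0x7C and sbox[0x53] == 0xED
print("S-box reproduced")

The fold (r >> 8) ^ r works because $z^8 \equiv 1 \pmod{z^8+1}$, so the high byte of the 15-bit product is simply XORed back onto the low byte.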
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
The paper describes our experiment with using Gaussian mixture models (GMM) for classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and the complementary spectral properties as well as the supra-segmental parameters. Dependence of classification correctness on the number of parameters in the input feature vector and on the computational complexity is also evaluated. In addition, the influence of the initial setting of the parameters for the GMM training process was analyzed. The obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for tested sentences uttered using three configurations of orthodontic appliances.
$$m\ddot{X}=\dfrac{\partial V}{\partial X},\tag{9}$$
$$m\ddot{Y}=\dfrac{\partial V}{\partial Y}.\tag{10}$$
From Eq. (3), the potential $V_1$ between $m$ and $m_1$ is given by
$$V_{1}=-Gmm_1\Bigg[\dfrac{1}{r_1}+\dfrac{A_1+A}{2r_1^3}\Bigg],\tag{11}$$
and the potential $V_2$ between $m$ and $m_2$ is given by
$$V_{2}=-Gmm_2\Bigg[\dfrac{1}{r_2}+\dfrac{A_2+A}{2r_2^3}\Bigg].\tag{12}$$
To extend Müller's answer,

Should the microphones be placed in separate tubes in order to improve separation?

No, you are trying to identify the direction of the source; adding tubes will only make the sound bounce inside the tube, which is definitely not wanted. If only one tube hears the sound, due to no reflections around the robot to bounce into either of the other two tubes, then you have no phase correlation and only know that one tube heard something louder than the other two did, which gives you a direction with an error of $\pm60°$. The best course of action would be to make them face straight up; this way they will all receive similar sound, and the only thing that is unique about them is their physical placement, which will directly affect the phase. A 6 kHz sine wave has a wavelength of $\frac{\text{speed of sound}}{\text{sound frequency}}=\frac{343\text{ m/s}}{6\text{ kHz}}\approx 57.2\text{ mm}$. So if you want to uniquely identify the phases of sine waves up to 6 kHz, which covers the typical frequencies of human talking, then you should space the microphones at most half a wavelength, about 28.6 mm, apart. There are plenty of small microphones whose diameter is well within that limit. Don't forget to add a low pass filter with a cut-off frequency at around 6-10 kHz.

Edit

I felt that this #2 question looked fun so I decided to try to solve it on my own.

Can phase correlation be calculated between 3 sources simultaneously somehow? (i.e. in order to speed up the computation)

If you know your linear algebra, then you can imagine that you have placed the microphones in a triangle where each microphone is 4 mm away from the others, making each interior angle $60°$. So let's assume they are in this configuration:

      C
     / \
    /   \
   /     \
  /       \
 /         \
A - - - - - B

I will...
use the nomenclature $\overline{AB}$, which is a vector pointing from $A$ to $B$
call $A$ my origin
write all numbers in mm
use 3D math but end up with a 2D direction
set the vertical position of the microphones to their actual waveform, so these equations are based on the instantaneous values of the sound wave at the microphones
calculate the cross product of these microphones based on their position and waveform, then ignore the height information from this cross product and use arctan to come up with the actual direction of the source
call $a$ the output of the microphone at position $A$, $b$ the output of the microphone at position $B$, and $c$ the output of the microphone at position $C$

So the following things are true:
$A=(0,0,a)$
$B=(4,0,b)$
$C=(2,\sqrt{4^2-2^2}=2\sqrt{3},c)$
This gives us:
$\overline{AB} = (4,0,b-a)$
$\overline{AC} = (2,2\sqrt{3},c-a)$
And the cross product is simply $\overline{AB}×\overline{AC}$
$$\begin{align}\overline{AB}×\overline{AC}&= \begin{pmatrix}4\\0\\b-a\\ \end{pmatrix}× \begin{pmatrix}2\\2\sqrt{3}\\c-a\\ \end{pmatrix}\\\\&=\begin{pmatrix}0\cdot(c-a)-(b-a)\cdot2\sqrt{3}\\(b-a)\cdot2-4\cdot(c-a)\\4\cdot2\sqrt{3}-0\cdot2\\ \end{pmatrix}\\\\&=\begin{pmatrix}2\sqrt{3}(a-b)\\2a+2b-4c\\8\sqrt{3}\\ \end{pmatrix}\end{align}$$ The Z information, $8\sqrt{3}$, is just junk, of zero interest to us. As the input signals are changing, the cross vector will swing back and forth towards the source. So half of the time it will point straight at the source (ignoring reflections and other parasitics), and the other half of the time it will point 180 degrees away from the source. What I'm talking about is the $\arctan(\frac{2a+2b-4c}{2\sqrt{3}(a-b)})$, which can be simplified to $\arctan(\frac{a+b-2c}{\sqrt{3}(a-b)})$, and then turning the radians into degrees.
So what you end up with is the following equation: $$\arctan\Biggl(\frac{a+b-2c}{\sqrt{3}(a-b)}\Biggr)\frac{180}{\pi}$$ But half the time the information is literally 100% wrong, so how should one make it right 100% of the time? Well, if $a$ is leading $b$, then the source can't be closer to B. In other words, just make something simple like this:

source_direction=atan2(a+b-2*c, sqrt(3)*(a-b))*180/pi;
if(a>b){
    if(b>c){//a>b>c
        possible_center_direction=240; //A is closest, then B, last C
    }else if(a>c){//a>c>b
        possible_center_direction=180; //A is closest, then C, last B
    }else{//c>a>b
        possible_center_direction=120; //C is closest, then A, last B
    }
}else{
    if(c>b){//c>b>a
        possible_center_direction=60; //C is closest, then B, last A
    }else if(a>c){//b>a>c
        possible_center_direction=300; //B is closest, then A, last C
    }else{//b>c>a
        possible_center_direction=0; //B is closest, then C, last A
    }
}
//if the source is out of bounds, i.e. more than 60 degrees away from
//the possible center direction, then rotate it by 180 degrees
if(source_direction>(possible_center_direction+60)
        || source_direction<(possible_center_direction-60)){
    source_direction=(source_direction+180)%360;
}

And perhaps you only want to react if the sound source is coming from a specific vertical angle: if people talk above the microphones => 0 phase change => do nothing; people talk horizontally next to it => some phase change => react. $$\begin{align}|P| &= \sqrt{P_x^2+P_y^2}\\&= \sqrt{3(a-b)^2+(a+b-2c)^2}\\\end{align}$$ So you might want to set that threshold to something low, like 0.1 or 0.01. I'm not entirely sure; it depends on the volume and frequency and parasitics, so test it yourself. Another reason to use the absolute value equation is zero crossings: there might be a slight moment when the direction will point in the wrong direction, though it will only be for 1% of the time, if even that. So you might want to attach a first order LP filter to the direction.

true_true_direction = true_true_direction*0.9 + source_direction*0.1;

And if you want to react to a specific volume, then just sum the 3 microphones together and compare that to some trigger value. The mean value of the microphones would be their sum divided by 3, but you don't need to divide by 3 if you increase the trigger value by a factor of 3. I'm having issues with marking the code as C/C#/C++ or JS or any other, so sadly the code will be black on white, against my wishes. Oh well, good luck on your venture. Sounds fun. Also, there is a 50/50 chance that the direction will be 180° away from the source 99% of the time; I'm a master at making such mistakes. A correction for this would be to just invert the if statement that decides when 180 degrees should be added.
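As a quick numeric sanity check of the bearing formula (my own sketch; the 40° source angle, the ideal plane-wave model and the sampling instant are assumptions):

import numpy as np

theta = np.deg2rad(40.0)                       # true source bearing (assumed)
lam = 343.0 / 6000.0                           # wavelength of a 6 kHz tone [m]
k = 2 * np.pi / lam
mics = np.array([[0.0, 0.0],                   # A
                 [0.004, 0.0],                 # B
                 [0.002, 0.002 * np.sqrt(3)]]) # C, the 4 mm triangle above
u = np.array([np.cos(theta), np.sin(theta)])   # unit vector towards the source
a, b, c = np.sin(3.0 + k * mics @ u)           # microphone outputs at one instant
est = np.degrees(np.arctan2(a + b - 2 * c, np.sqrt(3) * (a - b)))
print(est)   # ~39.8 here; at other instants it comes out ~40 - 180,
             # the ambiguity that the amplitude test above resolves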
Answer $R\text{-}\mathrm{factor} = 25\ \frac{^{\circ}\mathrm{F}\cdot\mathrm{ft}^2\cdot\mathrm{h}}{\mathrm{Btu}}$ Work Step by Step We use $R\text{-}\mathrm{factor}= \frac{\Delta T}{H}$. Plugging in the given values: $R\text{-}\mathrm{factor} = \frac{1}{0.040} = 25\ \frac{^{\circ}\mathrm{F}\cdot\mathrm{ft}^2\cdot\mathrm{h}}{\mathrm{Btu}}$
Sperner.tex \section{Sperner's theorem} \label{sec:sperner} The first nontrivial case of Szemer\'edi's theorem is the case of arithmetic progressions of length $3$. However, for the density Hales--Jewett theorem even the case $k=2$ is interesting. DHJ($2$) follows from a basic result in extremal combinatorics: Sperner's theorem. \ignore{This result has been known for a long time. \noteryan{amusingly, published after vdW's theorem}} In this section we review a standard probabilistic proof of Sperner's theorem. Besides suggesting the equal-slices distribution, this proof easily gives the $k=2$ case of the probabilistic Density Hales--Jewett theorem, a key component in our proof of DHJ($3$). To investigate DHJ($2$) it is slightly more convenient to take the alphabet to be $\Omega = \{0,1\}$. Then a combinatorial line in $\{0,1\}^n$ is a pair of distinct binary strings $x$ and $y$ such that to obtain $y$ from $x$ one changes some $0$'s to $1$'s. If we think of the strings $x$ and $y$ as the indicators of two subsets $X$ and $Y$ of $[n]$, then this is saying that $X$ is a proper subset of $Y$. Therefore, when $k=2$ we can formulate the density Hales--Jewett theorem as follows: there exists $\dhj{2}{\delta}$ such that for $n \geq \dhj{2}{\delta}$, if $\calA$ is a collection of at least $\delta 2^n$ subsets of $[n]$, then there must exist two distinct sets $X,Y \in \calA$ with $X \subset Y$. In the language of combinatorics, this is saying that $\calA$ is \emph{not} an antichain. (Recall that an \textit{antichain} is a collection $\mathcal{A}$ of sets such that no set in $\mathcal{A}$ is a proper subset of any other.) Sperner's theorem gives something slightly stronger: a precise lower bound on the cardinality of any antichain. \begin{named}{Sperner's theorem} \label{thm:sperner} For every positive integer $n$, the largest cardinality of any antichain of subsets of $[n]$ is $\binom{n}{\lfloor n/2\rfloor}$. \end{named} As the bound suggests, the best possible example is the collection of all subsets of $[n]$ of size $\lfloor n/2\rfloor$. (It can be shown that this example is essentially unique: the only other example is to take all sets of size $\lceil n/2\rceil$, and even this is different only when $n$ is odd.) It is well known that $\binom{n}{\lfloor n/2\rfloor}2^{-n} \geq 1/2\sqrt{n}$ for all $n$; hence Sperner's theorem implies that one may take $\dhj{2}{\delta} = 4/\delta^2$.\noteryan{The constant can be sharpened to $\pi/2$ for small $\delta$, of course. Also, $4/\delta^2$ is technically not an integer.} Let us present a standard probabilistic proof of Sperner's theorem (see, e.g.,~\cite{Spe90}): \begin{proof} (\emph{Sperner's theorem.}) Consider the following way of choosing a random subset of $[n]$. First, we choose, uniformly at random, a permutation $\tau$ of $[n]$. Next, we choose, uniformly at random and independently of $\tau$, an integer $s$ from the set $\{0,1,\dots,n\}$. Finally, we set $X=\{\tau(1),\dots,\tau(s)\}$ (where this is interpreted as the empty set if $s=0$). Let $\mathcal{A}$ be an antichain. Then the probability that a set $X$ that is chosen randomly in the above manner belongs to $\mathcal{A}$ is at most $1/(n+1)$, since, whatever $\tau$ is, at most one of the $n+1$ sets $\{\tau(1),\dots,\tau(s)\}$ can belong to $\mathcal{A}$. However, what we are really interested in is the probability that $X\in\mathcal{A}$ if $X$ is chosen \textit{uniformly} from all subsets of $[n]$.
Let us write $\eqs{2}[X]$ for the probability that we choose $X$ according to the distribution defined above, and $\unif_2[X]$ for the probability that we choose it uniformly. Then $\unif_2[X]=2^{-n}$ for every $X$, whereas $\eqs{2}[X]=\frac 1{n+1}\binom n{\abs{X}}^{-1}$, since there is a probability $1/(n+1)$ that $s = \abs{X}$, and all sets of size $\abs{X}$ are equally likely to be chosen. Therefore, the largest ratio of $\unif_2[X]$ to $\eqs{2}[X]$ occurs when $\abs{X}=\lfloor n/2\rfloor$ or $\lceil n/2\rceil$.\noteryan{by unimodality of binomial coefficients} In this case, the ratio is $(n+1)\binom n{\lfloor n/2\rfloor}2^{-n}$. Since $\eqs{2}(\mathcal{A})\leq 1/(n+1)$, it follows that $2^{-n}\abs{\mathcal{A}}=\unif_2(\mathcal{A})\leq\binom n{\lfloor n/2\rfloor}2^{-n}$, which proves the theorem. \end{proof} As one sees from the proof, it is very natural to consider different probability distributions on $\{0,1\}^n$, or equivalently on the set of all subsets of $[n]$. The first is the uniform distribution $\unif_2$, which is forced on us by the way the question is phrased. The second is what we called $\eqs{2}$; the reader may check that this is precisely the ``equal-slices'' distribution $\eqs{2}^n$ described in Section~\ref{sec:pdhj}. After seeing the above proof, one might take the attitude that the ``correct'' statement of Sperner's theorem is that if $\calA$ is an antichain, then $\eqs{2}(\calA) \leq 1/(n+1)$, and that the statement given above is a slightly artificial and strictly weaker consequence. \subsection{Probabilistic DHJ(\texorpdfstring{$2$}{2})} \label{sec:prob-sperner} Indeed, what the proof (essentially) establishes is the ``equal-slices'' DHJ($2$) theorem; i.e., that in Theorem~\ref{thm:edhj} one may take $\edhj{2}{\delta} = 1/\delta$.\noteryan{minus $1$, even} We say ``essentially'' because of the small distinction between the distribution $\eqs{2}^n$ used in the proof and the distribution $\ens{2}^n$ in the statement. It will be convenient in this introductory discussion of Sperner's theorem to casually ignore this. We will introduce $\ens{k}^n$ and be more careful about its distinction with $\eqs{k}^n$ in Section~\ref{sec:eqs}. To further bolster the claim that $\eqs{2}^n$ is natural in this context we will show an easy proof of the \emph{probabilistic} DHJ($2$) theorem. Looking at the statement of Theorem~\ref{thm:pdhj}, the reader will see it requires defining $\eqs{3}^n$; we will make this definition in the course of the proof. \begin{lemma} \label{lem:p-sperner} For every real $\delta>0$, every $A \subseteq \{0,1\}^n$ with $\eqs{2}^n$-density at least $\delta$ satisfies \[ \Pr_{\lambda \sim \eqs{3}^n}[\lambda \subseteq A] \geq \delta^2. \] \end{lemma} \noindent Note that there is no lower bound necessary on $n$; this is because a template $\lambda \sim \eqs{3}^n$ may be degenerate. \begin{proof} As in our proof of Sperner's theorem, let us choose a permutation $\tau$ of $[n]$ uniformly at random. Suppose we now choose $s \in \{0, 1, \dotsc, n\}$ and also $t \in \{0, 1, \dotsc, n\}$ \emph{independently}. Let $x(\tau,s) \in \{0,1\}^n$ denote the string which has $1$'s in coordinates $\tau(1), \dotsc, \tau(s)$ and $0$'s in coordinates $\tau(s+1), \dotsc, \tau(n)$, and similarly define $x(\tau,t)$. These two strings both have the distribution $\eqs{2}^n$, but are not independent. A key observation is that $\{x(\tau,s), x(\tau,t)\}$ is a combinatorial line in $\{0,1\}^n$, unless $s = t$ in which case the two strings are equal.
The associated line template is $\lambda \in \{0,1,\wild\}^n$, with \[ \lambda_i = \begin{cases} 1 & \text{if $i \leq \min\{s,t\}$,}\\ \wild & \text{if $\min\{s,t\} < i \leq \max\{s,t\}$,} \\ 0 & \text{if $i > \max\{s,t\}$.} \end{cases} \] This gives the definition of how to draw $\lambda \sim \eqs{3}^n$ (with alphabet $\{0,1,\wild\}$). Note that $\lambda$ is a degenerate template with probability $1/(n+1)$. Assuming $\Pr_{x \sim \eqs{2}^n}[x \in A] \geq \delta$, our goal is to show that $\Pr[x(\tau,s), x(\tau,t) \in A ] \geq \delta^2$. But \begin{align*} \Pr[x(\tau,s), x(\tau,t) \in A] &= \Ex_{\tau} \Bigl[\Pr_{s,t} [x(\tau,s), x(\tau,t) \in A]\Bigr] & \\ &= \Ex_{\tau} \Bigl[\Pr_{s} [x(\tau,s) \in A] \Pr_{t}[x(\tau,t) \in A]\Bigr] &\text{(independence of $s$, $t$)} \\ &= \Ex_{\tau} \Bigl[\Pr_{s} [x(\tau,s) \in A]^2\Bigr] & \\ &\geq \Ex_{\tau} \Bigl[\Pr_{s} [x(\tau,s) \in A]\Bigr]^2 & \text{(Cauchy-Schwarz)}\\ &= \Pr_{\tau,s} [x(\tau,s) \in A]^2, & \end{align*} and $x(\tau,s)$ has the distribution $\eqs{2}^n$, completing the proof. \end{proof} Having proved the probabilistic DHJ($2$) theorem rather easily, an obvious question is whether we can generalize the proof to $k = 3$. The answer seems to be no; there is no obvious way to generate random length-$3$ lines in which the points are independent, or even partially independent as in the previous proof. Nevertheless, the equal-slices distribution remains important for our proof of the general case of DHJ($k$); in Section~\ref{sec:eqs} we shall introduce both $\eqs{k}^n$ and $\ens{k}^n$ and prove some basic facts about them.
A useful "abstract nonsense" construction in ergodic theory takes a measure-preserving transformation$T$ of a probability space $(X,\mathcal B,\mu)$ and extends it to an invertible measure-preserving transformation $\bar T$ of a probability space $(\bar X,\bar{\mathcal B},\bar\mu)$. One description of this is in Omri Sarig's notes (section 1.6.4). In his construction he needs to make the assumption that $T(X)=X$, or the weaker assumption, $T(X)$ is measurable. My question is whether this is automatic for Lebesgue spaces. Hence my precise question: If $T$ is a measure-preserving transformation of $[0,1]$ (equipped with Lebesgue measure and the $\sigma$-algebra of Lebesgue measurable sets), is $T([0,1])$ necessarily measurable? Note: $[0,1]\setminus T([0,1])$ does not contain any measurable sets of positive measure by the Poincaré recurrence theorem, so $T([0,1])$ is certainly of outer measure 1.
What is the difference between cointegration and the vector error correction model (VECM)? I applied a cointegration test and found a long run association between variables, so should I apply VECM? Cointegration is a phenomenon that may be exhibited by a group of integrated time series; being cointegrated is a feature that may be possessed by a group of integrated time series. Let us consider a simple example. If series $x_{1,t},\dotsc,x_{m,t}$ are individually I(1) (integrated of order 1) and there exists a linear combination $y_t=\beta_1 x_{1,t}+\dotsc+\beta_m x_{m,t}$ that is I(0) (stationary), then we face the phenomenon of cointegration, and the group of series $x_{1,t},\dotsc,x_{m,t}$ possesses the feature of being cointegrated. If no linear combination is I(0), then there is no cointegration and the series taken together are not cointegrated. The vector error correction model (VECM) is a model that can be used for modelling cointegrated time series. A very simple example is a bivariate VECM with no lags for two integrated-and-cointegrated time series $x_{1,t}$ and $x_{2,t}$, $$ \Delta x_{1,t} = \alpha_1 (x_{1,t-1}-\beta x_{2,t-1}) + \varepsilon_{1,t}, $$ $$ \Delta x_{2,t} = \alpha_2 (x_{1,t-1}-\beta x_{2,t-1}) + \varepsilon_{2,t}. $$ It shows that the series $x_{1,t}$ reacts to the most recent (as of time $t-1$) disequilibrium between itself and the other series and "corrects" (given a suitable value of $\alpha_1$) to reduce the disequilibrium (moves towards equilibrium). The same could be said about $x_{2,t}$.
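In practice, then: test for cointegration first, and if a nonzero cointegration rank is found, fit a VECM. A minimal sketch with statsmodels (the API is version-dependent, and the simulated data and all parameter choices here are my own assumptions, not part of the answer):

import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(0)
n = 500
w = rng.normal(size=n).cumsum()        # common I(1) stochastic trend
x1 = w + rng.normal(size=n)            # I(1)
x2 = 0.5 * w + rng.normal(size=n)      # I(1), cointegrated with x1
data = np.column_stack([x1, x2])

rank = select_coint_rank(data, det_order=0, k_ar_diff=1)  # Johansen trace test
print(rank.rank)                        # should report rank 1 for this data

res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(res.alpha)                        # adjustment speeds toward equilibrium
print(res.beta)                         # cointegrating vector (long-run relation)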
I found in Murphy's book, C*-Algebras and Operator Theory, Theorem 7.1.2: let $A$ be a unital C*-algebra; then the semigroup $V(A)$ of equivalence classes of projections (under Murray-von Neumann equivalence) in $M_\infty(A)$ is cancellative. For the proof he proceeds as follows: take projections $p, q, r$ such that $[p] \oplus [r] = [q] \oplus [r]$, and add $[I_n - r]$ (where $r\in M_n(A)$) to obtain $[p] \oplus [I_n] = [q] \oplus [I_n]$. And he concludes with that, but I don't see why it should be true that one can cancel $[I_n]$. Furthermore, one knows that there exist C*-algebras whose $K_0$ group (the Grothendieck group of $V(A)$) has torsion, for example the Cuntz algebra with $K_0$ group $\mathbb{Z}/n\mathbb{Z}$. For such an algebra the semigroup morphism $V(A) \rightarrow K_0(A)$ cannot be injective and $V(A)$ cannot be cancellative, right? So my question is: who is right, me or Murphy ^^?
Revista Matemática Iberoamericana, Volume 29, Issue 1, 2013, pp. 237–292. DOI: 10.4171/RMI/719. Published online: 2013-01-14. Real-variable characterizations of Orlicz–Hardy spaces on strongly Lipschitz domains of $\mathbb{R}^n$. Dachun Yang and Sibei Yang (Beijing Normal University, China). Let $\Omega$ be a strongly Lipschitz domain of $\mathbb{R}^n$ whose complement in $\mathbb{R}^n$ is unbounded. Let $L$ be a second order divergence form elliptic operator on $L^2(\Omega)$ with the Dirichlet boundary condition, the heat semigroup generated by $L$ having the Gaussian property $(G_{\mathrm{diam}(\Omega)})$, with the regularity of its kernels measured by $\mu\in(0,1]$, where $\mathrm{diam}(\Omega)$ denotes the diameter of $\Omega$. Let $\Phi$ be a continuous, strictly increasing, subadditive and positive function on $(0,\infty)$ of upper type 1 and of strictly critical lower type $p_{\Phi}\in(n/(n+\mu),1]$. In this paper, the authors introduce the Orlicz–Hardy space $H_{\Phi,\,r}(\Omega)$ by restricting arbitrary elements of the Orlicz–Hardy space $H_{\Phi}(\mathbb{R}^n)$ to $\Omega$ and establish its atomic decomposition by means of the Lusin area function associated with $\{e^{-tL}\}_{t\ge0}$. Applying this, the authors obtain two equivalent characterizations of $H_{\Phi,\,r}(\Omega)$ in terms of the nontangential maximal function and the Lusin area function associated with the heat semigroup generated by $L$. Keywords: Orlicz–Hardy space, divergence form elliptic operator, strongly Lipschitz domain, Dirichlet boundary condition, Gaussian property, nontangential maximal function, Lusin area function, atom. Citation: Yang Dachun, Yang Sibei: Real-variable characterizations of Orlicz–Hardy spaces on strongly Lipschitz domains of $\mathbb{R}^n$. Rev. Mat. Iberoam. 29 (2013), 237–292. doi: 10.4171/RMI/719
ISSN: 1556-1801, eISSN: 1556-181X. Networks & Heterogeneous Media, June 2006, Volume 1, Issue 2.

Abstract: This work is concerned with some aspects of the social life of the amoebae Dictyostelium discoideum (Dd). In particular, we shall focus on the early stages of the starvation-induced aggregation of Dd cells. Under such circumstances, amoebae are known to exchange a chemical messenger (cAMP) which acts as a signal to mediate their individual behaviour. This molecule is released from aggregation centres and advances through aggregation fields, first as circular waves and later on as spiral patterns. We shall recall below some of the basic features of this process, paying attention to the mathematical models that have been derived to account for experimental observations.

Abstract: Under consideration is the finite-size scaling of effective thermoelastic properties of random microstructures from a Statistical Volume Element (SVE) to a Representative Volume Element (RVE), without invoking any periodic structure assumptions, but only assuming the microstructure's statistics to be spatially homogeneous and ergodic. The SVE is set up on a mesoscale, i.e. any scale finite relative to the microstructural length scale. The Hill condition generalized to thermoelasticity dictates uniform Neumann and Dirichlet boundary conditions, which, with the help of two variational principles, lead to scale-dependent hierarchies of mesoscale bounds on effective (RVE level) properties: thermal expansion and stress coefficients, effective stiffness, and specific heats. Due to the presence of a non-quadratic term in the energy formulas, the mesoscale bounds for the thermal expansion are more complicated than those for the stiffness tensor and the heat capacity. To quantitatively assess the scaling trend towards the RVE, the hierarchies are computed for a planar matrix-inclusion composite, with inclusions (of circular disk shape) located at points of a planar, hard-core Poisson point field. Overall, while the RVE is attained exactly on scales infinitely large relative to the microscale, depending on the microstructural parameters, the random fluctuations in the SVE response may become very weak on scales an order of magnitude larger than the microscale, thus already approximating the RVE.

Abstract: We consider coupling conditions for the "Aw–Rascle" (AR) traffic flow model at an arbitrary road intersection. In contrast with coupling conditions previously introduced in [10] and [7], all the moments of the AR system are conserved and the total flux at the junction is maximized. This nonlinear optimization problem is solved completely. We show how the two simple cases of merging and diverging junctions can be extended to more complex junctions, like roundabouts. Finally, we present some numerical results.

Abstract: We investigate coupling conditions for gas transport in networks where the governing equations are the isothermal Euler equations. We discuss intersections of pipes by considering solutions to Riemann problems. We introduce additional assumptions to obtain a solution near the intersection and we present numerical results for sample networks.

Abstract: The aim of this paper is to optimize traffic distribution coefficients in order to maximize the transmission speed of packets over a network. We consider a macroscopic fluid-dynamic model dealing with packet flow proposed in [10], where the dynamics at nodes (routers) is decided by a routing algorithm depending on traffic distribution (and priority) coefficients. We solve the general problem for a node with $m$ incoming and $n$ outgoing lines and make explicit the optimal parameters for the simple case of two incoming and two outgoing lines.

Abstract: We consider the initial value problem for the filtration equation in an inhomogeneous medium. The equation is posed in the whole space $\mathbb{R}^n$, $n \geq 2$, for $0 < t < \infty$; $p(x)$ is a positive and bounded function with a certain behaviour at infinity. We take initial data $u(x,0) = u_0(x) \geq 0$, and prove that this problem is well-posed in the class of solutions with finite "energy", that is, in the weighted space $L^1_p$, thus completing previous work of several authors on the issue. Indeed, it generates a contraction semigroup. We also study the asymptotic behaviour of solutions in two space dimensions when $p$ decays like a non-integrable power as $|x| \rightarrow \infty$: $p(x)\,|x|^{\alpha} \sim 1$ with $\alpha \in (0,2)$ (infinite mass medium). We show that the intermediate asymptotics is given by the unique self-similar solution $U_2(x, t; E)$ of the singular problem $|x|^{-\alpha} u(x,0) = E\,\delta(x)$, $E = \|u_0\|_{L^1_p}$.
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling

@heather well, there's a spectrum; so, there's things like New Journal of Physics and Physical Review X which are the open-access branch of existing academic-society publishers

As far as the intensity of a single photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density $$u=\frac{\hbar\omega}{V}$$ is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...

Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker 3 mins ago

> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers "serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service"

for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)

@EmilioPisanty > for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)

@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.

@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...

@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.

Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley. I am mostly joking.

I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change.
Personally I'm mostly opposed to the idea.

@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...

Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry. Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...

@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Integral Multiple of an Algebraic Number

Theorem

Let $K$ be a number field and $\alpha \in K$. Then there exists a positive $n \in \mathbb{Z}$ such that $n \alpha \in \mathcal{O}_K$. In this context, $\mathcal{O}_K$ denotes the algebraic integers in $K$.

Proof

If $\alpha = 0$ then any integer works and the proof is finished. Let $\alpha \ne 0$. Let $f(x) = x^d + a_{d - 1} x^{d - 1} + \dotsb + a_0$ be the minimal polynomial of $\alpha$ over $\mathbb{Q}$. Suppose that $a_i = \dfrac{b_i}{c_i}$ is a reduced fraction for each $i$ such that $a_i \ne 0$. Let $n$ be the least common multiple of the $c_i$, of which there must be at least one by our assumptions. Consider the polynomial:

$$g(x) = n^d f\left(\frac{x}{n}\right) = n^d \left(\frac{x^d}{n^d} + a_{d-1}\frac{x^{d-1}}{n^{d-1}} + \dotsb + a_0\right) = x^d + n a_{d-1} x^{d-1} + \dotsb + n^d a_0.$$

Note that $g$ is a monic polynomial with coefficients in $\mathbb{Z}$ by our choice of $n$. Furthermore, by construction, we see that $n \alpha$ is a root of $g$ and is therefore an algebraic integer. $\blacksquare$
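A worked instance of the proof's construction, checked with sympy (the example polynomial is mine, not from the source): take $\alpha$ with minimal polynomial $x^2 - \frac{3}{2}x + \frac{5}{6}$, so the reduced denominators are $2$ and $6$ and $n = \operatorname{lcm}(2,6) = 6$.

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 - sp.Rational(3, 2)*x + sp.Rational(5, 6)   # minimal polynomial of alpha (my example)
n = sp.ilcm(2, 6)                                    # lcm of the reduced denominators

# g(x) = n^d * f(x/n) should be monic with integer coefficients.
g = sp.expand(n**sp.degree(f, x) * f.subs(x, x/n))
print(g)   # -> x**2 - 9*x + 30, so 6*alpha is a root of a monic integer polynomial
```

Since $g$ is monic with coefficients in $\mathbb{Z}$ and $g(6\alpha) = 36 f(\alpha) = 0$, the multiple $6\alpha$ is indeed an algebraic integer, as the theorem asserts.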
Actually, the motivation for introducing dependent types goes in the opposite direction! Curry had noticed that there was a direct correspondence between typed terms in the $SK$ calculus and proofs in (minimal implicational) propositional logic, but there was no programming language known to correspond in such a way to predicate logic. Indeed, introducing $\Pi$ types is necessary to model propositions of the form $\forall x.P$, which clearly do not admit representation as a simple type $A\rightarrow B$. (Actually, one can imagine a "forgetful" translation, where $A$ is the domain of $x$, and $B$ is the domain of "proofs of $P$", and in this way one can construct something very similar to certain forms of realizability, but this is clearly not an isomorphism.)

In summary, there is an isomorphism between functions $\mathbb{N}\rightarrow\mathbb{N}\rightarrow\mathbb{N}$ in the simply typed $\lambda$-calculus and proofs of $N\Rightarrow N\Rightarrow N$ in minimal logic, in the context $N,N\Rightarrow N$. It is clear, however, that most programming languages have more than the STLC to define functions, and that there is more reason to care about the operational behavior of programs than that of (most) proofs. It makes sense to use dependent types to explore these distinctions, as well as more subtle notions of types to distinguish between "computationally relevant" things and "computationally irrelevant" ones (e.g. in Coq, the $\mathrm{Prop}$ vs $\mathrm{Set}$ distinction). But these were not the original motivation for dependent types.

As a direct answer to your question, the "statement" $\mathbb{N}\Rightarrow\mathbb{N}\Rightarrow\mathbb{N}$ should be read in English as something like: For every pair of natural numbers, there is a natural number. And that's it! The fact that $\lambda x y.\, x+y$ and $\lambda x y.\, x\cdot y$ are two very computationally meaningful proofs of that statement does not factor at all into the statement as it stands.
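In programming terms, a trivial Python sketch of that last point (Python standing in loosely for the STLC): the two terms below inhabit the same type, i.e. they are two different "proofs" of the same "statement", distinguished only by their operational behavior.

```python
from typing import Callable

# Two distinct inhabitants of the type Nat -> Nat -> Nat.
add: Callable[[int, int], int] = lambda x, y: x + y
mul: Callable[[int, int], int] = lambda x, y: x * y

assert add(2, 3) == 5 and mul(2, 3) == 6  # same "statement", very different proofs
```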
According to many references [1,2], the time-varying "impulse response" can compute the wireless channel output $y(t)$ at time $t$ using the following expression: $$ y(t) = \int h(\tau, t) x(t - \tau) d\tau $$ In both references, they state that this represents the response of the channel at time $t$ to an impulse applied at time $t-\tau$. It seems reasonable to assume that there is some version of $x(t)$ involving a delta function that we can apply as an input that returns $h(\tau,t)$ as the output.

Trying: $$x(t) = \delta(t)$$ $$\implies y(t) = \int h(\tau, t) \delta(t - \tau) d\tau = h(t,t) \qquad \text{nope} $$

Trying: $$x(t) = \delta(t-\tau')$$ $$\implies y(t) = \int h(\tau, t) \delta(t - \tau - \tau') d\tau = h(t-\tau',t) \qquad \text{nope, but closer} $$

Is there a way to generate something resembling $h(\tau,t)$ as the output?

References: [1] Proakis, Digital Communications, 5th ed, p. 832. [2] Goldsmith, Wireless Communications, 1st ed, p. 67.
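For what it's worth, the second substitution above can be checked numerically in discrete time. In this sketch the channel $h[\tau, n]$ is a made-up random matrix (not from either reference) and the input is a shifted unit impulse; the output is exactly the slice $h[n - n_0, n]$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
h = rng.normal(size=(N, N))        # h[tau, n]: an arbitrary time-varying channel

def channel(x):
    # Discrete analogue of y(t) = integral h(tau, t) x(t - tau) dtau (causal sum).
    return np.array([sum(h[tau, n] * x[n - tau] for tau in range(n + 1))
                     for n in range(N)])

n0 = 3
x = np.zeros(N); x[n0] = 1.0        # impulse applied at time n0
y = channel(x)
assert np.allclose(y[n0:], [h[n - n0, n] for n in range(n0, N)])  # y[n] = h[n-n0, n]
```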
Taking a slant on xnor's idea, consider the velocity of any ant towards $O$. I'll use the standard figures for the internal angle of a regular pentagon and the radius of a circumscribed circle from wikipedia.Name the vertices $A,B,C,D,E$ starting at the top and working clockwise, and let $\theta = \angle OAB$. By symmetry, $\angle OAE$ is also $\theta$, so $\angle BAE = 2\theta = 3\pi/5$ (internal angle of a regular pentagon), i.e. $\theta = 3\pi/10$. The velocity $V$ of $A$ towards $O$ is $S \cos \theta$, so the time $T$ taken for $A$ to reach $O$ is ${AO \over V} = {L \over 2 \sin (\pi/5)} \times {1 \over S \cos \theta}$ (using the formula for the radius of a circumscribed circle). $$\therefore T = {L \over 2S \sin (\pi/5) \cos (3\pi/10)} = {L \over 2S \sin^2(\pi/5)} \approx 1.45L/S$$ As xnor observes, the ants spiral through an unbounded angle. As $T$ and $S$ are finite, the total scalar distance $D=TS$ traversed by each ant is also finite. Since $T \approx 1.45L/S$ from before, we also have $TS \approx 1.45L$, i.e. $D \approx 1.45L$. The counter-intuitive result hinted at by leoll2 is then the finite time taken to traverse an infinite spiral, where the total length of the spiral is a constant factor of about 1.45 times the length of one side of the initial pentagon. Note: thanks to leoll2 for pointing out the trig simplification.
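The closed form can be sanity-checked by direct simulation. The sketch below is my own (parameter choices: $L=S=1$, forward-Euler steps of size dt, stop when ant $A$ is within eps of the centre $O$ and add the small remaining radial time); it agrees with $T \approx 1.447\,L/S$.

```python
import math

L = S = 1.0
R = L / (2 * math.sin(math.pi / 5))      # circumradius of the starting pentagon
ants = [(R * math.cos(2 * math.pi * k / 5), R * math.sin(2 * math.pi * k / 5))
        for k in range(5)]

dt, eps, t = 1e-5, 1e-3, 0.0
while math.hypot(*ants[0]) > eps:
    new = []
    for k in range(5):
        x, y = ants[k]
        tx, ty = ants[(k + 1) % 5]       # each ant chases the next one
        d = math.hypot(tx - x, ty - y)
        new.append((x + S * dt * (tx - x) / d, y + S * dt * (ty - y) / d))
    ants, t = new, t + dt

# Remaining radial distance is covered at radial speed S*cos(3*pi/10).
t += math.hypot(*ants[0]) / (S * math.cos(3 * math.pi / 10))
print(t, L / (2 * S * math.sin(math.pi / 5) ** 2))   # both ~ 1.447
```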
First section of Cosmography

Problem 1

problem id: cs-1

Using the cosmographic parameters introduced above, expand the scale factor into a Taylor series in time.

solution

We can write the scale factor in terms of the present-time cosmographic parameters: \[a(t)\approx 1+H_{0} \Delta t-\frac{1}{2} q_{0} H_{0}^{2} \Delta t^{2} +\frac{1}{6} j_{0} H_{0}^{3} \Delta t^{3} +\frac{1}{24} s_{0} H_{0}^{4} \Delta t^{4} +\frac{1}{120} l_{0} H_{0}^{5} \Delta t^{5}\] This decomposition describes the evolution of the Universe on the time interval $\Delta t$ directly through the measurable cosmographic parameters. Each of them describes a certain characteristic of the evolution. In particular, the sign of the deceleration parameter $q$ indicates whether the dynamics is accelerated or decelerated. In other words, a positive deceleration parameter indicates that standard gravity predominates over the other species, whereas a negative sign provides a repulsive effect which overcomes the standard attraction due to gravity. Evolution of the deceleration parameter is described by the jerk parameter $j$. In particular, a positive jerk parameter would indicate that there exists a transition time when the Universe modifies its expansion. In the vicinity of this transition the modulus of the deceleration parameter tends to zero and then changes its sign. The two terms $q$ and $j$ fix the local dynamics, but they may not be sufficient to remove the degeneracy between different cosmological models, and one will need higher terms of the decomposition.

Problem 2

problem id: cs-2

Using the cosmographic parameters, expand the redshift into a Taylor series in time.

solution

\[1+z=\left[1+H_{0}(t-t_{0})-\frac{1}{2} q_{0} H_{0}^{2}(t-t_{0})^{2} +\frac{1}{3!} j_{0} H_{0}^{3}(t-t_{0})^{3} +\frac{1}{4!} s_{0} H_{0}^{4}(t-t_{0})^{4} +\frac{1}{5!} l_{0} H_{0}^{5}(t-t_{0})^{5} +O\left((t-t_{0})^{6}\right)\right]^{-1},\] \[z=H_{0}(t_{0}-t)+\left(1+\frac{q_{0}}{2}\right)H_{0}^{2}(t-t_{0})^{2}+\cdots.\]

Problem 3

problem id: cs-3

What is the reason for the statement that the cosmographic parameters are model-independent?

solution

The cosmographic parameters are model-independent quantities for a simple reason: these parameters are not functions of the EoS parameters $w$ or $w_{i}$ of the cosmic fluid filling the Universe in a concrete model.
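A quick sympy check of the Problem 2 expansion (a sketch; the symbol names are mine):

```python
import sympy as sp

# dt = t - t0; H0, q0, j0, s0, l0 are the present-day cosmographic parameters.
dt, H0, q0, j0, s0, l0 = sp.symbols('dt H_0 q_0 j_0 s_0 l_0')

# Taylor expansion of the scale factor around t0 (a(t0) = 1), as in Problem 1.
a = (1 + H0*dt - sp.Rational(1, 2)*q0*H0**2*dt**2
       + sp.Rational(1, 6)*j0*H0**3*dt**3
       + sp.Rational(1, 24)*s0*H0**4*dt**4
       + sp.Rational(1, 120)*l0*H0**5*dt**5)

# 1 + z = 1/a(t); expand to second order in dt and read off z.
z = sp.series(1/a - 1, dt, 0, 3).removeO().expand()
print(z)  # -> -H_0*dt + H_0**2*dt**2*q_0/2 + H_0**2*dt**2,
          #    i.e. z = H0(t0 - t) + (1 + q0/2) H0^2 (t - t0)^2
```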
Problem 4

problem id: cs-4

Obtain the following relations between the deceleration parameter and the Hubble parameter: $$q(t)=\frac{d}{dt}\left(\frac{1}{H}\right)-1;\qquad q(z)=\frac{1+z}{H}\frac{dH}{dz}-1;\qquad q(z)=(1+z)\frac{d\ln H}{dz}-1.$$

Problem 5

problem id: cs-5

Show that the deceleration parameter can be defined by the relation \[q=-\frac{d\dot{a}}{H\,da}.\]

solution

\[q=-\frac{d\dot{a}}{H\,da} =-\frac{\ddot{a}\,dt}{H\,da} =-\frac{\ddot{a}}{aH^{2}}.\] It corresponds to the standard definition of the deceleration parameter \[q=-\frac{\ddot{a}}{aH^{2}}.\]

Problem 6

problem id: cs-6

Classify models of the Universe based on the two cosmographic parameters: the Hubble parameter and the deceleration parameter.

solution

When the rate of expansion never changes, and $\dot{a}$ is constant, the scale factor is proportional to time $t$, and the deceleration term is zero. When the Hubble term is constant, the deceleration term $q$ is also constant and equal to $-1$, as in the de Sitter and steady-state Universes. In most models of the Universe the deceleration term changes in time. One can classify models of the Universe on the basis of the time dependence of the two parameters. All models can be characterized by whether they expand or contract, and accelerate or decelerate:

(a) $H>0,\; q>0$: expanding and decelerating
(b) $H>0,\; q<0$: expanding and accelerating
(c) $H<0,\; q>0$: contracting and decelerating
(d) $H<0,\; q<0$: contracting and accelerating
(e) $H>0,\; q=0$: expanding, zero deceleration
(f) $H<0,\; q=0$: contracting, zero deceleration
(g) $H=0,\; q=0$: static.

Of course, generally speaking, both the Hubble parameter and the deceleration parameter can change their sign during the evolution. Therefore the evolving Universe can transit from one type to another. It is one of the basic tasks of cosmology to follow this evolution and clarify its causes. There is little doubt that we live in an expanding Universe, and hence only (a), (b), and (e) are possible candidates. Evidence in favor of the fact that the expansion is presently accelerating grows continuously, and therefore the current dynamics belongs to type (b).
One intuitive way to understand a DAE is to interpret it as a dynamical system which can be controlled by some input signals, whose output signals have to satisfy some (equational) constraints. For a typical multibody system, the input signals are the forces perpendicular to the constraints, the output signals are the positions of the bodies, and the (equational) constraints on the output signals are fixed distances between the bodies. The input signals must now control the dynamical system in such a way that the output signals always satisfy the constraints. This is difficult for a multibody system, because the forces only control the rate of change of the velocities, and the velocities only control the rate of change of the positions, while only the positions must satisfy the constraints.

Reducing the index is easy in theory, because if we assume that the positions satisfy the constraints at the current time instance, then we can just replace the constraints on the positions by constraints on the velocities ensuring that the positions will continue to satisfy their constraints. In practice however, we don't want to throw away the constraint on the positions after we have determined the constraints on the velocities, but we do have to throw away some of the initial (differential) equations if we don't want to end up with an overdetermined system.

Determining the constraints on the velocities from the constraints on the positions might be tedious in practice, but at least it is straightforward (and canonical) once you have understood the principle. The constraint $c(y,t)=0$ implies $\frac{d}{dt}c(y(t),t)=0=\frac{\partial c}{\partial y}\cdot\frac{d}{dt}y+\frac{\partial c}{\partial t}$. This is not an (equational) constraint yet, because $\frac{d}{dt}y$ is not a variable but only the derivative of a variable. But the other differential equations allow us to express $\frac{d}{dt}y$ as a function of the variables, in our case $\frac{d}{dt}y=v$ for $v=\dot{y}$, so we get the equational constraint $0=\frac{\partial c}{\partial y}\cdot v+\frac{\partial c}{\partial t}$ (or rather $0=\frac{\partial c}{\partial y}\cdot\dot{y}+\frac{\partial c}{\partial t}$ if you manage to not get confused by using $\dot{y}$ as a variable instead of the derivative of a variable).

Throwing away some of the initial (differential) equations is less straightforward (or canonical). If we can use a constraint equation like $y_1^2+y_2^2=1$ to determine $y_1$ as a function of the other variables (i.e. $y_1(t)=\sqrt{1-(y_2(t))^2}$ in this case), then we can throw away the differential equation for $y_1$, i.e. a differential equation of the form $\frac{d}{dt}y_1=\dots$. But we might have also decided to throw away the differential equation for $y_2$ instead, because the constraint also allows us to determine $y_2$ as a function of the other variables. But no matter how easy it is to throw something away, this can easily destroy some symmetry of the system we didn't want to destroy, or we might be forced to switch which equation we throw away during the numerical simulation and thereby introduce undesired artifacts. So this part makes index reduction really challenging in practice.
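To make the differentiation step concrete, here is a small sympy sketch for the position constraint of a planar pendulum with unit rod length (the names $u$, $v$ for the velocity variables are my choice, not from any particular DAE tool):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
y = sp.Function('y')(t)
u = sp.Function('u')(t)   # velocity variable with x' = u
v = sp.Function('v')(t)   # velocity variable with y' = v

c = x**2 + y**2 - 1       # position constraint c(y, t) = 0

# Differentiate the constraint, then substitute the differential equations
# x' = u, y' = v to turn d/dt c = 0 into an equational constraint on u, v.
dc = sp.diff(c, t).subs({sp.diff(x, t): u, sp.diff(y, t): v})
print(sp.expand(dc))      # -> 2*u(t)*x(t) + 2*v(t)*y(t): the hidden constraint x*u + y*v = 0
```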
I'll take on problem 1, though I'm only somewhat familiar with its subject matter.

(a) This is presumably for finding the primitive polynomials in GF(8). These are cubic polynomials with coefficients in GF(2) that cannot be expressed as products of corresponding polynomials for GF(4) and GF(2). I will use $x$ as an undetermined variable here.

For GF(2), the primitive polynomials are $x + (0,1)$, i.e. $x$ and $x+1$.

For GF(4), we consider primitive-polynomial candidates $x^2 + (0,1)x + (0,1)$: $x^2$; $x^2+1 = (x+1)^2$; $x^2+x = x(x+1)$; $x^2+x+1$. That last one is the only primitive polynomial for GF(4).

For GF(8), we consider primitive-polynomial candidates $x^3 + (0,1)x^2 + (0,1)x + (0,1)$: $x^3$; $x^3+1 = (x^2+x+1)(x+1)$; $x^3+x = x(x+1)^2$; $x^3+x+1$; $x^3+x^2 = x^2(x+1)$; $x^3+x^2+1$; $x^3+x^2+x = x(x^2+x+1)$; $x^3+x^2+x+1 = (x+1)^3$. Thus, GF(8) has primitive polynomials $x^3+x+1$ and $x^3+x^2+1$.

(b) There is a problem here. A basis is easy to define for addition: $\{1, x, x^2\}$, where multiplication uses the remainder from dividing by a primitive polynomial. The additive group is thus $(\mathbb{Z}_2)^3$. The multiplicative group is, however, $\mathbb{Z}_7$, and it omits 0. That group has no nontrivial subgroups, so it's hard to identify a basis for it.

(c) That is a consequence of every finite field $GF(p^n)$ being a subfield of an infinite number of finite fields $GF(p^{mn})$, each one with a nonzero number of primitive polynomials with coefficients in $GF(p^n)$. Since each field's primitive polynomials cannot be factored into its subfields' ones, each field adds some polynomial roots, and thus, there are an infinite number of such roots.

I will now try to show that every finite field has a nonzero number of primitive polynomials with respect to some subfield. First, itself: for all elements $a$ of $F$ relative to $F$, $(x - a)$ is primitive. Thus, $F$ has $N$ primitive polynomials. For $GF(p^{mn})$ relative to $GF(p^n)$, I will call the number $N(m)$. One can count all the possible candidate polynomials for $GF(p^{mn})$, and one gets
$$ \sum_{\sum_k k m_k = r} \prod_k P(N(k),m_k) = N^r $$
If $N$ is a prime, then the solution is known:
$$ N(m) = \frac{1}{m} \sum_{d|m} N^{m/d} \mu(d) $$
where $\mu$ is the Möbius mu function: $(-1)^{(\text{number of distinct primes})}$ if square-free, and 0 otherwise. I don't know if that is correct for a power of a prime.
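As a quick cross-check of part (a), here is a brute-force sketch of mine (note the answer uses "primitive" where "irreducible" is the more usual term; for a quadratic or cubic over a field, irreducibility is equivalent to having no root):

```python
from itertools import product

# Enumerate monic cubics x^3 + a2*x^2 + a1*x + a0 over GF(2) and keep those
# with no root in GF(2); for degree <= 3 this is exactly irreducibility.
for a2, a1, a0 in product((0, 1), repeat=3):
    f = lambda x: (x**3 + a2*x**2 + a1*x + a0) % 2
    if f(0) != 0 and f(1) != 0:
        terms = ['x^3'] + [t for t, c in (('x^2', a2), ('x', a1), ('1', a0)) if c]
        print(' + '.join(terms))
# Prints exactly: x^3 + x + 1  and  x^3 + x^2 + 1
```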
For $t \in \mathbb{R}$ define $$ F(t) = \sum_{n=1}^{[t]} \frac{(-1)^{n-1}}{n^{\frac12 + it}}$$ Let $\operatorname{Arg}(t)$ be $\operatorname{atan2}(\Im t , \Re t)$ - basically this is $\arctan$, but the sign depends on the quadrant.

Observation: $\operatorname{Arg}(F(t))$ jumps (usually from negative to positive) very near all nontrivial zeta zeros on the critical line and, on seemingly rare occasions, without zeros. Computing $F(t)$ the naive way is not efficient for me. $F(t)$ is the truncated Dirichlet eta function on the critical line, but it is not $0$ at zeros of zeta, though $|F(t)|$ has local minima near zeros.

Added: Wolfram Alpha found a closed form for $F(t)$ in terms of the Hurwitz zeta and zeta functions: \begin{align} & \sum_{n=1}^k\frac{(-1)^{n-1}}{n^{\frac12 + i t}} = \\ & 2^{-1/2-i t}(-(-1)^k \zeta(i t+1/2, (k+1)/2)+ \\ & (-1)^k \zeta(i t+1/2, (k+2)/2)+2^{1/2+i t} \zeta(1/2 i (2 t-i))-2 \zeta(1/2 i (2 t-i))) \end{align} Setting $k=[t]$ gives $F(t)$. Numerical evidence supports the closed form.

In comments Greg Martin suggested $F(t)$ might not be correlated to higher zeros, though numerical evidence suggests it is correlated at height $10^6$, including closely spaced zeros. Another observation is that $|F(t)|$ appears to have local minima close to zeta zeros on the critical line.

Setting $$ G(t) = \sum_{n=1}^{[t]} \frac{(-1)^{n-1}}{n^{1 + it}}$$ the jumps of $G(t)$ appear at zeros of $\eta(1+i t)$ and it looks like $|G(t)| \sim |\eta(1 + i t)|$.

Can this be explained? Counterexamples?
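A numerical sketch of the observation, assuming mpmath (zetazero locates the $n$-th nontrivial zero; the probe offset 0.05 is an arbitrary choice of mine):

```python
from mpmath import mp, mpc, power, arg, zetazero

mp.dps = 20

def F(t):
    # Truncated alternating Dirichlet sum on the critical line.
    s = mpc(0.5, t)
    return sum((-1)**(n - 1) / power(n, s) for n in range(1, int(t) + 1))

t0 = zetazero(1).imag            # ~ 14.1347, the first nontrivial zero
for t in (t0 - 0.05, t0 + 0.05):
    # Per the observation, Arg F usually flips from negative to positive across t0.
    print(float(t), float(arg(F(t))))
```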
Rich trigonometry concept of Friendly Function Pair enables elegant solution

In the earlier session on Basic and Rich Trigonometry concepts and applications, we have explained how rich problem solving trigonometry concepts are derived from basic concepts for solving problems faster. In this session we will highlight how the rich trigonometry concept of Friendly Trigonometric Function Pair is derived from basic trigonometry concepts, and how the rich concept thus derived can be used for solving different and difficult SSC CGL level problems in only a few steps elegantly.

We will use two specially selected problems to show the power of the rich trigonometry concept, which essentially is a powerful mathematical problem solving technique derived from very basic concepts in Trigonometry.

Note: we have used this concept in an earlier session but didn't explore it fully, leaving it without a name and without showing its applicability in solving varieties of problems. We will now define the rich concept in more concrete terms first.

The rich concept of Friendly Trigonometric Function Pair

As one of the important rich concepts, we recognize three pairs of trigonometric functions as friendly trigonometric function pairs. These are,

$\sin\theta$ and $\cos\theta$,

$\sec\theta$ and $\tan\theta$, and

$\operatorname{cosec}\theta$ and $\cot\theta$.

The most basic relationship between two trigonometric functions involves the first friendly function pair of $\sin\theta$ and $\cos\theta$: $\sin^2\theta + \cos^2\theta=1$. This is one of the most frequently used trigonometric problem solving resources that we know. We have classified this pair as a friendly function pair because of the intimate relationship between them and the effectiveness of the relation in simplifying trigonometry problems time and again. We will only list out the various forms in which the relation is used and move on to the other two pairs of functions:

$\sin^2\theta + \cos^2\theta=1$,

$\sin^2\theta = 1-\cos^2\theta$,

$\cos^2\theta = 1-\sin^2\theta$.

The structure of the basic relationship and its use for this most used function pair are a little different from those of the other two pairs. We will leave the relationship in these forms as essentially basic trigonometry concepts. Nevertheless, because of the intimate ties between the two functions, we classify them also under the friendly function pairs without adding any rich functionality to this pair of functions. Instead we will add rich functionality to the other two function pairs, namely $\sec\theta$ and $\tan\theta$, and $\operatorname{cosec}\theta$ and $\cot\theta$, which together act as the problem solving content of this powerful concept of friendly trigonometric function pair.

The highly effective problem solving relationship between the elements of a friendly function pair

The basic relationship between the first function pair of $\sec\theta$ and $\tan\theta$ is,

$\sec^2\theta = 1+\tan^2\theta$.

This is the form in which this function pair is mostly used, and in this form the relationship is a basic trigonometry concept. We will transform the relationship to a different form:

$\sec^2\theta = 1+\tan^2\theta$,

Or, $\sec^2\theta -\tan^2\theta=1$,

Or, $(\sec\theta +\tan\theta)(\sec\theta -\tan\theta)=1$,

Or, $\sec\theta +\tan\theta=\displaystyle\frac{1}{\sec\theta -\tan\theta}$.

Alternately, $\sec\theta -\tan\theta=\displaystyle\frac{1}{\sec\theta +\tan\theta}$.

The inverse relationship between the two additive and subtractive complementary expressions of the two functions lends the power to solve complex problems elegantly.
In the same way we recognize the inverse relationship between $\operatorname{cosec}\theta$ and $\cot\theta$:

$\operatorname{cosec}^2\theta = 1+\cot^2\theta$,

Or, $\operatorname{cosec}^2\theta -\cot^2\theta=1$,

Or, $(\operatorname{cosec}\theta + \cot\theta)(\operatorname{cosec}\theta - \cot\theta)=1$,

Or, $\operatorname{cosec}\theta + \cot\theta=\displaystyle\frac{1}{\operatorname{cosec}\theta - \cot\theta}$.

Alternately, $\operatorname{cosec}\theta - \cot\theta=\displaystyle\frac{1}{\operatorname{cosec}\theta + \cot\theta}$.

We will now show how these enriched relationships can be used for solving complex trigonometry problems in only a few steps elegantly.

Chosen Problem 1. If $a=\sec \theta+\tan \theta$, then $\displaystyle\frac{a^2-1}{a^2+1}$ is,

$\sec \theta$
$\cos \theta$
$\tan \theta$
$\sin \theta$

Solution to chosen problem 1 - Problem analysis

Not only do we recognize the presence of a friendly trigonometric function pair in the given input function, we also identify in the target expression the potential to apply the powerful rich algebraic technique of Componendo and Dividendo.

The rich algebraic technique of componendo dividendo

We apply this technique on an expression of the form $\displaystyle\frac{x+y}{x-y}=\frac{p}{q}$, where $x$ and $y$ are the unknown variables and $p$ and $q$ are usually constants. As a first step we add 1 to both sides of the equation, giving $\displaystyle\frac{2x}{x-y}=\frac{p+q}{q}$. In the second step we subtract 1 from both sides of the original equation, giving $\displaystyle\frac{2y}{x-y}=\frac{p-q}{q}$. In the third step we take the ratio of the two results to get $\displaystyle\frac{x}{y}=\frac{p+q}{p-q}$. The LHS being a simple ratio of the two variables and the RHS usually involving numbers, the result simplifies the problem significantly.

Returning to our problem we observe that, though the target expression form is perfectly amenable to applying the technique of componendo and dividendo, the final hurdle to the elegant solution turns out to be the fact that the input expression is in terms of the variable $a$ while the target expression involves the variable $a^2$. To overcome this barrier we now transform the variable $a$ to $a^2$ by applying the rich concept of friendly trigonometric function pair:

$a=\sec\theta+\tan\theta=\displaystyle\frac{1}{\sec\theta -\tan\theta}$.

Multiplying the two equations,

$a^2=\displaystyle\frac{\sec\theta + \tan\theta}{\sec\theta -\tan\theta}$.

Now it is only a simple step to the solution.

Solution to chosen problem 1 - Problem solving final stage

Applying the componendo dividendo technique on the equation (subtracting 1 from both sides, again adding 1 to both sides of the original equation, and taking the ratio of the two),

$\displaystyle\frac{a^2-1}{a^2+1}=\frac{\tan\theta}{\sec\theta}=\sin\theta$.

We have reached the solution in only a few steps by applying the rich trigonometry concept of friendly trigonometric function pair as well as the rich algebraic technique of componendo dividendo. This is an example of achieving what we call efficient problem solving through the process of elegant solution.

Answer: d: $\sin \theta$.

Key concepts and techniques used: Basic trigonometry concepts -- rich trigonometry concepts -- concept of friendly trigonometric function pair -- target driven approach in forming first $a^2$ and then the desired fraction -- rich algebra techniques -- componendo dividendo technique.
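As an independent check of the result of Chosen Problem 1, here is a small sympy sketch of mine (not part of the original solution); the numeric spot-check at an arbitrary angle backs up the symbolic simplification.

```python
import sympy as sp

theta = sp.symbols('theta')
a = sp.sec(theta) + sp.tan(theta)
expr = (a**2 - 1) / (a**2 + 1)

# Symbolic: simplify() reduces the difference to 0 here.
print(sp.simplify(expr - sp.sin(theta)))                 # -> 0

# Numeric spot-check at theta = 0.3 (an arbitrary angle).
print(sp.N(expr.subs(theta, 0.3)), sp.N(sp.sin(0.3)))    # both ~ 0.29552
```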
To solve the second chosen problem we will use the rich friendly inverse relationship between the second pair of functions, $\operatorname{cosec}\theta$ and $\cot\theta$. But this time we will use the relationship in a different way to simplify the target expression quickly and elegantly. This shows the basic characteristic of a rich problem solving concept: it is a potentially powerful problem solving concept that can be used in various ways to solve many different problems having the common characteristic of applicability of the specific rich concept. Let us clarify by going through the process of solving the second problem.

Chosen Problem 2. The value of $\displaystyle\frac{\cot \theta + \operatorname{cosec} \theta - 1}{\cot \theta -\operatorname{cosec} \theta +1}$ is,

$\operatorname{cosec} \theta - \cot \theta$
$\operatorname{cosec} \theta + \cot \theta$
$\sec \theta + \cot \theta$
$\operatorname{cosec} \theta + \tan \theta$

Solution to chosen problem 2 - Problem analysis

Examining this problem, here also we detect the presence of a friendly function pair, $\operatorname{cosec}\theta + \cot\theta$. Only, we have to use this key pattern intelligently. On further examination and comparison of the denominator and the numerator of the target expression, the path to the solution becomes clear.

Solution to chosen problem 2 - Problem solving execution

Applying the inverse relationship

$\operatorname{cosec}\theta + \cot\theta =\displaystyle\frac{1}{\operatorname{cosec}\theta - \cot\theta}$,

we transform the target expression to,

$E=\displaystyle\frac{\cot\theta + \operatorname{cosec}\theta - 1}{\cot\theta -\operatorname{cosec}\theta +1}$

$=\displaystyle\frac{\displaystyle\frac{1}{\operatorname{cosec}\theta - \cot\theta} - 1}{\cot\theta -\operatorname{cosec}\theta +1}$

$=\displaystyle\frac{1}{\operatorname{cosec}\theta - \cot\theta}\times{\displaystyle\frac{\cot\theta -\operatorname{cosec}\theta + 1}{\cot\theta -\operatorname{cosec}\theta +1}}$

$=\displaystyle\frac{1}{\operatorname{cosec}\theta - \cot\theta}$

$=\operatorname{cosec}\theta + \cot\theta$.

Again, detecting the very useful pattern of friendly trigonometric function pair and using the pattern judiciously, we reach the solution elegantly in a few simple steps.

Answer: b: $\operatorname{cosec}\theta + \cot\theta$.

Key concepts and techniques used: Key pattern identification -- rich trigonometry concepts -- concept of friendly trigonometric function pair -- basic algebraic concepts -- efficient simplification.

Important: To be able to apply a rich problem solving concept, we need to, first, be fully aware of the concept; second, be aware of the basic patterns involved in the concept; third, detect the useful patterns in the problem; and finally, fourth, decide how to exploit the rich concept utilizing the power of the problem solving pattern, and execute the steps. To be able to apply a rich concept when it is needed, we need not only to go through the concept and its applicability, we must also solve a number of problems independently using the concept.

Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra: Tutorials on Trigonometry, General guidelines for success in SSC CGL, Efficient problem solving in Trigonometry, How to solve difficult SSC CGL level School math problems in a few quick steps, Trigonometry 5.

A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to a MCQ type question, and secondly, the same set of problem solving reasoning and techniques have been used for any efficient Trigonometry problem solving.
2018-09-11 04:29 Proprieties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ n$_{\rm eq}$/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text

2018-08-25 06:58 Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453

2018-08-23 11:31 Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperatures can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576

2018-08-23 11:31 Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 548 (2005) 355-363

2018-08-23 11:31 Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders as LHC and its upgrades in energy and luminosity, Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and conduces to the increase of the leakage current of the detector, decreases the satisfactory Signal/Noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys. 57 (2005), no. 3, pp. 342-348 External link: RORPE

2018-08-22 06:27 Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976

2018-08-22 06:27 Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365

2018-08-22 06:27 Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$.
In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself!

Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't.

I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way!

The most fundamental is this:

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$

The inverse image is a subset of \( X \). The inverse image is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches.

The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$

Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\).

Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\).
Here it is: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek". The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part: Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \). Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\) This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful. Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \). Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \). What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in this revolutionary paper: By now this observation is part of a big story that "explains" logic using category theory. Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading. Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
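To complement the puzzles, here is a small Python sketch of my own (the sets and the function \(f\) are made-up toy data) that computes all three maps on a finite example and verifies both adjunctions exhaustively.

```python
from itertools import chain, combinations

X = {1, 2, 3}
Y = {'a', 'b'}
f = {1: 'a', 2: 'a', 3: 'b'}

def f_shriek(S):       # image f_!(S) = {y : y = f(x) for some x in S}
    return {f[x] for x in S}

def f_star(T):         # inverse image f^*(T) = {x : f(x) in T}
    return {x for x in X if f[x] in T}

def f_lower_star(S):   # f_*(S) = {y : x in S for ALL x with f(x) = y}
    return {y for y in Y if all(x in S for x in X if f[x] == y)}

def subsets(A):
    A = list(A)
    return [set(c) for c in chain.from_iterable(combinations(A, r)
                                                for r in range(len(A) + 1))]

# Left adjunction:  f_!(S) <= T  iff  S <= f^*(T)   (<= is subset for Python sets)
assert all((f_shriek(S) <= T) == (S <= f_star(T))
           for S in subsets(X) for T in subsets(Y))
# Right adjunction: f^*(T) <= S  iff  T <= f_*(S)
assert all((f_star(T) <= S) == (T <= f_lower_star(S))
           for S in subsets(X) for T in subsets(Y))
print("both adjunctions hold on this example")
```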
Triangle from Inradius, Circumradius, Side or Angle

Given circle $C(O,R)$ with center $O$ and radius $R,$ we'll be looking into the problem of constructing $\Delta ABC$ inscribed into $C(O,R)$ and having inradius $r.$ Assuming $R\gt 2r,$ the triangle exists and is unique, provided either one of the angles or one of the sides is given. The two problems are equivalent since equal chords in a circle subtend equal angles and vice versa. There are two constructions; one is designated as $A, R, r,$ the other $a, R, r.$

$A, R, r$

Assume the problem has been solved. Draw the inscribed and circumscribed circles. Since angle $A$ is given, we may also assume the side $BC$ to be known. The bisectors of angles $B$ and $C$ intersect at the center $I$ of the inscribed circle.

$\angle B + \angle C + \angle A = 180^{\circ}.$

$\angle IBC + \angle ICB + \angle BIC = 180^{\circ}.$

Therefore, $\angle BIC = 90^{\circ} + \angle A/2.$ This means that point $I$ belongs to a locus of points from which segment $BC$ is seen under the angle $90^{\circ} + \angle A/2.$ This is a circular arc that is easily constructed. On the other hand, $I$ lies on the line parallel to and at the distance $r$ from $BC.$ Therefore, the point $I$ is easily determined as the intersection of a circle and a straight line. Once $I$ has been constructed, double the angles $IBC$ and $ICB.$ Alternatively, draw the circle $C(I,r)$ with center $I$ and radius $r$, and the tangents to $C(I,r)$ from $B$ and $C.$ Point $A$ lies at the intersection of the two rays thus obtained.

The construction starts with drawing the circumscribed circle and any inscribed angle equal to $A.$ This determines the segment $BC.$ Then $I$ is constructed as above and then $A.$ For a proof of correctness, just note that, by the construction, $\angle BIC=90^{\circ}+A/2,$ and drawing the tangents from $B$ and $C$ makes $BI$ and $CI$ angle bisectors in $\Delta ABC,$ implying that $\angle BAC=180^{\circ}-2(90^{\circ}-A/2)=A,$ meaning that the rays $BA$ and $CA$ meet on the circumcircle $C(O,R).$

$a, R, r$

Professor René Sperb suggested a construction that makes use of Euler's formula for the distance $d$ between the incenter and circumcenter of a triangle. The three quantities are related by $2Rr = R^{2} - d^{2}.$ It follows that $d$ can be constructed as a leg of a right triangle with the other leg $r$ and the hypotenuse $R-r.$ Note that Euler's formula explains the necessity, whereas the ultimate success of the construction explains the sufficiency, of the condition $R\gt 2r.$

Thus we draw circles $C(O,R)$ and $C(O,d)$ of radii $R$ and $d$ around the same point $O,$ place chord $BC=a$ in the former, and pass a line, say $l\parallel BC,$ at distance $r.$ The intersection of $l$ and $C(O,d)$ gives the incenter $I$ and, with it, the incircle $C(I,r).$ The tangents to the incircle from $B$ and $C$ meet at point $A$ on the circumcircle. This was obvious for the preceding construction where $\angle BIC$ was such that it ensured the correct value of the angle at $A;$ it may be less obvious for the present one.

A proof of correctness of this construction consists in showing that the two constructions produce exactly the same circle $C(I,r).$ But this is indeed so since, as we have seen, the first construction is correct, so that the Euler formula for the resulting $\Delta ABC$ assures that the distance $d$ for that triangle is exactly the distance used in the second construction. We can be more explicit and derive the Euler formula along the way.
First note that the arc through $B$ and $C$ used in the first construction is centered at the midpoint (I'll call it $D)$ of the arc $BC$ of circle $C(O,R).$ (This is to be expected as that arc is part of $(D)$ - a circle through the incenter.) This is indeed so because (in the diagram below) $\angle BCD = \angle CBD = A/2$ so that $\angle BDC = 180^{\circ}-A,$ making $\angle BIC=90^{\circ}+A/2,$ for any point $I$ on the arc.

We shall now show that the distance from $O$ to the point $I$ found in the first construction satisfies the Euler formula, as it should. Using the notations in the diagram,

$\begin{align} y &= R - r - h,\\ x &= \sqrt{\rho^{2}-(h+r)^{2}},\\ \rho^{2} &= h^{2}+(a/2)^{2},\\ d^{2} &= x^{2}+y^{2}\\ &= \rho^{2}-(h+r)^{2}+(R-r-h)^{2}\\ &= h^{2}+(a/2)^{2}-(h+r)^{2}+(R-r-h)^{2}\\ &= (a/2)^{2}-2Rr+(R-h)^{2},\\ R^{2}&=(R-h)^{2}+(a/2)^{2} \end{align}$

Combining the last two we get $d^{2}=R^{2}-2Rr$ - the Euler formula!
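The Euler formula itself is easy to sanity-check numerically. The sketch below (my own, on a randomly generated triangle; all the auxiliary formulas are standard) compares $d^2$ with $R^2 - 2Rr$.

```python
import math, random

random.seed(7)
(ax, ay), (bx, by), (cx, cy) = [(random.uniform(-1, 1), random.uniform(-1, 1))
                                for _ in range(3)]

a = math.dist((bx, by), (cx, cy))   # side lengths opposite A, B, C
b = math.dist((ax, ay), (cx, cy))
c = math.dist((ax, ay), (bx, by))
s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
r, R = area / s, a * b * c / (4 * area)             # inradius, circumradius

# Incenter: side-length-weighted average of the vertices.
ix, iy = (a*ax + b*bx + c*cx) / (a + b + c), (a*ay + b*by + c*cy) / (a + b + c)

# Circumcenter from the perpendicular-bisector equations.
D = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
ox = ((ax**2 + ay**2)*(by - cy) + (bx**2 + by**2)*(cy - ay) + (cx**2 + cy**2)*(ay - by)) / D
oy = ((ax**2 + ay**2)*(cx - bx) + (bx**2 + by**2)*(ax - cx) + (cx**2 + cy**2)*(bx - ax)) / D

d2 = (ox - ix)**2 + (oy - iy)**2
print(d2, R**2 - 2*R*r)             # the two numbers agree up to rounding
```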
Permanent

of an $m \times n$-matrix $A = \left\Vert a_{ij} \right\Vert$

The function $$ \mathrm{per}(A) = \sum_\sigma a_{1\sigma(1)}\cdots a_{m\sigma(m)} $$ where $a_{ij}$ are elements from a commutative ring and summation is over all one-to-one mappings $\sigma$ from $\{1,\ldots,m\}$ into $\{1,\ldots,n\}$. If $m=n$, then $\sigma$ represents all possible permutations, and the permanent is a particular case of the Schur matrix function (cf. Immanant) $$ d_\chi^H (A) = \sum_{\sigma\in H} \chi(\sigma) \prod_{i=1}^n a_{i\sigma(i)} $$ for $H \subseteq S_n$, where $\chi$ is a character of degree 1 on the subgroup $H$ (cf. Character of a group) of the symmetric group $S_n$ (one obtains the determinant for $H=S_n$, $\chi =\pm 1$, in accordance with the parity of $\sigma$). The permanent is used in linear algebra, probability theory and combinatorics. In combinatorics, a permanent can be interpreted as follows: The number of systems of distinct representatives for a given family of subsets of a finite set is the permanent of the incidence matrix for the incidence system related to this family.
The main interest is in the permanent of a matrix consisting of zeros and ones (a $(0,1)$-matrix), of a matrix containing non-negative real numbers, in particular doubly-stochastic matrices (in which the sum of the elements in any row and any column is 1), and of a complex Hermitian matrix. The basic properties of the permanent include a theorem on expansion (the analogue of Laplace's theorem for determinants) and the Binet–Cauchy theorem, which gives a representation of the permanent of the product of two matrices as the sum of the products of the permanents formed from the cofactors. For the permanents of complex matrices it is convenient to use representations as scalar products in the symmetry classes of completely-symmetric tensors (see, e.g., [3]). One of the most effective methods for calculating permanents is provided by Ryser's formula: $$ \mathrm{per}(A) = \sum_{t=0}^{n-1} (-1)^t \sum_{X \in \Gamma_{n-t}} \prod_{i=1}^m r_i(X) $$ where $\Gamma_k$ is the set of submatrices of dimension $m \times k$ of the matrix $A$, $r_i(X)$ is the sum of the elements of the $i$-th row of $X$, and $i=1,\ldots,m$, $k=1,\ldots,n$. As it is complicated to calculate permanents, estimating them is important. Some lower bounds are given below.

a) If $A$ is a $(0,1)$-matrix with $r_i(A) \ge t$, $i=1,\ldots,m$, then $$ \mathrm{per}(A) \ge \frac{t!}{(t-m)!} $$ for $t \ge m$, and $$ \mathrm{per}(A) \ge t! $$ if $t < m$ and $\mathrm{per}(A) > 0$.

b) If $A$ is a $(0,1)$-matrix of order $n$, then $$ \mathrm{per}(A) \ge \prod_{i=1}^n \{ r_i^* + i - n \} $$ where $r_1^* \ge \cdots \ge r_n^*$ are the sums of the elements in the rows of $A$ arranged in non-increasing order and $\{ r_i^* + i - n \} = \max(0, r_i^* + i - n )$.

c) If $A$ is a positive semi-definite Hermitian matrix of order $n$, then $$ \mathrm{per}(A) \ge \frac{n!}{s(A)^n} \prod_{i=1}^n |r_i|^2 $$ where $s(A) = \sum_{i,j} a_{ij}$, provided $s(A) > 0$.

Upper bounds for permanents:

1) For a $(0,1)$-matrix $A$ of order $n$, $$ \mathrm{per}(A) \le \prod_{i=1}^n (r_i!)^{1/r_i} \ . $$

2) For a completely-indecomposable matrix $A$ of order $n$ with non-negative integer elements, $$ \mathrm{per}(A) \le 2^{s(A)-2n} + 1 \ . $$

3) For a complex normal matrix $A$ with eigenvalues $\lambda_1,\ldots,\lambda_n$, $$ |\mathrm{per}(A)| \le \frac{1}{n}\sum_{i=1}^n |\lambda_i|^n \ . $$

The most familiar problem in the theory of permanents was van der Waerden's conjecture: The permanent of a doubly-stochastic matrix of order $n$ is bounded from below by $n!/n^n$, and this value is attained only for the matrix composed of fractions $1/n$. A positive solution to this problem was obtained in [4]. Among the applications of permanents one may mention relationships to certain combinatorial problems (cf. Combinatorial analysis), such as the "problème des rencontres" and the "problème d'attachement" (or "hook problem"), and also to the Fibonacci numbers, the enumeration of Latin squares and Steiner triple systems (cf. Steiner system), and to the derivation of the number of $1$-factors and linear subgraphs of a graph, while doubly-stochastic matrices are related to certain probability models. There are interesting physical applications of permanents, of which the most important is the dimer problem, which arises in research on the adsorption of di-atomic molecules in surface layers: The permanent of a $(0,1)$-matrix of a simple structure expresses the number of ways of combining the atoms in the substance into di-atomic molecules.
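Ryser-type inclusion-exclusion also gives the standard way to compute permanents in practice. A minimal Python sketch for the square case, using the equivalent form $\mathrm{per}(A) = (-1)^n \sum_{\emptyset \ne S \subseteq \{1,\ldots,n\}} (-1)^{|S|} \prod_{i=1}^n \sum_{j \in S} a_{ij}$ (still exponential, roughly $O(2^n n^2)$):

from itertools import combinations

def permanent_ryser(A):
    # per(A) = (-1)^n * sum over nonempty column subsets S of
    #          (-1)^{|S|} * prod_i (sum_{j in S} a[i][j])
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent_ryser([[1] * 4 for _ in range(4)]))  # all-ones 4x4 matrix: 4! = 24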
There are also applications of permanents in statistical physics, the theory of crystals and physical chemistry.

References

[1] H.J. Ryser, "Combinatorial mathematics", Wiley & Math. Assoc. Amer. (1963) Zbl 0112.24806
[2] V.N. Sachkov, "Combinatorial methods in discrete mathematics", Moscow (1977) (In Russian); translated by V. Kolchin: Encyclopedia of Mathematics and Its Applications 55, Cambridge University Press (1995) Zbl 0845.05003
[3] H. Minc, "Permanents", Addison-Wesley (1978)
[4] G.P. Egorychev, "The solution of van der Waerden's problem on permanents", Krasnoyarsk (1980) (In Russian); Adv. Math. 42 (1981) 299-305. Zbl 0478.15003
[5] D.I. Falikman, "Proof of the van der Waerden conjecture regarding the permanent of a doubly stochastic matrix" Math. Notes 29 : 6 (1981) pp. 475–479; Mat. Zametki 29 : 6 (1981) pp. 931–938. Zbl 0475.15007

Comments

The solution of the van der Waerden conjecture was obtained simultaneously and independently of each other in 1979 by both D.I. Falikman, [5], and G.P. Egorychev, [4], [a4]. For some details cf. also [a2]–[a5].

References

[a1] D.E. Knuth, "A permanent inequality" Amer. Math. Monthly 88 (1981) pp. 731–740
[a2] J.C. Lagarias, "The van der Waerden conjecture: two Soviet solutions" Notices Amer. Math. Soc. 29 : 2 (1982) pp. 130–133
[a3] J.H. van Lint, "Notes on Egoritsjev's proof of the van der Waerden conjecture" Linear Algebra Appl. 39 (1981) pp. 1–8
[a4] G.P. Egorychev, "The solution of van der Waerden's problem for permanents" Adv. in Math. 42 : 3 (1981) pp. 299–305
[a5] J.H. van Lint, "The van der Waerden conjecture: Two proofs in one year" Math. Intelligencer 4 (1982) pp. 72–77
[a6] R.M. Wilson, "Non-isomorphic triple systems" Math. Zeitschr. 135 (1974) pp. 303–313
[a7] A. Schrijver, "A short proof of Minc's conjecture" J. Comb. Theory (A) 25 (1978) pp. 80–83
[a8] H. Minc, "Nonnegative matrices", Wiley (1988)

How to Cite This Entry: Permanent. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Permanent&oldid=35188
Limit of an integral

1. Homework Statement

Evaluate the limit as $n$ goes to [tex]\infty[/tex] of [tex]\int^{1}_{0} \frac{n^{3/2}x^{5/4}}{1+n^{2}x^{2}}\,dx[/tex] where $dx$ is the Lebesgue measure.

2. Homework Equations

I thought I could use the monotone convergence theorem or the dominated convergence theorem; neither works.

3. The Attempt at a Solution

The integrand is dominated by [tex]1/\sqrt{x}[/tex], but [tex]1/\sqrt{x}[/tex] isn't Lebesgue integrable.
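For what it is worth, $1/\sqrt{x}$ is in fact Lebesgue integrable on $(0,1)$ (indeed $\int_0^1 x^{-1/2}\,dx = 2$), so it does serve as a dominating function, and the integrand tends to $0$ pointwise. A minimal scipy check is consistent with the limit being $0$:

from scipy.integrate import quad

f = lambda x, n: n**1.5 * x**1.25 / (1 + n**2 * x**2)
for n in [1e1, 1e2, 1e3, 1e4]:
    val, _ = quad(f, 0, 1, args=(n,))
    print(n, val)   # the values decay roughly like n**(-1/2)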
Title On the preservation of certain properties of functions under composition Date of Award 1994 Degree Type Dissertation Degree Name Doctor of Philosophy (PhD) Department Mathematics Advisor(s) Daniel Waterman Keywords homeomorphisms, convex functions, fourier series Subject Categories Number Theory | Partial Differential Equations | Physical Sciences and Mathematics

Abstract

Let $I_{n,m},\ m = 1,2,\ldots,k_n$ be disjoint closed intervals such that for each $n,$ $I_{n,m-1}$ is to the left of $I_{n,m}.$ Given $x$, if for every $\epsilon>0$ there exists $N$ such that $I_{n,m}\subset(x,x + \epsilon)$ whenever $n>N,$ then ${\cal I} = \{I_{n,m}: n = 1,2,\ldots;\ m = 1,2,\ldots,k_n\}$ is called a right system of intervals (at $x$). A left system is defined similarly. Let $$\alpha_n({\cal I}) = \sum_{i=1}^{k_n}\frac{f(I_{n,i})}{i}\quad\text{where}\quad f([a,b]) = f(b) - f(a).$$ In Chapter 1 we prove the following result: Theorem 1. If $f$ is regulated, then $f \circ g$ has everywhere convergent Fourier series for every homeomorphism $g$ if and only if $\lim_{n\to\infty}\alpha_n({\cal I}) = 0$ for every system ${\cal I}$ and for every $x$. Goffman and Waterman proved an analogous theorem for the case where $f$ is continuous. In Chapter 2 we turn our attention to functions of bounded $\Lambda$-variation, which we define as follows: Suppose $\Lambda = \{\lambda_n\}$ is an increasing sequence such that $\sum_{n=1}^{\infty}\frac{1}{\lambda_n} = \infty.$ We say that $f\in\Lambda BV$ on an interval $(a,b)$ if $\sum_{n=1}^{\infty}\frac{\vert f(I_n)\vert}{\lambda_n}<\infty$ for every collection $\{I_n\}$ of nonoverlapping intervals in $(a,b).$ Here we show that $g \circ f\in\Lambda BV$ for every $f\in\Lambda BV$ if and only if $g\in \text{Lip}\,1.$ Chaika and Waterman proved an analogous result for the classes GW, UGW and HBV. In Chapter 3 we prove an analogous theorem for the class $\Phi BV$, which we define as follows: Let $\phi$ be a convex function satisfying $\phi(0) = 0,$ $\phi(x)>0$ for $x>0,$ $\frac{\phi(x)}{x}\to0$ as $x\to0,$ and $\frac{\phi(x)}{x}\to\infty$ as $x\to\infty.$ We say $f\in\Phi BV$ on $(a,b)$ if $\sum_{n=1}^{\infty}\phi(\vert f(I_n)\vert)<\infty$ for every collection $\{I_n\}$ of nonoverlapping intervals in $(a,b).$ We will make the further assumption that $\phi$ satisfies the $\Delta_2$ condition so that the resulting class $\Phi BV$ forms a linear space. In this chapter we prove that $g \circ f\in\Phi BV$ for every $f\in\Phi BV$ if and only if $g\in \text{Lip}\,1.$ We also present an interesting condition which is equivalent to the $\Delta_2$ condition.

Access Surface provides description only. Full text is available to ProQuest subscribers. Ask your Librarian for assistance.

Recommended Citation Pierce, Pamela Bitler, "On the preservation of certain properties of functions under composition" (1994). Mathematics - Dissertations. 50. https://surface.syr.edu/mat_etd/50 http://libezproxy.syr.edu/login?url=http://proquest.umi.com/pqdweb?did=741921461&sid=1&Fmt=2&clientId=3739&RQT=309&VName=PQD
Assume we have two series of independent success-failure observations, e.g. coin tosses $$ \boldsymbol x_1 \in \{H,T\}^{n_1} \\ \boldsymbol x_2 \in \{H,T\}^{n_2} $$ Also, let $k_i$ be the number of heads (=successes) in $\boldsymbol x_i$. Now, I want to assume the observations are generated by two random variables $X_i$ drawn from a binomial distribution: $$ X_i \sim B(n_i, p_i) $$ Goal (intuitively) Basically, what I want to do is compare two different scenarios: The observations have been generated by two different coins The observations have been generated by the same coin For the comparison I want to have a value $\Lambda \in [0, 1]$ that expresses the difference of the two models. Goal (formally) I want to estimate the model parameter $p_i$ and compare two different models $\boldsymbol \theta^{(j)}, j=1,2$ of the underlying binomial distribution for the joint probability of the observations, i.e. $$ L(\boldsymbol\theta^{(j)}) = P(X_1=k_1, X_2=k_2; \boldsymbol\theta^{(j)}) = P(X_1=k_1; \theta_1^{(j)}) \cdot P(X_2 = k_2; \theta_2^{(j)}) $$ So, for the first scenario $j=1$, i.e. when the observations have been generated by two different coins, I assume the MLE for each binomial distribution: $$ \theta_i^{(1)} = k_i/n_i $$ For the second scenario $j=2$, I assume that the underlying binomial distributions have the same parameter, namely: $$ \theta_1^{(2)} = \theta_2^{(2)} = \frac{k_1 + k_2}{n_1 + n_2} $$ Approach (1) In order to compare the two models, I could just use the likelihood ratio $$ \Lambda := \frac{L(\boldsymbol\theta^{(2)})}{L(\boldsymbol\theta^{(1)})} $$ We know for sure that the first model is more likely than the second, since it has been estimated with the MLEs. This means that $\Lambda \in [0, 1]$. Approach (2) A colleague of mine insists that we can actually calculate the probability of the models, by using Bayes: $$P(\boldsymbol \theta^{(j)}|X_1=k_1, X_2=k_2) = \frac{P(X_1=k_1,X_2=k_2|\boldsymbol\theta^{(j)})\cdot P(\boldsymbol \theta^{(j)})}{\sum_{l=1}^2 P(X_1=k_1,X_2=k_2|\boldsymbol\theta^{(l)})\cdot P(\boldsymbol \theta^{(l)})}$$ Since we don't know the priors $P(\boldsymbol\theta^{(j)})$ he suggests $$ P(\boldsymbol\theta^{(1)}) = P(\boldsymbol\theta^{(2)}) = 0.5 $$ He suggests to use $$\Lambda := P(\boldsymbol \theta^{(2)}|X_1=k_1, X_2=k_2) \in [0,1]$$ Question Is the second approach correct? If not, what assumption would I have to make in order to justify it? In other words: is it okay to constrain the entire hypothesis space to our two hypotheses $\Theta^C = \{\boldsymbol\theta^{(1)}, \boldsymbol\theta^{(2)}\}$? Let me know what you think. Thank you in advance!
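To make the two numbers concrete, a minimal Python sketch with made-up counts (the values of $n_i$ and $k_i$ are illustrative only, not from the question); note that with equal priors the suggested posterior reduces to $L_2/(L_1+L_2)$:

from scipy.stats import binom

n1, k1, n2, k2 = 100, 60, 100, 45          # illustrative observations

p1, p2 = k1 / n1, k2 / n2                  # scenario 1: separate MLEs
p_pool = (k1 + k2) / (n1 + n2)             # scenario 2: pooled MLE

L1 = binom.pmf(k1, n1, p1) * binom.pmf(k2, n2, p2)
L2 = binom.pmf(k1, n1, p_pool) * binom.pmf(k2, n2, p_pool)

print(L2 / L1)          # approach (1): likelihood ratio, always in [0, 1]
print(L2 / (L1 + L2))   # approach (2): "posterior" of model 2 under 0.5/0.5 priors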
How to find the number of all the positive integral solutions, and the solutions themselves, of $$5x+7y=100?$$ Please help me!!

"Positive integral" means positive and integer, right? Because $5$ and $7$ are coprime, $100$ must be represented as a sum of two numbers, one divisible by $5$ and the other by $7$. One possible solution is $30+70$, which equals exactly $100$, i.e. $x=6$ and $y=10$; can you find others? EDITED: the second is just $35+65$, where $35$ is divisible by $7$ and $65$ by $5$, which means $y=5$ and $x=13$. Generally $x=(100-7y)/5$; for how many positive integers $y$ is $100-7y$ divisible by $5$ with $x$ positive? Clearly $y$ must be a positive multiple of $5$.

Assume we have $5x+7y=100$. Let us solve for the variable with the smallest coefficient: $x=20-\frac{7y}{5}$. This means that $\frac{7y}{5}$ is an integer. Therefore $y=5y_2$ for some integer $y_2$. Plugging this into the original equation we get $x=20-7y_2$. You see now that for every integer you put in $y_2$ you get an integer value for $x$, and an integer value for $y=5y_2$, that satisfy the equation $5x+7y=100$. Since we want positive solutions you can take $y_2>0$ such that $20-7y_2>0$, i.e. $20/7>y_2$. So $0<y_2\leq 2$. We have then two possibilities, $y_2=1$ or $y_2=2$, for which we get $y=5$ or $y=10$ respectively. The corresponding values for $x$ are $x=13$ and $6$.

Given $$5x+7y=100$$ with $x\ge0$ and $y\ge0$, you get the inequalities $$5x\le100,$$ i.e. $$x\le20,$$ and similarly $$y\le14.28.$$ Now $$x=\frac{100-7y}{5},$$ which means $5$ divides $7y$. The only values of $y$ satisfying this within the bound are $$y=0,5,10.$$

Hint $\ $ Since $\,5x+7y\,$ is linear in $\,x,y,\,$ the general solution of $\,5x+7y=100\,$ is the sum of any particular solution, e.g. $\,(x,y)=(20,0),\,$ plus the general solution of the associated homogeneous equation $\,5x+7y = 0,\,$ which is $\,(x,y)=(-7n,5n).\,$ Summing them gives the general solution $\,(x,y) = (20,0)+(-7n,5n) = (20-7n,5n).\,$ Now $\,5n > 0\!\iff\! n\ge 1,\,$ and $20-7n>0\!\iff\! n\le2.$

$5x+7y=100~\implies 5\mid y$. And $y$ is limited to $14$ on the upper side. So we are left with only $y=5,10$. When $y=5$ we have a solution, i.e. $x=13$. Also for $y=10$ we have a solution, i.e. $x=6$.

$5x \gt0$ so $x=1,2,3,4,5,\ldots$ Be careful: $5x$ and $100$ are both multiples of $5$, so $7y$ must be a multiple of $5$ too. Then try $y=0,5,10$ (not more, because $7y$ must be less than or equal to $100$) and check them to find $x$.

Since $5 \mid 100$, we know that $5$ must divide $5x +7y$ and that therefore $5$ must divide $7y$. Since $5$ and $7$ are coprime, only multiples of $5$ as $y$ will suffice, and since $15 \cdot 7>100$ and both $x,y$ are integral and positive, only $y=5,10$ are solutions. We have now shown that $(13,5)$ and $(6,10)$ are the only solutions.
Edit: $(20,0)$ can also be a solution if you consider $0$ a positive number.

As $(5,7)=1$ we definitely have some $x,y\in \mathbb{Z}$ such that $5x+7y=1$; we see that $5(3)+7(-2)=1$, i.e. $5(300)+7(-200)=100$. Now if $(x_1,y_1)$ is any solution of $ax+by = c$, then all solutions are of the form $$x = x_1 - r\frac{b}{\gcd(a,b)},\qquad y = y_1 + r\frac{a}{\gcd(a,b)}.$$ We have $(5,7)=1$, so any solution is of the form $x=300-7r$ and $y=-200+5r$. We want the solutions to be positive, which means we want $7r<300$ and $5r>200$, i.e. $r<43$ and $r>40$, i.e. $r\in \{41,42\}$. For $r=41$ we have $x=300-287=13$ and $y=-200+205=5$, i.e. $(x,y)=(13,5)$. For $r=42$ we have $x=300-294=6$ and $y=-200+210=10$, i.e. $(x,y)=(6,10)$. So the only positive solutions are $(13,5)$ and $(6,10)$. If you consider $0$ to be positive then $(20,0)$ is also a solution you are looking for.

Thank you for pointing out my mistake. Via the Diophantine equation I get $$x = 300 -7r,\qquad y = -200 +5r$$ for all integers $r$. To get the positive solutions we need to bound $r$ by $40 < r \le 42$; sorry for my earlier mistake.
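A brute-force enumeration confirms the answers above; a minimal Python check:

# positive integer solutions of 5x + 7y = 100
print([(x, y) for x in range(1, 21) for y in range(1, 15) if 5*x + 7*y == 100])
# [(6, 10), (13, 5)]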
The Christoffel symbols are a measure of the first derivatives of the metric tensor. In particular, they will be zero if all those derivatives are zero. In a Euclidean space in Cartesian coordinates this will always be the case, not only in 2 dimensions! For another coordinate system you can either use the definition (e.g. from Wikipedia), which can be tedious since in 4D, for example, there are 40 independent symbols. Or, which seems to be easier most of the time, compute the Lagrangian of a free particle (it is mostly easy in a simpler basis), take the Euler-Lagrange equation and bring it into the form of the geodesic equation $\ddot x^i = -\Gamma^i_{jk}\dot x^j\dot x^k$. This can also be seen as a definition of the symbols as the coefficients in this equation. Now you can read off the desired Christoffel symbols from those coefficients.
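For a concrete case, here is a minimal sympy sketch of the coordinate definition $\Gamma^i_{jk} = \tfrac{1}{2} g^{il}(\partial_j g_{lk} + \partial_k g_{lj} - \partial_l g_{jk})$, applied to 2D polar coordinates (the Lagrangian route described above yields the same coefficients):

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # ds^2 = dr^2 + r^2 dtheta^2
ginv = g.inv()
n = 2
Gamma = [[[sp.simplify(sum(ginv[i, l] * (sp.diff(g[l, k], x[j])
                                         + sp.diff(g[l, j], x[k])
                                         - sp.diff(g[j, k], x[l]))
                           for l in range(n)) / 2)
           for k in range(n)] for j in range(n)] for i in range(n)]
print(Gamma[0][1][1])   # Gamma^r_{theta theta} = -r
print(Gamma[1][0][1])   # Gamma^theta_{r theta} = 1/r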
How can I show, using properties of determinants, that: $$\det\begin{pmatrix} (b+c)^2 & a^2 & a^2 \\ b^2 & (c+a)^2 & b^2 \\ c^2 & c^2 & (a+b)^2 \\ \end{pmatrix} = 2abc(a+b+c)^3$$

The value of the determinant is evidently a homogeneous symmetric polynomial $D(a,b,c)$ of degree six. It is easy to see that it vanishes if $a=0$, $b=0$ or $c=0$; and if $a+b+c=0$ then $(b+c)^2=a^2$ etc., so again the determinant vanishes. Hence for suitably chosen $\lambda$ and $\mu$ $$ D(a,b,c) = abc(a+b+c)\left(\lambda (a^2+b^2+c^2) + \mu(ab+bc+ca) \right). $$ We may easily compute $D(1,1,1)=54$, hence $$ \lambda+\mu = 6. $$ Likewise $D(2,1,1) = 256$, giving $$ 6\lambda +5 \mu = 32. $$ Thus $\lambda=2$ and $\mu=4$, so $$ \lambda (a^2+b^2+c^2) + \mu(ab+bc+ca) = 2(a+b+c)^2 $$ and, finally, $$ D(a,b,c) = 2abc(a+b+c)^3. $$
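Independently of the factoring argument, the identity is easy to confirm symbolically; a short sympy check:

import sympy as sp

a, b, c = sp.symbols('a b c')
M = sp.Matrix([[(b + c)**2, a**2, a**2],
               [b**2, (c + a)**2, b**2],
               [c**2, c**2, (a + b)**2]])
print(sp.factor(M.det()))   # 2*a*b*c*(a + b + c)**3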
A few years ago I read this piece about meat purveyor Pat LaFrieda and since then I’ve wanted to visit one of his restaurants. While at LaGuardia Airport awaiting a flight, all outbound flights from the airport were delayed due to inclement weather and only one outbound runway was operational. This was fortuitous—the heavens opened and I began my siege upon one of Pat’s restaurants—Custom Burgers. No one seemed to know exactly when my flight would depart and I was concerned that if I chose to eat at Custom Burgers I might miss my departing flight. There were only 5 flights ahead of me in the queue and I didn’t want to get stranded in LGA overnight nor did I want to miss eating at Custom Burger. This provoked an interesting question—how to estimate the probability that I could eat at Pat’s restaurant without missing my flight? Since only one runway was open, flight departures were disjoint events—no two planes could take off at the same time. By discretizing time into minutes, seconds, and ever smaller intervals, it follows that the probability of a flight departing in a given time interval approaches zero as the number of time intervals, $ n $, increases. Therefore, the probability of precisely $ k $ flights departing in $ n $ small time intervals is given by the Binomial distribution: $$ \Pr(k) = \dbinom{n}{k} \text{ } p^k(1 - p)^{n - k} \text{ where,} $$ $$ p = \frac{x}{n}$$ The unknown $ p $ represents the per-interval departure probability, with $ x $ the mean number of flights across the $ n $ intervals. I didn’t have a way to get the true average number of flights leaving in a given time interval, but I could generate a reasonable approximation by gathering the departure times of outbound flights from Flight Tracker Pro on my iPhone. Once I had this data, using a crude maximum likelihood estimation I approximated $ x $ to be 8.16 flights per hour out of LGA. 1 Solving for $ p $ given $ \hat{x} = np $ allowed the rate of departing flights to be approximated. Substituting the expression $ \frac{x}{n} $ into the binomial distribution and letting $ n $ grow large (with $ p $ small) yields the probability mass function of the Poisson: $$ \Pr(k) = \frac{x^k e^{-x}}{k!} $$ I estimated that it would take me a minimum of thirty minutes to eat at Custom Burgers, so I wanted to find the probability of 6 or more flights leaving in the next thirty minutes. Since I was 6th in the queue, if 6 or more flights departed, I would miss my flight. To determine this probability, I first calculated the probability of 5 or fewer flights leaving within the next thirty minutes, using a mean of $8.16/2 = 4.08$ flights per thirty minutes: $$ \Pr(k \leq 5) = \sum_{k_i=0}^{5} \Pr(k_i) \text{ where,} $$ $$ \Pr(k_i) = \frac{4.08^{k_i} e^{-4.08}}{k_i!} $$ Using this calculation, it was possible to determine the probability of missing my flight. $$ \Pr(k > 5) = 1 - \Pr(k \leq 5) = 1 - 0.7725 = 0.2275 $$ There was an $ \approx 23\% $ chance I would miss my flight if I went to Custom Burger for thirty minutes. These odds seemed like an acceptable risk, so I left to eat—and I’m glad I did, the meal was great and I made my flight. I assume $ p $ is constant across the time interval.
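The tail computation is easy to reproduce; a minimal Python sketch with $\lambda = 4.08$ as above:

from math import exp, factorial

lam = 4.08   # expected departures in 30 minutes (8.16 per hour / 2)
p_le_5 = sum(lam**k * exp(-lam) / factorial(k) for k in range(6))
print(1 - p_le_5)   # ~0.2275: the chance of 6 or more departures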
Rocky Mountain Journal of Mathematics Rocky Mountain J. Math. Volume 44, Number 3 (2014), 717-731. Associate elements in commutative rings Abstract Let $R$ be a commutative ring with identity. For $a,b\in R$, define $a$ and $b$ to be \textit{associates}, denoted $a\sim b$, if $a\mid b$ and $b\mid a$, so $a=rb$ and $b=sa$ for some $r,s\in R$. We are interested in the case where $r$ and $s$ can be taken or must be taken to be non zero-divisors or units. We study rings, $R$, called \textit{strongly regular associate}, that have the property that, whenever $a\sim b$ for $a,b\in R$, then there exist non zero-divisors $r,s\in R$ with $a=rb$ and $b=sa$, and rings $R$, called \textit{weakly présimplifiable}, that have the property that, for nonzero $a,b\in R$ with $a\sim b$, whenever $a=rb$ and $b=sa$, then $r$ and $s$ must be non zero-divisors. Article information Source Rocky Mountain J. Math., Volume 44, Number 3 (2014), 717-731. Dates First available in Project Euclid: 28 September 2014 Permanent link to this document https://projecteuclid.org/euclid.rmjm/1411945660 Digital Object Identifier doi:10.1216/RMJ-2014-44-3-717 Mathematical Reviews number (MathSciNet) MR3264478 Zentralblatt MATH identifier 1302.13001 Citation Anderson, D.D.; Chun, Sangmin. Associate elements in commutative rings. Rocky Mountain J. Math. 44 (2014), no. 3, 717--731. doi:10.1216/RMJ-2014-44-3-717. https://projecteuclid.org/euclid.rmjm/1411945660
I asked this in the Ethereum StackExchange yesterday but I figured it would be more appropriate to ask it here. I am looking into implementing some operations for the BLS signature scheme in Solidity, using the new precompiled contracts for pairing operations released with Ethereum Byzantium, but I am not sure whether it is possible. According to the BLS paper, page 310, to verify a BLS signature $s \in F_q$ corresponding to public key $V \in G_2$ and message $M$: Find $y \in F_q$ with $\sigma = (s, y)$. Compute $R \leftarrow MapToGroup(M) \in G_1$. Test if either $e(\sigma, Q) = e(R, V)$ or $e(\sigma, Q)^{-1} = e(R, V)$. Ethereum has added a few precompiled contracts for operations on the bn128 curve: Addition Scalar multiplication Pairing check: Given an input $(a_1, b_1, a_2, b_2, \cdots, a_k, b_k) \in (G_1 \times G_2)^k$, it returns whether $\log_{P_1}(a_1) \cdot \log_{P_2}(b_1) + \cdots + \log_{P_1}(a_k) \cdot \log_{P_2}(b_k) = 0$ (in $F_q$) Supposing we have $R$ and $\sigma$, is step 3 possible using the precompiled contracts (together with other Solidity functionality)? I am not sure I can find a formulation, since there is no precompile for evaluating the pairing function $e$ itself, only the product check.
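As a sanity check that the product form matches the BLS equation, here is a minimal Python sketch using the third-party py_ecc library (the same bn128 curve the precompiles operate on). The scalars are toy assumptions and the MapToGroup/hash-to-curve step is skipped; the point is that $e(\sigma, Q) = e(R, V)$ is equivalent to the product $e(-\sigma, Q)\cdot e(R, V) = 1$, which is exactly the shape the pairing-check precompile evaluates:

from py_ecc.bn128 import G1, G2, multiply, pairing

sk = 12345                      # toy secret key
Q = G2                          # fixed G2 generator
V = multiply(G2, sk)            # public key V = sk * Q
R = multiply(G1, 777)           # stand-in for MapToGroup(M) in G1
sigma = multiply(R, sk)         # signature sigma = sk * R

# e(sigma, Q) == e(R, V) holds by bilinearity: e(sk*R, Q) = e(R, Q)^sk = e(R, sk*Q)
assert pairing(Q, sigma) == pairing(V, R)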
Let $\Omega$ be an open bounded subset of $\mathbb R^N$. Let $u_0 \in L^\infty(\Omega)$ and $f \in L^\infty((0,T)\times\Omega).$ Consider the following boundary value problem for the heat equation: $$ \begin{cases} u_t - \Delta u = f \\ u|_{\partial\Omega} = 0\\ u(0) = u_0 \end{cases} $$ Questions: Let $k \ge 2$. Assume $u_0 \in C^k(\bar \Omega)$, $f \in C^k([0,T) \times\bar \Omega)$ such that $u_0 = \Delta u_0 = 0$ on $\partial \Omega$, and assume that $\Omega$ is of class $C^k$. Is it true that there exists a unique solution $u \in L^\infty((0,T)\times\bar\Omega) \cap C^k([0,T)\times \bar\Omega)$? How can one prove it? Do we need some additional assumptions on $f$? Fix $U$ a neighborhood of $x_0 \in \Omega$, and assume that $u_0 \in C^k(U)$ and $f \in C^k([0,T) \times U)$. Is it true that there exists a unique (weak) solution of the heat equation that is regular in $U$, that is, $u \in C^k([0,T)\times U) \cap L^\infty$? Are the results in the first two questions true even if we only assume $\Omega$ Lipschitz? And are they true with less regularity assumptions on $f$?
Examples of Functions In a compact form a function is a well defined unidirectional dependency, a concept of which I shall provide several examples. Functions that are not numeric The set of fingerprints is uniquely defined for every person. That is to say, there is a function (call it $f)$ from the set of people to the set of fingerprint sets. The function is not defined for every person, but only for those who were fingerprinted. Any fingerprint set uniquely defines a person. This function is inverse to $f.$ For every person, there is a unique DNA molecule whose copies are carried by every cell in the human body. Thus there is a function from the set of people to the set of molecular structures known as the DNA. This function does not have an inverse; for, say, a pair of identical twins shares the DNA structure. Every triangle has the barycenter, the incenter, the orthocenter, the circumcenter, the symmedian point, the isogonic center and many other remarkable points. These all are various functions defined on a set of triangles. Every triangle center is defined by a homogeneous function in, say, barycentric coordinates, which is symmetric with respect to its arguments. I would like to claim that, for every object, its optimal appearance occurs at a certain distance, which would make this distance a function of the object. But this may not be true: an object may look equally good from several distinct distances. Any coloring in a game of Y can be uniquely reduced to a coloring of a smaller board in a manner that preserves the winning chains. Numeric functions The distinction between numeric and non-numeric functions is rather nebulous. It's neither standard nor common. I would think of a function as numeric provided that, on top of establishing a correspondence between number sets, the function gains some properties from that fact. If the numbers are used merely as tags or convenience symbols, house numbers for example, there is probably no point in thinking of the function as numeric. The house numbers may still be used to indicate the proximity of a house from a point of departure. More generally, we talk of a distance function, which I feel comfortable to think of as numeric. The most pedestrian definition of a function is the one common in high school textbooks, see for example [Jacobs, p. 122]: A function is a pairing of two sets of numbers so that to each element in the first set there corresponds exactly one number in the second set. So, what makes a function numeric? As I already mentioned, the distinction is loose, but mostly one thinks of a function as numeric if it is defined by means of an algebraic formula. A more broad convention that only requires the two paired sets in the definition to consist of (whatever) numbers allows functions that only nominally relate numbers to one another and do not gain any essential properties from their being defined for number sets. The function $x\mapsto x^2$ relates a number to its square. We would commonly write $f(x) = x^2,$ but, to denote the result as, say, $^{2}(x)$ is as consistent with Euler's notations as could be. In fact, for the binomial coefficient "n choose k", which is a function of two variables, several different notations - $C(n, k),$ $_{n}C_{k},$ $\displaystyle C^{n}_{k}$ - are in common use. $f(x) = ax^{2} + bx + c$ is the second simplest of the polynomial functions.
For the title of simplest, I think, two functions compete: the constant function $f(x) = const,$ and the identity function $f(x) = x.$ Some functions, because of their importance and frequent use in mathematics and applications, have, with time, been granted special names. Such are the trigonometric functions $\sin (x),$ $\cos (x),$ and others, the exponential function $a^{x}$ and the logarithm $\log (x).$ Having a name does not mean that the values of the function can be easily calculated. For virtually all $x$, $\sin(x)$ has to be approximated. But the same is true for apparently simpler functions, like $x^{1/2}=\sqrt{x}.$ New functions can be constructed as the combinations (in many senses) of other functions. Some combinations yield surprising results. For example, the expression $x + \arctan(\cot(\pi x))/\pi - 1/2,$ incomprehensible at first sight, is just a representation of the floor function: $\lfloor x\rfloor = x + \arctan(\cot(\pi x))/\pi - 1/2.$ (I thank Andrew Newton for this example and for the link to an interesting discussion.) $|x| = \arccos(\cos(\pi x/(1 + x^{2})))\cdot (1 + x^{2})/\pi$, $\mbox{sign}(x) = \arccos(\cos(\pi x/(1 + x^{2})))\cdot (1 + x^{2})/(\pi x),$ $H(x) = (\mbox{sign} (x) - 1)^{2}/4,$ Natural as it appears to be and common as it is, using formulas to define functions may be quite treacherous. Ian Stewart tells about his experience with offering the students to find the derivative of the function $f(x) = \log (\log (\sin(x))).$ By applying standard rules of the calculus, most students derive the following answer: $f' = \cot (x)/\log (\sin(x)).$ Curiously, the derivative makes sense for the values of $x$ where $\sin (x) \gt 0.$ But having said that, we should also note that $\log (\sin(x))$ is never positive, so that $\log (\log (\sin(x)))$ does not make sense for any $x.$ References H. R. Jacobs, Mathematics: A Human Endeavor, 3rd edition, Freeman, 2002 (6th printing) I. Stewart, Concepts of Modern Mathematics, Dover, 1995 (3rd printing), pp. 64-65
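The floor-function representation quoted above is easy to test numerically, away from integer $x$ where $\cot(\pi x)$ blows up; a minimal Python check:

from math import atan, tan, floor, pi

f = lambda x: x + atan(1 / tan(pi * x)) / pi - 0.5   # cot(pi x) = 1/tan(pi x)
for x in [0.3, 1.7, 2.01, -0.4, -3.9]:
    print(x, f(x), floor(x))   # f(x) reproduces floor(x) in each case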
Dependency Parsing Unlabeled Dependency Parses root is a special root symbol. Each dependency is a pair (h, m) where h is the index of a head word and m is the index of a modifier word. In the figures, we represent a dependency (h, m) by a directed edge from h to m. Dependencies in the above example are (0, 2), (2, 1), (2, 4), and (4, 3). (We take 0 to be the root symbol.) Conditions on Dependency Structures: there is a directed path from root to every word; no crossing edges (the structure sketched at the bottom right, for example, is not allowed). For "John saw Mary" there are 5 possible dependency parses. A structure with crossing edges is called a non-projective structure. dependency parsing resources: CoNLL 2007, McDonald dependency banks. A treebank can be converted into a dependency bank through lexicalization; that is, a lexicalized PCFG can be converted into a dependency bank. efficiency of dependency parsing: dynamic programming - Jason Eisner; very efficient at parsing, very useful representations. CS224 What is a dependency structure? Describing the structure of a sentence by taking each word and saying what it's a dependent on. So, if it's a word that kind of modifies or is an argument of another word, it's a dependent of that word. ambiguity: PP attachments. Attachment ambiguities: a key parsing decision is how we 'attach' various constituents: PPs (prepositional phrases), adverbial or participial phrases, infinitives, coordination. Catalan numbers: \(C_n=(2n)!/[(n+1)!\,n!]\) (need to look this up!). Manual annotation is very tedious. dependency grammar and dependency structure: universal dependencies. The arrow connects a head (governor, superior, regent) with a dependent (modifier, inferior, subordinate). Usually, dependencies form a tree (connected, acyclic, single-head). Question: compare CFGs and PCFGs with dependency grammars - dependency grammars look strongly lexicalized, they're between words; does that make it harder to generalize? dependency conditioning preferences: how do we determine dependencies? There are the following preferences. dependency parsing: note whether there is crossing (non-projectivity). In the example in the figure there is crossing, but a dependency tree by definition has no crossing. methods of dependency parsing: the fourth method is currently the most mainstream. paper presentation: Improving distributional similarity with lessons learned from word embeddings, Omer Levy, Yoav Goldberg, Ido Dagan. shift-reduce parsing??? Greedy transition-based parsing. The parser has: a stack \(\sigma\), written with top to the right, which starts with the ROOT symbol; a buffer \(\beta\), written with the top to the left, which starts with the input sentence s; a set of dependency arcs A, which starts off empty; and a set of actions. \(Left\text{-}Arc_r\): \(\sigma|w_i|w_j,\beta,A \rightarrow \sigma|w_j, \beta, A\cup\{r(w_j,w_i)\}\), meaning that for the top two elements of the stack \(\sigma\), \(w_i\leftarrow w_j\). (A minimal sketch of this transition system follows below.) MaltParser. Question: does dependency parsing accuracy suffer a waterfall-like drop, because one decision will prevent some other decisions? It's not bad; dependency parse evaluation suffers much less badly from waterfall effects than CFG parsing. feature representation: I don't quite understand this part... evaluation of dependency parsing. why train a neural dependency parser? Indicator Features Revisited. Our approach: learn a dense and compact feature representation. a neural dependency parser (the Tsinghua undergrads are godlike...). distributed representations. Model Architecture. Non-linearities between layers: why they are needed. Google announcement of Parsey McParseface, SyntaxNet
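A minimal Python sketch of the arc-standard transition system described in these notes, with a hard-coded oracle action sequence for "John saw Mary" (a real parser such as MaltParser predicts each action with a classifier):

def parse(n_words, actions):
    # stack holds indices (0 = ROOT); buf holds the remaining word indices
    stack, buf, arcs = [0], list(range(1, n_words + 1)), []
    for act in actions:
        if act == "SHIFT":
            stack.append(buf.pop(0))
        elif act == "LEFT-ARC":     # sigma|wi|wj -> sigma|wj, add arc (wj, wi)
            wj = stack.pop(); wi = stack.pop()
            arcs.append((wj, wi)); stack.append(wj)
        elif act == "RIGHT-ARC":    # sigma|wi|wj -> sigma|wi, add arc (wi, wj)
            wj = stack.pop()
            arcs.append((stack[-1], wj))
    return arcs

# "John saw Mary": expected dependencies (2, 1), (2, 3), (0, 2)
print(parse(3, ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "RIGHT-ARC", "RIGHT-ARC"]))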
We are given that the number $b_i$ announced by girl $i$ is the average of the numbers $x_{i^+}$ and $x_{i^-}$ chosen by the neighboring girls $i^+$ and $i^-$: $$2 b_i = x_{i^+} + x_{i^-},$$ where $$\begin{array}{rcl}i^+ & = & i + 1 - \left\lfloor \frac{i}{N} \right\rfloor N, \\i^- & = & i - 1 + \left\lfloor \frac{N + 1 - i}{N} \right\rfloor N\end{array}$$ and $N$ is the number of girls (that is, $i^+$ and $i^-$ loop around the interval $[1, N]$). This forms a linear equation system, which can be expressed in matrix form as $A\mathbf{x} = \mathbf{b}$ where the number at row $i$, column $j$ of the matrix $A$ is $$a_{i,j} = \begin{cases}\frac{1}{2}, \ i-j \equiv_N 1,\\\frac{1}{2}, \ j-i \equiv_N 1,\\0 \ \ \mathrm{otherwise.}\end{cases}$$ For $N = 5$, $2A$ looks like: $$2A = \begin{pmatrix} 0 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 \end{pmatrix}$$ Solving this equation system for $\mathbf{x}$ with $N = 10$ and $\mathbf{b} = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]$ gives $x_6 = 1$.
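A minimal numpy check of the quoted value, building the circulant matrix for $N = 10$ and solving:

import numpy as np

N = 10
b = np.arange(1, N + 1)
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = 0.5   # neighbor i^+
    A[i, (i - 1) % N] = 0.5   # neighbor i^-
x = np.linalg.solve(A, b)
print(x[5])   # x_6 (0-based index 5) = 1.0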
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ... Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both the rapidity odd ($v_1^{odd}$) and ... Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ... Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03) The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ... Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ... Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE (Springer, 2013-07) The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ... Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV (American Physical Society, 2013-01) Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
My model is the following: I have a 1D lattice with L sites. Each site can be occupied by either one or zero atoms. The Hamiltonian looks as follows: $$H = T\sum_i(b^\dagger_i b_{i+1} + h.c.) + V\sum_i n_i n_{i+1}$$ My question is now of a rather basic quantum mechanical nature. I want to make sure what I'm doing is right and that I really understand it. (I'm programming all of this, so I want to make sure the reason why my model isn't working doesn't lie on the physical side of things.) First I've set up the Hamiltonian's matrix in the basis of Fock states (for example $|010011101001\rangle$): I've created a list of all possible states (there are $2^L$ states since every position can be either zero or one). Say state $|a\rangle$ results in two states, $|a\rangle$ itself and $|b\rangle$: $$H|a\rangle = x|a\rangle + y |b\rangle$$ Then I looked at the column belonging to $|a\rangle$ and wrote $x$ and $y$ in the lines belonging to $|a\rangle$ and $|b\rangle$ respectively. That way I got my Hamiltonian matrix. (It has block-diagonal form since the total number of atoms is conserved; that means there's a block for each possible total atom number from $0$ to $L$.) Up to this point I have no difficulties. Now I diagonalize the Hamiltonian. I'm doing this by calculating its eigenvectors and eigenvalues (since it has block-diagonal form I can do this for each block separately, minimizing the calculation time). Next I plot the expectation value of some $\hat n _i$ operator against energy (using a scatter plot), with $i$ lying somewhere between $1$ and $L$. (The $\hat n _i$ operator on a Fock state returns the state itself with an eigenvalue equal to the occupation number of site $i$, so $\hat n _2 |011\rangle = 1\cdot|011\rangle$ and $\hat n _2 |001\rangle = 0\cdot|001\rangle$.) Now comes the part where I'm not sure how to proceed. How do I calculate the expectation value, i.e. $ \langle \hat n _i \rangle$, and how do I calculate the energy? I've got the eigenstates represented as $2^L$-dimensional vectors, where each line corresponds to a specific Fock state (the first line is all lattice sites empty, the last line is all filled, and those in between have some other occupational setup). That means I have the eigenvectors in the basis of my possible Fock states. How do I now calculate the expectation value? I know that the eigenvalues are the energies corresponding to my eigenvectors, but I don't know how the operator affects the eigenvectors, so my idea was to decompose the eigenvectors in my Fock basis: if I for now just call my Fock states by the number of their corresponding line in the Hamiltonian, and my eigenstate looks for example like $(x,y,z)$, then I decompose it into $$(x,y,z) = x|1\rangle + y|2\rangle + z |3\rangle .$$ Since I know how my operator acts on the Fock states, I could then calculate the expectation value as the dot product between the old $(x,y,z)$ eigenvector and the new, changed vector; since $\hat n_i$ is diagonal in the Fock basis, this amounts to $\sum_k |c_k|^2 n_i(k)$ for an eigenvector with Fock coefficients $c_k$. I could also just calculate the Fock states' energies, but since this problem scales up very fast this wouldn't be computationally feasible, I think, and one also wouldn't make use of the diagonalization. I'm really not quite sure if this is correct, and if this is really how one makes use of the diagonalized Hamiltonian; maybe someone can shed some light on my problem and explain whether my approach is correct or where my faults lie.
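For concreteness, here is a minimal numpy sketch of the whole procedure, assuming hard-core bosons (so hopping carries no fermionic sign factors) and open boundary conditions; the parameter values are placeholders:

import numpy as np
from itertools import product

L, T, V = 6, 1.0, 2.0
states = list(product([0, 1], repeat=L))        # all 2^L Fock states
index = {s: k for k, s in enumerate(states)}

H = np.zeros((2**L, 2**L))
for s in states:
    k = index[s]
    H[k, k] = V * sum(s[i] * s[i + 1] for i in range(L - 1))   # interaction term
    for i in range(L - 1):                                     # hopping term
        if s[i] != s[i + 1]:
            t = list(s); t[i], t[i + 1] = t[i + 1], t[i]
            H[index[tuple(t)], k] += T

energies, vecs = np.linalg.eigh(H)              # eigenvalues = energies

# <n_i> in eigenstate m is sum_k |c_km|^2 n_i(k), since n_i is Fock-diagonal
site = 2
n_site = np.array([s[site] for s in states])
exp_n = (np.abs(vecs) ** 2).T @ n_site          # one value per eigenstate
print(energies[:3], exp_n[:3])                  # ready for a scatter plot vs energy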
XGBoost mostly combines a huge number of regression trees with a small learning rate. In this situation, trees added early are significant and trees added late are unimportant. Vinayak and Gilad-Bachrach proposed a new method to add dropout techniques from the deep neural net community to boosted trees, and reported better results in some situations. This is an introduction to the new tree booster, dart. Rashmi Korlakai Vinayak, Ran Gilad-Bachrach. “DART: Dropouts meet Multiple Additive Regression Trees.” JMLR. Trees are dropped in order to reduce over-fitting; building trivial trees (which merely correct trivial errors) may be prevented. Because of the randomness introduced in the training, expect the following few differences: Training can be slower than gbtree because the random dropout prevents usage of the prediction buffer. The early stop might not be stable, due to the randomness. In \(m\)-th training round, suppose \(k\) trees are selected to be dropped. Let \(D = \sum_{i \in \mathbf{K}} F_i\) be the leaf scores of the dropped trees and \(F_m = \eta \tilde{F}_m\) be the leaf scores of a new tree. \(D\) and \(F_m\) are overshooting, so a scale factor is used. The booster dart inherits the gbtree booster, so it supports all parameters that gbtree does, such as eta, gamma, max_depth etc. Additional parameters are noted below: sample_type: type of sampling algorithm. uniform: (default) dropped trees are selected uniformly. weighted: dropped trees are selected in proportion to weight. normalize_type: type of normalization algorithm. tree: (default) new trees have the same weight as each of the dropped trees. forest: new trees have the same weight as the sum of the dropped trees (forest). rate_drop: dropout rate. range: [0.0, 1.0] skip_drop: probability of skipping dropout. If a dropout is skipped, new trees are added in the same manner as gbtree. range: [0.0, 1.0]

import xgboost as xgb
# read in data
dtrain = xgb.DMatrix('demo/data/agaricus.txt.train')
dtest = xgb.DMatrix('demo/data/agaricus.txt.test')
# specify parameters via map
param = {'booster': 'dart',
         'max_depth': 5,
         'learning_rate': 0.1,
         'objective': 'binary:logistic',
         'sample_type': 'uniform',
         'normalize_type': 'tree',
         'rate_drop': 0.1,
         'skip_drop': 0.5}
num_round = 50
bst = xgb.train(param, dtrain, num_round)
# make prediction
# ntree_limit must not be 0
preds = bst.predict(dtest, ntree_limit=num_round)

Note: specify ntree_limit when predicting with test sets. By default, bst.predict() will perform dropouts on trees. To obtain correct results on test sets, disable dropouts by specifying a nonzero value for ntree_limit.
J/ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02 TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN = 5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... Measurement of quarkonium production at forward rapidity in pp collisions at √s = 7 TeV (Springer, 2014-08) The inclusive production cross sections at forward rapidity of J/ψ, ψ(2S), Υ(1S) and Υ(2S) are measured in pp collisions at √s = 7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
Possible Duplicate: Cardinality of the permutations of an infinite set

Why does the symmetric group on an infinite set $X$ have the cardinality of the power set ${\cal P}(X)$?

I assume you mean an infinite set $X$. You need to use the Axiom of Choice to prove this fact, but I'm not sure to what extent it is necessary. Since $|X \times X| = |X|$ (uses AC) and $Sym(X) \subseteq \mathcal{P}(X \times X)$, it is clear that $|Sym(X)| \leq 2^{|X \times X|} = 2^{|X|}$. Since $|X \times 2| = |X|$ (uses AC) we can split $X$ into two disjoint sets $X_0$ and $X_1$, each of size $|X|$. Let $a:X_0 \to X_1$ be a bijection. For each set $A \subseteq X_0$ define $\sigma_A \in Sym(X)$ to be the bijection that exchanges $x$ and $a(x)$ for every $x \in A$ and leaves all other elements unchanged. It is clear that $A \in \mathcal{P}(X_0) \mapsto \sigma_A \in Sym(X)$ is an injection. Therefore $|Sym(X)| \geq 2^{|X_0|} = 2^{|X|}$. So the equality $|Sym(X)| = 2^{|X|}$ holds unconditionally for all infinite sets such that $|X \times X| = |X|$. The fact that $|X \times X| = |X|$ for all infinite sets $X$ is equivalent to AC by an old result of Tarski.
Journal of Symbolic Logic J. Symbolic Logic Volume 60, Issue 1 (1995), 178-190. The Equivalence of NF-Style Set Theories with "Tangled" Theories; The Construction of $\omega$-Models of Predicative NF (and more) Abstract An $\omega$-model (a model in which all natural numbers are standard) of the predicative fragment of Quine's set theory "New Foundations" (NF) is constructed. Marcel Crabbe has shown that a theory NFI extending predicative NF is consistent, and the model constructed is actually a model of NFI as well. The construction follows the construction of $\omega$-models of NFU (NF with urelements) by R. B. Jensen, and, like the construction of Jensen for NFU, it can be used to construct $\alpha$-models for any ordinal $\alpha$. The construction proceeds via a model of a type theory of a peculiar kind; we first discuss such "tangled type theories" in general, exhibiting a "tangled type theory" (and also an extension of Zermelo set theory with $\Delta_0$ comprehension) which is equiconsistent with NF (for which the consistency problem seems no easier than the corresponding problem for NF (still open)), and pointing out that "tangled type theory with urelements" has a quite natural interpretation, which seems to provide an explanation for the more natural behaviour of NFU relative to the other set theories of this kind, and can be seen anachronistically as underlying Jensen's consistency proof for NFU. Article information Source J. Symbolic Logic, Volume 60, Issue 1 (1995), 178-190. Dates First available in Project Euclid: 6 July 2007 Permanent link to this document https://projecteuclid.org/euclid.jsl/1183744684 Mathematical Reviews number (MathSciNet) MR1324507 Zentralblatt MATH identifier 0819.03044 JSTOR links.jstor.org Citation Holmes, M. Randall. The Equivalence of NF-Style Set Theories with "Tangled" Theories; The Construction of $\omega$-Models of Predicative NF (and more). J. Symbolic Logic 60 (1995), no. 1, 178--190. https://projecteuclid.org/euclid.jsl/1183744684
Abstract The Deligne groupoid is a functor from nilpotent differential graded Lie algebras concentrated in positive degrees to groupoids; in the special case of Lie algebras over a field of characteristic zero, it gives the associated simply connected Lie group. We generalize the Deligne groupoid to a functor $\gamma$ from $L_\infty$-algebras concentrated in degree $>-n$ to $n$-groupoids. (We actually construct the nerve of the $n$-groupoid, which is an enriched Kan complex.) The construction of $\gamma$ is quite explicit (it is based on Dupont's proof of the de Rham theorem) and yields higher-dimensional analogues of holonomy and of the Campbell-Hausdorff formula. In the case of abelian $L_\infty$-algebras (i.e., chain complexes), the functor $\gamma$ is the Dold-Kan simplicial set.

[Ashley] N. Ashley, "Simplicial $T$-complexes and crossed complexes: a nonabelian version of a theorem of Dold and Kan," Dissertationes Math. (Rozprawy Mat.), vol. 265, p. 61, 1988.
[Beke] T. Beke, "Higher Čech theory," $K$-Theory, vol. 32, no. 4, pp. 293-322, 2004.
[BG] A. K. Bousfield and V. K. A. M. Gugenheim, On PL de Rham Theory and Rational Homotopy Type, Mem. Amer. Math. Soc. 8:179, 1976.
[Dakin] M. K. Dakin, Kan complexes and multiple groupoid structures, Esquisses Math. 32, Univ. Amiens, 1983.
[Deligne] P. Deligne, letter to L. Breen, Feb. 28, 1994. http://math.northwestern.edu/~getzler/Papers/deligne.pdf
[Dold] A. Dold, "Homology of symmetric products and other functors of complexes," Ann. of Math., vol. 68, pp. 54-80, 1958.
[Dupont] J. L. Dupont, "Simplicial de Rham cohomology and characteristic classes of flat bundles," Topology, vol. 15, no. 3, pp. 233-245, 1976.
[DupontBook] J. L. Dupont, Curvature and Characteristic Classes, Lecture Notes in Math. 640, Springer-Verlag, New York, 1978.
[Duskin] J. Duskin, "Higher-dimensional torsors and the cohomology of topoi: the abelian theory," in Applications of Sheaves, Lecture Notes in Math. 753, Springer-Verlag, New York, 1979, pp. 255-279.
[DuskinNerve] J. W. Duskin, "Simplicial matrices and the nerves of weak $n$-categories. I: Nerves of bicategories," Theory Appl. Categ., vol. 9, pp. 198-308, 2001.
[EM] S. Eilenberg and S. Mac Lane, "On the groups of $H(\Pi,n)$. I," Ann. of Math., vol. 58, pp. 55-106, 1953.
[FL] Z. Fiedorowicz and J.-L. Loday, "Crossed simplicial groups and their associated homology," Trans. Amer. Math. Soc., vol. 326, no. 1, pp. 57-87, 1991.
[darboux] E. Getzler, "A Darboux theorem for Hamiltonian operators in the formal calculus of variations," Duke Math. J., vol. 111, no. 3, pp. 535-560, 2002.
[Glenn] P. G. Glenn, "Realization of cohomology classes in arbitrary exact categories," J. Pure Appl. Algebra, vol. 25, no. 1, pp. 33-105, 1982.
[GM] W. M. Goldman and J. J. Millson, "The deformation theory of representations of fundamental groups of compact Kähler manifolds," Inst. Hautes Études Sci. Publ. Math., vol. 67, pp. 43-96, 1988.
[Hinich] V. Hinich, "Descent of Deligne groupoids," Internat. Math. Res. Notices, vol. 1997, no. 5, pp. 223-239, 1997.
[Kan] D. M. Kan, "Functors involving c.s.s. complexes," Trans. Amer. Math. Soc., vol. 87, pp. 330-346, 1958.
[LM] T. Lada and M. Markl, "Strongly homotopy Lie algebras," Comm. Algebra, vol. 23, no. 6, pp. 2147-2161, 1995.
[LS] L. Lambe and J. Stasheff, "Applications of perturbation theory to iterated fibrations," Manuscripta Math., vol. 58, no. 3, pp. 363-376, 1987.
[May] J. P. May, Simplicial Objects in Algebraic Topology, Chicago Lectures in Math., University of Chicago Press, Chicago, IL, 1992.
[NR] A. Nijenhuis and R. W. Richardson, Jr., "Cohomology and deformations in graded Lie algebras," Bull. Amer. Math. Soc., vol. 72, pp. 1-29, 1966.
[Paoli] S. Paoli, "Semistrict Tamsamani $n$-groupoids and connected $n$-types," preprint, 2007; arXiv:math/0701655v2.
[Quillen] D. Quillen, "Rational homotopy theory," Ann. of Math., vol. 90, pp. 205-295, 1969.
[Sullivan] D. Sullivan, "Infinitesimal computations in topology," Inst. Hautes Études Sci. Publ. Math., vol. 47, pp. 269-331, 1977.
Math.}, FJOURNAL = {Institut des Hautes Études Scientifiques. Publications Mathématiques}, VOLUME = {47}, YEAR = {1977}, PAGES = {269--331}, FPAGES = {269--331 (1978)}, ISSN = {0073-8301}, MRCLASS = {57D99 (55D99 58A10)}, MRNUMBER = {58 \#31119}, ZBLNUMBER = {0374.57002}, MRREVIEWER = {J. F. Adams}, URL = {http://www.numdam.org/item?id=PMIHES_1977__47__269_0}, } [Whitney] H. Whitney, Geometric Integration Theory, Princeton, NJ: Princeton Univ. Press, 1957. @book{Whitney, MRKEY = {0087148}, AUTHOR = {Whitney, Hassler}, TITLE = {Geometric Integration Theory}, PUBLISHER = {Princeton Univ. Press}, ADDRESS = {Princeton, NJ}, YEAR = {1957}, PAGES = {xv+387}, MRCLASS = {53.0X}, MRNUMBER = {19,309c}, ZBLNUMBER = {0083.28204}, MRREVIEWER = {H. Samelson}, } [kuranishi] M. Kuranishi, "On the locally complete families of complex analytic structures," Ann. of Math. (2), vol. 75, pp. 536-577, 1962. @article {kuranishi, MRKEY = {MR0141139}, AUTHOR = {Kuranishi, M.}, TITLE = {On the locally complete families of complex analytic structures}, JOURNAL = {Ann. of Math. (2)}, FJOURNAL = {Annals of Mathematics. Second Series}, VOLUME = {75}, YEAR = {1962}, PAGES = {536--577}, ISSN = {0003-486X}, MRCLASS = {57.70 (32.47)}, MRNUMBER = {25 \#4550}, DOI = {10.2307/1970211}, ZBLNUMBER = {0106.15303}, MRREVIEWER = {C. B. Morrey, Jr.}, }
It is known (as a slogan) that the "existential fragment of second-order logic (ESO) is compact". My first question is: (1) Is ESO compact for (a) uncountable languages, (b) languages with constants, or just for relational vocabularies? The possible answers are obviously: ESO is compact only for countable languages (with or without constants; it doesn't matter here); ESO is compact for uncountable vocabularies as well, but not for the uncountable ones that contain constants; or ESO is compact for uncountable languages, including the ones with constants. Which one is correct? I can't see any problem in rewriting the usual compactness proofs for FO (also the ultraproduct one) to get compactness for ESO via Skolemization, but maybe there is something I am not noticing. The second question is actually a motivation for the first one and concerns generalized quantifiers. I stumbled upon a slogan that: $\alpha$: The quantifier "there exist at most countably many" does not have a $\Sigma^1_1$ definition, but the authors of the paper where I found it provide neither a proof nor a reference. I have checked the most "obvious" sources ("Model-Theoretic Logics", the Krynicki-Mostowski anthology on quantifiers, Keisler's paper on logic with the "there exist at least uncountably many" quantifier, and Westerståhl's book on quantifiers, among a couple of others; some of them give results on those quantifiers in MSO on strings and trees, but I'm interested in the general case here). The proof of $\alpha$ would be obvious if we knew that the complement of the class of countable models actually is $\Sigma^1_1$-definable, but that does not seem to be the case, or at least I can't see it. There are easy definitions of countability and "at most countability" with higher complexity. One might write a sentence that says: "The set $X$ of all elements is infinite, and for any infinite set $Y$ (i.e., subset of the universe) there is an injection from $X$ to $Y$." Such a sentence is true in a model $\mathcal{M}$ if and only if $M$ is countably infinite, and the definition is $\Pi^1_2$, as it is a conjunction of a $\Sigma^1_1$ sentence and a $\Pi^1_2$ sentence. One could also give a $\Sigma^1_2$ definition, aka "the Enumerability Axiom" (or at least that is the way G. Boolos called it): the definition simply defines the natural numbers via the inductive properties of the successor function. One can give a similar one, stating that there is a well-ordering of the universe with all the sets of predecessors of a given element being finite (but then the complexity grows). From this we can also easily see how to give a definition of "at most countability", which still is obviously not $\Pi^1_1$. The only thing that comes to my mind for defining the class of uncountable models is stating that "the universe is infinite and is not countable". Therefore I do not see a way to prove the result by a standard Craig Interpolation or Lindström's First Theorem argument (referring to the fact that a given class of models is FO-definable (elementary) provided both the class and its complement are pseudoelementary, i.e., $\Sigma^1_1$-definable). Thus the questions: (2) What is the (minimal) complexity of the definition of at most countable models (or of the "there exist at most countably many" quantifier)? (3) What is the (minimal) complexity of the definition of countably infinite models (or of the "there exist countably infinitely many" quantifier)? (4) What is the (minimal) complexity of the definition of uncountable models (or of the "there exist uncountably many" quantifier)?
And now: the connection between the first part of my question and the second one(s). If ESO is compact for uncountable languages with constants, then it is trivial to prove that the class of at most countable models cannot have a $\Sigma^1_1$ definition. Simply put: assume that it has such a definition, call it $\varphi$. Add an uncountable set of constants $\{c_\alpha: \alpha < \omega_1\}$ and look at the following theory: $\{\varphi\} \cup \{c_\alpha \neq c_\beta: \alpha < \beta < \omega_1\}$. Each finite subset of this theory mentions only finitely many of the constants, so it is satisfiable in any countably infinite model of $\varphi$; but the entire theory is not satisfiable, since a model of it would have to interpret the $\omega_1$-many constants by pairwise distinct elements and would therefore be uncountable. So if we had "full" compactness for ESO, we would have the result (leaving questions (3) and (4), which are at least to me interesting, still to answer). Thanks in advance.
I’ve been doing a lot of reading on confidence interval theory. Some of the reading is more interesting than others. There is one passage from Neyman’s (1952) book “Lectures and Conferences on Mathematical Statistics and Probability” (available here) that stands above the rest in terms of clarity, style, and humor. I had not read this before the last draft of our confidence interval paper, but for those of you who have read it, you’ll recognize that this is the style I was going for. Maybe you have to be Jerzy Neyman to get away with it. Neyman gets bonus points for the footnote suggesting the “eminent”, “elderly” boss is so obtuse (a reference to Fisher?) and that the young frequentists should be “remind[ed] of the glory” of being burned at the stake. This is just absolutely fantastic writing. I hope you enjoy it as much as I did. [begin excerpt, p. 211-215] [Neyman is discussing using “sampling experiments” (Monte Carlo experiments with tables of random numbers) in order to gain insight into confidence intervals. \(\theta\) is a true parameter of a probability distribution to be estimated.] The sampling experiments are more easily performed than described in detail. Therefore, let us make a start with \(\theta_1 = 1\), \(\theta_2 = 2\), \(\theta_3 = 3\) and \(\theta_4 = 4\). We imagine that, perhaps within a week, a practical statistician is faced four times with the problem of estimating \(\theta\), each time from twelve observations, and that the true values of \(\theta\) are as above [i.e., \(\theta_1,\ldots,\theta_4\)], although the statistician does not know this. We imagine further that the statistician is an elderly gentleman, greatly attached to the arithmetic mean and that he wishes to use formulae (22). However, the statistician has a young assistant who may have read (and understood) modern literature and prefers formulae (21). Thus, for each of the four instances, we shall give two confidence intervals for \(\theta\), one computed by the elderly Boss, the other by his young Assistant. [Formulae (21) and (22) are simply different 95% confidence procedures. Formula (21) has better frequentist properties; formula (22) is inferior, but the Boss likes it because it is intuitive to him.] Using the first column on the first page of Tippett’s tables of random numbers and performing the indicated multiplications, we obtain the following four sets of figures. The last two lines give the assertions regarding the true value of \(\theta\) made by the Boss and by the Assistant, respectively. The purpose of the sampling experiment is to verify the theoretical result that the long run relative frequency of cases in which these assertions will be correct is, approximately, equal to \(\alpha = .95\). You will notice that in three out of the four cases considered, both assertions (the Boss’ and the Assistant’s) regarding the true value of \(\theta\) are correct and that in the last case both assertions are wrong. In fact, in this last case the true \(\theta\) is 4, while the Boss asserts that it is between 2.026 and 3.993 and the Assistant asserts that it is between 2.996 and 3.846. Although the probability of success in estimating \(\theta\) has been fixed at \(\alpha = .95\), the failure on the fourth trial need not discourage us. In reality, a set of four trials is plainly too short to serve for an estimate of a long run relative frequency. Furthermore, a simple calculation shows that the probability of at least one failure in the course of four independent trials is equal to .1855.
Therefore, a group of four consecutive samples like the above, with at least one wrong estimate of \(\theta\), may be expected one time in six or even somewhat oftener. The situation is, more or less, similar to betting on a particular side of a die and seeing it win. However, if you continue the sampling experiment and count the cases in which the assertion regarding the true value of \(\theta\), made by either method, is correct, you will find that the relative frequency of such cases converges gradually to its theoretical value, \(\alpha= .95\). Let us put this into more precise terms. Suppose you decide on a number \(N\) of samples which you will take and use for estimating the true value of \(\theta\). The true values of the parameter \(\theta\) may be the same in all \(N\) cases or they may vary from one case to another. This is absolutely immaterial as far as the relative frequency of successes in estimation is concerned. In each case the probability that your assertion will be correct is exactly equal to \(\alpha = .95\). Since the samples are taken in a manner insuring independence (this, of course, depends on the goodness of the table of random numbers used), the total number \(Z(N)\) of successes in estimating \(\theta\) is the familiar binomial variable with expectation equal to \(N\alpha\) and with variance equal to \(N\alpha(1-\alpha)\). Thus, if \(N = 100\), \(\alpha = .95\), it is rather improbable that the relative frequency \(Z(N)/N\) of successes in estimating \(\theta\) will differ from \(\alpha\) by more than \( 2\sqrt{\frac{\alpha(1-\alpha)}{N}} = .042 \) This is the exact meaning of the colloquial description that the long run relative frequency of successes in estimating \(\theta\) is equal to the preassigned \(\alpha\). Your knowledge of the theory of confidence intervals will not be influenced by the sampling experiment described, nor will the experiment prove anything. However, if you perform it, you will get an intuitive feeling of the machinery behind the method which is an excellent complement to the understanding of the theory. This is like learning to drive an automobile: gaining experience by actually driving a car compared with learning the theory by reading a book about driving. Among other things, the sampling experiment will attract attention to the frequent difference in the precision of estimating \(\theta\) by means of the two alternative confidence intervals (21) and (22). You will notice, in fact, that the confidence intervals based on \(X\), the greatest observation in the sample, are frequently shorter than those based on the arithmetic mean \(\bar{X}\). If we continue to discuss the sampling experiment in terms of cooperation between the eminent elderly statistician and his young assistant, we shall have occasion to visualize quite amusing scenes of indignation on the one hand and of despair before the impenetrable wall of stiffness of mind and routine of thought on the other. [See footnote] For example, one can imagine the conversation between the two men in connection with the first and third samples reproduced above. You will notice that in both cases the confidence interval of the Assistant is not only shorter than that of the Boss but is completely included in it. Thus, as a result of observing the first sample, the Assistant asserts that \( .956 \leq \theta \leq 1.227. \) On the other hand, the assertion of the Boss is far more conservative and admits the possibility that \(\theta\) may be as small as .688 and as large as 1.355.
And both assertions correspond to the same confidence coefficient, \(\alpha = .95\)! I can just see the face of my eminent colleague redden with indignation and hear the following colloquy. Boss: “Now, how can this be true? I am to assert that \(\theta\) is between .688 and 1.355 and you tell me that the probability of my being correct is .95. At the same time, you assert that \(\theta\) is between .956 and 1.227 and claim the same probability of success in estimation. We both admit the possibility that \(\theta\) may be some number between .688 and .956 or between 1.227 and 1.355. Thus, the probability of \(\theta\) falling within these intervals is certainly greater than zero. In these circumstances, you have to be a nit-wit to believe that \[ \begin{eqnarray*} P\{.688 \leq \theta \leq 1.355\} &=& P\{.688 \leq \theta < .956\} + P\{.956 \leq \theta \leq 1.227\}\\ && + P\{1.227 \leq \theta \leq 1.355\}\\ &=& P\{.956 \leq \theta \leq 1.227\}.\mbox{”} \end{eqnarray*} \] Assistant: “But, Sir, the theory of confidence intervals does not assert anything about the probability that the unknown parameter \(\theta\) will fall within any specified limits. What it does assert is that the probability of success in estimation using either of the two formulae (21) or (22) is equal to \(\alpha\).” Boss: “Stuff and nonsense! I use one of the blessed pair of formulae and come up with the assertion that \(.688 \leq \theta \leq 1.355\). This assertion is a success only if \(\theta\) falls within the limits indicated. Hence, the probability of success is equal to the probability of \(\theta\) falling within these limits —.” Assistant: “No, Sir, it is not. The probability you describe is the a posteriori probability regarding \(\theta\), while we are concerned with something else. Suppose that we continue with the sampling experiment until we have, say, \(N = 100\) samples. You will see, Sir, that the relative frequency of successful estimations using formulae (21) will be about the same as that using formulae (22) and that both will be approximately equal to .95.” I do hope that the Assistant will not get fired. However, if he does, I would remind him of the glory of Giordano Bruno who was burned at the stake by the Holy Inquisition for believing in the Copernican theory of the solar system. Furthermore, I would advise him to have a talk with a physicist or a biologist or, maybe, with an engineer. They might fail to understand the theory but, if he performs for them the sampling experiment described above, they are likely to be convinced and give him a new job. In due course, the eminent statistical Boss will die or retire and then —. [footnote] Sad as it is, your mind does become less flexible and less receptive to novel ideas as the years go by. The more mature members of the audience should not take offense. I, myself, am not young and have young assistants. Besides, unreasonable and stubborn individuals are found not only among the elderly but also frequently among young people. [end excerpt]
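[A quick way to get the "intuitive feeling of the machinery" Neyman recommends, without Tippett's tables, is to simulate the binomial count of successes he defines. The following sketch is my own addition, not Neyman's, and it only checks the coverage arithmetic, since the excerpt does not reproduce formulae (21) and (22) themselves.]

import random

alpha, N, runs = 0.95, 100, 10_000
# 2*sqrt(alpha*(1-alpha)/N) is about .044 with the factor 2;
# the excerpt's .042 presumably comes from the 1.96 normal quantile.
half_width = 2 * (alpha * (1 - alpha) / N) ** 0.5

within = 0
for _ in range(runs):
    # Z(N): number of successful interval estimates out of N,
    # each one succeeding independently with probability alpha.
    z = sum(random.random() < alpha for _ in range(N))
    within += abs(z / N - alpha) <= half_width

print(f"half width = {half_width:.3f}")
print(f"fraction of runs with |Z(N)/N - alpha| <= half width: {within / runs:.3f}")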
Beyoncé’s record, 4, was officially released this week, despite having been out and about on the Internet for quite a while. On that album is a song titled “1+1,” the number Bey performed so astonishingly well in her dressing room (and then later on American Idol). Because thinking about song titles is part of how we pass the time over here, the name of that track got us wondering about other songs with numbers in their names. When we realized there are just so, so many of those, we started pondering a more selective category: songs with mathematical operations in, or suggested by, their titles. Because summer is the season of mixtapes and this is the week of Beyoncé, and nothing celebrates both quite like blasting songs full of multiplication and addition (don’t ask in exactly what way this celebrates them), here is a mixtape with a math theme. If you play it loud enough and have some adorable children on hand, Beyoncé will possibly drop by your barbecue. Please, let us know any songs we’ve missed, and if you have thoughts on semi-random but highly relevant theme mixes we might assemble in the future — nerdy or otherwise — we’d love to hear them.
Track 0. “Two Divided by Zero,” Pet Shop Boys
Track .000001. “A Million to 1,” Kiss
Track .01. “1% of One,” Stephen Malkmus
Track .5. “2/4,” Clinic
Track .66. “Two Out of Three Ain’t Bad,” Meat Loaf
Track 1. “50/50,” Frank Zappa / Mothers
Track 1a. “5-4 = Unity,” Pavement
Track 1.25. “5/4,” Gorillaz
Track 2. “1+1,” Beyoncé
Track 4. “2x2,” Bob Dylan
Track 5. “2 + 2 = 5,” Radiohead
Track 5a. “5 to 1,” The Doors
Track 9. “If 6 Was 9,” Jimi Hendrix
Track 14. “7 and 7 Is,” Love
Track 17-20. “Between 17 and 20,” Elton John
Track 24. “4 + 20,” Crosby, Stills, Nash & Young
Track 44. “22 Two’s,” Jay-Z
Track 490. “Seventy Times Seven,” Brand New
Track 910. “1 after 909,” The Beatles
Track Infinity. “Love Plus One,” Haircut One Hundred
Bonus Tracks: “B + A,” The Beta Band; “\Delta M_i^{-1} = - \alpha \sum_{n=1}^N D_i \left[ n \right] \left[ \sum_{j \in C \left[ i \right]}^{} F_{ji} \left[ n-1 \right] + Fext_i \left[ n^{-1} \right] \right],” Aphex Twin [See also, here, track 2]
And you can hear them all in sequence if you listen to them below.
Forward-backward multiplicity correlations in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Rapidity and transverse-momentum dependence of the inclusive J/$\psi$ nuclear modification factor in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...

Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...

Inclusive, prompt and non-prompt J/$\psi$ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/$\psi$ have been measured with ALICE for Pb-Pb collisions ...
On a case-by-case basis, it is possible to typeset text within math mode using \textrm{...} or \mathrm{...}, the latter being used predominantly for typesetting units or symbols and not pure text (since it gobbles spaces that are not escaped). \mbox{...} is another alternative to \textrm{...}, since it resets its contents to text mode by default. Here are some examples: \documentclass{article} \begin{document} Here is a formula: $x=\exp(\log \mathrm{x})$ Here is another: $\sin^2 t+\cos^2 t = \textrm{famous identity}$ \end{document} The above font changes do not work that well in general, since switching to a different font when using sub- and superscripts, say, does not always scale as expected. There are ways around it though. For example, using \text{...} from the amstext package (automatically loaded by amsmath - see the AMS package dependencies), which switches to the appropriate font size via \mathchoice: \documentclass{article} \usepackage{amsmath}% http://ctan.org/pkg/amsmath % amsmath loads the amstext package by default \begin{document} Here is a formula: $x=y^{abc}$ Here it is again: $x=y^{\mbox{abc}}$ Compare that to: $x=y^{\text{abc}}$ \end{document}
In mathematics, trigonometric substitution is the substitution of trigonometric functions for other expressions. One may use the trigonometric identities to simplify certain integrals containing radical expressions: [1][2]

Substitution 1. If the integrand contains $a^2 - x^2$, let $x = a \sin(\theta)$ and use the identity $1-\sin^2(\theta) = \cos^2(\theta)$.

Substitution 2. If the integrand contains $a^2 + x^2$, let $x = a \tan(\theta)$ and use the identity $1+\tan^2(\theta) = \sec^2(\theta)$.

Substitution 3. If the integrand contains $x^2 - a^2$, let $x = a \sec(\theta)$ and use the identity $\sec^2(\theta)-1 = \tan^2(\theta)$.

In the integral [...] we may use [...]. Note that the above step requires that $a > 0$ and $\cos(\theta) > 0$; we can choose $a$ to be the positive square root of $a^2$, and we impose the restriction $-\pi/2 < \theta < \pi/2$ on $\theta$ by using the arcsin function.

For a definite integral, one must figure out how the bounds of integration change. For example, as $x$ goes from $0$ to $a/2$, $\sin(\theta)$ goes from $0$ to $1/2$, so $\theta$ goes from $0$ to $\pi/6$. Then we have [...]. Some care is needed when picking the bounds: the integration above requires that $-\pi/2 < \theta < \pi/2$, so $\theta$ going from $0$ to $\pi/6$ is the only choice. If we had missed this restriction, we might have picked $\theta$ to go from $\pi$ to $5\pi/6$, which would give us the negative of the result.

[In a further example, lost in extraction,] we may write [...] so that the integral becomes [...] (provided $a \neq 0$).

Integrals like [...] should be done by partial fractions rather than trigonometric substitutions. However, the integral [...] can be done by substitution; we can then solve this using the formula for the integral of secant cubed.

Substitution can also be used to remove trigonometric functions. In particular, see Tangent half-angle substitution. For instance, [...].

Substitutions of hyperbolic functions can also be used to simplify integrals.[3] In the integral $\int \frac{1}{\sqrt{a^2+x^2}}\,\mathrm dx$, make the substitution $x=a\sinh{u}$, $\mathrm dx=a\cosh{u}\,\mathrm du$. Then, using the identities $\cosh^2 (x) - \sinh^2 (x) = 1$ and $\sinh^{-1}{x} = \ln(x + \sqrt{x^2 + 1})$,

$$\begin{align} \int \frac{1}{\sqrt{a^2+x^2}}\,\mathrm dx &= \int \frac{a\cosh{u}}{\sqrt{a^2+a^2\sinh^2{u}}}\,\mathrm du = \int \frac{a\cosh{u}}{a\sqrt{1+\sinh^2{u}}}\,\mathrm du = \int \frac{a\cosh{u}}{a\cosh{u}}\,\mathrm du \\ &= u+C = \sinh^{-1}{\frac{x}{a}}+C = \ln\left(\sqrt{\frac{x^2}{a^2} + 1} + \frac{x}{a}\right) + C = \ln\left(\frac{\sqrt{x^2+a^2} + x}{a}\right) + C \end{align}$$
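Since the article's own worked examples for Substitution 1 were lost above, here is a standard illustration of the same technique (my choice of integrand, not necessarily the article's), using exactly the bound change $x: 0 \to a/2$, $\theta: 0 \to \pi/6$ discussed in the text:

$$\int_0^{a/2} \frac{dx}{\sqrt{a^2 - x^2}} \;=\; \int_0^{\pi/6} \frac{a\cos\theta \, d\theta}{\sqrt{a^2 - a^2\sin^2\theta}} \;=\; \int_0^{\pi/6} \frac{a\cos\theta}{a\cos\theta}\, d\theta \;=\; \frac{\pi}{6},$$

where we substituted $x = a\sin\theta$, $dx = a\cos\theta\,d\theta$, and used $\sqrt{a^2\cos^2\theta} = a\cos\theta$; this last step is exactly where the requirements $a > 0$ and $\cos\theta > 0$ enter.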
GR8677 #19

Problem: Quantum Mechanics / Bohr Theory. Recall the Rydberg energy. QED

Comments

jeka 2007-02-17 08:08:36: The energy spectrum of the hydrogen atom is given by the equation $E_n = -R/n^2$, where $R \approx 13.6\ \mathrm{eV}$ is the Rydberg constant. So the right answer is (E).

FortranMan 2008-10-16 23:10:09: According to Griffiths, the allowed energies for a hydrogen atom are $E_n = E_1/n^2$, where $E_1 \approx -13.6\ \mathrm{eV}$ is the ground state of the hydrogen atom. The Rydberg constant is defined as [...], and thus the energy levels are given as [...]. Not entirely necessary to solve the problem, but it's safer to keep your terms straight.

VKB 2014-03-25 21:44:48: It's a good approach to solve problems at home, interesting.
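Since the problem only asks you to recall the Rydberg energy, the level structure is easy to tabulate in a few lines (my own illustration; the names are made up):

RYDBERG_EV = 13.6  # hydrogen ground-state binding energy in eV

def hydrogen_level(n: int) -> float:
    """Bohr-model energy of hydrogen level n, in eV: E_n = -13.6 / n**2."""
    return -RYDBERG_EV / n**2

for n in range(1, 5):
    print(n, hydrogen_level(n))   # -13.6, -3.4, -1.51, -0.85

# The transition n=2 -> n=1 releases about 10.2 eV (the Lyman-alpha line):
print(hydrogen_level(2) - hydrogen_level(1))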
This is a continuation of Part 1 of this series of posts on $q$-analogs. Counting by $q$’s Another important area in which $q$-analogs come up is in combinatorics. In this context, $q$ is a formal variable, and the $q$-analog is a generating function in $q$, but viewed in a different light than usual generating functions. We think of the $q$-analog as “$q$-counting” a set of weighted objects, where the weights are given by powers of $q$. Say you’re trying to count permutations of $1,\ldots,n$, that is, ways of rearranging the numbers $1,\ldots,n$ in a row. There are $n$ ways to choose the first number, and once we choose that there are $n-1$ remaining choices for the second, then $n-2$ for the third, and so on. So there are $n!=n\cdot (n-1)\cdot \cdots \cdot 2\cdot 1$ ways to rearrange the entries. For instance, the $3!=6$ permutations of $1,2,3$ are $123$, $132$, $213$, $231$, $312$, $321$. Now, say we weight the permutations according to how “mixed up” they are, in the sense of how many pairs of numbers are out of order. An inversion is a pair of entries in the permutation in which the bigger number is to the left of the smaller, and $\mathrm{inv}(\pi)$ denotes the number of inversions of the permutation $\pi$. The table below shows the permutations of $1,2,3$ along with the number of inversions they contain. $$\begin{array}{ccc} p & \mathrm{inv}(p) & q^{\mathrm{inv}(p)}\\\hline 123 & 0 & 1 \\ 132 & 1 & q\\ 213 & 1 & q\\ 231 & 2 & q^2\\ 312 & 2 & q^2 \\ 321 & 3 & q^3 \end{array} $$ We weight each permutation $p$ by $q^{\mathrm{inv}(p)}$, and $q$-count by summing these $q$-powers, to form the sum $$\sum_{p\in S_n}q^{\mathrm{inv}(p)}$$ where $S_n$ is the set of all permutations of $1,\ldots,n$. So for $n=3$, the sum is $1+2q+2q^2+q^3$ by our table above. We now come to an important philosophical distinction between $q$-analogs and generating functions. As a generating function, the sum $1+2q+2q^2+q^3$ is thought of in terms of the sequence of coefficients, $1,2,2,1$. Generatingfunctionologically, we might instead write the sum as $\sum_{i=0}^\infty c_i q^i$ where $c_i$ is the number of permutations of length $n$ with $i$ inversions. But in the $q$-analog notation, $\sum_{p\in S_n}q^{\mathrm{inv}(p)}$, we understand that it is not the coefficients but rather the exponents of our summation that we are interested in. In general, a combinatorial $q$-analog can be defined as a summation of $q$-powers $q^{\mathrm{stat}(p)}$, where $p$ ranges over a certain set of combinatorial objects and $\mathrm{stat}$ is a statistic on these objects. Recall that we defined an “interesting $q$-analog” of an expression $P$ to be an expression $P_q$ such that:

1. Setting $q=1$ or taking the limit as $q\to 1$ results in $P$,
2. $P_q$ can be expressed in terms of (possibly infinite) sums or products of rational functions of $q$ over some field,
3. $P_q$ gives us more refined information about something that $P$ describes, and
4. $P_q$ has $P$-like properties.

Certainly setting $q=1$ in a combinatorial $q$-analog results in the total number of objects, and the $q$-analog gives us more information about the objects than just their total number. It’s also a polynomial in $q$, so it satisfies properties 1, 2, and 3 above. Let’s now see how our $q$-analog $\sum_{p\in S_n}q^{\mathrm{inv}(p)}$, which is a $q$-analog of $n!$, also satisfies property 4. Notice that $1+2q+2q^2+q^3$ factors as $(1)(1+q)(1+q+q^2)$. Indeed, in general this turns out to be the same $q$-factorial we saw in the last post!
That is, $$\sum_{p\in S_n}q^{\mathrm{inv}(p)}=(1)(1+q)(1+q+q^2)\cdots(1+q+\cdots+q^{n-1})=(n)_q!.$$ So it satisfies property 4 by exhibiting a product formula like $n!$ itself. I posted a proof of this fact in this post, but let’s instead prove it by building up the theory of $q$-counting from the ground up. The multiplication principle in combinatorics is the basic fact that the number of ways of choosing one thing from a set of $m$ things and another from a set of $n$ things is the product $m\cdot n$. But what if the things are weighted? $q$-Multiplication Principle: Given two weighted sets $A$ and $B$ with $q$-counts $M(q)$ and $N(q)$, the $q$-count of the ways of choosing one element from $A$ and another from $B$ is the product $M(q)N(q)$, where the weight of a pair is the sum of the weights of the elements. Let’s see how this plays out in the case of $(n)_q!$. If each entry in the permutation is weighted by the number of inversions it forms with smaller entries (to its right), then the first entry can be any of $1,2,\ldots,n$, which contributes a factor of $1+q+q^2+\cdots+q^{n-1}$ to the total. The next entry can then be any of the $n-1$ remaining entries, and since the first entry cannot be the smaller entry of an inversion with the second, this choice contributes a factor of $1+q+q^2+\cdots+q^{n-2}$ to the total by the same argument. Continuing in this fashion, we get the $q$-factorial as our $q$-count. Notice that even the proof was a $q$-analog of the proof that $n!$ is the number of permutations of $1,\ldots,n$, now that we have the $q$-Multiplication Principle. That’s all for now! In the next post we’ll talk about how to use the $q$-Multiplication Principle to derive a combinatorial interpretation of the $q$-binomial coefficient, and discuss $q$-Catalan numbers and other fun $q$-analogs. Stay tuned!
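P.S. If you'd like to check the identity above numerically before trusting the proof, here is a quick brute-force sketch (plain Python; the helper names are mine):

from itertools import permutations

def inv(p):
    """Number of inversions: pairs i < j with p[i] > p[j]."""
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def inv_gen_poly(n):
    """Coefficients of the sum over S_n of q^inv(p), indexed by the power of q."""
    coeffs = [0] * (n * (n - 1) // 2 + 1)
    for p in permutations(range(1, n + 1)):
        coeffs[inv(p)] += 1
    return coeffs

def q_factorial(n):
    """Coefficients of (1)(1+q)(1+q+q^2)...(1+q+...+q^(n-1))."""
    poly = [1]
    for k in range(1, n + 1):
        new = [0] * (len(poly) + k - 1)
        for i, a in enumerate(poly):
            for j in range(k):        # multiply by 1 + q + ... + q^(k-1)
                new[i + j] += a
        poly = new
    return poly

for n in range(1, 7):
    assert inv_gen_poly(n) == q_factorial(n)
print(inv_gen_poly(3))  # [1, 2, 2, 1], i.e. 1 + 2q + 2q^2 + q^3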
Let $F_0 \subset F_1 \subset F_2 \subset \cdots$ and $K_0 \subset K_1 \subset K_2 \subset \cdots$ be two towers of fields. Also, let $F = \cup_{i=0}^\infty F_i$ and $K = \cup_{i=0}^\infty K_i$. Now suppose for each $i$ we have injective homomorphisms from $F_i$ to $K_{\sigma(i)}$ and from $K_i$ to $F_{\mu(i)}$, where $i \leq \sigma(i)$ and $i \leq \mu(i)$. In other words, each field $F_i$ is isomorphic to a subfield of some $K_j$ with $j \geq i$, and each field $K_i$ is isomorphic to a subfield of some $F_j$ with $j \geq i$. [Think of the two towers sitting next to each other with arrows pointing diagonally upward.] My question: can we conclude that $F \cong K$? A colleague asked me this question some time ago. I came up with a sketch of a proof for the case when $F_{i+1}$ is an algebraic extension of $F_i$ and $K_{i+1}$ is an algebraic extension of $K_i$ for each $i$. I suspect it's false in general [something to do with the fact that injective and surjective aren't equivalent for maps between infinite-dimensional spaces]. Does anybody know a counterexample for the general case? I would also appreciate a reference for the algebraic case (where I'm 99% sure it's true). Thanks!
I would like to solve the 3D transient incompressible Navier-Stokes equations with FEM, Newton's method, a Schur-complement-based preconditioner, and Lagrangian P2/P1 elements (no stabilization), in a rigid pipe discretized with tetrahedra. The fluid is initially at rest; I impose constant inlet and outlet pressure boundary conditions (and therefore a constant pressure gradient), and I would like to reach the Poiseuille steady-state solution after a while. The Reynolds number is within the laminar regime, and the CFL condition holds! After a couple hundred time steps, the pressure becomes unstable and everything breaks. I impose the pressure b.c. by adding $-\int_\Gamma p \eta N_i \, dx$ to the momentum equation, where $\eta$ is the outward unit normal vector and $N_i$ is the $P_2$ basis function associated with the boundary nodes. Nothing out of the standard, I think. However, when I visualize the result of this integral in Paraview, the boundary looks like a checkerboard: there are nonzero values at the d.o.f. associated with the edges, but nearly ZERO values at the d.o.f. associated with the vertices! The explanation I used to convince myself is that the shape functions associated with vertices have zero integral when they are restricted to triangles (actually, unless I have made a bizarre mistake, the $P_2$ shape functions associated with the vertices of a triangle really do have zero integral over it). P.S.: the velocity profile around the last successful time steps before stopping also has a kind of checkerboard pattern. A few issues puzzle me: 1) everything works fine if I keep the same pressure gradient, mesh, Reynolds number, etc., but impose a parabolic velocity profile (given by the Poiseuille formula) or a constant velocity (and then I get the Poiseuille profile after the entrance length); 2) the code which evaluates the pressure integral above is exactly the same as the one used by another (solid mechanics) solver, validated with inhomogeneous & transient traction boundary conditions, which uses P1 elements instead; 3) if I refine the mesh, I still have the checkerboard pattern, but the pressure imposition setup somehow works! I did not expect mesh dependency. To be clear, this checkerboard pattern is not the pressure, but the result of the integral above. My admittedly vague question is: what am I missing? Let me know if the problem statement is clear. By the way, do you know any other way to impose pressure? I've already heard about introducing 1's at the (nonexistent) boundary pressure entries of the continuity equation ($\nabla \cdot v = 0$), but I am uncomfortable with this idea if I don't use any stabilization. Any hint is highly appreciated! Thanks!
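For what it's worth, the vanishing-integral explanation can be checked symbolically. The sketch below is my own verification, not part of the original setup: on the reference triangle, the $P_2$ basis function attached to a vertex integrates to zero, while the one attached to an edge midpoint integrates to 1/6, which would produce exactly the vertex/edge checkerboard described above.

import sympy as sp

x, y = sp.symbols('x y')
lam = [1 - x - y, x, y]               # barycentric coordinates on the reference triangle

N_vertex = lam[0] * (2 * lam[0] - 1)  # P2 basis function at a vertex
N_edge = 4 * lam[0] * lam[1]          # P2 basis function at an edge midpoint

def integral_over_reference_triangle(f):
    return sp.integrate(sp.integrate(f, (y, 0, 1 - x)), (x, 0, 1))

print(integral_over_reference_triangle(N_vertex))  # prints 0
print(integral_over_reference_triangle(N_edge))    # prints 1/6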
The Problem

Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. (A short code sketch expanding a wildcard template into its line appears just after the thread list below.) A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].

Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]

The original proof of DHJ used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.

Threads

(1-199) A combinatorial approach to density Hales-Jewett (inactive)
(200-299) Upper and lower bounds for the density Hales-Jewett problem (active)
(300-399) The triangle-removal approach (inactive)
(400-499) Quasirandomness and obstructions to uniformity (final call)
(500-599) TBA
(600-699) A reading seminar on density Hales-Jewett (active)

A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here.
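To make the wildcard notation concrete, here is a minimal sketch (mine, not from the wiki) that expands a template into the combinatorial line it defines:

def combinatorial_line(template):
    # Replace every wildcard 'x' simultaneously by 1, 2 and 3.
    assert 'x' in template, "a line template needs at least one wildcard"
    return [''.join(k if c == 'x' else c for c in template) for k in '123']

print(combinatorial_line('112x1xx3'))
# ['11211113', '11221223', '11231333'], matching the example above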
Unsolved questions

Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose.

IP-Szemeredi (a weaker problem than DHJ)

Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by [math]d[/math] numbers by taking all the [math]2^d[/math] possible sums. So, if the [math]d[/math] numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^d[/math].) Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any c-dense subset of the Cartesian product of an IP_d set (it is a two dimensional pointset) has a corner.

The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later.) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines having nonempty intersection with the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our c-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma.

Finally, let me prove that there is a square if [math]d[/math] is large enough compared to [math]c[/math]. Every point of the Cartesian product has two coordinates, each a 0,1 sequence of length d. It has a one to one mapping to [4]^d: given a point ((x_1,…,x_d),(y_1,…,y_d)), where the x_i, y_j are 0 or 1, it maps to (z_1,…,z_d), where z_i=0 if x_i=y_i=0, z_i=1 if x_i=1 and y_i=0, z_i=2 if x_i=0 and y_i=1, and finally z_i=3 if x_i=y_i=1. Any combinatorial line in [4]^d defines a square in the Cartesian product, so the density HJ implies the statement.

Gowers.7: With reference to Jozsef’s comment, if we suppose that the d numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product. This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences.
But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do. I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler.

Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs (U,V) of subsets of [n], and let’s take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U,(V\cup D)\setminus C), where D is disjoint from both U and V and C is contained in both U and V. That is your original problem, I think. I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of [n]. We join U\in X to V\in Y if (U,V)\in A. Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all.

O'Donnell.35: Just to confirm I have the question right… There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the six bits (x_i, x'_i), (y_i, y'_i), (z_i, z'_i) are equal to one of the following six column patterns?

x_i x'_i :  00  00  01  10  11  11
y_i y'_i :  00  01  01  10  10  11
z_i z'_i :  00  10  01  10  01  11

McCutcheon.469: IP Roth: Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$. Presumably, this should be (perhaps much) simpler than DHJ, k=3.

High-dimensional Sperner

Kalai.29: There is an analogue for Sperner but with high dimensional combinatorial spaces instead of “lines”, but I do not remember the details (Kleitman? Katona? those are the usual suspects.)

Fourier approach

Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express, using the Fourier expansion of f, the expression \int f(x)f(y)1_{x<y}, where x<y is the partial order (=containment) for 0-1 vectors.
Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random, and otherwise we can raise the density for subspaces. (OK, you can try it directly for the k=3 density HJ problem too, but Sperner would be easier;) This is not unrelated to the regularity philosophy.

Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again. The problem was that the natural Fourier basis in [3]^n was the basis you get by thinking of [3]^n as the group \mathbb{Z}_3^n. And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point of the line automatically lies in the set as well. So this set A has too many combinatorial lines. But I’m fairly sure — perhaps you can confirm this — that A has no large Fourier coefficient. You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of [n] and just ask that the numbers of 0s, 1s and 2s inside W are multiples of 7.

(A small numerical check of the n = 7 case of this example appears at the end of this page.)

DHJ for dense subsets of a random set

Tao.18: A sufficiently good Varnavides type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of [3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic.
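As a numerical footnote to the example in Gowers.31 (my own sketch, not from the thread): for n = 7, the set A of strings whose symbol counts are all multiples of 7 contains a full combinatorial line, even though a random set of the same density would essentially never contain one.

from itertools import product

n = 7
A = {s for s in product('123', repeat=n)
     if all(s.count(c) % 7 == 0 for c in '123')}

lines_in_A = total_lines = 0
for t in product('123x', repeat=n):
    if 'x' not in t:
        continue                  # a line template needs a wildcard
    total_lines += 1
    points = [tuple(k if c == 'x' else c for c in t) for k in '123']
    lines_in_A += all(p in A for p in points)

density = len(A) / 3**n
print(len(A), lines_in_A, total_lines * density**3)
# prints 3, 1, ~3.7e-05: one actual line in A versus roughly
# 0.00004 lines expected for a random set of the same density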
Let $\mathcal{O}_0$ be the principal block of the BGG category $\mathcal{O}$ for a finite dimensional simple Lie algebra over $\mathbb{C}$. For an element $w$ in the Weyl group $W$, let $\Delta_w$ denote the Verma module with highest weight $w_0w^{-1}\cdot 0$, where $w_0\in W$ is the longest element and $\cdot$ denotes the dot-action. It is well known that $\mathrm{dim}(\mathrm{Hom}(\Delta_v, \Delta_w)) \leq 1.$ This fact is pretty straightforward to prove algebraically. However, I do not know how to see this topologically. Namely, I do not know how to prove this via the interpretation of Verma modules as perverse sheaves on the flag variety. I would be grateful if someone could explain how to see this fact topologically.

Added later: In response to Jim Humphreys' comment let me add some motivation:

1) In this regard I think of category $\mathcal{O}$ as a "toy example". I would like to know in what generality this fact holds. For instance, is the corresponding statement true for perverse sheaves smooth along a stratification given by affine spaces? The latter is certainly a highest weight category, computations in it can be undertaken topologically, etc. So as a starting point I would like to understand the topological reason for its truth in the "toy example".

2) In the same vein as 1), I would like to know whether this truly is a "geometric" fact, i.e., does it hold if I consider my sheaves with coefficients in a commutative ring, say?

3) Computing extension groups of Verma modules is an old problem. If there is any hope of doing this topologically, I would think a reasonable place to start would be to compute $\mathrm{Ext}^0$ topologically!

4) In the same vein as 3), one can see that the extensions of Verma modules are given by (compactly supported) cohomology (appropriately shifted) of intersections of Schubert cells with opposite Schubert cells. This is related to my earlier questions: the above fact about homomorphisms between Vermas translates to the lowest non-vanishing (compactly supported) cohomology being one dimensional. These are smooth affine varieties, but (at least in low ranks) their Betti numbers satisfy a curious "Poincare duality"/palindromic type phenomenon. This phenomenon is even more starkly visible if one further looks at the Hodge numbers. Amusingly, since these varieties are smooth and irreducible, one immediately gets that the highest non-vanishing extension group (when it is possible to have morphisms between the Vermas) is one dimensional and concentrated in the "right" degree. This latter fact can also be shown algebraically, but requires a careful argument using translation functors (which can also be done geometrically without ever knowing anything about the intersections, but now I am digressing). Anyway, a topological reason as in my question may hopefully give some insight as to whether this palindromic phenomenon is a low rank coincidence or has any hope of holding in general.

Apologies if any of the reasons above are too vague/ranting; I didn't want to throw all of that into my original question in case the answer was something blatantly obvious that I had been overlooking.
In this MathStackExchange post the question in the title was asked without much outcome, I feel. Edit: As Douglas Zare kindly observes, there is one more answer on MathStackExchange now. I am not used to basic Probability, and I am trying to prepare a class that I need to teach this year. I feel I am unable to motivate the introduction of random variables. After spending some time speaking about Kolmogoroff's axioms I can explain that they make the following sentence true and meaningful: The probability that, tossing a coin $N$ times, I get $n\leq N$ tails equals $$\tag{$\ast$}{N \choose n}\cdot\Big(\frac{1}{2}\Big)^N.$$ But now people (i.e. books I can find) introduce the "random variable $X\colon \Omega\to\mathbb{R}$ which takes values $X(\text{tails})=1$ and $X(\text{heads})=0$" and say that it follows the binomial rule. To do this, they need a probability space $\Omega$: but once one has it, one can prove statement $(\ast)$ above. So, what is the usefulness of this $X$ (and of random variables in general)?

Added: So far my question was admittedly too vague, and I will try to emend it. Given a discrete random variable $X\colon\Omega\to\mathbb{R}$ taking values $\{x_1,\dots,x_n\}$ I can define $A_k=X^{-1}(\{x_k\})$ for all $1\leq k\leq n$. The study of the random variable then becomes the study of the values $p(A_k)$, $p$ being the probability on $\Omega$. Therefore, it seems to me that we have not gone one step further in the understanding of $\Omega$ (or of the problem modelled by $\Omega$) thanks to the introduction of $X$. Often I read that there is the possibility of having a family $X_1,\dots,X_n$ of random variables on the same space $\Omega$, and some results (like the CLT) say something about them. But then I know no example (and would be happy to discover one) of a problem truly modelled by this, whereas in most examples that I read there is either a single random variable, or the understanding of $n$ of them requires the understanding of the power $\Omega^n$ of some previously-introduced measure space $\Omega$. It seems to me (though I admit to having no rigorous proof) that given the above $n$ random variables on $\Omega$ there should exist an $\Omega'$, probably much bigger, with a single $X\colon\Omega'\to\mathbb{R}$ "encoding" the same information as $\{X_1,\dots,X_n\}$. In this case, we are back to using "only" indicator functions. I understand that this process breaks down if we want to let $n\to \infty$, but I also suspect that there might be a deeper reason for studying random variables. All in all, my doubts come from the fact that random variables still look to me like a poorer object than a measure (or, probably, than a $\sigma$-algebra $\mathcal{F}$ together with a measure whose generated $\sigma$-algebra is finer than $\mathcal{F}$, or something like this); yet they are introduced, studied, and look central in the theory. I wonder where I am wrong.

Caveat: For some reason, many people in comments below objected that "throwing random variables away is ridiculous" or that I "should try to come up with something more clever, then, if I think they are not good". That was not my point. I am sure they must be useful, otherwise textbooks would not introduce them. But I was unable to understand why; many useful and kind answers below helped much.
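Here is a minimal simulated example (mine, not from the post) of the kind of situation the CLT addresses: many random variables $X_1,\dots,X_N$ living on one and the same $\Omega$, here the space of $N$-toss sequences, with $X_i(\omega)$ the $i$-th toss:

```python
import random

# Omega = {0,1}^N, one omega = one run of N fair-coin tosses.
# X_i(omega) = omega[i] are N random variables on the same space, and the
# empirical mean (X_1 + ... + X_N)/N concentrates around 1/2 with
# variance 1/(4N), as the law of large numbers and the CLT predict.
N, trials = 100, 10_000

means = []
for _ in range(trials):
    omega = [random.randint(0, 1) for _ in range(N)]   # one sample point
    means.append(sum(omega) / N)                        # (X_1 + ... + X_N)/N

m = sum(means) / trials
v = sum((x - m) ** 2 for x in means) / trials
print(f"mean ~ {m:.3f} (expect 0.5), var ~ {v:.5f} (expect {1/(4*N):.5f})")
```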
The polynomial Freiman-Ruzsa conjecture states that there exists $k > 0$ such that for all $\epsilon > 0$, all large $n$, and all functions $f: \mathbb{F}_2^n \to \mathbb{F}_2^n$, the following holds. If $$\Pr_{x, x' \in \mathbb{F}_2^n} [f(x) + f(x') = f(x+x')] \geq \epsilon \;,$$ then there is a matrix $M \in \mathbb{F}_2^{n \times n}$ such that $$\Pr_{x \in \mathbb{F}_2^n} [ f(x) = M x] \geq \epsilon^k\;.$$ My question is whether we can prove this conjecture for the special case when $f$ is known to be a bijection.
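The two probabilities in the statement are easy to explore numerically. A toy sketch (my own illustration, not part of the question): for a random bijection the additive-triple probability is only about $2^{-n}$, while for a linear bijection such as the identity it equals $1$:

```python
import random

# Estimate Pr_{x,y}[f(x) + f(y) = f(x+y)] over F_2^n, with + realized as XOR.
n = 10
N = 1 << n

def prob(f, samples=200_000):
    hits = 0
    for _ in range(samples):
        x, y = random.randrange(N), random.randrange(N)
        hits += (f[x] ^ f[y]) == f[x ^ y]
    return hits / samples

perm = list(range(N))
random.shuffle(perm)                      # a "generic" bijection
print("random bijection:", prob(perm))    # ~ 2^-n = 1/1024
print("identity (linear):", prob(list(range(N))))  # exactly 1.0
```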
This calculator uses two matrices $A$ and $B$ and calculates the product $AB$. It is an online math tool specially programmed to perform the multiplication operation between the two matrices $A$ and $B$. Matrix multiplication is not a commutative operation. In the matrix multiplication $AB$, the number of columns in matrix $A$ must be equal to the number of rows in matrix $B$.

Matrices are a powerful tool in mathematics, science and life. Matrices are everywhere and they have significant applications. For example, a spreadsheet such as Excel, or a written table, represents a matrix. The word "matrix" is Latin and means "womb"; the term was introduced by J. J. Sylvester (English mathematician) in 1850. The first need for matrices arose in the study of systems of simultaneous linear equations. A matrix is a rectangular array of numbers, arranged in the following way $$A=\left( \begin{array}{cccc} a_{11} & a_{12} & \ldots&a_{1n} \\ a_{21} & a_{22} & \ldots& a_{2n} \\ \ldots &\ldots &\ldots&\ldots\\ a_{m1} & a_{m2} & \ldots&a_{mn} \\ \end{array} \right)=\left[ \begin{array}{cccc} a_{11} & a_{12} & \ldots&a_{1n} \\ a_{21} & a_{22} & \ldots& a_{2n} \\ \ldots &\ldots &\ldots&\ldots\\ a_{m1} & a_{m2} & \ldots&a_{mn} \\ \end{array} \right]$$ There are two notations for a matrix: parentheses or box brackets. The terms in the matrix are called its entries or its elements. Matrices are most often denoted by upper-case letters, while the corresponding lower-case letters, with two subscript indices, denote their elements. For example, matrices are denoted by $A,B,\ldots, Z$ and their elements by $a_{11}$ or $a_{1,1}$, etc. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively. The size of a matrix is given by the number of rows and the number of columns it contains: a matrix with $m$ rows and $n$ columns is called an $m\times n$ matrix, and $m$ and $n$ are its dimensions. If a matrix consists of only one row, it is called a row matrix; if it consists of only one column, it is called a column matrix. A matrix which contains only zeros as elements is called a zero matrix. A square matrix is a matrix with the same number of rows and columns. A square matrix whose elements are all zero except for the main diagonal, which has only ones, is called an identity matrix. For instance, the following matrices $$I_1=(1),\; I_2=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right),\ldots ,I_n=\left( \begin{array}{cccc} 1 & 0 & \ldots & 0 \\ 0 & 1 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots \\ 0 & 0 & \ldots & 1 \\ \end{array} \right)$$ are identity matrices of size $1\times1$, $2\times 2, \ldots$, $n\times n$, respectively. Many operations with matrices make sense only if the matrices have suitable dimensions; for instance, two matrices can be added only if they are the same size, with the same number of rows and the same number of columns. When we deal with matrix multiplication, matrices $A=(a_{ij})_{m\times p}$ with $m$ rows and $p$ columns and $B=(b_{ij})_{r\times n}$ with $r$ rows and $n$ columns can be multiplied if and only if $p=r$, i.e. the number of columns of the first matrix, $A$, must equal the number of rows of the second matrix, $B$. The product is a new matrix with the same number of rows as the first matrix, $A$, and the same number of columns as the second matrix, $B$; so the product $C=A\cdot B$ is a matrix of size $m\times n$.
Elements $c_{ij}$ of this matrix are $$c_{ij}=a_{i1}b_{1j}+a_{i2}b_{2j}+\ldots+a_{ip}b_{pj}\quad\mbox{for}\;i=1,\ldots,m,\;j=1,\ldots,n.$$ For example, $3\times 3$ matrix multiplication is determined by the following formula $$\begin{align}&\left( \begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{array} \right)\cdot \left( \begin{array}{ccc} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} &b_{32} & b_{33} \\ \end{array} \right)\\&= \left(\begin{array}{ccc} a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31}& a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32}& a_{11}b_{13}+a_{12}b_{23}+a_{13}b_{33} \\ a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31} &a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32}& a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}\\ a_{31}b_{11}+a_{32}b_{21}+a_{33}b_{31} &a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32} & a_{31}b_{13}+a_{32}b_{23}+a_{33}b_{33}\\ \end{array}\right)\end{align}$$

Applications of Matrix Multiplication

One of the main applications of matrix multiplication is solving systems of linear equations. Transformations in two- or three-dimensional Euclidean geometry can also be represented by $2\times 2$ or $3\times 3$ matrices: dilation, translation, reflection across the $x$-axis, reflection across the $y$-axis, reflection across the line $y=x$, and rotations such as $90^\circ$ or $180^\circ$ counterclockwise around the origin all use $2\times 2$ and $3\times 3$ matrix multiplications.

Practice Problem 1: Find the product $AB$ for $$A=\left( \begin{array}{cc} 4& 20 \\ 5 & 5 \\ 2 &-6 \\ \end{array} \right)\quad\mbox{and}\quad B=\left( \begin{array}{cc} 3 & 2 \\ 3 & 3 \\ \end{array} \right)$$

Practice Problem 2: Find the image of the vertex matrix $\left( \begin{array}{cc} 3 & 2 \\ 3 & 3 \\ \end{array} \right)$ under a rotation of $90^\circ$ counterclockwise around the origin.

The matrix multiplication calculator, formula, example calculation (work with steps), real world problems and practice problems would be very useful for grade school students (K-12 education) to understand the matrix multiplication of two or more matrices. Using this concept they can solve systems of linear equations and other linear algebra problems in physics, engineering and computer science.
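The formula for $c_{ij}$ translates directly into a double loop with an inner sum. A minimal Python sketch (mine, for illustration), applied to Practice Problem 1:

```python
# c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_ip*b_pj
def matmul(A, B):
    m, p, n = len(A), len(B), len(B[0])
    assert all(len(row) == p for row in A), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

# Practice Problem 1: a (3 x 2) times (2 x 2) product, giving a 3 x 2 matrix.
A = [[4, 20], [5, 5], [2, -6]]
B = [[3, 2], [3, 3]]
print(matmul(A, B))  # [[72, 68], [30, 25], [-12, -14]]
```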
What is the step-by-step numerical approach to calculating the pseudo-inverse of a matrix with M rows and N columns using LU decomposition? So far I have found this, but it uses singular value decomposition.

Given an $m\times n$ matrix $\mathbf A$, there are a number of non-SVD methods for computing the Moore-Penrose inverse. Most of them require an accurate determination of "numerical rank"; to drive the point home, would you say the matrix $$\begin{pmatrix}1&&\\&1&\\&&\varepsilon^{2/3}\end{pmatrix}$$ (where $\varepsilon$ is machine epsilon) has rank $3$ or rank $2$? It is well known that no method based on Gaussian elimination is foolproof with respect to rank determination, and thus SVD methods are preferable. Of course, if your matrix has full rank, the classical formula applies: $$\mathbf A^\dagger=(\mathbf A^\top\mathbf A)^{-1}\mathbf A^\top$$ but the formation of the cross-product matrix is fraught with danger on its own. Having cautioned you on why persisting with Gaussian elimination is unsound, let me mention a few papers (which you could have found on your own by searching with, say, Google Scholar). As a start you will want to look at this gentle introduction. From here, you will want to look at paper 1, paper 2, paper 3, and paper 4, among others. (You can find more using the search terms "Moore Penrose inverse" or "generalized inverse" along with elimination.) As a bonus, you might also be interested in the Schulz iteration (see e.g. paper 5), $$\mathbf Z_{k+1}=\mathbf Z_k(2\mathbf I-\mathbf A\mathbf Z_k)$$ which is nothing more than the Newton-Raphson method applied to matrix inversion.
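For completeness, here is a rough numerical sketch of that Schulz iteration (my own; in particular the starting guess $\mathbf Z_0=\mathbf A^\top/(\|\mathbf A\|_1\|\mathbf A\|_\infty)$ is a standard convergent choice, not something the answer specifies):

```python
import numpy as np

# Schulz (Newton) iteration Z_{k+1} = Z_k (2I - A Z_k) for a nonsingular
# square A. Convergence is quadratic once ||I - A Z_0|| < 1, which the
# scaling of Z_0 below guarantees.
def schulz_inverse(A, iters=50):
    Z = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        Z = Z @ (2 * I - A @ Z)
    return Z

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.allclose(schulz_inverse(A) @ A, np.eye(2)))  # True
```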
Abstract: Let $\pi$ be an $\mathrm{SL}(3,\mathbb Z)$ Hecke-Maass cusp form satisfying the Ramanujan conjecture and the Selberg-Ramanujan conjecture, and let $\chi$ be a primitive Dirichlet character modulo $M$, which we assume to be prime for simplicity. We will prove that there is a computable absolute constant $\delta>0$ such that $$ L\left(\tfrac{1}{2},\pi\otimes\chi\right)\ll_{\pi} M^{\frac{3}{4}-\delta}. $$
I'm not sure this is the right thread to post my problem: I'm trying to define a uniform distribution on the toroidal surface associated to a dipolar magnetic field (or electric field). More specifically, the surface (in 3D Euclidean space) is parametrised as follows, using the usual polar coordinates:
$$x(\theta, \phi) = \sin^3{\theta} \, \cos{\phi}, \qquad y(\theta, \phi) = \sin^3{\theta} \, \sin{\phi}, \qquad z(\theta) = \sin^2{\theta} \, \cos{\theta}.$$
The surface element is this:
$$dS(\theta, \phi) = \sin^7{\theta} \; d\theta \; d\phi.$$
For a simple sphere, we get $dS_{sphere}(\theta, \phi) = \sin{\theta} \; d\theta \; d\phi = du \; d\phi$, where $u = \cos\theta$ is the natural variable to define the uniform distribution on the sphere. In the case of my toroidal surface defined above, the "natural" variable (if I'm not making a mistake) is really complicated:
$$u = \cos{\theta} - \cos^3{\theta} + \tfrac{3}{5} \cos^5{\theta} - \tfrac{1}{7}\cos^7{\theta},$$
so $dS(u, \phi) = du \; d\phi$. This is the variable I should use to define a uniform distribution of points on the surface. However, how should I define the three parametric coordinates $x(u, \phi)$, $y(u, \phi)$, $z(u)$? I'm unable to invert the function above to give $\cos\theta = f(u) = \,?$ Help please! I'm using Mathematica to do my calculations.
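One way around the inversion (my own suggestion, not from the thread; it assumes SciPy is available): since the derivative of $u(\theta)$ is $\pm\sin^7\theta$, $u$ is monotone on $[0,\pi]$, so it can be inverted numerically with a root finder, and $(u,\phi)$ sampled uniformly:

```python
import numpy as np
from scipy.optimize import brentq

# u(theta) is the antiderivative of sin^7(theta) from the post, with the sign
# flipped so that it is increasing on [0, pi]; brentq then inverts it.
def u_of_theta(theta):
    c = np.cos(theta)
    return -(c - c**3 + (3/5) * c**5 - (1/7) * c**7)

u_min, u_max = u_of_theta(0.0), u_of_theta(np.pi)

def sample_point(rng):
    u = rng.uniform(u_min, u_max)            # uniform in the "area" variable
    phi = rng.uniform(0.0, 2 * np.pi)
    theta = brentq(lambda t: u_of_theta(t) - u, 0.0, np.pi)
    s = np.sin(theta)
    return (s**3 * np.cos(phi), s**3 * np.sin(phi), s**2 * np.cos(theta))

rng = np.random.default_rng(0)
print(sample_point(rng))  # one uniform point on the dipolar surface
```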
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.

Nah, I have a pretty garbage question. Let me spell it out. I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$, look at local sections of $p$ over $x$. For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if, with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to the first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$. This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$, which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$, the space of $r$-th order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving the origin. Then declare that $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$; you can set up the transition functions etc. no problem, so all the topology is in place. This becomes an affine bundle.

Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ the $(r+1)$-tuples $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$, where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$: $$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$

@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, because deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (a non-degenerate interval has positive measure).

The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.

@RyanUnger Well, I am going to dispense with the bundle altogether and work with the sheaf, is the idea. The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described.

It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation.

The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-th order Taylor series expansions of sections of $E$ defined near $x$.

Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinality of an $\varepsilon$-cover $P$ of $M$; that is, for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....

The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case? I think there you really do need some kind of PDEs to construct good charts.

I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $\mathrm{diam}(U_i)\le\delta$ we set $\mathrm{diam}(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$? I think so, by the squeeze theorem or something: this is a larger "measure" than $\mathcal H^k_\delta$, and that increases to $\mathcal H^k$, but then we can replace all of those $U_i$'s with balls, incurring some fixed error.

In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand. To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...

@BalarkaSen what is this. ok but this does confirm that what I'm trying to do is wrong haha

In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...

Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function? I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$... that's the motivation.
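A quick bound for the last question (my own sketch, using only the elementary inequality $|[s]-[t]| \le |s-t|+1$ for real $s,t$):

$$\big|[abn] - [a[bn]]\big| \;\le\; \big|abn - a[bn]\big| + 1 \;=\; |a|\,\big|bn - [bn]\big| + 1 \;<\; |a| + 1,$$

a constant independent of $n$, which is exactly what the quasi-isometry claim needs.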
A recursive algorithm and a series expansion related to the homogeneous Boltzmann equation for hard potentials with angular cutoff

Sorbonne Université, CNRS, LPSM, UMR 8001, F-75005 Paris, France

We consider the spatially homogeneous Boltzmann equation for hard potentials with angular cutoff. This equation has a unique conservative weak solution $(f_t)_{t\geq 0}$, once the initial condition $f_0$ with finite mass and energy is fixed. Taking advantage of the energy conservation, we propose a recursive algorithm that produces a $(0,\infty)\times \mathbb{R}^3$-valued random variable $(M_t,V_t)$ such that $\mathbb{E}[M_t {\bf 1}_{\{V_t \in \cdot\}}] = f_t$. We also write down a series expansion of $f_t$. Although both the algorithm and the series expansion might be theoretically interesting in that they explicitly express $f_t$ in terms of $f_0$, we believe that the algorithm is not very efficient in practice and that the series expansion is rather intractable. This is a tedious extension to non-Maxwellian molecules of Wild's sum.

Mathematics Subject Classification: Primary: 82C40; Secondary: 60K35.

Citation: Nicolas Fournier. A recursive algorithm and a series expansion related to the homogeneous Boltzmann equation for hard potentials with angular cutoff. Kinetic & Related Models, 2019, 12 (3): 483-505. doi: 10.3934/krm.2019020
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.

A for awesome: viewtopic.php?p=44724#p44724 Like this: [/url][/wiki][/url] [/wiki] [/url][/code] Many different combinations work. To reproduce, paste the above into a new post and click "preview".

Saka: I wonder if this works on other sites? (Remove/Change )

Saka: Related: [url=http://a.com/] [/url][/wiki] My signature gets quoted. This too. And my avatar gets moved down

A for awesome:
Saka wrote: Related:
Code: Select all
[wiki][url=http://a.com/][quote][wiki][url=http://a.com/]a[/url][/wiki][/quote][/url][/wiki]
My signature gets quoted. This too. And my avatar gets moved down
It appears to be possible to quote the entire page by repeating that several times. I guess it leaves <div> and <blockquote> elements open and then autofills the closing tags in the wrong places. Here, I'll fix it: [/wiki][url]conwaylife.com[/url]

A for awesome: It appears I fixed @Saka's open <div>.

toroidalet:
A for awesome wrote: It appears I fixed @Saka's open <div>.
what fixed it, exactly?

A for awesome: The post before the one you quoted. The code was:
Code: Select all
[wiki][viewer]5[/viewer][/wiki][wiki][url]conwaylife.com[/url][/wiki]
Saka: Aidan, could you fix your ultra quote? Now you can't even see the replies and the post reply button. Also, a few more ones with unique effects popped up. Apart from Aidan Mode, there is now: Saka Quote, Daniel Mode, and Aidan Superquote. We should write descriptions for these:
- Aidan Mode: A combination of url, wiki, and code tags that leaves the page shattered in pieces. Future replies are large and centered, making the page look somewhat old-ish.
- Saka Quote: A combination of a diluted Aidan Mode and quotes; leaves an open div and blockquote that quotes the entire message and signature. Enough of them can quote entire pages.
- Daniel Mode: A derivative of Aidan Mode that adds code tags and pushes things around rather than scrambling them. Pushes the bottom bar to the side. The signature gets coded.
- Aidan Superquote: The most lethal of all. The Aidan Superquote is a broken superquote made of lots of Saka Quotes, not normally allowed on the forums by the software. Leaves the rest of the page white and quotes. Replies and the post reply button become invisible. I would not like new users playing with this. I'll write articles on my userpage.

drc: I actually laughed at the terminology. "IT'S TIME FOR MY ULTIMATE ATTACK. I, A FOR AWESOME, WILL NOW PRESENT: THE AIDAN SUPERQUOTE" shoots out lasers

fluffykitty: There's actually a bug like this on the XKCD Forums. Something about custom tags and phpBB. Anyways, [/wiki]

Saka: Here's another one. It pushes the avatar down all the way to the signature bar. Let's name it the Fluffykitty Pusher; unless we know your real name that's going to be it lel. It's also interesting that it makes a code tag with purple text.

A for awesome: Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]

Saka: Screenshot? New one, yay.
- Aidan Bomb: The smallest ultra-page-breaker. Leaks into the bottom and pushes the pages button, post reply, and new replies to the side.
drc: Someone should create a phpBB-based forum so we can experiment without mucking about with the forums.

Saka: The testing grounds have now become similar to actual military testing grounds.

fluffykitty: We also have this thread. Also, that one is now officially the Fluffy Pusher. Also, it does bad things to the thread preview when posting. And now, another pagebreaker for you:
Code: Select all
[wiki][viewer][/wiki][viewer][/viewer][/viewer]

83bismuth38: oh my, i want to quote somebody and now i have to look in a different scrollbar to type this. interesting thing, though, is that it's never impossible to fully hide the entire page -- it will always be in a nested scrollbar. EDIT: oh also, the thing above is kinda bad. not horrible though -- i'd put it at a 1/13 on the broken scale.
Cclee:
Code: Select all
[quote][wiki][viewer][/wiki][/viewer][wiki][/quote][/wiki]
This doesn't do good things
Edit:
Code: Select all
[wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url]
Neither does this

Cclee:
Code: Select all
[viewer][wiki][/viewer][wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url][viewer][/wiki][/viewer]
I get about five different scroll bars when I preview this
Edit:
Code: Select all
[viewer][wiki][quote][viewer][wiki][/viewer][/wiki][viewer][viewer][wiki][/viewer][/wiki][/quote][viewer][wiki][/viewer][/wiki][quote][viewer][wiki][/viewer][viewer][wiki][/viewer][/wiki][/wiki][/viewer][/quote][/viewer][/wiki]
Makes a really long post and makes the rest of the thread large and centred
Edit 2:
Code: Select all
[url][quote][quote][quote][wiki][/quote][viewer][/wiki][/quote][/viewer][/quote][viewer][/url][/viewer]
Just don't do this (Sorry, I'm having a lot of fun with this)

cordership3: Here's another small one:
Code: Select all
[url][wiki][viewer][/wiki][/url][/viewer]

Moosey:
Code: Select all
[wiki][color=#4000BF][quote][wiki]I eat food[/quote][/color][/wiki][code][wiki]
Is a pinch broken. Doesn't this thread belong in the sandbox?

77topaz: Well, it started out as a thread to document "Bugs & Errors" in the forum's code...

Moosey: Now it's half an Aidan Mode testing grounds. Also, fluffykitty's messmaker:
Code: Select all
[viewer][wiki][*][/viewer][/*][/wiki][/quote]

PkmnQ: Don't worry about this post, it's just gonna push the conversation to the next page so I can test something while actually being able to see it. (The testing grounds in the sandbox crashed Golly.)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
Sixteenth SSC CGL level Question Set on Trigonometry

This is the sixteenth question set of 10 practice problems for the SSC CGL exam on the topic of Trigonometry. Students should complete this question set within the prescribed time first, and only then refer to the solution set. We found from our analysis of the Trigonometry problems that this topic is built on a small set of basic and rich concepts. That's why it is possible to solve any problem in this topic area quickly, following elegant problem solving methods, if you are used to applying problem solving techniques based on the related basic and rich subject concepts. We have tried to show how this can be done in the solution set. But please, first take this test in the prescribed time.

Sixteenth Question set - 10 problems for SSC CGL exam: topic Trigonometry - time 12 mins

Problem 1. The simplified value of $(sec\theta - cos\theta)^2 + (cosec\theta - sin\theta)^2 - (cot\theta - tan\theta)^2$ is,
(a) $\displaystyle\frac{1}{2}$ (b) $0$ (c) $2$ (d) $1$

Problem 2. If $\displaystyle\frac{sin\theta + cos\theta}{sin\theta - cos\theta} = \frac{5}{4}$, then the value of $\displaystyle\frac{tan^2\theta + 1}{tan^2\theta - 1}$ will be,
(a) $\displaystyle\frac{41}{40}$ (b) $\displaystyle\frac{40}{41}$ (c) $\displaystyle\frac{25}{16}$ (d) $\displaystyle\frac{41}{9}$

Problem 3. If $sin\theta + cosec\theta = 2$, then the value of $sin^{100}\theta + cosec^{100}\theta$ is,
(a) $100$ (b) $3$ (c) $2$ (d) $1$

Problem 4. The greatest value of $sin^4\theta + cos^4\theta$ is,
(a) $1$ (b) $\displaystyle\frac{1}{2}$ (c) $3$ (d) $2$

Problem 5. If $\displaystyle\frac{sin\theta}{x} = \displaystyle\frac{cos\theta}{y}$, then $sin\theta - cos\theta$ is,
(a) $x - y$ (b) $\displaystyle\frac{x - y}{\sqrt{x^2 + y^2}}$ (c) $\displaystyle\frac{y - x}{\sqrt{x^2 + y^2}}$ (d) $x + y$

Problem 6. If $tan\theta - cot\theta = 0$, find the value of $sin\theta + cos\theta$.
(a) $\sqrt{2}$ (b) $0$ (c) $1$ (d) $2$

Problem 7. If $sin21^0 = \displaystyle\frac{x}{y}$, then $sec21^0 - sin69^0$ is,
(a) $\displaystyle\frac{y^2}{x\sqrt{y^2 - x^2}}$ (b) $\displaystyle\frac{x^2}{y\sqrt{y^2 - x^2}}$ (c) $\displaystyle\frac{x^2}{y\sqrt{x^2 - y^2}}$ (d) $\displaystyle\frac{y^2}{x\sqrt{x^2 - y^2}}$

Problem 8. If $\displaystyle\frac{sec\theta+ tan\theta}{sec\theta - tan\theta}=\displaystyle\frac{5}{3}$, then $sin\theta$ is,
(a) $\displaystyle\frac{3}{4}$ (b) $\displaystyle\frac{1}{3}$ (c) $\displaystyle\frac{2}{3}$ (d) $\displaystyle\frac{1}{4}$

Problem 9. If $(1 + sin A)(1 + sin B)(1 + sin C) = (1 - sin A)(1 - sin B)(1 - sin C)$, then the expression on each side of the equation equals,
(a) $1$ (b) $tan A.tan B.tan C$ (c) $cos A.cos B.cos C$ (d) $sin A.sin B.sin C$

Problem 10. If $\theta = 60^0$, then $\displaystyle\frac{1}{2}\sqrt{1 + sin\theta} + \displaystyle\frac{1}{2}\sqrt{1 - sin\theta}$ is,
(a) $cos\displaystyle\frac{\theta}{2}$ (b) $cot\displaystyle\frac{\theta}{2}$ (c) $sec\displaystyle\frac{\theta}{2}$ (d) $sin\displaystyle\frac{\theta}{2}$

You will find the detailed conceptual solutions to these questions in SSC CGL level Solution Set 16 on Trigonometry.

Note: You will observe that in many of the Trigonometry problems rich algebraic concepts and techniques are to be used; in fact, that is the norm. Algebraic concepts are frequently used for elegant solutions of Trigonometry problems. But compared to the difficulties of purely algebraic problem solving, Trigonometry problems are simpler, because by applying a few basic and rich trigonometric concepts along with algebraic concepts, elegant solutions are reached faster.

Watch the video solutions in the two-part video. Part 1: Q1 to Q5. Part 2: Q6 to Q10.

Answers to the questions

Problem 1. Answer: Option d: $1$.
Problem 2. Answer: Option a: $\displaystyle\frac{41}{40}$.
Problem 3. Answer: Option c: $2$.
Problem 4. Answer: Option a: $1$.
Problem 5. Answer: Option b: $\displaystyle\frac{x - y}{\sqrt{x^2 + y^2}}$.
Problem 6. Answer: Option a: $\sqrt{2}$.
Problem 7. Answer: Option b: $\displaystyle\frac{x^2}{y\sqrt{y^2 - x^2}}$.
Problem 8. Answer: Option d: $\displaystyle\frac{1}{4}$.
Problem 9. Answer: Option c: $cos A.cos B.cos C$.
Problem 10. Answer: Option a: $cos\displaystyle\frac{\theta}{2}$.

Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra: Tutorials on Trigonometry; General guidelines for success in SSC CGL; Efficient problem solving in Trigonometry.

A note on usability: The efficient math problem solving sessions on school maths are equally usable by SSC CGL aspirants, since, firstly, the "prove the identity" problems can easily be converted to MCQ-type questions, and secondly, the same set of problem solving reasoning and techniques is used in any efficient Trigonometry problem solving.

SSC CGL Tier II level question and solution sets on Trigonometry
SSC CGL level question and solution sets in Trigonometry
SSC CGL level Question Set 16 on Trigonometry
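To illustrate the "small set of basic and rich concepts" point, here is a one-line worked solution for Problem 3 above (my own write-up; the official solution set may argue differently). Multiplying $sin\theta + cosec\theta = 2$ through by $sin\theta$ gives

$$sin^2\theta - 2sin\theta + 1 = 0 \;\Rightarrow\; (sin\theta - 1)^2 = 0 \;\Rightarrow\; sin\theta = cosec\theta = 1,$$

so $sin^{100}\theta + cosec^{100}\theta = 1 + 1 = 2$, which is Option c.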
Cyclic asymptotic behaviour of a population reproducing by fission into two equal parts 1. Laboratoire de Géodésie, IGN-LAREG, Bâtiment Lamarck A et B, 35 rue Hélène Brion, 75013 Paris, France 2. Sorbonne Universités, Inria, UPMC Univ Paris 06, Mamba project-team, Laboratoire Jacques-Louis Lions, Paris, France 3. Wolfgang Pauli Institute, c/o Faculty of Mathematics of the University of Vienna, Vienna, Austria 4. Laboratoire de Mathématiques de Versailles, UVSQ, CNRS, Université Paris-Saclay, 45 Avenue des États-Unis, 78035 Versailles cedex, France $ \frac{\partial}{\partial t} u(t,x) + \dfrac{\partial}{ \partial x} \big(x u(t,x)\big) + B(x) u(t,x) = 4 B(2x)u(t,2x), $ $ B(x), $ $ L^2 $ Keywords:Growth-fragmentation equation, self-similar fragmentation, long-time behaviour, general relative entropy, periodic semigroups, non-hypocoercivity. Mathematics Subject Classification:Primary: 35Q92, 35B10, 35B40, 47D06, 35P05; Secondary: 35B41, 92D25, 92B25. Citation:Étienne Bernard, Marie Doumic, Pierre Gabriel. Cyclic asymptotic behaviour of a population reproducing by fission into two equal parts. Kinetic & Related Models, 2019, 12 (3) : 551-571. doi: 10.3934/krm.2019022 References: [1] W. Arendt, A. Grabosch, G. Greiner, U. Groh, H. P. Lotz, U. Moustakas, R. Nagel, F. Neubrander and U. Schlotterbeck, [2] [3] D. Balagué, J. A. Cañizo and P. Gabriel, Fine asymptotics of profiles and relaxation to equilibrium for growth-fragmentation equations with variable drift rates, [4] [5] J. Banasiak and L. Arlotti, [6] J. Banasiak and W. Lamb, The discrete fragmentation equation: Semigroups, compactness and asynchronous exponential growth, [7] [8] [9] G. I. Bell and E. C. Anderson, Cell growth and division: I. a mathematical model with applications to cell volume distributions in mammalian suspension cultures, [10] [11] [12] [13] M. J. Cáceres, J. A. Cañizo and S. Mischler, Rate of convergence to an asymptotic profile for the self-similar fragmentation and growth-fragmentation equations, [14] [15] [16] M. Doumic and M. Escobedo, Time asymptotics for a critical case in fragmentation and growth-fragmentation equations, [17] [18] M. Doumic, M. Hoffmann, N. Krell and L. Robert, Statistical estimation of a growth-fragmentation model observed on a genealogical tree, [19] K.-J. Engel and R. Nagel, [20] M. Escobedo, S. Mischler and M. Rodriguez Ricard, On self-similarity and stationary problem for fragmentation and coagulation models, [21] P. Gabriel and F. Salvarani, Exponential relaxation to self-similarity for the superquadratic fragmentation equation, [22] G. Greiner and R. Nagel, Growth of cell populations via one-parameter semigroups of positiveoperators, in [23] [24] B. Haas, Asymptotic behavior of solutions of the fragmentation equation with shattering: an approach via self-similar Markov processes, [25] A. J. Hall and G. C. Wake, Functional-differential equations determining steady size distributions for populations of cells growing exponentially, [26] [27] P. Laurençot, B. Niethammer and J. J. L. Velázquez, Oscillatory dynamics in Smoluchowski's coagulation equation with diagonal kernel, [28] [29] P. Michel, S. Mischler and B. Perthame, General entropy equations for structured population models and scattering, [30] P. Michel, S. Mischler and B. Perthame, General relative entropy inequality: An illustrationon growth models, [31] S. Mischler and J. Scher, Spectral analysis of semigroups and growth-fragmentation equations, [32] K. Pakdaman, B. Perthame and D. 
\begin{align} u'' &+ \Gamma^0_{00}(u')^2 + 2\Gamma^0_{01}u'v' + \Gamma^0_{11}(v')^2 = 0,\\ v'' &+ \Gamma^1_{00}(u')^2 + 2\Gamma^1_{01}u'v' + \Gamma^1_{11}(v')^2 = 0, \end{align} where $\Gamma^m_{ij}(u(s),v(s))$ is the Christoffel symbol of the second kind. The geodesic solution $u = u(s), v = v(s)$ is a curve defined on the interval $s\in[s_0,s_1]$. These equations can be rewritten as a system of first-order differential equations by setting $p = u'$ and $q = v'$: \begin{align} p' &+ \Gamma^0_{00}p^2 + 2\Gamma^0_{01}pq + \Gamma^0_{11}q^2 = 0\\ q' &+ \Gamma^1_{00}p^2 + 2\Gamma^1_{01}pq + \Gamma^1_{11}q^2 = 0 \end{align} with the initial conditions $u(s_0) = u_0$, $u'(s_0) = du_0$, $v(s_0) = v_0$, $v'(s_0) = dv_0$. Here is a code snippet of my implementation:

import numpy as np
import scipy.integrate as sc

def f(y, s, C, u, v):
    y0 = y[0]  # u
    y1 = y[1]  # u'
    y2 = y[2]  # v
    y3 = y[3]  # v'
    dy = np.zeros_like(y)
    dy[0] = y1
    dy[2] = y3
    C = C.subs({u: y0, v: y2})  # evaluate the Christoffel symbols at (u, v)
    dy[1] = -C[0,0][0]*y1**2 - 2*C[0,0][1]*y1*y3 - C[0,1][1]*y3**2
    dy[3] = -C[1,0][0]*y1**2 - 2*C[1,0][1]*y1*y3 - C[1,1][1]*y3**2
    return dy

def solve(C, y0, s0, s1, ds):
    # y0 is the initial state [u0, du0, v0, dv0];
    # the Christoffel symbol array C is a function of (u, v).
    s = np.arange(s0, s1 + ds, ds)
    from sympy.abc import u, v
    return sc.odeint(f, y0, s, args=(C, u, v))  # integration method: LSODA

I have implemented several generic test cases: torus, sphere, egg carton, and catenoid. However, there seems to be some issue with the solver. On a sphere, for example, the geodesic curve is a great circle (see reference). When I try to find the geodesic curve and plot it on a sphere (with the same parameters as in the reference provided), the curve starts to veer off. There seems to be some sort of numerical instability that alters the course of the geodesic curve over the interval $s\in[s_0,s_1]$. Is there any way I can make my solver more stable? I have tried to reduce the step size of the solver, but that has not made things any better (visually at least... I could probably try to estimate the convergence rate). Edit 1: Edit 2: I have pasted the test case at the following link: Geodesic on a sphere. I forgot to mention this, but I am using the following versions: SymPy 0.7.7.dev, SciPy 0.16.0. It should now be possible for anyone to reproduce the same results.
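(Note appended to the question, not part of the original post.) One likely culprit, independent of the integrator, is that C.subs(...) is re-evaluated symbolically at every call of f, which is slow and invites loose stepping. A minimal sketch of an alternative, assuming a unit-sphere metric for concreteness (the symbol names and the Gamma_num helper are mine): lambdify the Christoffel symbols once into fast numeric functions and integrate with a newer SciPy's solve_ivp at tight tolerances.

import numpy as np
import sympy as sp
from scipy.integrate import solve_ivp

u, v = sp.symbols('u v', real=True)
g = sp.Matrix([[1, 0], [0, sp.sin(u)**2]])   # metric of the unit sphere
ginv = g.inv()
x = [u, v]

# Christoffel symbols of the second kind, Gamma^m_{ij}
Gamma = [[[sum(ginv[m, k]*(sp.diff(g[k, i], x[j]) + sp.diff(g[k, j], x[i])
                           - sp.diff(g[i, j], x[k]))/2 for k in range(2))
           for j in range(2)] for i in range(2)] for m in range(2)]
Gamma_num = sp.lambdify((u, v), Gamma, 'numpy')  # symbolic work done once

def rhs(s, y):
    p, q = y[1], y[3]                  # u', v'
    G = Gamma_num(y[0], y[2])
    dp = -(G[0][0][0]*p*p + 2*G[0][0][1]*p*q + G[0][1][1]*q*q)
    dq = -(G[1][0][0]*p*p + 2*G[1][0][1]*p*q + G[1][1][1]*q*q)
    return [p, dp, q, dq]

y0 = [np.pi/2, 0.3, 0.0, 1.0]          # start on the equator, tilted velocity
sol = solve_ivp(rhs, (0, 10), y0, rtol=1e-10, atol=1e-12, dense_output=True)

On the sphere the resulting curve should stay on a great circle; if it still drifts, monitoring the conserved speed $g(\dot\gamma,\dot\gamma)$ along the solution is a cheap way to estimate the accumulated error.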
Sixteenth SSC CGL level Solution Set, topic Trigonometry

This is the sixteenth solution set of 10 practice problems for the SSC CGL exam on the topic of Trigonometry. Students should complete the corresponding question set within the prescribed time first, and only then refer to this solution set. We found from our analysis of Trigonometry problems that this topic is built on a small set of basic and rich concepts. This makes it possible to solve any problem in this topic area quickly, following elegant problem solving methods. We have tried to show how this can be done in this solution set. But before going through these solutions, please take the test first within the prescribed time by referring to SSC CGL level Question Set 16 on Trigonometry.

Watch the video solutions in the two-part video. Part 1: Q1 to Q5. Part 2: Q6 to Q10.

Sixteenth solution set - 10 problems for SSC CGL exam: topic Trigonometry - time 12 mins

Problem 1. The simplified value of $(sec\theta - cos\theta)^2 + (cosec\theta - sin\theta)^2 - (cot\theta - tan\theta)^2$ is,
(a) $\displaystyle\frac{1}{2}$ (b) $0$ (c) $2$ (d) $1$

Solution: The conventional approach is to expand all three squares, then transform and simplify the terms to reach the target value. But as is the norm, that is the longer, time-consuming path. We search for a fast, elegant solution instead.

First clue: Since the third square expression is subtracted, a promising approach is to simplify the first two square expressions in terms of $tan\theta$ and $cot\theta$, so that they can be combined with the negative terms from the expanded third expression to reach the simplified final result. Looking at the first two terms we identify a rich concept that helps us do exactly that.

Rich concept of factoring out the inverse: Being well-acquainted with basic trigonometric concepts, whenever we encounter an expression such as $(sec\theta - cos\theta)$, a difference between a function and its inverse, we recognize that we can take the inverse $sec\theta$ out of the brackets, leaving the well-known friendly expression $(1 - cos^2\theta)$ within the brackets, which is immediately transformed to $sin^2\theta$. Thus a sum of terms is simplified to a product of terms. In any expression manipulation we always look for such opportunities to reduce the number of terms, so that sums of terms are transformed to products of terms. This is a general algebraic rich concept and technique for simplification.

$(sec\theta - cos\theta) = sec\theta(1 - cos^2\theta)$
$\hspace{10mm}= sec\theta{sin^2\theta} = sin\theta{tan\theta}$, and similarly,
$(cosec\theta - sin\theta) = cosec\theta(1 - sin^2\theta)$
$\hspace{10mm}= cosec\theta{cos^2\theta} = cos\theta{cot\theta}$.

These are very useful patterns for application of the rich concept of factoring out the inverse. Focusing on the first two terms, we apply this concept and technique,

$(sec\theta - cos\theta)^2 = sec^2\theta(1 - cos^2\theta)^2 = {sin^2\theta}tan^2\theta$.

Similarly,

$(cosec\theta - sin\theta)^2 = {cos^2\theta}cot^2\theta$.

Now we sum these with the expanded third term, $(cot\theta - tan\theta)^2 = cot^2\theta - 2 + tan^2\theta$, and apply the principle of collection of friendly terms: the pair of terms involving $tan^2\theta$ and the pair involving $cot^2\theta$ are combined together,

$E = 2 - tan^2\theta(1 - sin^2\theta) - cot^2\theta(1 - cos^2\theta)$
$\hspace{12mm}= 2 - tan^2\theta{cos^2\theta} - cot^2\theta{sin^2\theta}$
$\hspace{12mm}= 2 - (sin^2\theta + cos^2\theta)$
$\hspace{12mm}= 1$.

Answer: Option d: $1$.
Key concepts used: Transforming the first two expressions into single-term expressions with $tan^2\theta$ and $cot^2\theta$ as factors, with the intention of merging them with the corresponding terms of the expanded third expression -- achieved by factoring out $sec\theta$ and $cosec\theta$, the rich concept of factoring out the inverse -- end state analysis, so that the multipliers $(1 - sin^2\theta)$ and $(1 - cos^2\theta)$ cancel the denominators of $tan^2\theta$ and $cot^2\theta$, leaving only $sin^2\theta$ and $cos^2\theta$ -- collecting the friendly terms.

Problem 2. If $\displaystyle\frac{sin\theta + cos\theta}{sin\theta - cos\theta} = \frac{5}{4}$, then the value of $\displaystyle\frac{tan^2\theta + 1}{tan^2\theta - 1}$ will be,
(a) $\displaystyle\frac{41}{40}$ (b) $\displaystyle\frac{40}{41}$ (c) $\displaystyle\frac{25}{16}$ (d) $\displaystyle\frac{41}{9}$

Solution: Applying the powerful componendo dividendo technique to the input expression we directly get,
$\displaystyle\frac{sin\theta}{cos\theta} = \displaystyle\frac{5 + 4}{5 - 4} = 9$, Or, $tan\theta = 9$.
Substituting in the target expression,
$E = \displaystyle\frac{9^2 + 1}{9^2 - 1} = \displaystyle\frac{82}{80} = \displaystyle\frac{41}{40}$.
Answer: Option a: $\displaystyle\frac{41}{40}$.
Key concepts used: Componendo dividendo -- substitution.

Rich algebraic technique of componendo and dividendo:
Problem: Simplify $\displaystyle\frac{x + y}{x - y} = 3$.
First we add 1 to both sides of the equation,
$\displaystyle\frac{x + y}{x - y} + 1 = 3 + 1$, Or, $\displaystyle\frac{2x}{x - y} = 4$.
Next we subtract 1 from both sides of the equation,
$\displaystyle\frac{x + y}{x - y} - 1 = 3 - 1$, Or, $\displaystyle\frac{2y}{x - y} = 2$.
Now we divide the first result by the second,
$\displaystyle\frac{x}{y} = 2$,
a greatly simplified expression, with the two-term numerator and denominator reduced to single terms. In general, if $\displaystyle\frac{a + b}{a - b} = \displaystyle\frac{m}{n}$, then $\displaystyle\frac{a}{b} = \displaystyle\frac{m + n}{m - n}$. This powerful algebraic technique is applied whenever we encounter this special form of given expression.

Problem 3. If $sin\theta + cosec\theta = 2$, then the value of $sin^{100}\theta + cosec^{100}\theta$ is,
(a) 100 (b) 3 (c) 2 (d) 1

Solution: Whenever the target expression involves large powers of $sin\theta$ or $cos\theta$, we can be sure that the input expression will yield either the value of $\theta$ or the value of $sin\theta$, as otherwise such large powers could not be evaluated. With this prior knowledge we proceed to transform the given expression,
$sin\theta + cosec\theta = 2$,
Or, multiplying through by $sin\theta$ and rearranging,
$sin^2\theta - 2sin\theta + 1 = (sin\theta - 1)^2 = 0$,
Or, $sin\theta = 1$, and so $cosec\theta = 1$.
Putting these convenient values in the target expression we get,
$E = 1 + 1 = 2$.
Answer: Option c: 2.
Key concepts used: Deciding from the target expression that the value of $sin\theta$ must be obtained from the given expression -- expanding and evaluating the given expression produces the convenient value of $sin\theta$.

Problem 4. The greatest value of $sin^4\theta + cos^4\theta$ is,
(a) $1$ (b) $\displaystyle\frac{1}{2}$ (c) $3$ (d) $2$

Solution: Using the algebraic maxima finding technique, we transform part of the given expression into a square of sums,
$sin^4\theta + cos^4\theta = (sin^2\theta + cos^2\theta)^2 - 2sin^2\theta{cos^2\theta} = 1 - 2sin^2\theta{cos^2\theta}$.
The maximum value of this expression is 1, attained when the second term is zero, that is, when either $sin\theta$ or $cos\theta$ is 0. This is a quick method as it elegantly combines the trigonometric relations with the algebraic maxima technique.
Answer: Option a: $1$.
Key concepts used: To use the maxima technique for quadratic equations, converting part of the given expression to a square of sums -- finding the maximum condition.

Problem 5. If $\displaystyle\frac{sin\theta}{x} = \displaystyle\frac{cos\theta}{y}$, then $sin\theta - cos\theta$ is,
(a) $x - y$ (b) $\displaystyle\frac{x - y}{\sqrt{x^2 + y^2}}$ (c) $\displaystyle\frac{y - x}{\sqrt{x^2 + y^2}}$ (d) $x + y$

Solution: Though the target expression looks simple, the task is cut out for us: we must express both $sin\theta$ and $cos\theta$ in terms of $x$ and $y$. With this resolve we start manipulating the given expression along the simplest possible path,
$\displaystyle\frac{sin\theta}{x} = \displaystyle\frac{cos\theta}{y}$,
Or, $cot\theta = \displaystyle\frac{y}{x}$,
Or, $cot^2\theta + 1 = cosec^2\theta = \displaystyle\frac{x^2 + y^2}{x^2}$,
Or, $sin^2\theta = \displaystyle\frac{x^2}{x^2 + y^2}$,
Or, $sin\theta = \displaystyle\frac{x}{\sqrt{x^2 + y^2}}$.
Similarly,
$cos\theta = \displaystyle\frac{y}{\sqrt{x^2 + y^2}}$.
Thus the target expression,
$sin\theta - cos\theta = \displaystyle\frac{x - y}{\sqrt{x^2 + y^2}}$.
Answer: Option b: $\displaystyle\frac{x - y}{\sqrt{x^2 + y^2}}$.
Key concepts used: Using the rich concepts $cosec^2\theta = cot^2\theta + 1$ and $sec^2\theta = tan^2\theta + 1$ -- expressing $sin\theta$ and $cos\theta$ in terms of $x$ and $y$.

Problem 6. If $tan\theta - cot\theta = 0$, find the value of $sin\theta + cos\theta$.
(a) $\sqrt{2}$ (b) $0$ (c) $1$ (d) $2$

Solution: Given the form of the input expression, we go straight to equating $tan$ and $cot$, with the intention of getting an equation in $sin$ and $cos$. So,
$tan\theta = cot\theta$,
Or, $sin^2\theta = cos^2\theta$.
For an acute angle this equality occurs only when $sin\theta = cos\theta = \displaystyle\frac{1}{\sqrt{2}}$, at $\theta = 45^0$. So,
$sin\theta + cos\theta = 2\times{\displaystyle\frac{1}{\sqrt{2}}} = \sqrt{2}$.
Answer: Option a: $\sqrt{2}$.
Key concepts used: Transforming the given expression in terms of $sin$ and $cos$ to get the equality condition.

Problem 7. If $sin21^0 = \displaystyle\frac{x}{y}$, then $sec21^0 - sin69^0$ is,
(a) $\displaystyle\frac{y^2}{x\sqrt{y^2 - x^2}}$ (b) $\displaystyle\frac{x^2}{y\sqrt{y^2 - x^2}}$ (c) $\displaystyle\frac{x^2}{y\sqrt{x^2 - y^2}}$ (d) $\displaystyle\frac{y^2}{x\sqrt{x^2 - y^2}}$

Solution: We know from the beginning that we have to use the rich trigonometric concept $sin\theta = cos\left({\displaystyle\frac{\pi}{2} - \theta}\right)$. Applying this concept to our problem situation we have,
$sin21^0 = cos69^0 = \displaystyle\frac{x}{y}$,
Or, $1 - cos^2{69^0} = sin^2{69^0} = \displaystyle\frac{y^2 - x^2}{y^2}$,
Or, $sin{69^0} = \displaystyle\frac{\sqrt{y^2 - x^2}}{y}$.
The target expression,
$sec21^0 - sin69^0 = cosec69^0 - sin69^0$
$= \displaystyle\frac{1 - sin^2{69^0}}{sin69^0}$
$= \displaystyle\frac{cos^2{69^0}}{sin69^0}$
$=\displaystyle\frac{x^2}{y^2}\times{\displaystyle\frac{y}{\sqrt{y^2 - x^2}}}$
$=\displaystyle\frac{x^2}{y\sqrt{y^2 - x^2}}$.
Answer: Option b: $\displaystyle\frac{x^2}{y\sqrt{y^2 - x^2}}$.
Key concepts used: Rich concept $sin\theta = cos\left({\displaystyle\frac{\pi}{2} - \theta}\right)$ to get $cos69^0$ and then $sin69^0$ -- trigonometric simplification first, substituting the complex value only at the end to minimize calculations.

Problem 8.
If $\displaystyle\frac{sec\theta+ tan\theta}{sec\theta - tan\theta}=\displaystyle\frac{5}{3}$, then $sin\theta$ is,
(a) $\displaystyle\frac{3}{4}$ (b) $\displaystyle\frac{1}{3}$ (c) $\displaystyle\frac{2}{3}$ (d) $\displaystyle\frac{1}{4}$

Solution: As usual, detecting the possibility of taking $sec\theta$ out as a factor from both numerator and denominator, we take up the transformation,
$\displaystyle\frac{sec\theta+ tan\theta}{sec\theta - tan\theta}=\displaystyle\frac{5}{3}$,
Or, $\displaystyle\frac{sec\theta(1 + sin\theta)}{sec\theta(1 - sin\theta)}=\displaystyle\frac{5}{3}$,
Or, $\displaystyle\frac{1 + sin\theta}{1 - sin\theta}=\displaystyle\frac{5}{3}$.
This is the ripe form for applying the componendo dividendo technique. We thus get,
$sin\theta = \displaystyle\frac{5 - 3}{5 + 3} = \displaystyle\frac{1}{4}$.
Answer: Option d: $\displaystyle\frac{1}{4}$.
Key concepts used: Recognition that $sec\theta$ can be factored out from both numerator and denominator -- then use of the componendo dividendo technique.

Problem 9. If $(1 + sin A)(1 + sin B)(1 + sin C) = (1 - sin A)(1 - sin B)( 1 - sin C)$, then the expression on each side of the equation equals,
(a) $1$ (b) $tan A.tan B.tan C$ (c) $cos A.cos B.cos C$ (d) $sin A.sin B.sin C$

Solution: Observing that multiplying the two sides together would produce a factor of the form $1 - sin^2\theta$ for each of the three angles, we first equate the expressions to a dummy variable $p$,
$(1 + sin A)(1 + sin B)(1 + sin C)$
$\hspace{5mm}= (1 - sin A)(1 - sin B)( 1 - sin C) = p$,
which results in two equations,
$(1 + sin A)(1 + sin B)(1 + sin C)= p$, and
$(1 - sin A)(1 - sin B)( 1 - sin C) = p$.
Multiplying the two together,
$p^2 = (1 - sin^2 A)(1 - sin^2 B)(1 - sin^2 C) = cos^2 A. cos^2 B.cos^2 C$,
Or, each side of the equation, $p = cos A.cos B.cos C$.
Answer: Option c: $cos A.cos B.cos C$.
Key concepts used: Identifying that multiplying the two sides together simplifies each three-term product via $1 - sin^2\theta = cos^2\theta$, we introduce a dummy variable equal to the expressions -- multiplication and simplification. This is a simple algebraic technique using basic trigonometric concepts.

Problem 10. If $\theta = 60^0$, then $\displaystyle\frac{1}{2}\sqrt{1 + sin\theta} + \displaystyle\frac{1}{2}\sqrt{1 - sin\theta}$ is,
(a) $cos\displaystyle\frac{\theta}{2}$ (b) $cot\displaystyle\frac{\theta}{2}$ (c) $sec\displaystyle\frac{\theta}{2}$ (d) $sin\displaystyle\frac{\theta}{2}$

Solution: Substituting the value of $\theta$,
$\sqrt{1 + sin\theta} = \sqrt{1 + \displaystyle\frac{\sqrt{3}}{2}} = \sqrt{\displaystyle\frac{2 + \sqrt{3}}{2}}$.
To eliminate the square root we apply the powerful surd technique of transforming a two-term surd expression into the square of a two-term surd expression. As the surd $\sqrt{3}$ does not have 2 as a coefficient, we multiply and divide by 2, getting,
$\sqrt{1 + sin\theta} = \sqrt{\displaystyle\frac{4 + 2\sqrt{3}}{4}} = \sqrt{\displaystyle\frac{(\sqrt{3} + 1)^2}{4}} = \displaystyle\frac{\sqrt{3} + 1}{2}$.
Similarly,
$\sqrt{1 - sin\theta} = \displaystyle\frac{\sqrt{3} - 1}{2}$,
and the target expression,
$E = \displaystyle\frac{1}{2}\left(\displaystyle\frac{\sqrt{3} + 1}{2} + \displaystyle\frac{\sqrt{3} - 1}{2}\right) = \displaystyle\frac{\sqrt{3}}{2} = cos30^0 = cos{\displaystyle\frac{\theta}{2}}$.
Answer: Option a: $cos\displaystyle\frac{\theta}{2}$.
Key concepts used: Application of the surd technique of transforming a two-term surd expression to the square of a two-term surd expression -- simplification.

Note: You will observe that in many of these Trigonometry problems rich algebraic concepts and techniques have been used. In fact, that is the norm.
Algebraic concepts are frequently used in elegant solutions of Trigonometry problems. But compared to the difficulties of purely algebraic problem solving, Trigonometry problems are simpler, because by applying a few basic and rich trigonometric concepts along with algebraic concepts, elegant solutions are reached faster.

Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra.

Tutorials on Trigonometry
General guidelines for success in SSC CGL
Efficient problem solving in Trigonometry

A note on usability: The Efficient math problem solving sessions on School maths are equally usable by SSC CGL aspirants because, firstly, the "Prove the identity" problems can easily be converted to MCQ type questions, and secondly, the same set of problem solving reasoning and techniques is used for any efficient Trigonometry problem solving.

SSC CGL Tier II level question and solution sets on Trigonometry
SSC CGL level question and solution sets in Trigonometry
SSC CGL level Solution Set 16 on Trigonometry
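Not part of the original solution set: readers who want a mechanical check of such identities can verify, for instance, the Problem 1 and Problem 4 results with sympy.

import sympy as sp

theta = sp.symbols('theta')

# Problem 1: the expression should simplify to 1.
E1 = ((sp.sec(theta) - sp.cos(theta))**2
      + (sp.csc(theta) - sp.sin(theta))**2
      - (sp.cot(theta) - sp.tan(theta))**2)
print(sp.simplify(E1))            # expected: 1
print(E1.subs(theta, 0.7).evalf())  # numeric spot check, ~1.0

# Problem 4: sin^4 + cos^4 = 1 - 2 sin^2 cos^2 <= 1, with equality
# when sin(theta)*cos(theta) = 0; spot-check the maximum at theta = 0.
E2 = sp.sin(theta)**4 + sp.cos(theta)**4
print(sp.simplify(E2 - (1 - 2*sp.sin(theta)**2*sp.cos(theta)**2)))  # expected: 0
print(E2.subs(theta, 0))          # 1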
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
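For reference, the standard relation being discussed (not a quote from the exchange) is $\Delta\varphi = k\,\Delta x = \frac{2\pi}{\lambda}\,\Delta x$, so a path difference of one wavelength corresponds to a phase difference of $2\pi$.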
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah, it does seem unreasonable to expect a finite presentation.

Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group O(V, b) is a composition of at most n reflections.

Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...

Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ...

Player $A$ chooses two queens and an arbitrary finite number of bishops on the $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on fields which are under attack ...

The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.

This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.

Think of evaluating $\omega$ on the edges of the truncated square and taking a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$. Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$.

So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ equals the signed sum of the values of $\omega$ on the boundary of that Lie square: an infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described, and taking $\text{vol}(I^2) \to 0$.

For the general case, $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube.

Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$.
Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor, and (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor: to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point.

Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $orb(u) \neq orb(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $orb(u) \neq orb(v)$...

@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want; nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with.

(Technical point: note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.)

@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homomorphisms $TM \to E$). Then this is the level-0 exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: the 0-level exterior derivative of a bundle-valued theory of differential forms.

So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$, the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
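The chat does not spell out the higher-degree operator; for completeness (this is the standard definition, not a quote from the discussion), the exterior covariant derivative $d = d^\nabla : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is given by the same invariant formula as in the scalar case, with $\nabla$ replacing the directional derivatives:

$(d^\nabla\omega)(X_1, \cdots, X_{k+1}) = \sum_i (-1)^{i+1} \nabla_{X_i}\big(\omega(X_1, \cdots, \hat{X_i}, \cdots, X_{k+1})\big) + \sum_{i<j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{k+1})$.

For $k = 1$ this reduces to $d\omega(X, Y) = \nabla_X \omega(Y) - \nabla_Y \omega(X) - \omega([X, Y])$, the formula used in the curvature computation that follows.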
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection.

Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, the Riemann curvature tensor. Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. In general this is the curvature of the bundle.

Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.

Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group generated by the elements $v$ of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ generated by those $v$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?

Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry; I think they call it the tautological 1-form there. (The cotangent bundle is naturally a symplectic manifold.) Yeah.

So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. The torsion tensor!!

So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$; something cool might pop up.

If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?

Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty.

I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad.
E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, a 3-cycle, and a 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$, but the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?

In fact, the action of $S_4$ on its three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are the 180 degree rotations along the $x$, $y$ and $z$-axes. But composing the 180 rotation along $x$ with the 180 rotation along $y$ gives the 180 rotation along $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
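A quick brute-force confirmation of these subgroup orders with sympy's combinatorics module (an illustrative addition, not part of the chat; generators are written in array form so that every permutation lives in $S_4$):

from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import AlternatingGroup, SymmetricGroup

t = Permutation([1, 0, 2, 3])      # the 2-cycle (0 1)
c3 = Permutation([1, 2, 0, 3])     # the 3-cycle (0 1 2)
c4 = Permutation([1, 2, 3, 0])     # the 4-cycle (0 1 2 3)
r = Permutation([2, 1, 0, 3])      # the transposition (0 2)

subgroups = {
    1: PermutationGroup([Permutation([0, 1, 2, 3])]),  # trivial group
    2: PermutationGroup([t]),
    3: PermutationGroup([c3]),
    4: PermutationGroup([c4]),
    6: PermutationGroup([t, c3]),   # a copy of S_3 fixing the point 3
    8: PermutationGroup([c4, r]),   # dihedral group, a 2-Sylow subgroup
    12: AlternatingGroup(4),
    24: SymmetricGroup(4),
}
for d, G in subgroups.items():
    assert G.order() == d, (d, G.order())
print("S_4 has a subgroup of every order dividing 24")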
There are 21 elements with only one isotope, so all their atoms have identical masses. All other elements have two or more isotopes, so their atoms have at least two different masses. But all elements obey the law of definite proportions when they combine with other elements, so they behave as if they had just one kind of atom with a definite mass. To resolve this dilemma, we define the atomic weight as the weighted average mass of all naturally occurring (occasionally radioactive) isotopes of the element.

A weighted average is defined as

Atomic Weight = \(\left(\tfrac{\%\text{ abundance isotope 1}}{100}\right)\times \left(\text{mass of isotope 1}\right)~ ~ ~ +\) \(\left(\tfrac{\%\text{ abundance isotope 2}}{100}\right)\times \left(\text{mass of isotope 2}\right)~ ~ ~ + ~ ~ ...\)

Similar terms would be added for all the isotopes. The calculation is analogous to the method used to calculate grade point averages in most colleges:

GPA = \(\left(\tfrac{\text{Credit Hours Course 1}}{\text{total credit hours}}\right)\times \left(\text{Grade in Course 1}\right)~ ~ ~ +\) \(\left(\tfrac{\text{Credit Hours Course 2}}{\text{total credit hours}}\right)\times \left(\text{Grade in Course 2}\right)~ ~ ~ + ~ ~ ...\)

Some Conventions

The term "atomic weight", or simply "average atomic weight", is commonly used to refer to what is properly called a "relative atomic mass". Atomic weights are technically dimensionless, because they cannot be determined as absolute values. They were historically calculated from mass ratios (early chemists could say that magnesium atoms are 24.305/15.999 times as heavy as oxygen atoms, because that is the mass ratio of magnesium to oxygen in MgO). Now atomic weights are calculated from the positions of peaks in a mass spectrum. While the peak positions may be labeled in amu, that is only possible if the mass spectrometer is calibrated with a standard, whose mass in turn can only be known relative to another, so it too is technically dimensionless. To solve this dilemma, we define an amu as 1/12 the mass of a \({}_{\text{6}}^{\text{12}}\text{C}\) atom; \({}_{\text{6}}^{\text{12}}\text{C}\) can then be used to calibrate a mass spectrometer. For convenience, we often use "token" dimensions of amu/average atom for atomic weight, or g/mol for molar mass.

The calculation of an atomic weight includes "naturally occurring isotopes", which are defined by the Commission on Isotopic Abundances and Atomic Weights of IUPAC (IUPAC/CIAAW) to include radioactive isotopes with half lives greater than \(1 \times 10^{10}\) years. Thus thorium, protactinium, and uranium are assigned atomic weights of 232.0, 231.0, and 238.0, but no other radioactive elements have isotopes with long enough lifetimes to be assigned atomic weights.

Example \(\PageIndex{1}\): Isotopes

Naturally occurring lead is found to consist of four isotopes:
1.40% \({}_{\text{82}}^{\text{204}}\text{Pb}\) whose isotopic weight is 203.973.
24.10% \({}_{\text{82}}^{\text{206}}\text{Pb}\) whose isotopic weight is 205.974.
22.10% \({}_{\text{82}}^{\text{207}}\text{Pb}\) whose isotopic weight is 206.976.
52.40% \({}_{\text{82}}^{\text{208}}\text{Pb}\) whose isotopic weight is 207.977.
Calculate the atomic weight of an average naturally occurring sample of lead.

Solution

Suppose that you had 1 mol lead. This would contain 1.40% (\(\tfrac{1.40}{100}\) × 1 mol) \({}_{\text{82}}^{\text{204}}\text{Pb}\) whose molar mass is 203.973 g mol–1.
The mass of \({}_{\text{82}}^{\text{204}}\text{Pb}\) would be \[\text{m}_{\text{204}}=n_{\text{204}}\times \text{ }M_{\text{204}}=\left( \frac{\text{1}\text{.40}}{\text{100}}\times \text{ 1 mol} \right)\text{ (203}\text{.973 g mol}^{\text{-1}}\text{)}=\text{2}\text{.86 g}\] Similarly for the other isotopes \(\begin{align}\text{m}_{\text{206}}=n_{\text{206}}\times \text{ }M_{\text{206}}=\left( \frac{\text{24}\text{.10}}{\text{100}}\times \text{ 1 mol} \right)\text{ (205}\text{.974 g mol}^{\text{-1}}\text{)}=\text{49}\text{.64 g} \\\text{m}_{\text{207}}=n_{\text{207}}\times \text{ }M_{\text{207}}=\left( \frac{\text{22}\text{.10}}{\text{100}}\times \text{ 1 mol} \right)\text{ (206}\text{.976 g mol}^{\text{-1}}\text{)}=\text{45}\text{.74 g} \\\text{m}_{\text{208}}=n_{\text{208}}\times \text{ }M_{\text{208}}=\left( \frac{\text{52}\text{.40}}{\text{100}}\times \text{ 1 mol} \right)\text{ (207}\text{.977 g mol}^{\text{-1}}\text{)}=\text{108}\text{.98 g} \\\end{align}\) Upon summing all four results, the mass of 1 mol of the mixture of isotopes is found to be 2.86 g + 49.64 g + 45.74 g + 108.98 g = 207.22 g. Thus the atomic weight of lead is 207.2 g/mol, as mentioned earlier in the discussion.

An important corollary to the existence of isotopes should be emphasized at this point. When highly accurate results are obtained, atomic weights may vary slightly depending on where a sample of an element was obtained. For this reason, the IUPAC/CIAAW has recently redefined the atomic weights of 10 elements having two or more isotopes [1]. The percentages of the different isotopes often depend on the source of the element. For example, oxygen in Antarctic precipitation has an atomic weight of 15.99903, but oxygen in marine N2O has an atomic weight of 15.9997. "Fractionation" of the isotopes results from slightly different rates of chemical and physical processes caused by small differences in their masses. The difference can be more dramatic when an isotope is derived from transmutation. For example, lead produced by decay of uranium contains a much larger percentage of \({}_{\text{82}}^{\text{206}}\text{Pb}\) than the 24.1 percent given in the example for an average sample. Consequently the atomic weight of lead found in uranium ores is less than 207.2 and is much closer to 205.974, the isotopic weight of \({}_{\text{82}}^{\text{206}}\text{Pb}\).

After the discovery of isotopes of the elements by J. J. Thomson in 1913 [2], it was suggested that the scale of relative masses of the atoms (the atomic weights) should use as a reference the mass of an atom of a particular isotope of one of the elements. The standard that was eventually chosen was \({}_{\text{6}}^{\text{12}}\text{C}\), and it was assigned an atomic-weight value of exactly 12.000 000. Thus the atomic weights given in the Table of Atomic Weights are the ratios of weighted averages (calculated as in the Example) of the masses of atoms of all isotopes of each naturally occurring element to the mass of a single \({}_{\text{6}}^{\text{12}}\text{C}\) atom. Since carbon consists of two isotopes, 98.89% \({}_{\text{6}}^{\text{12}}\text{C}\) of isotopic weight 12.000 and 1.11% \({}_{\text{6}}^{\text{13}}\text{C}\) of isotopic weight 13.003, the average atomic weight of carbon is, for example, \[\frac{\text{98}\text{.89}}{\text{100}\text{.00}}\text{ }\times \text{ 12}\text{.000 + }\frac{\text{1}\text{.11}}{\text{100}\text{.00}}\text{ }\times \text{ 13}\text{.003}=\text{12}\text{.011}\]
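The weighted-average arithmetic above is easy to script. Here is a minimal Python check of the lead example (the (percent abundance, isotopic weight) pairs are taken directly from the example):

lead_isotopes = [
    (1.40, 203.973),   # Pb-204
    (24.10, 205.974),  # Pb-206
    (22.10, 206.976),  # Pb-207
    (52.40, 207.977),  # Pb-208
]

# Atomic weight = sum of (fractional abundance) x (isotopic weight)
atomic_weight = sum(pct / 100 * mass for pct, mass in lead_isotopes)
print(f"{atomic_weight:.2f}")  # prints 207.22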
Conventional Atomic Weights and "Intervals"

Deviations from average isotopic composition are usually not large, so Conventional Atomic Weight values, which can be used for nearly all chemical calculations, were defined by the IUPAC/CIAAW for the elements showing the most variation in abundance. At the same time, the atomic weights of those elements were redefined as ranges, or "intervals", for any work where small differences may be important [3]. The table below gives typical values.

Element Name | Symbol | Conventional Atomic Weight | Atomic Weight Interval
Boron | B | 10.81 | [10.806; 10.821]
Carbon | C | 12.011 | [12.0096; 12.0116]
Chlorine | Cl | 35.45 | [35.446; 35.457]
Hydrogen | H | 1.008 | [1.00784; 1.00811]
Lithium | Li | 6.94 | [6.938; 6.997]
Nitrogen | N | 14.007 | [14.00643; 14.00728]
Oxygen | O | 15.999 | [15.99903; 15.99971]
Silicon | Si | 28.085 | [28.084; 28.086]
Sulfur | S | 32.06 | [32.059; 32.076]
Thallium | Tl | 204.38 | [204.382; 204.385]

In the study of nuclear reactions, however, one must be concerned about isotopic weights. This is discussed further in the section on Nuclear Chemistry.

SI Definition of the Mole

The SI definition of the mole also depends on the isotope \({}_{\text{6}}^{\text{12}}\text{C}\) and can now be stated. One mole is defined as the amount of substance of a system which contains as many elementary entities as there are atoms in exactly 0.012 kg of \({}_{\text{6}}^{\text{12}}\text{C}\). The elementary entities may be atoms, molecules, ions, electrons, or other microscopic particles. This definition of the mole makes the mass of 1 mole of an element in grams numerically equal to the average mass of its atoms in amu.

This official definition of the mole makes possible a more accurate determination of the Avogadro constant than was reported earlier. The currently accepted value is \(N_A = 6.02214179 \times 10^{23}\ \text{mol}^{-1}\). This is accurate to 0.00000001 percent and contains five more significant figures than \(6.022 \times 10^{23}\ \text{mol}^{-1}\), the number used to define the mole previously. It is very seldom, however, that more than four significant digits are needed in the Avogadro constant; the value \(6.022 \times 10^{23}\ \text{mol}^{-1}\) will certainly suffice for most calculations.
How to Design a Linear Cover Time Random Walk on a Finite Graph

Abstract

A random walk on a finite graph G = (V, E) is a random token circulation on the vertices of G: a token on a vertex in V moves to one of its adjacent vertices according to a transition probability matrix P. It is known that both the hitting time and the cover time of the standard random walk, in which the token moves to an adjacent vertex chosen uniformly at random, are bounded by O(|V|^3). This estimate is tight in a sense, that is, there exist graphs for which the hitting and cover times of the standard random walk are \({\it \Omega}(|V|^3)\). Thus the following questions naturally arise: is it possible to speed up a random walk, that is, to design a transition probability for G that achieves a faster cover time? Or, how large (or small) is the lower bound on the cover time of random walks on G? In this paper, we investigate how we can/cannot design a faster random walk in terms of the cover time. We give necessary conditions for a graph G to have a linear cover time random walk, i.e., a random walk on G whose cover time is O(|V|). We also present a class of graphs that have a linear cover time. As a byproduct, we obtain the lower bound \({\it \Omega}(|V| \log |V|)\) on the cover time of any random walk on trees.
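To make the cover-time notion concrete, here is a small simulation sketch (my own illustration, not from the paper): it estimates the expected cover time of the standard random walk on a path and on a complete graph with the same number of vertices. The \({\it \Omega}(|V|^3)\) worst case mentioned in the abstract is attained by graphs such as the lollipop graph, not simulated here.

import random

def cover_time(adj, start=0, trials=200):
    # Estimate the expected cover time of the standard random walk:
    # from each vertex, move to a uniformly random neighbour until
    # every vertex has been visited at least once.
    n = len(adj)
    total = 0
    for _ in range(trials):
        visited = {start}
        u = start
        steps = 0
        while len(visited) < n:
            u = random.choice(adj[u])
            visited.add(u)
            steps += 1
        total += steps
    return total / trials

n = 30
# Path P_n: cover time grows like n^2.
path = [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]
# Complete graph K_n: cover time grows like n log n (coupon collector).
complete = [[j for j in range(n) if j != i] for i in range(n)]

print("path    :", cover_time(path))
print("complete:", cover_time(complete))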