What works

If you nest the definition of the fixpoint on lists inside the definition of the fixpoint on trees, the result is well-typed. This is a general principle when you have nested recursion in an inductive type, i.e. when the recursion goes through a constructor like list.

Fixpoint size (t : LTree) : nat :=
  let size_l := (fix size_l (l : list LTree) : nat :=
    match l with
    | nil => 0
    | h::r => size h + size_l r
    end) in
  match t with Node l => 1 + size_l l end.

Or, if you prefer to write this more tersely:

Fixpoint size (t : LTree) : nat :=
  match t with Node l =>
    1 + (fix size_l (l : list LTree) : nat :=
           match l with
           | nil => 0
           | h::r => size h + size_l r
           end) l
  end.

(I have no idea who I heard it from first; this has certainly been discovered independently many times.)

A general recursion predicate

More generally, you can define the “proper” induction principle on LTree manually. The automatically generated induction principle LTree_rect omits the hypothesis on the list, because the induction principle generator only understands non-nested strictly positive occurrences of the inductive type.

LTree_rect =
  fun (P : LTree -> Type) (f : forall l : list LTree, P (Node l)) (l : LTree) =>
  match l as l0 return (P l0) with
  | Node x => f x
  end
  : forall P : LTree -> Type,
    (forall l : list LTree, P (Node l)) -> forall l : LTree, P l

Let's add the induction hypothesis on lists. To fulfill it in the recursive call, we call the list induction principle and pass it the tree induction principle applied to the smaller tree inside the list.

Fixpoint LTree_rect_nest (P : LTree -> Type) (Q : list LTree -> Type)
                         (f : forall l, Q l -> P (Node l)) (g : Q nil)
                         (h : forall t l, P t -> Q l -> Q (cons t l))
                         (t : LTree) :=
  match t as t0 return (P t0) with
  | Node l => f l (list_rect Q g (fun u r => h u r (LTree_rect_nest P Q f g h u)) l)
  end.

Why

The answer to “why” lies in the precise rules for accepting recursive functions.
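For readers more comfortable outside Coq, the same nested-recursion shape (a tree whose children form a list) can be sketched in a mainstream language; here the inner recursion over the list hides in the generator passed to `sum`. This is my illustration, not part of the Coq development, and the names mirror the Coq code only loosely:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A rose tree: every node carries a list of subtrees, like LTree."""
    children: List["Node"] = field(default_factory=list)

def size(t: Node) -> int:
    """1 for this node plus the sizes of all subtrees,
    mirroring the nested size/size_l fixpoints in the Coq code."""
    return 1 + sum(size(c) for c in t.children)

print(size(Node([Node(), Node([Node()])])))  # 4: root, two children, one grandchild
```

In Python the termination argument is irrelevant, which is exactly what makes the Coq version interesting: the checker must see that the inner list recursion only ever reaches subtrees of the original argument.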
These rules are necessarily subtle, because there is a delicate balance between allowing complex cases (such as this one, with nested recursion in the datatype) and unsoundness. The Coq reference manual introduces the language (the calculus of inductive constructions, which is the proof language of Coq), mostly with formally precise definitions, but if you want the exact rules regarding induction and coinduction you'll need to go to the research papers, on this topic Eduardo Giménez's [1]. Starting with the Coq manual, in the notation of the Fix rule, we have the fixpoint definition $\mathsf{Fix} f_i \, \{ f_1 : A_1 := t_1 \; ; \;\; f_2 : A_2 := t_2 \}$ where: $$\begin{align*} \Gamma_1 &= (x : \mathtt{LTree}) & A_1 &= \mathrm{nat} & t_1 &= \mathsf{case}(x, \mathtt{LTree}, \lambda y. g_1 (f_2 y)) \\ \Gamma_2 &= (l : \mathtt{list}\;\mathtt{LTree}) & A_2 &= \mathrm{nat} & t_2 &= \mathsf{case}(l, \mathtt{list \: LTree}, \lambda h \: r. g_2 (f_1 h) (f_2 r)) \\\end{align*}$$ In order for the fixpoint definition to be accepted, “if $f_j$ occurs [in $t_i$] then … the argument should be syntactically recognized as structurally smaller than [the argument passed to $f_i$]” (simplifying, since the functions here take a single argument). Here, we need to check:

$i=1$, $j=2$: l must be structurally smaller than t in size, ok.
$i=2$, $j=1$: h must be structurally smaller than l in size_l, looks ok but isn't!
$i=2$, $j=2$: r must be structurally smaller than l in size_l, ok.

The reason why h is not recognized as structurally smaller than l by the Coq checker is not clear to me. As far as I understand from discussions on the Coq-Club list [1] [2], this is a restriction in the checker which could in principle be lifted, but only very carefully, to avoid introducing an inconsistency.

References

Cocorico, the nonterminating Coq wiki: Mutual Induction
Coq-Club mailing list:
The Coq Development Team. The Coq Proof Assistant: Reference Manual. Version 8.3 (2010). [web] ch. 4.
Eduardo Giménez. Codifying guarded definitions with recursive schemes. In Types'94: Types for Proofs and Programs, LNCS 996. Springer-Verlag, 1994. doi:10.1007/3-540-60579-7_3 [Springer] Eduardo Giménez. Structural Recursive Definitions in Type Theory. In ICALP'98: Proceedings of the 25th International Colloquium on Automata, Languages and Programming. Springer-Verlag, 1998. [PDF]
Everything on a computer relies in some way on algorithms, and every algorithm takes time and space to execute. The complexity of a piece of code is an estimate of this execution time or space. The lower the complexity, the better the execution. Complexity, expressed in Big O notation, lets us compare algorithms without implementing or running them. It helps us design efficient algorithms, and that is why tech interviews around the world ask about the running complexity of a program. Complexity is an estimate of asymptotic behavior: a measure of how an algorithm scales, not a measurement of specific performance. By the end you should be able to understand how algorithm performance is calculated, estimate the time or space requirements of an algorithm, and identify the complexity of a function. Keep in mind that Big O is an asymptotic measure and remains a fuzzy lens into performance.

What is next? Here is a general picture of the standard running times used when analyzing an algorithm; it is a great picture to keep in mind each time we write some code. The graph plots the number of operations performed by an algorithm against the number of input elements, and shows how quickly its complexity may penalize an algorithm. The complexity that can be achieved depends on the problem. Let us look at the ordinary running times and some corresponding algorithm implementations.

Constant - O(1)

Constant time means the running time is constant: the operation takes the same time whatever the input size is! This covers most of the basic operations you have in any programming language (e.g. assignment, arithmetic, comparisons, accessing an array element, swapping values...).

auto a = -1;                      // O(1)
auto b = 1 + 3 * 4 + (6 / 5);     // O(1)
swap(a, b);                       // O(1)
if (a < b) auto c = new Object(); // O(1)
auto array = {1, 2, 3, 4, 6};     // O(1)
auto i = array[4];                // O(1)
...
Logarithmic - O(log n)

These are the fastest algorithms whose complexity depends on the input size. The best-known and easiest to understand algorithm with this complexity is binary search, whose code is shown below.

Index BinarySearch(Iterator begin, Iterator end, const T key)
{
  auto index = -1;
  auto middle = begin + distance(begin, end) / 2;

  // While there are still elements between the two iterators and nothing has been found yet
  while (begin < end && index < 0)
  {
    if (IsEqual(key, *middle))   // Object found: retrieve index
      index = position(middle);
    else if (key > *middle)      // Search key within upper sequence
      begin = middle + 1;
    else                         // Search key within lower sequence
      end = middle;

    middle = begin + distance(begin, end) / 2;
  }

  return index;
}

Commonly, divide-and-conquer algorithms result in a logarithmic complexity.

Linear - O(n)

The running time increases at most linearly with the size of the input: if the input size is n, the algorithm performs on the order of n operations. Linear time is the best possible time complexity in situations where the algorithm has to read its entire input sequentially. Consider the example where we have to compute the sum of gains and losses (total gain). We will certainly have to go through each value at least once:

T Gain(Iterator begin, Iterator end)
{
  auto gain = T(0);
  for (auto it = begin; it != end; ++it)
    gain += *it;
  return gain;
}

If you have a single for loop over simple data, it most probably results in O(n) complexity.

Quasilinear - O(n log n)

Algorithms with an O(n log n) running time are slightly slower than those in O(n). This is, for instance, the best time complexity achievable by comparison-based sorting algorithms. In many cases, the n log n complexity is simply the result of performing a logarithmic operation n times, or vice versa.
As an example, the classic merge sort algorithm:

void MergeSort(Iterator begin, Iterator end)
{
  const auto size = distance(begin, end);
  if (size < 2)
    return;

  auto pivot = begin + (size / 2);

  // Recursively break the range into two pieces: O(log n) levels of recursion
  MergeSort(begin, pivot);
  MergeSort(pivot, end);

  // Merge the two pieces: O(n) work per level
  Merge(begin, pivot, end);
}

Such algorithms are often the result of optimizing an O(n²) algorithm.

Quadratic - O(n²)

This is the kind of expensive complexity encountered regularly. These algorithms often arise when we want to compare each element with all of the others. Bubble sort, one of the first algorithms generally studied in class, illustrates this running time perfectly. Here is its most naive version:

void BubbleSort(Iterator begin, Iterator end)
{
  // For each pair of adjacent elements: bubble the greater one up
  for (auto it = begin; it != end; ++it)
    for (auto itB = begin; itB + 1 != end; ++itB)
      if (Compare(itB, itB + 1))
        swap(itB, itB + 1);
}

If you have a double for loop over simple data, it most probably results in O(n²) complexity.

Exponential - O(2ⁿ)

Here begin the kinds of algorithms that are truly expensive. Usually n² complexity is acceptable, while 2ⁿ is almost always unacceptable. The naive recursive Fibonacci implementation is the best-known example:

int Fibonacci(int n)
{
  if (n <= 1)
    return 1;
  return Fibonacci(n - 1) + Fibonacci(n - 2);
}

To see how quickly the running time climbs:
- 5 elements --> about 32 operations
- 10 elements --> about 1024 operations
- 100 elements --> about 1.27 × 10³⁰ operations!

Still, for some problems we cannot achieve a better complexity. This is for instance the case for the traveling salesman problem, whose best known exact algorithms reach this kind of complexity thanks to dynamic programming optimizations.

Factorial - O(n!)

It's the slowest of them all and is considered impracticable. We mostly shouldn't have to deal with such algorithms.
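To see the exponential blow-up concretely, here is a small sketch (my addition, not from the original article) that counts how many calls the naive Fibonacci recursion makes; the count itself grows exponentially in n:

```python
def fibonacci(n, counter):
    """Naive recursive Fibonacci; counter[0] tracks the number of calls made."""
    counter[0] += 1
    if n <= 1:
        return 1
    return fibonacci(n - 1, counter) + fibonacci(n - 2, counter)

counter = [0]
fibonacci(10, counter)
print(counter[0])  # 177 calls for n = 10; the count roughly doubles for each +1 in n
```

Caching previously computed values (memoization) collapses this to O(n) calls, which is why dynamic programming rescues many naively exponential recurrences.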
This is for instance the case for the traveling salesman problem solved via brute-force search: it tries all permutations and checks which one is cheapest.

To keep this short, we simply list some useful mathematical identities as a reminder for those who want to go further with complexity calculations.

Math

Sums:

$$\sum_{i=0}^{n-1} C = nC \quad \text{(for a constant } C\text{)}$$

$$\sum_{i=0}^{n-1} n = n^2$$

$$\sum_{i=0}^{n-1} i = \frac{n(n + 1)}{2} - n = \frac{n(n - 1)}{2}$$

$$\sum_{i=0}^{n-1} (n+i) = \sum_{i=0}^{n-1} n + \sum_{i=0}^{n-1} i$$

Master Theorem for Divide and Conquer Recurrences:

$$T(n) = O(n) + 2T(n/2) \;\Rightarrow\; T(n) = O(n \log n)$$

$$T(n) = O(1) + T(n/2) \;\Rightarrow\; T(n) = O(\log n)$$

Asymptotics of the generalized harmonic number $H_{n,r}$:

$$H_{n,r} = \sum_{k=1}^{n} \frac{1}{k^r}$$

$$\text{For } r = 1: \quad H_{n,1} \in O(\log n)$$
Suppose $L$ has a regular parametrix. Assume $U$ is a distribution given in an open set $\Omega \subset \mathbb{R}^d$ and $L(U)=f$, with $f$ a $C^{\infty}$ function in $\Omega$; then $U$ agrees with a $C^{\infty}$ function on $\Omega$.

I learned this theorem from Stein's functional analysis, page 133, where the author proves it in the following steps.

$(1)$ Show that it suffices to prove that $U$ agrees with a $C^\infty$ function on any ball $B$ whose closure $B^* \subset \Omega$.

$(2)$ Show that for any ball $B$ with closure $B^* \subset \Omega$, we can find a function $f_B \in C^{\infty}$, depending on the ball $B$, such that $U$ agrees with $f_B$ on $B$.

After finishing the above two steps, the author states that the theorem is proved, but I do not see why.

My attempt: For step $(1)$, assume $U$ agrees with $g$. We need to prove that for all $\varphi \in D(\Omega)$ we have $$U(\varphi)=\int g(x)\varphi(x) \,dx$$ If this fails for some $\varphi$, we can find a function $\phi$ such that $L\phi=\varphi$. Then $$\int g(x)\varphi(x) \,dx \neq U(\varphi)=L(U)(\phi)=\int f(x) \phi(x) \,dx$$ WLOG we have $g(x)\varphi(x) \lt f(x) \phi(x)$ on some open ball $O$. Then $$\int g(x) \varphi(x) \chi_O(x) \,dx \lt \int f(x)\phi(x) \chi_O(x) \,dx$$ Notice that $\varphi(x)\chi_O(x)=L(U)(\phi(x)\chi_O(x))$. However, these two functions are not in $C^\infty$. So I want to show $$U(\varphi(x)\chi_O(x))=\int g(x) \varphi(x)\chi_O(x) \,dx$$ and $$L(U)(\varphi(x)\chi_O(x))=\int f(x) \varphi(x)\chi_O(x) \,dx$$

My questions: $(a)$ Can we prove step $(1)$ following the discussion above? And if we do not have the assumption $L(U)=f$, is $(1)$ still valid? $(b)$ In part $(2)$, for each ball $B$ we have a different function $f_B \in C^{\infty}(\Omega)$, so how do we use $(2)$ to show that $U$ agrees with a single function $f$?
Wikipedia claims that Laurent series cannot in general be multiplied. I am wondering why not? Suppose we have $f(z),g(z)$ analytic in the annulus $r<|z-a|<R$ (with $0\le r<R\le\infty$); then $$ f(z)=\sum_{n=-\infty}^\infty c_n(z-a)^n $$ is the Laurent series of $f(z)$ in the annulus, and $$ g(z)=\sum_{n=-\infty}^\infty d_n(z-a)^n $$ is the Laurent series of $g(z)$ in the annulus. We also know that the Laurent series converges absolutely and uniformly to the functions, so why can't we take the convolution of $f(z)$ and $g(z)$, which should be the Laurent series of $f(z)g(z)$?

If you know $f,g$ converge together on some nonempty annulus, then yes, you can multiply them (because of local absolute uniform convergence). The Wikipedia statement is really referring to the case of general Laurent series $f,g$ for which you don't know whether they converge at all. Then you may not be able to multiply, e.g., $f(z)=\sum_{n\geq 0}z^n$, $g(z)=\sum_{n\leq 0}z^n$.

If you have two Laurent series, both convergent at least for $a<|z|<b$, then you can form the product series, which then represents the product of the two individual functions (an example is treated here: Products of Laurent Series ). If, however, the two series have no common annulus of convergence you cannot hope for much. But there is a difference from the convolution multiplication of two power series $\sum_{j\geq0}a_j z^j$ and $\sum_{k\geq0} b_kz^k$: each individual coefficient $c_r$ of the product series $\sum_{r\geq0}c_r z^r$ can be obtained by a finite number of operations, while with the product of two Laurent series (with infinite ends on both sides) each coefficient of the product series is the sum of an infinite series of complex products.
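To make the coefficient comparison explicit (a sketch, with notation matching the series above): on a common annulus of convergence, absolute convergence justifies the Cauchy-product rearrangement

```latex
f(z)\,g(z) \;=\; \sum_{r=-\infty}^{\infty} e_r (z-a)^r,
\qquad
e_r \;=\; \sum_{k=-\infty}^{\infty} c_k\, d_{r-k},
```

and, as the answer notes, each $e_r$ is itself an infinite series. Contrast this with the power-series case, where only the finitely many terms $c_k d_{r-k}$ with $0 \le k \le r$ contribute to each coefficient.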
Let $\{ \xi _a \}_{a \in [0;1]}$ be a family of independent random variables on some probability space $(\Omega, \mathscr{F},P)$, indexed by a continuous parameter and each uniformly distributed on $[0;1]$. Let $u$ be a random variable uniformly distributed on $[0;1]$ and independent of $\{ \xi _a \}_{a \in [0;1]}$. For $\omega \in \Omega$, define the map $$ \alpha : \Omega \to \mathbb{R}, \qquad \alpha (\omega) = \xi_{u(\omega)} (\omega). $$ Is $\alpha$ a random variable? I think the answer is negative, since the family $\{ \xi _a \}_{a \in [0;1]}$ is uncountable. How could I prove this?
The Annals of Statistics Ann. Statist. Volume 11, Number 1 (1983), 267-274. Testing Whether New is Better than Used with Randomly Censored Data Abstract A life distribution $F$, with survival function $\bar{F} \equiv 1 - F$, is new better than used (NBU) if $\bar{F}(x + y) \leq \bar{F}(x)\bar{F}(y)$ for all $x, y \geq 0$. We propose a test of $H_0 : F$ is exponential, versus $H_1 : F$ is NBU, but not exponential, based on a randomly censored sample of size $n$ from $F$. Our test statistic is $J^c_n = \int \int \bar{F}_n(x + y) dF_n(x) dF_n(y)$, where $F_n$ is the Kaplan-Meier estimator. Under mild regularity on the amount of censoring, the asymptotic normality of $J^c_n$, suitably normalized, is established. Then using a consistent estimator of the null standard deviation of $n^{1/2}J^c_n$, an asymptotically exact test is obtained. We also study, using tests for the censored and uncensored models, the efficiency loss due to the presence of censoring. Article information Source Ann. Statist., Volume 11, Number 1 (1983), 267-274. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176346077 Digital Object Identifier doi:10.1214/aos/1176346077 Mathematical Reviews number (MathSciNet) MR684884 Zentralblatt MATH identifier 0504.62086 JSTOR links.jstor.org Subjects Primary: 62N05: Reliability and life testing [See also 90B25] Secondary: 62G10: Hypothesis testing Citation Chen, Yuan Yan; Hollander, Myles; Langberg, Naftali A. Testing Whether New is Better than Used with Randomly Censored Data. Ann. Statist. 11 (1983), no. 1, 267--274. doi:10.1214/aos/1176346077. https://projecteuclid.org/euclid.aos/1176346077 Corrections See Correction: Yuan Yan Chen, Myles Hollander, Naftali A. Langberg. Corrections: Testing Whether New is Better than Used with Randomly Censored Data. Ann. Statist., Volume 11, Number 4 (1983), 1267--1267.
Introduction to Binary concept

In the digital world we deal exclusively with "on" and "off" as our only two states. This makes a number system with only two digits a natural choice. In this topic we'll cover the binary number system: what it is and how to convert between it and our usual decimal numbers. Our usual day-to-day number system is called the "decimal" system. The name comes from the Latin "decimus" meaning "tenth", because each place can have 10 different values. When you write a decimal number each digit has a "place value". So the number "15" in decimal means: $$15 = 5\times 1 + 1 \times 10$$ If we created a new number system that only allowed each digit to be a 0 or a 1, we'd have the binary system. When we write a binary number we often follow it with a subscript "2" to indicate that it's binary; when we write a decimal number we follow it with a subscript "10" to show that it's decimal. In the binary case the number "101" would be: $$101_2 = 1\times 1 + 0\times 2 + 1\times 4$$ where the "place values" are increasing powers of two rather than powers of ten as in the decimal case. fact To convert a binary number to decimal, multiply each of the digits by its "place value" (\(2^n\) where \(n\) is the number of digits from the right, starting at 0) and sum them. fact The first place values (always counting from the rightmost digit) are: \(2^0 = 1\), \(2^1 = 2\), \(2^2 = 4\), \(2^3 = 8\), \(2^4 = 16\), \(2^5 = 32\), \(2^6 = 64\), \(2^7 = 128\), \(2^8 = 256\). You've likely seen these numbers plenty of times when dealing with technology, and you'll soon become very familiar with them as you learn to work with digital electronics. You may have seen the number 255 used frequently with digital things: 255 is the largest number that will fit into an 8 digit binary number (\(2^8 - 1 = 255\)). An 8 digit binary number is called a "byte" (with each digit called a "bit"), and it's historically the most common size of a storage unit in a digital system.
example Convert \(1011_2\) to decimal. The place values (starting from the rightmost digit) are: \(1, 2, 4, 8\). So our number is: $$1011_2 = 1\times 1 + 1\times 2 + 0\times 4 + 1\times 8$$ $$1011_2 = 11_{10}$$ example Convert \(00001011_2\) to decimal. You may notice that, reading from right to left, this is the same number as in the example above. Adding leading 0s to a binary number has no effect, just like adding leading 0s to a decimal number has no effect. Many times a binary number will be shown with 8 digits (or 16 or 32 or 64) by padding extra 0s on the left. So \(00001011_2 = 11_{10}\). Converting a binary number to decimal is fairly straightforward: we just multiply each 1 or 0 by its place value and add all the results together. Converting a decimal number to binary is a little more involved; there are more steps, but the process isn't complicated. example Say we have the number \(200_{10}\) and we want to turn it into a binary number. Figuring out whether we need a "2" or a "4" is tricky just by looking at it, but it turns out that if we always add the largest power of two we can, we won't be wrong. For instance, the largest power of two that is less than 200 is 128. We write down \(10000000_2\) as our start for the binary value; now we have to add 1s to make up the remaining \(200-128=72\). Now we do the same again: the largest power of two less than 72 is 64, so we add a 1 at the 64 place, \(11000000_2\), and we have to make up the other \(72-64 = 8\). Now 8 is a power of two, so we add a 1 at the 8 place: \(11001000_2 = 200_{10}\), and we're done. fact To convert a decimal number to binary, start with your binary number \(B = 0\) and your given decimal number \(D\). Find the largest power of two \(P \leq D\). Add a 1 to \(B\) in the place for \(P\) and set \(D = D-P\). Repeat until \(D = 0\).
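The "largest power of two" procedure above can be sketched in code (this is my illustration; `to_binary` is an invented name, not from the text):

```python
def to_binary(d: int) -> str:
    """Convert a decimal number to binary by repeatedly subtracting
    the largest power of two, as in the procedure above."""
    if d == 0:
        return "0"
    n = d.bit_length() - 1          # position of the highest place value
    bits = ["0"] * (n + 1)          # B starts out as all zeros
    while d > 0:
        p = d.bit_length() - 1      # largest power of two <= d is 2**p
        bits[n - p] = "1"           # add a 1 to B in the place for 2**p
        d -= 2 ** p                 # D = D - P
    return "".join(bits)

print(to_binary(200))  # 11001000, matching the worked example above
```

Each pass through the loop performs exactly one "find the largest power, set the bit, subtract" step from the fact box.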
example Convert \(216_{10}\) to binary. We start with \(B = 0\) and \(D = 216_{10}\). The largest power of two less than \(D\) is \(P = 128_{10} = 10000000_2\). So \(B = 10000000_2\) and \(D = 216_{10} - 128_{10} = 88_{10}\). Lather, rinse, repeat. \(P = 64_{10} = 1000000_2\) \(B = 11000000_2,\quad D = 88_{10} - 64_{10} = 24_{10}\) \(P = 16_{10} = 10000_2\) \(B = 11010000_2,\quad D = 24_{10}-16_{10} = 8_{10}\) \(P = 8_{10} = 1000_2,\quad B = 11011000_2,\quad D = 8_{10}-8_{10} = 0\) And we're done, so: \(216_{10} = 11011000_2\)

Now counting in binary takes a little getting used to, but you'll get the hang of it. To add 1 we start at the right: if that digit is a 0 we change it to a 1 and we're done. If that digit is a 1 we change it to a 0 and add 1 to the next digit to the left using the same system (if it's 0 make it a 1; if it's a 1 make it 0 and increment the next digit). fact The first 8 numbers in binary are:

Decimal  Binary
0        0
1        1
2        10
3        11
4        100
5        101
6        110
7        111

fact Another popular method of converting a decimal number to binary is called the SOAR table. SOAR stands for: Step, Operation, Answer, Remainder. The steps are: Divide your decimal number by 2. Write the integer part of the answer under the "Answer" column. Write the remainder under the "Remainder" column. Repeat step 1 with the integer part of your answer until \(A=0\). The binary number is the digits in the "Remainder" column read bottom-up. example Find \(29_{10}\) in binary using the SOAR table. We'll set up our table:

S    O                 A     R
1    \(\frac{29}{2}\)  14    1
2    \(\frac{14}{2}\)  7     0
3    \(\frac{7}{2}\)   3     1
4    \(\frac{3}{2}\)   1     1
5    \(\frac{1}{2}\)   0     1

Reading the Remainder column from the bottom up gives \(29_{10} = 11101_2\). practice problems
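The SOAR table is just repeated division by 2, collecting remainders; a sketch in code (my illustration; `soar_binary` is an invented name):

```python
def soar_binary(d: int) -> str:
    """Convert decimal to binary via the SOAR table: divide by 2,
    record the remainder, repeat until the answer reaches 0, then
    read the remainders from the bottom up."""
    if d == 0:
        return "0"
    remainders = []
    while d > 0:
        d, r = divmod(d, 2)               # the Answer and Remainder columns
        remainders.append(str(r))
    return "".join(reversed(remainders))  # read bottom-up

print(soar_binary(29))  # 11101, matching the SOAR table example
```

Note that this produces the bits in the opposite order from the "largest power of two" method: that method finds the leftmost bit first, while SOAR finds the rightmost bit first.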
I want to construct a specific kind of undirected graph $G=(V,E)$ with $|V|=n>2$. For convenience, label the vertices $v_1,v_2,\dots ,v_n\in V$, where $(v_i,v_j)\in E$ means there is an edge between vertices $v_i$ and $v_j$. The graph must have the following properties: $(v_1,v_2)\in E$ For $i\geq 3$, $(v_i,v_1)\in E\Rightarrow (v_i,v_2)\notin E$, and $(v_i,v_2)\in E\Rightarrow (v_i,v_1)\notin E$ For $i\neq j,(v_i,v_j) \notin E \Rightarrow \exists v_{k_1},v_{k_2}(k_1\neq k_2)$ with $(v_{k_1},v_i),(v_{k_2},v_i),(v_{k_1},v_j),(v_{k_2},v_j)\in E$ and $\forall l\neq k_1,k_2, (v_l,v_i) \notin E$ or $(v_l,v_j)\notin E$ But I could not construct one when $n$ is odd, for instance when $n=5$. If $n\geq 3$ is an odd number, is it possible to construct a graph meeting the above properties?
Introduction

For a completely general functional $F[\phi]$ your final equation is not correct. Let's pick a particular discretization of the $x$ axis $\mathcal D \equiv \{x_1,x_2,...,x_N\}$ where $x_{i+1}-x_i \equiv \Delta x=\frac{x_N-x_1}{N}$ is the distance between consecutive points. Obviously, the continuum limit corresponds to the limit $\Delta x \rightarrow 0$, or equivalently $N \rightarrow + \infty$, which we're going to take at the end of our manipulations. Under this discretization scheme $\mathcal D$, any function $\phi(x)$ simply corresponds to a piece-wise constant function $\tilde \phi_{\mathcal D}$ with value $\phi(x_i)$ at $x_i$, which again approaches the actual function $\phi$ as $\Delta x \rightarrow 0$. Since this is a piece-wise constant function, the set of its values at all points of $\mathcal D$, i.e. $(\phi({x_1}),...,\phi({x_N}))$, uniquely specifies it. This implies that the outcome of any functional $F$ acting on this discretized function is also uniquely given by the set $(\phi({x_1}),...,\phi({x_N}))$. In other words, I can write the outcome of this operation in terms of some multivariate function $f$ as $F[\tilde\phi_{\mathcal D}] \equiv f(\phi(x_1),...,\phi(x_N))$. To keep my notation clean, I'll use the shorthand $\phi_i := \phi(x_i)$ from here on out.
The answer to the question

Using this discretization concept, the functional integral in question can be defined as:$$\int \mathcal D\phi \ e^{-F[\phi]}=\lim_{N \rightarrow +\infty} \int_{\mathbb R^N} \prod_{i=1}^N d\phi_i \ e^{-f(\phi_1,...,\phi_N)} \qquad (*)$$ Now if you could write $f(\phi_1,...,\phi_N)$ as a sum of single-variable functions in the form $f(\phi_1,...,\phi_N) \equiv \sum_{i=1}^N f_i(\phi_i)$, you would have $e^{-f(\phi_1,...\phi_N)} = \prod _{i=1}^Ne^{-f_i(\phi_i)}$, leading to your last equation: $$\int \mathcal D \phi \ e^{-F[\phi]} = \lim_{N \rightarrow + \infty} \prod_{i=1}^N \int_{\mathbb R} d\phi_i \ e^{-f_i(\phi_i)}$$ However, for a general functional $F$, and its corresponding discretized multivariable function $f$, $e^{-f(\phi_1,...,\phi_N)}$ cannot necessarily be written as a product of exponentials of single variables. So in general:$$e^{-f(\phi_1,...\phi_N)} \neq \prod _{i=1}^Ne^{-f_i(\phi_i)}$$ meaning that your final equation no longer holds.
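As a concrete illustration (my example, not from the original answer): take the simplest gradient functional and discretize the derivative as a finite difference,

```latex
F[\phi] = \frac{1}{2}\int dx\, \big(\partial_x \phi(x)\big)^2
\quad\longrightarrow\quad
f(\phi_1,\ldots,\phi_N)
  = \frac{1}{2}\sum_{i=1}^{N-1} \Delta x
    \left(\frac{\phi_{i+1}-\phi_i}{\Delta x}\right)^2 .
```

Each term couples the neighboring variables $\phi_i$ and $\phi_{i+1}$, so $e^{-f}$ cannot be factored into a product of single-variable exponentials; any functional involving derivatives of $\phi$ fails the factorization in exactly this way.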
Rd-continuous

Revision as of 23:25, 4 January 2017

Let $\mathbb{T}$ be a time scale and $f \colon \mathbb{T} \rightarrow \mathbb{R}$ be a regulated function. We say that $f$ is rd-continuous if for any right dense point $t \in \mathbb{T}$, $f(t) = \displaystyle\lim_{\xi \rightarrow t^+} f(\xi)$. In other words, $f$ is rd-continuous if it is regulated and continuous at right dense points. The notation $C_{\mathrm{rd}}(\mathbb{T},X)$ denotes the set of rd-continuous functions $g \colon \mathbb{T} \rightarrow X$. We denote the set of rd-continuous functions that are $n$-times delta differentiable by the notation $C_{\mathrm{rd}}^n(\mathbb{T},X)$.

References

* Martin Bohner and Allan Peterson, Dynamic Equations on Time Scales (2001): Definition $1.58$
I am working on a simple pendulum problem. The $y$ direction is vertical and the $x$ direction is horizontal. Displacement in the $x$ direction is taken to be much less than the length of the string, $L$. One of the small angle approximations given for this problem was $${\theta \over 2} \approx {y \over x}, $$ where $y$ and $x$ represent the coordinates of the pendulum. Why is this true? One of the small angle approximations I know is $$\tan \theta \approx \theta, $$ giving $$\frac{x}{y}\approx\theta.$$ Where did the factor of two in the first equation come from?
I have a quadratic form $\mathbf{x}^T A \mathbf{x}$ (where $A\in \mathbb{R}^{n\times n}$ is a symmetric matrix and $\mathbf{x}\in \mathbb{R}^n$) that I want to minimize subject to the normalization constraint $\mathbf{x}^T\mathbf{x}=1$. Because $\mathbf{A}$ is the adjacency matrix of an undirected graph, I know that it is symmetric and real and also sparse. What is an appropriate memory-conservative algorithm for solving this kind of problem? Is it good to solve the eigensystem $\mathbf{A}\mathbf{x}=\lambda \mathbf{x}$ and then take as solution the smallest nonzero eigenvalue and its related eigenvector? If this is the way to proceed, how is it possible to get the smallest eigenvalue with subspace iteration?
By Dyutiman Das

This article is the final project submitted by the author as a part of his coursework in the Executive Programme in Algorithmic Trading (EPAT®) at QuantInsti. Do check our Projects page and have a look at what our students are building.

Introduction

Some stocks move in tandem because the same market events affect their prices. However, idiosyncratic noise might make them temporarily deviate from the usual pattern, and a trader can take advantage of this apparent deviation with the expectation that the stocks will eventually return to their long-term relationship. Two stocks with such a relationship form a “pair”. We have talked about the statistics behind pairs trading in a previous article.

This article describes a trading strategy based on such stock pairs. The rest of the article is organized as follows: we talk about the basics of trading an individual pair and the overall strategy that chooses which pairs to trade, and present some preliminary results. At the end, we describe possible strategies for improving the results.

Pair trading

Let us consider two stocks, x and y, such that

y = \alpha + \beta x + e

where \alpha and \beta are constants and e is white noise. The parameters {\alpha, \beta} can be obtained from a linear regression of the prices of the two stocks, with the resulting spread

e_{t} = y_{t} - (\alpha + \beta x_{t})

Let the standard deviation of this spread be \sigma_{t}. The z-score of this spread is

z_{t} = e_{t}/\sigma_{t}

Trading Strategy

The trading strategy is that when the z-score is above a threshold, say 2, the spread can be shorted, i.e. sell 1 unit of y and buy \beta units of x. We expect that the relationship between x and y will hold in the future, so eventually the z-score will come down to zero and perhaps even go negative, and then the position can be closed. By selling the spread when it is high and closing out the position when it is low, the strategy hopes to be statistically profitable.
Conversely, if the z-score is below a lower threshold, say -2, the strategy goes long the spread, i.e. buy 1 unit of y and sell \beta units of x; when the z-score rises to zero or above, the position can be closed, realizing a profit.

There are a couple of issues which make this simple strategy difficult to implement in practice:

The constants \alpha and \beta are not actually constant and vary over time. They are not market observables and hence have to be estimated, with some estimates being more profitable than others.

The long-term relationship can break down: the spread can move from one equilibrium to another, such that the changing {\alpha,\beta} gives an “open short” signal and the spread keeps rising to a new equilibrium, so that when the “close long” signal comes the spread is above the entry value, resulting in a loss.

Determining Parameters

The parameters {\alpha, \beta} can be estimated from the intercept and slope of a linear regression of the prices of y against the prices of x. Note that linear regression is not reversible, i.e. the parameters are not the inverse of regressing x against y, so the pair (x,y) is not the same as (y,x). While most authors use ordinary least squares regression, some use total least squares, since they assume that the prices have some intraday noise as well. However, the main issue with this approach is that we have to pick an arbitrary lookback window.

In this paper, we have used a Kalman filter, which is related to an exponential moving average. This is an adaptive filter which updates itself iteratively and produces \alpha, \beta, e and \sigma simultaneously. We use the python package pykalman, which has an EM method that calibrates the covariance matrices over the training period.

Another question that comes up is whether to regress prices or returns. The latter strategy requires holding equal dollar amounts in both long and short positions, i.e.
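A minimal sketch of the spread and z-score computation described above, using a plain OLS fit over a lookback window instead of the article's Kalman filter (the function names and the toy price series are my own illustrations, not from the article):

```python
def ols_fit(x, y):
    """Ordinary least squares: fit y = alpha + beta * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    return alpha, beta

def spread_zscores(x, y):
    """Spread e_t = y_t - (alpha + beta * x_t), normalized by its std dev."""
    alpha, beta = ols_fit(x, y)
    e = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]
    sigma = (sum(ei ** 2 for ei in e) / len(e)) ** 0.5
    return [ei / sigma for ei in e]

# Toy prices roughly obeying y = 2 + 3x plus noise
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [5.1, 7.9, 11.2, 13.8, 17.0]
print(spread_zscores(x, y))  # a short or long signal fires when |z| > threshold
```

A live implementation would refit (or Kalman-update) the parameters as new prices arrive, and trade when the latest z-score crosses the chosen thresholds (such as +2 / -2 above).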
the portfolio would have to be rebalanced every day, increasing transaction costs, slippage, and bid/ask spread. Hence we have chosen to use prices, which is justified in the next subsection.

Stability of the Long Term Relationship

The stability of the long-term relationship is determined by testing whether the pairs are co-integrated. Note that even if the pairs are not co-integrated outright, they might be for the proper choice of the leverage ratio. Once the parameters have been estimated as above, the spread time series e_{t} is tested for stationarity by the augmented Dickey-Fuller (ADF) test. In python, we obtain this from the adfuller function in the statsmodels module. The result gives the t-statistics for different confidence levels. We found that not many pairs were being chosen at the 1% confidence level, so we chose 10% as our threshold. One drawback is that to perform the ADF test we have to choose a lookback period, which reintroduces the parameter we avoided by using the Kalman filter.

Choosing Sectors and Stocks

The trading strategy deploys an initial amount of capital. To diversify the investment, five sectors are chosen: financials, biotechnology, automotive, etc. A training period is chosen, and the capital allocated to each sector is decided based on a minimum-variance portfolio approach. Apart from the initial investment, each sector is traded independently, and hence the discussion below is limited to a single sector, namely financials.

Within the financial sector, we choose about n = 47 names based on large market capitalization. We are looking for stocks with high liquidity, a small bid/ask spread, the ability to short the stock, etc. Once the stock universe is defined we can form n(n-1) pairs, since as mentioned above (x,y) is not the same as (y,x). In our financial portfolio, we would like to maintain up to five pairs at any given time.
Choosing Pairs

On any day that we want to enter into a position (for example the starting date) we run a screen on all the n(n-1) pairs and select the top pair(s) according to criteria, some of which are discussed next. For each pair, the signal is obtained from the Kalman filter and we check whether |e| > nz \sigma, where nz is the z-score threshold to be optimized. This ensures that the pair has an entry point. We perform this test first since it is inexpensive. If the pair has an entry point, we then choose a lookback period and perform the ADF test. The main goal of this procedure is not only to determine the list of pairs which meet the standards but also to rank them by metrics related to their expected profitability. Once the ranking is done we enter into the positions corresponding to the top pairs until we have a total of five pairs in our portfolio.

Results

In the following, we calibrated the Kalman filter over Cal11 and then used the calibrated parameters to trade in Cal12, keeping only one stock pair in the portfolio. In the tests shown we kept the maximum allowed drawdown per trade at 9%, but allowed a maximum loss of 6% in one strategy and only 1% in the other. As we see from the above, the performance improves as the maximum allowed loss per trade is tightened. The Sharpe ratio (assuming zero index) was 0.64 and 0.81 respectively, while the total P&L was 9.14% and 14%. The thresholds were chosen based on simulation over the training period.

Future Work

- Develop better screening criteria to identify the pairs with the best potential. I already have several ideas and this will be ongoing research.
- Optimize the lookback window and the buy/sell z-score thresholds.
- Gather more detailed statistics in the training period. At present, I am gathering statistics of only the top 5 (based on my selection criteria). However, in future, I should record statistics of all pairs that pass.
This will indicate which trades are most profitable. In the training period, I am measuring profitability by the total P&L of the trade, from entry until the exit signal is reached. However, I should also record the maximum profit, so that I can determine an earlier exit threshold.
- Run the simulation for several years, i.e. calibrate on one year and then test on the next. This will generate several years' worth of out-of-sample tests.
- Optimize the length of the training period and how frequently the Kalman filter has to be recalibrated.
- Expand the methodology to other sectors beyond financials.
- Explore filters other than the Kalman filter.

Next Steps

Note: The work presented in the article has been developed by the author, Mr. Dyutiman Das. The underlying codes which form the basis for this article are not being shared with the readers. For readers who are interested in further reading on implementing pairs trading using the Kalman filter, please find the article below. Link: Statistical Arbitrage Using the Kalman Filter by Jonathan Kinlay
By a standard technique of inductively killing everything relevant (in this case, decreasing homeomorphisms between uncountable $G_\delta$-subsets of the real line) it is possible to prove the following fact. Theorem (CH). Under CH the real line contains an uncountable subset $X$ admitting no strictly decreasing function $f:Z\to X$ defined on an uncountable subset $Z$ of $X$. On the other hand, a known PFA result of Baumgartner (on the order-isomorphism of any two $\aleph_1$-dense subsets of the real line) implies the following. Theorem (PFA). Under PFA, for every uncountable subset $X\subset\mathbb R$ there exists a strictly decreasing function $f:Z\to X$ defined on some uncountable subset $Z\subset X$. Question. Can this PFA theorem be proved under a weaker assumption such as OCA or MA$+\neg$CH?
I am trying to reproduce the results of Sofue's 2016 paper (https://arxiv.org/abs/1510.05752). I'm able to use least squares minimization to model the bulge and disk effectively, but I get into trouble with the halo. By simply minimizing the residuals of these equations: $$M_h(R)=4 \pi \rho_0h^3\left(\ln\left(1+X\right)-\frac{X}{1+X}\right)$$ $$V_h(R)=\sqrt{\frac {G M_h(R)}{R}}$$ I can match the data pretty well, except that the fitted density comes out far below the critical density, which is impossible (you can't have a halo that is less dense than the surrounding universe). Sofue provides some equations describing $M_{200}$, $R_{200}$ and $X_{200}$, but I can't connect the dots. How do you model a Navarro-Frenk-White (NFW) halo so that it matches the data but has a reasonable density? Update: Sofue has an $R_{MAX}$ value which limits the size of the halo scale radius, but even using this, the minimization always pushes the halo scale radius up against $R_{MAX}$. It appears that any attempt to simply fit a dark matter halo to a velocity curve will dilute the dark matter to an unreasonably low density.
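For what it's worth, one way to keep the density physical is to reparametrize the fit: instead of fitting $(\rho_0, h)$ freely, fit $(M_{200}, c)$ and derive $\rho_0$ from the requirement that the mean density inside $R_{200}$ is 200 times critical. A rough Python sketch (the unit system and the value of the critical density are my assumptions, not Sofue's):

```python
import numpy as np

G = 4.30091e-6      # gravitational constant in kpc (km/s)^2 / Msun
RHO_CRIT = 140.0    # rough critical density in Msun / kpc^3 (assumed, h ~ 0.7)

def rho0_from_c(c):
    """Central density implied by requiring the mean density inside R200
    to be exactly 200 * rho_crit: this is what keeps the halo physical."""
    return (200.0 / 3.0) * RHO_CRIT * c**3 / (np.log(1 + c) - c / (1 + c))

def v_nfw(R, M200, c):
    """NFW circular velocity (km/s) at radius R (kpc), parametrized by
    (M200, c) instead of (rho0, h) so the density cannot go unphysical."""
    R200 = (3.0 * M200 / (4.0 * np.pi * 200.0 * RHO_CRIT))**(1.0 / 3.0)
    h = R200 / c                        # NFW scale radius
    X = R / h
    M = 4.0 * np.pi * rho0_from_c(c) * h**3 * (np.log(1 + X) - X / (1 + X))
    return np.sqrt(G * M / R)
```

With this parametrization you can hand v_nfw to, e.g., scipy.optimize.curve_fit against the observed rotation curve, and the fitted halo automatically satisfies the mean-overdensity-of-200 condition behind the $M_{200}$/$R_{200}$ relations.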
In short, for this group 2 hydroxide crystal, the dissolution and solvation process is an exothermic equilibrium, so increasing the temperature drives the reaction in reverse, favouring crystallisation! However, it would be nice to rationalise some of these terms, so I will lay out a very brief overview of general solubility factors so you can understand the process in a little more detail. Solubility is a complex thermodynamic process that depends on many factors. It can be understood as the process of breaking up a crystal in a solvent and then forming a solvent cage around the solute (solvation). At heart it is a function of three interactions: solute-solute, solute-solvent, and solvent-solvent. Large solute-solute interactions hinder solubility and are associated with high lattice enthalpies (of crystalline solutes); these must be overcome in order to break up the solid and dissolve it. Solvent-solvent interactions must also be overcome in order to solvate the dissolved solute, so for a strongly self-interacting solvent the solubility is reduced. The formation of ordered solvent structures such as clathrate cages orders the solvent and therefore reduces the solvent entropy significantly; solvents with high degrees of freedom therefore hinder solubility. Solute entropy favours solvation; this reflects the higher solubility of flexible molecules: in the solid they are all locked up, while in solution their rotational and vibrational modes are populated. In addition, if the solute has multiple ionic forms then the solubility will be a function of pH too, so in practice pH must be held constant. The surface profile of the solid, as well as the extent of cavitation and porosity, is also a key factor (since these affect the number of solvent-solute interactions possible). Finally, the type of solvent in which you are trying to dissolve your crystal is also of fundamental importance!
The rule "like dissolves like" is best captured by the (semi-empirical) general solubility equation, \begin{equation}\log S =\frac 12-0.05 (T_m-T)-\log P\end{equation} A large partition coefficient $P$ (the ratio of the solute's solubility in organic solvent to that in aqueous solvent) reduces the aqueous solubility (mathematically, $\log S$ decreases as $\log P$ grows). To further complicate the situation, there are several definitions of solubility that we could take. In practice we choose the intrinsic aqueous solubility, mainly used for high throughput and speed in drug development. Solubility is really an experimentalist's tool for parametrising the Gibbs energy of the dissolution-solvation process, which in turn is usually modelled by complex thermodynamic cycles via gas-phase species or supercooled liquids. Modern research into solubility focuses on finding either molecular descriptors through random forest/QSPR or support vector models etc. (which are all just number-crunching techniques for finding correlations in large sets of data) or the fundamental physics and chemistry of the process. \begin{equation}\log S=\frac{-\Delta G_{sol}}{RT\ln 10}\end{equation} So we understand solubility as an equilibrium constant for the Gibbs change of the solvation process (whatever particular cycle you use to get that data). Now that we know something about solubility we can ask ourselves: under what conditions does increasing the temperature reduce solubility? In practice the temperature can affect the entropic term, but also the lattice enthalpy of the crystal, the heat given out by the solute-solvent interactions, etc. It is the balance of these processes that determines the solubility of something. In your case we are observing a calcium hydroxide crystal dissolving in aqueous solvent. As we increase the temperature the solubility decreases. There is an equilibrium between crystallisation and (dissolution + solvation).
Increasing the temperature shifts that equilibrium to favour crystallisation rather than, as we might crudely expect, dissolution. This is simply an enthalpic term: because the solvation process is exothermic, increasing the temperature drives the equilibrium in reverse, by Le Chatelier's principle. Hope that is a bit clearer, and that you now understand some of the ideas behind solubility. :)
I'm trying to understand what this text means in my textbook about distributed leader election algorithms, but I can't make any sense of it. Either they didn't explain what was meant, or I missed it somewhere. In this section, we show that the leader election algorithm of Section 3.3.2 is asymptotically optimal. That is, we show that any algorithm for electing a leader in an asynchronous ring sends at least $\Omega(n\log n)$ messages. The lower bound we prove is for uniform algorithms, namely, algorithms that do not know the size of the ring. We prove the lower bound for a special variant of the leader election problem, where the elected leader must be the processor with the maximum identifier in the ring; in addition, all the processors must know the identifier of the elected leader. The proof of the lower bound for the more general definition of the leader election problem follows by reduction. Assume we are given a uniform algorithm $A$ that solves the above variant of the leader election problem. We will show that there exists an admissible execution of $A$ in which $\Omega(n\log n)$ messages are sent. Intuitively, this is done by building a "wasteful" execution of the algorithm for rings of size $n/2$, in which many messages are sent. Then we "paste together" two different rings of size $n/2$ to form a ring of size $n$, in such a way that we can combine the wasteful executions of the smaller rings and force $\Theta(n)$ additional messages to be received. Although the preceding discussion referred to pasting together executions, we will actually work with schedules. The reason is that executions include configurations, which pin down the number of processors in the ring. We will want to apply the same sequence of events to different rings, with different numbers of processors. Before presenting the details of the lower bound proof, we first define schedules that can be "pasted together".
A schedule $\sigma$ of $A$ for a particular ring is open if there exists an edge $e$ of the ring such that in $\sigma$ no message is delivered over the edge $e$ in either direction; $e$ is an open edge of $\sigma$. There is lots more, but I don't know if I should type it all out if only for copyright reasons. I hope this is enough to help clarify my question and get an explanation.
Here's the example I had which inspired me to post the question in the first place: The game League of Legends was the most-played PC game, in number of hours played, in North America and Europe in 2012. There is a good chance that League of Legends is a part of many of your students' daily life, especially if you are teaching engineering calculus. It doesn'... On quizzes, homeworks, and tests, I repeatedly ask questions like this: Find three different functions that have derivative equal to $x^2 + x$. Forcing them to do antiderivatives and deal with the quantifier on the +C without staring at the notation helps some of them separate the +C from the voodoo magic. I do a similar thing in college algebra classes ... Draw a number line and label all the integers. Tell him that adding $x>0$ is moving $x$ units to the right and subtracting $x>0$ is moving $x$ units to the left. Tell him that adding $0$ is not moving at all. Tell him that adding $x<0$ is moving $-x$ units to the left and subtracting $x<0$ is moving $-x$ units to the right. As someone who teaches calculus to college students, I expect my students to have seen point-slope form. We just start using it (because it's the right way to talk about tangent lines and linearization) without teaching it, because we consider it part of the standard algebra curriculum, so students who haven't seen it are at a disadvantage. Further, ... Bad Optimization Problems: I thought that Jack M made an interesting comment about this question: There aren't any. There may be situations where it's possible to apply optimization to solve a problem you've encountered, but in none of these cases is it honestly worth the effort of solving the problem analytically. I optimize path lengths every day when I ... No, it is a bad idea to avoid indefinite integrals, the reason being simply that your students will encounter them elsewhere, and therefore need to be familiar with them. Calculus is a service course.
The purpose of the course is to make science and engineering majors fluent in the language of calculus as used in their fields. Rather than always using ... Again and again he finds $-4$ greater than $-3$. Ask him who is richer: he who has a smaller debt $($like $3$ rupees$)$, or he who has a bigger debt $($like $4$ rupees$)$, assuming both persons have no money, just debts. He has spent several years seeing $4$ greater than $3$. A debt of $4$ rupees is indeed bigger than one of $3$ rupees. But the one that ... You asked: "How do/would you explain why division by zero does not produce a result." Any such explanation that is not rooted in student understanding would be talking to ourselves, not to students. Therefore both meaning and student understanding are important. Otherwise, what's the point? So I have grounded my response there. Young students (... I have never quite understood why it was impressive and/or beautiful, and it always frustrates me when people claim that it is. Therefore, I would say "no, it is not good motivation", because beauty is subjective. On the other hand, if you explain to students that the formula is based on $e^{i\theta}=\cos\theta+i\sin\theta$ and this essentially allows you ... Have the students tell you that division by zero is a non-sequitur. This is possible at any age where division is understood at all. Teacher: If there are eight cookies and four children, how many cookies does each child get? Student: Uh, two. Teacher: Yes! This is a division problem. $\frac{8}{4} = 2$. Now, if there are 8 cookies shared by only two ... Apologies, this should be a comment on the answer provided by @Jasper Loy but I don't have enough rep on this site. I just wanted to add that in my experience, struggling students have an easier time grasping negative numbers when the number line is oriented vertically rather than horizontally. I think we as humans naturally make the 'up=greater, down=less'...
Point-slope form emphasizes the actual meaning of slope. Literally, $$y - b = m(x -a)$$ says: "The change in the outputs ($y-b$) is equal to the slope ($m$) times the change in the inputs ($x-a$)." Translating between a verbal statement like this and an equation is essential. Understanding slope is essential. Point-slope form of a line is essential. ... I go a step further than Thomas (see Henry Towsner's answer). In my view, $$ \int f(x) \ dx = \{ F(x) \mid F'(x)=f(x) \} $$ On a connected domain, it is true that $F'(x)=G'(x)$ implies $F(x)-G(x)=c$; hence, given an integrand which is continuous (or piecewise continuous, insert your favorite weakened set of functions here) we may write: $ \int f(x) \ dx = \{ ... In my experience, one of the problems with series is that you usually have two sequences when you investigate the series $\sum(a_n)$: the sequence $(a_n)$, and the sequence of partial sums $S_n=a_1+\ldots + a_n$. I noticed that trying to stress this distinction helps a lot. To the intuition, I like R. Péter: Playing with Infinity, the chocolate bar example on ... In my opinion, trig substitution is presented in a terrible fashion in every calculus book I have ever seen. "If you see $\sqrt{a^2 - x^2}$, substitute $x = a \sin \theta$, and then use such-and-such trig identity, blah, blah, blah..." Yet another unmotivated rule to memorize. I always present trig substitution as follows: If you see any algebraic ...
Even without explicitly introducing the language of "linear maps", "vectors", and so on, you can still develop matrices as a shorthand for such maps, thought of as exchange rates. Example: Machine A can make 3 sprogs and 2 sprakets a day. Machine B can make 1 sprog and 3 sprakets a day. We summarize this data in a table of values: $$\begin{bmatrix} 3 &... Isolines and isosurfaces: Isolines and isosurfaces (i.e., lines and surfaces of equal whatever) correspond to the graphs of implicit functions and are relevant in many sciences, e.g., isopotentials (physics), isobars and isotherms (meteorology). Probably the best-known example of this kind are topographical contour lines (lines of equal altitude, see image ... My thinking is that it is just so damn useful for students to be aware of these tricks. The examples/exercises should allow them to develop a sense of when and how it is helpful to simplify an expression in this way, BUT also when it is NOT necessary. Leading up to it by looking at fractions: should the students write a rational number in the form $3\frac17$ ... I'd say different proofs usually employ different techniques, which in turn might be applicable to different sets of other theorems. So the more proofs I know for one theorem, the higher the chances that I'll be able to adapt at least one of them to a similar (or maybe not so similar) theorem I'm trying to prove. Furthermore, seeing several techniques ... I have found it motivates to explain the determinant as computing a volume. One can work through and convince for $2 \times 2$ and $3 \times 3$ matrices, and perhaps only hint at the $n \times n$ generalization, where $|\det(M)|$ is the volume of the $n$-dimensional parallelepiped spanned by the column vectors of $M$. ... I like Markov chains and Google PageRank (which is essentially a special kind of Markov chain).
It doesn't take very long to explain and motivate Markov chains and to argue that the probability distribution at time $n$ is the $n$'th power of the transition matrix times the distribution at time $0$. You can then start talking about how to calculate powers ... As someone else teaching calculus and higher math to college students, I use point-slope form repeatedly: In the/a definition of derivative, I use point-slope form. I have students think about $y_1-y_2=m(x_1-x_2)$, rewritten as $f(x_1)-f(x_2)=m(x_1-x_2)$, rewritten as $f(x+\Delta x)-f(x)=m(x+\Delta x-x)$, rewritten with limits to describe the slope $m$ as the ... What is $\frac 1 a$? It is the unique (real) number such that $a\cdot \frac 1 a=1$. Does there exist a real number that multiplied by $0$ gives $1$? No. Why is this? Because $0\cdot b=0$, whatever $b$ is. This is about not being defined. Still... why is $\frac 1 0=\infty$ not so completely wrong? Because they can see that the smaller $a$ is, the ... A sketch of one idea; I think it's probably better spread over a couple of days. Day one: Start them counting, from zero, out loud to you. Write the numbers on the board as they go. Zero (0), one (1), two (2), ... , ten (10). Stop here. Prompt a discussion about what happened: how is the most recent number different from all of the previous numbers? I ... In third grade we taught division using repeated subtraction. To divide 6 by 2, subtract 2 until you get to 0: 6-2=4, 4-2=2, 2-2=0. It took 3 steps, so 6÷2=3. This can also be shown on a number line, where it takes 3 steps of 2 units to go from 6 to 0. Teaching the concept of division this way is just the inverse of what we have done for multiplication. ... Imagine a linear mapping $f: R^2 \to R^2, e_1 \mapsto (1.5, 0.5), e_2 \mapsto (0.5, 1.5)$. (As long as $R$ contains the numbers $1.5$ and $0.5$, it could be any ring. The real numbers serve as the most convenient example, however.) Can we "see" what the mapping does? Can we "see" what $f^5$ does?
Given a basis of the two eigenvectors, $(1,-1), (1,1)$ we ... I think it turns out that "perfect" numbers do not interact much with other parts of number theory. Some of these very old, elementary, very ad-hoc definitions of special classes of integers have proven (and will prove) to interact interestingly with other ideas, but some seem not to. It's not easy for a beginner to guess the significance or subtlety of one ...
The answer by Martin is good, but I still want to continue along the thought path I had in mind; I hope I can give a different physical basis for confirmation of the number. I'm sure there are better ways to do this, but I want to do it using only the information I have. I want to consider the transfer of some amount of mass ($m$ for now) from near the surface of the sphere to the inside of the sphere (fully integrating it). In both cases we can assess the energy difference between the spherical blob of mass $M+m$ and the state of the sphere of mass $M$ with the $m$ mass hovering just above the surface. So the two states under consideration are: State 1: $(M+m)$ big ball. State 2: $(M)$ ball next to $(m)$ ball. I have no problem assuming $m \ll M$. Now, I want to write expressions for both the gravitational binding energy and the surface binding energy of a ball. I'll do this for a generic sphere of mass $M$ and uniform density. $$E_g(M) = - \frac{3 G M^2}{5 R(M)}$$ $$E_s(M) = - 4 \pi R(M)^2 \sigma$$ I'm leaving these in this form because we'll all agree that, given the mass and the density, finding $R(M)$ isn't a problem. Now I want to write expressions for the difference in energy from state 2 to state 1. This is straightforward for the surface tension energy because the bodies are non-interacting. However, for the gravitational binding energy, there is also a binding energy between the large ball and the small ball that must be included. Keep in mind that state 1 is the lower energy state. $$\Delta E_s = E_s(M) + E_s(m) - E_s(M+m)$$ $$\Delta E_g = E_g(M) + E_g(m) - \frac{G M m}{R(M)} - E_g(M+m)$$ Now, obviously, the idea would be to set these equal, assume that $m$ is small, and then find $M$ as the solution of that equation. But that doesn't work! I think I have a major conceptual flaw in this approach, where the scaling of the surface area of the small $m$ blob just doesn't follow a scaling that works.
I didn't know what to do, so I just removed the $E_s(m)$ term, abandoning all logical reasoning behind my work. But when I did this and used Martin's values, I obtained the following: $$M=1.05 \times 10^6\ \mathrm{kg}$$ This was assuming $m=0.1\ \mathrm{kg}$. If I change that value it doesn't change $M$ very much, which is encouraging. This is a satisfying answer for me, because Martin's answer comes out to around 0.5 million kg.
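For concreteness, here is a short numerical sketch of the energy bookkeeping above. The material constants are placeholders (water-like surface tension and density), not Martin's values, so only the signs and rough scales are meaningful:

```python
import numpy as np

G = 6.674e-11     # m^3 kg^-1 s^-2
SIGMA = 0.0728    # surface tension, J/m^2 (water-like placeholder)
RHO = 1000.0      # density, kg/m^3 (placeholder)

def R(M):
    """Radius of a uniform-density ball of mass M."""
    return (3.0 * M / (4.0 * np.pi * RHO))**(1.0 / 3.0)

def E_g(M):
    """Gravitational binding energy of a uniform ball."""
    return -3.0 * G * M**2 / (5.0 * R(M))

def E_s(M):
    """Surface binding energy of a ball."""
    return -4.0 * np.pi * R(M)**2 * SIGMA

def dE_s(M, m):
    """Surface term from state 2 to state 1, with E_s(m) dropped as in the text."""
    return E_s(M) - E_s(M + m)

def dE_g(M, m):
    """Gravitational term, including the mutual binding energy G*M*m/R(M)."""
    return E_g(M) + E_g(m) - G * M * m / R(M) - E_g(M + m)
```

With these placeholder constants dE_s comes out positive while dE_g is negative and dominated by E_g(m), which is exactly the scaling mismatch complained about above; the crossover mass would then come from solving for equal magnitudes, e.g. with scipy.optimize.brentq.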
A series of event times is a type of point process. A good introduction to measuring correlations between point processes, as applied to neuronal spike trains, is given by Brillinger [1976]. One of the early, seminal works on point processes is that of Cox [1955]. The simplest measure of association between two temporal point processes (let's call them $A$ and $B$) is probably the association number, $n$. To calculate $n$, a window of half-width $h$ is defined around each time in series $A$. The individual association number, $c$, is then the number of events in series $B$ that fall within a given window, and $n$ is then defined as $$n(h) = \sum_{i=1}^{N} c_i$$ This is generally calculated for a range of time lags, $u$, so that we get $n(u,h=\mathrm{const})$ [see, e.g., equations 9 and 10, and figure 3, of Brillinger [1976]]. If series $A$ and $B$ are uncorrelated then the association number will fluctuate, as a function of lag, due to sampling variations, but will have a stable mean. If we normalize by $2hT$, where $T$ is the length of the interval from which our samples were drawn, then we get an estimate of the cross-product density; dividing further by $p_A p_B$ gives a ratio that should be 1 if the processes are independent. Correlations at different time lags can then be seen by inspecting this normalized cross-product density for departures from 1 (which is what is expected at large lags for most physical processes). Assessing the significance of these departures can be addressed in a number of different ways, but many assume that at least one of the processes is Poisson [Brillinger, 1976; Mulargia, 1992]. If those assumptions are met then 95% confidence intervals on the normalized cross-product density can be estimated by [Brillinger, 1976] $$1 \pm \frac{1.96}{\sqrt{2hT\,p_A p_B}}$$ where $p_A$ and $p_B$ are the mean intensities of series $A$ and $B$, given by $p_A = N/T$, where $N$ is the number of events in $A$ (and similarly for $B$). Excursions outside the C.I.
are therefore indicative of a significant association between the event sequences at certain lags. If neither series is Poisson then a bootstrapping approach can be used to estimate confidence intervals [Morley and Freeman, 2007]. When taking this approach it's important to understand the system, as naively resampling the series $A$ and $B$ may not preserve correlations in the spike trains without applying, say, a moving block bootstrap. The approach taken by Morley and Freeman was instead to resample from the individual association numbers. ... we see that n(u, h) is a summation of the N individual associations c$_i$ for given u, h. Using this set of individual associations, we can construct a new series, c*$_i$, by drawing with replacement a random selection of N individual associations. Summing these N randomly-sampled associations gives a bootstrap estimate of the association number for given u, h. Repeating this for every lag u, we construct a bootstrap estimate of the association number with lag, n*(u, h). Performing this bootstrapping procedure K times allows us to model the sampling variation in n(u, h). A further treatment of assessing confidence intervals using bootstrapping techniques is given by Niehof and Morley [2012], but the above should work for two series of neuronal spike trains (or similar simple systems). References: Brillinger, D. R. (1976), Measuring the association of point processes: A case history, Am. Math. Mon., 83(1), 16–22. Cox, D. R. (1955), Some statistical methods connected with series of events, J. R. Stat. Soc., Ser. B, 17(2), 129–164. Mulargia, F. (1992), Time association between series of geophysical events, Phys. Earth Planet. Inter., 72, 147–153. Morley, S. K., and M. P. Freeman (2007), On the association between northward turnings of the interplanetary magnetic field and substorm onsets, Geophys. Res. Lett., 34, L08104, doi:10.1029/2006GL028891. Niehof, J.T., and S.K. Morley (2012).
“Determining the Significance of Associations between Two Series of Discrete Events: Bootstrap Methods”. United States. doi:10.2172/1035497
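As a concrete illustration, the association number and the Morley and Freeman resampling scheme described above can be sketched in a few lines of Python (numpy only; the function names are mine):

```python
import numpy as np

def association_number(a, b, lags, h):
    """n(u, h): at each lag u, sum over events t_i of series A the number of
    series-B events falling inside the window [t_i + u - h, t_i + u + h]."""
    a = np.asarray(a, dtype=float)
    b = np.sort(np.asarray(b, dtype=float))
    n, c = [], []
    for u in lags:
        lo = np.searchsorted(b, a + u - h, side="left")
        hi = np.searchsorted(b, a + u + h, side="right")
        ci = hi - lo                  # individual association numbers c_i
        c.append(ci)
        n.append(int(ci.sum()))
    return np.array(n), c

def bootstrap_ci(ci, k=1000, q=(2.5, 97.5), rng=None):
    """Resample the N individual associations with replacement, sum, and
    repeat k times; return a percentile interval on n(u, h)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    sums = rng.choice(ci, size=(k, len(ci)), replace=True).sum(axis=1)
    return np.percentile(sums, q)
```

For example, with A = {0, 10, 20}, B = {0.5, 10.2, 25} and h = 1, the association number is 2 at zero lag (windows around 0 and 10 each capture one B event) and 1 at lag 5 (only the window around 25 scores).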
Changes Maths formatting

Solving the above four equations yields:

:<math>\cos\theta = 1 - \frac{I \omega^2}{gl(2m+M)}</math>

Plugging in for the angular velocity from the initial equations above yields:

:<math>\cos\theta = 1 - \frac{m^2 v^2 l}{Ig(2m+M)}</math>

Calculating the moment of inertia <math>I</math> now becomes necessary for a rod of length ''l'' and mass ''M'', with a small block of mass ''m'' at its end. The moment of inertia is defined by

::<math>I \ \stackrel{\mathrm{def}}{=}\ \sum_{i=1}^{N} m_{i} r_{i}^2\,\!</math>

where ''m'' is the mass at each (perpendicular) distance ''r'' from the axis of rotation. An ordinary rod of length ''l'' has the following moment of inertia relative to an axis of rotation at one end:

:<math>I = \frac{M}{3} l^2</math>

Thus the moment of inertia ''I'' for a rod of length ''l'' and mass ''M'', with a small block of mass ''m'' at its end, is simply:

:<math>I = \frac{M}{3} l^2 + m l^2</math>

which is:

:<math>I = \frac{1}{3} (3m+M) l^2</math>

Plugging this back into the unsolved equation above yields:

:<math>\cos\theta = 1 - \frac{3 m^2 v^2}{l g (2m+M)(3m+M)}</math>

If we complicate the problem further by assuming the small block began at rest on an incline of height ''h'', then applying conservation of energy to the moment in time just prior to its collision with the rod yields the following velocity of impact:

:<math>\frac{m v^2}{2} = mgh</math>

and hence

:<math>v^2 = 2gh</math>

and thus the solution is:

:<math>\cos\theta = 1 - \frac{6 m^2 h}{l(2m+M)(3m+M)}</math>

or
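As a quick numerical check of the final result, the formula can be evaluated directly (a sketch; symbols as in the derivation above):

```python
import math

def cos_theta(m, M, l, h):
    """Maximum swing angle of the rod-plus-block pendulum:
    cos(theta) = 1 - 6 m^2 h / (l (2m+M)(3m+M)).
    Note that g cancels once v^2 = 2gh is substituted."""
    return 1 - 6 * m**2 * h / (l * (2 * m + M) * (3 * m + M))

# e.g. m = 1, M = 3, l = 1, h = 1 gives cos(theta) = 1 - 6/30 = 0.8
theta_deg = math.degrees(math.acos(cos_theta(1, 3, 1, 1)))
```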
Under some probability space $(\Omega,\mathcal{F},\Bbb{P})$ equipped with the (augmentation of the) natural filtration ${\bf{F}}=(\mathcal{F}_t)_{t \geq 0}$ of a $\mathbb{P}$-Wiener process $(W_t)_{t\geq 0}$, consider the Itô process $$ X_t = X_0 + \int_0^t \mu(s) ds + \int_0^t \sigma(s) dW_s \tag{1} $$ for some sufficiently well-behaved functions $\mu$ and $\sigma$, such that the stochastic integration can be defined in the Itô sense. Define the integral$$Y_t = \int_0^t X_u du $$ From $(1)$ it follows that \begin{align}Y_t &= \int_0^t \left( X_0 + \int_0^u \mu(s) ds + \int_0^u \sigma(s) dW_s \right) du \\&= X_0 t + \int_0^t \int_0^u \mu(s) ds du + \int_0^t \int_0^u \sigma(s) dW_s du \end{align}Using (stochastic) Fubini theorem one can permute the integration order and write\begin{align}Y_t &= X_0 t + \int_0^t \int_s^t \mu(s) du ds + \int_0^t \int_s^t \sigma(s) du dW_s \\&= X_0 t + \int_0^t (t-s) \mu(s) ds + \int_0^t (t-s) \sigma(s) dW_s \\&= \left(X_0 + \int_0^t \mu(s) ds + \int_0^t \sigma(s) dW_s\right) t - \int_0^t s \mu(s) ds - \int_0^t s \sigma(s) dW_s \\&= X_t t - \underbrace{\int_0^t s \mu(s) ds}_{\text{classic integral}} - \underbrace{\int_0^t s \sigma(s) dW_s}_{\text{Itô integral}} \\\end{align} And one can now appeal to the usual "differential" definition (whether from standard calculus or Itô calculus) to write:\begin{align}dY_t &= \underbrace{X_t dt + t dX_t + 0}_{d(X_t t)\,\,\,\text{Itô's lemma}} - t \mu(t) dt -t \sigma(t) dW_t \\&= X_t dt + t dX_t - t \underbrace{(\mu(t) dt + \sigma(t) dW_t)}_{dX_t} \\&= X_t dt\end{align}Now as mentioned in the comments, because any smooth function $g(X_t)$ will also be an Itô process, you can repeat the reasoning with $\tilde{X}_t := g(X_t)$ to get, for your particular problem,$$ dY_t = \tilde{X}_t dt = g(X_t) dt $$ [Remark] Should $X_u = X(u) \to X(t,u)$ with an additional, explicit dependence on $t$ things can get more complicated. See this related question on math SE. 
[Edit] Just saw that this was discussed here as well.
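The identity derived above, $Y_t = X_t t - \int_0^t s \mu(s) ds - \int_0^t s \sigma(s) dW_s$, is easy to sanity-check numerically with an Euler discretization (constant $\mu$ and $\sigma$ chosen arbitrarily for the illustration):

```python
import numpy as np

# Euler-grid check of  Y_t = X_t*t - int_0^t s*mu ds - int_0^t s*sigma dW_s
rng = np.random.default_rng(0)
mu, sigma, x0, T, N = 0.1, 0.2, 1.0, 1.0, 100_000
dt = T / N
t = np.arange(N) * dt                      # left endpoints of the grid
dW = rng.normal(0.0, np.sqrt(dt), N)       # Brownian increments

X = x0 + np.concatenate([[0.0], np.cumsum(mu * dt + sigma * dW)])
Y_direct = np.sum(X[:-1] * dt)             # left Riemann sum for  int_0^T X_u du
Y_formula = X[-1] * T - np.sum(t * mu * dt) - np.sum(t * sigma * dW)
gap = abs(Y_direct - Y_formula)            # shrinks like O(dt) as the grid refines
```

On the discrete grid the two sides differ only by a term of order $(X_T - X_0)\,\Delta t$, so the gap vanishes under refinement, consistent with the Fubini argument.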
I have u-wind, v-wind and w-wind at 11 pressure levels (from 1000 hPa up to 100 hPa). I would like to calculate the column-averaged kinetic energy. I have the formula for this as: $$KE=\frac{1}{g(P_s-P_{top})} \int_{P_{top}}^{P_s}\frac 12\left(u^2+v^2+w^2\right)dp$$ My question is how to use this formula to compute the column-averaged kinetic energy. Can I do it simply as follows: $$KE=\frac{1}{9.8\,(1000-100)}\cdot\frac12\left[\sum_{P_{top}}^{P_s}\left(u^2+v^2+w^2\right)\right]$$ where the bracketed term is the sum of $u^2+v^2+w^2$ over all 11 pressure levels?
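Not an authoritative answer, but as a sketch: the plain sum over the 11 levels is missing the layer thickness $dp$, so the discrete version of the integral should weight each layer by its pressure thickness, e.g. with the trapezoidal rule:

```python
import numpy as np

G0 = 9.8  # m/s^2, as in the formula above

def column_avg_ke(u, v, w, p_hPa):
    """Column average of 0.5*(u^2 + v^2 + w^2), weighting each layer by its
    pressure thickness (trapezoidal rule) rather than a plain level sum."""
    p = np.asarray(p_hPa, dtype=float) * 100.0     # hPa -> Pa (cancels in the ratio)
    ke = 0.5 * (np.asarray(u, float)**2
                + np.asarray(v, float)**2
                + np.asarray(w, float)**2)
    order = np.argsort(p)                          # integrate with p increasing
    p, ke = p[order], ke[order]
    integral = np.sum(0.5 * (ke[1:] + ke[:-1]) * np.diff(p))
    return integral / (G0 * (p[-1] - p[0]))
```

A quick check: for a constant wind this reduces to $\frac12 |V|^2 / g$ regardless of the level spacing, which the plain (unweighted) sum does not reproduce.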
Mesh Analysis in AC Circuits concept Mesh analysis is a circuit analysis technique that turns the currents in a circuit into a series of simultaneous equations using Kirchhoff's Voltage Law. Mesh analysis allows you to find the current or voltage anywhere in the circuit by solving for the loop currents everywhere in the circuit. fact Mesh analysis only works on circuits called "planar" circuits, which means they can be drawn without any wires crossing over one another (unless they join at the crossing point). fact The steps to performing mesh analysis are: Find the impedance of all resistors, capacitors and inductors Replace all elements with their phasor representations Assign each loop in the circuit a current Write Kirchhoff's voltage law for each loop Solve the equations simultaneously to find all the currents example Use mesh analysis to analyse the following circuit and find the current through the capacitor: First we find the impedances of each of the elements. Since \(\omega = 2\pi\): \(Z_C = -j\frac{1}{\omega C} = -j159\Omega\) And the impedances of the resistors are, of course, unchanged. Now we replace all elements in the circuit with their phasor representations. The voltage sources are replaced with DC symbols just to avoid any confusion with changing voltage polarities during the analysis. Now we assign each of the loops in the circuit (that don't contain another loop) a "loop current". Next we write Kirchhoff's voltage law around our first loop with current \(I_1\). \(3\angle 0^\circ - 1000\angle 0^\circ I_1 - 159\angle -90^\circ (I_1-I_2) = 0\) The impedance of the capacitor was multiplied by \(I_1-I_2\) because the current through it is given by \(I_c = I_1-I_2\).
Now the second equation for the second loop: \(-1\angle 0^\circ - 159\angle -90^\circ (I_2-I_1) - 200\angle 0^\circ I_2 = 0\) If we solve these two equations simultaneously we'll find that: \(I_1 = 0.0025\angle -15.7^\circ\) \(I_2 = 0.0038\angle 119^\circ\) I won't solve them here (or anywhere else on this page) to avoid adding confusing detail. If you're not sure how to solve simultaneous equations, go and learn now; you're going to need it. Finally the current running through the capacitor is: \(I_c = I_1 - I_2 = 0.0058\angle -44^\circ\) \(I_c = 0.0058\cos(2\pi t - 44^\circ)\) Mesh analysis is a lot of work, but without it the job would be longer and the steps less clear. There are a few special cases we need to cover: what to do in the presence of current sources, especially when they sit between two loops. fact If there is a current source on a branch that is part of only one loop, the current in that loop is given by the current source. example Find the loop current \(I_1\) in the following circuit: The current source \(I = 5\)A only touches the loop with current \(I_1\), which means that \(I_1 = 5\)A. fact If there is a current source on a branch shared by two different loops we: Replace it with an open circuit and write Kirchhoff's voltage law around the two loops sharing the current source. Write an equation linking the currents of the two loops that share the source example Find the loop currents in the following circuit: Since the current source \(I = 2\angle 0^\circ\) is shared by both \(I_1\) and \(I_2\), we replace the current source with an open circuit and write KVL around both \(I_1\) and \(I_2\). \(-159\angle -90^\circ I_1 - 1k\angle 0^\circ I_1 - 200\angle 0^\circ I_2 - 2\angle 0^\circ = 0\) Then write \(I_2 - I_1 = 2\angle 0^\circ\) as our second equation. Solving both equations simultaneously gives us: \(I_1 = 0.329\angle 172.5^\circ\) \(I_2 = 1.67\angle 1.5^\circ\) practice problems
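The first worked example can be checked with a few lines of Python using built-in complex arithmetic. This is only a sketch: the 159 Ω capacitor impedance is rounded, so the results match the text to about two significant figures, and since the sign of the angles depends on the sign convention chosen for \(Z_C\), only the magnitudes are compared here.

```python
# Mesh equations from the worked example, with Zc = -159j:
#   3 - 1000*I1 - Zc*(I1 - I2) = 0
#  -1 - Zc*(I2 - I1) - 200*I2 = 0
Zc = -159j

# Rearranged into a 2x2 linear system and solved by Cramer's rule:
a11, a12, b1 = 1000 + Zc, -Zc, 3
a21, a22, b2 = Zc, -(Zc + 200), 1

det = a11 * a22 - a12 * a21
I1 = (b1 * a22 - a12 * b2) / det
I2 = (a11 * b2 - a21 * b1) / det
Ic = I1 - I2

print(abs(I1), abs(I2), abs(Ic))  # magnitudes: about 0.0025, 0.0038, 0.0058
```

Being able to verify a hand solution this way is a good habit when the simultaneous equations involve complex coefficients.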
Introduction to Op Amps concept The Operational Amplifier (Op Amp) is an integrated circuit built out of either BJTs or FETs that makes building practical amplifiers and filters much simpler. The op amp is much easier to think about and design with than single transistors and is a basic building block in many of the circuits around your home. In this topic we'll cover how the op amp works in a basic way, how to analyse simple op amp circuits, and the two common amplifiers you can build with a single op amp: the inverting amplifier and the non-inverting amplifier. fact The op amp is a three terminal device whose symbol is: The three terminals are the positive input, negative input and output. Op amps also have two other terminals to allow you to hook them up to a power source. One of the big benefits of an op amp is that the power is applied separately from the inputs, which helps to make the analysis much simpler (you don't have to check that the device is in the "active" mode; there's just always enough power to operate with). The power terminals are often left off of diagrams because they don't affect the output unless the output is supposed to be outside the range of the power supply. For instance a 5V power supply can't output 12V for obvious reasons. fact The output of an op amp is given by: $$ V_o = A(V_+ - V_-) $$ Where \(A\) is the amplification factor of that particular op amp (like \(\beta\) in BJTs, this number differs between devices and is never known beforehand). \(A\) is always very large (often \(10^5\) or more). When we build op amp circuits we hook the output to one or both of the inputs in some way (through resistors, caps, other devices) to create a feedback loop.
Because \(A\) is very large we can think of an op amp (with negative feedback) as a device which creates an output that will make sure \(V_+ = V_-\). This can be seen in the unity gain buffer, where \(V_{out} = V_{in}\): here \(V_- = V_+ - \frac{V_o}{A} \approx V_+\) fact To analyse op amp circuits we follow two simple rules: \(V_+ = V_-\) Current does not enter or exit the input terminals example Analyse the following op amp circuit and find the output voltage: We start with our first rule of op amp analysis: \(V_+ = V_-\) This means that \(V_- = 0\)V Now we can find an expression for the current through \(R_1\). $$I_{R1} = \frac{V_{in}}{R_1}$$ Using our second rule, that no current enters or leaves the input terminals, we know that all of \(I_{R1}\) must run through \(R_2\): $$I_{R2} = I_{R1}$$ Now since we know that \(V_- = 0\)V and we know the current through \(R_2\), we can calculate \(V_o\): $$\begin{align} V_o & = 0 - I_{R2}R_2 \\ & = -\frac{V_{in}}{R_1}\cdot R_2 \\ & = -\frac{R_2}{R_1}V_{in} \\ \end{align}$$ The above example is of a particular op amp circuit known as an "inverting amplifier", one of the fundamental op amp circuits you'll need to know.
fact The Inverting Amplifier is an op amp circuit that looks like: Its output is given by: $$ V_o = -\frac{R_2}{R_1}V_{in} $$ example Analyse the following op amp circuit and find the output voltage: We'll start this the same way we started the inverting amplifier analysis, by using our first rule \(V_+ = V_-\): $$V_- = V_{in}$$ Now we know that the current through \(R_1\) is given by: $$\begin{align} I_{R1} & = \frac{V_-}{R_1} \\ & = \frac{V_{in}}{R_1} \\ \end{align}$$ And since we know that no current enters or leaves \(V_-\), all of \(I_{R1}\) must be flowing from the output through \(R_2\): $$\begin{align} I_{R2} & = I_{R1} \\ & = \frac{V_{in}}{R_1} \end{align}$$ So we know that $$\begin{align} V_o & = V_- + I_{R2}R_2 \\ & = V_{in} + \frac{V_{in}}{R_1}R_2 \\ & = V_{in}\left(1 + \frac{R_2}{R_1}\right) \\ \end{align}$$ This circuit is called the "non inverting amplifier" since its output has the same polarity as the input. fact The Non Inverting Amplifier is an op amp circuit that looks like: Its output voltage is given by: $$V_o = V_{in}\left(1 + \frac{R_2}{R_1}\right)$$ practice problems
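The two gain formulas, and the reason the ideal rules work, can be sketched numerically. The finite-gain expression below is derived from \(V_o = A(V_+ - V_-)\) together with the node equation at the inverting input; the resistor values in the checks are arbitrary choices of mine.

```python
def inverting_gain(R1, R2):
    """Ideal inverting amplifier: Vo/Vin = -R2/R1."""
    return -R2 / R1

def noninverting_gain(R1, R2):
    """Ideal non inverting amplifier: Vo/Vin = 1 + R2/R1."""
    return 1 + R2 / R1

def inverting_gain_finite(R1, R2, A):
    """Inverting amplifier with finite open-loop gain A.

    From Vo = A(0 - V-) and (Vin - V-)/R1 = (V- - Vo)/R2:
    Vo/Vin = -(R2/R1) / (1 + (1 + R2/R1)/A), which tends to -R2/R1
    as A grows, justifying the two ideal analysis rules.
    """
    k = R2 / R1
    return -k / (1 + (1 + k) / A)
```

With \(A = 10^5\) or more, the finite-gain result is within a fraction of a percent of the ideal formula, which is why the two rules are safe to use in practice.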
Since you are in infinite dimensions, you would first need to specify in which space the operator $A$ is supposed to act, then you can try to prove that it fulfils the assumptions for the spectral theorem. If we first look at the action of $A$ on an arbitrary sequence of real (I'm assuming that you are working in $\mathbb{R}$) numbers $a=(a_1, a_2,...)$ we see that $A(a) = (a_1, a_1, ...)$ which won't be e.g. in $l^2$, the natural Hilbert space of sequences. Actually, $A$ might seem to only make sense in $l^\infty$ (but it doesn't, as @MartinArgerami points out, because we don't have a countable basis with which to interpret what the action of $A$ on an arbitrary vector $u \in l^\infty$ is), and this is definitely not Hilbert. Because we then lack the notion of a scalar product, we cannot define what orthogonal eigenspaces would be, hence no orthogonal diagonalisation. Note however that we can formally find another "infinite matrix" $P$ such that $P^{-1}$ "exists" in some sense and $D = P A P^{-1}$ is a diagonal infinite matrix, namely $$D = \left(\begin{array}{ccccc} 1 & & & & \\ 0 & 0 & & & \\ 0 & 0 & 0 & & \\ 0 & 0 & 0 & 0 & \\ \vdots & & & & \ddots\end{array}\right)$$ with $$P^{- 1} = \left(\begin{array}{ccccc} 1 & & & & \\ 1 & 1 & & & \\ 1 & 0 & 1 & & \\ 1 & 0 & 0 & 1 & \\ \vdots & & & & \ddots\end{array}\right),\ \ P = \left(\begin{array}{ccccc} 1 & & & & \\ - 1 & 1 & & & \\ - 1 & 0 & 1 & & \\ - 1 & 0 & 0 & 1 & \\ \vdots & & & & \ddots\end{array}\right).$$ Edit: If you are wondering where those matrices came from, it was basically this: it is natural to see how $A$ acts on the canonical basis, and one immediately sees that $A(e_1)=u=(1,1,1,1,...)$ is an eigenvector with eigenvalue 1 and that $A(e_i)=0$ for all $i>1$, so $e_i$ are eigenvectors with eigenvalue 0. You want $P$, $P^{-1}$ such that $D=P A P^{-1}$, where $P^{-1}$ is a change from the "new" basis of eigenvectors into the "old", i.e. 
the matrix with columns $u, e_2, e_3, ...$ Compute its "inverse" $P$, see if $D$ is all zeros except in the first entry, and you are done. But again, this is all formal and quite wrong, since we don't have a basis to begin with. See Martin's answer for more.
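The formal computation is easy to verify on a finite \(n \times n\) truncation; the sketch below is purely illustrative, since (as discussed above) no countable basis exists in the actual setting:

```python
# n x n truncations of A, P and P^{-1} from the answer above.
n = 6
A    = [[1 if j == 0 else 0 for j in range(n)] for i in range(n)]
Pinv = [[1 if j == 0 or i == j else 0 for j in range(n)] for i in range(n)]
P    = [[1 if i == j else (-1 if j == 0 else 0) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

D = matmul(P, matmul(A, Pinv))
# D comes out diagonal, with a single 1 in the top-left entry.
```

The truncation behaves exactly as the formal infinite computation suggests, but of course this says nothing about convergence in any operator topology.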
Let $R$ be a ring and $M,N$ be $R$-modules. Let $\pi: M \to N$ be a module homomorphism. Let $K$ be an $R$-submodule of $N$. Then $\pi^{-1}(K)$ is a submodule of $M$. Surprisingly, I couldn't find this result on google. Is my proof correct? Proof: Since $\pi (0) = 0 \in K$, we have $0 \in \pi^{-1}(K)$, so $\pi^{-1}(K) \neq \emptyset$. Let $x,y \in \pi^{-1}(K)$ and let $r,s \in R$. Then $\pi(x), \pi(y) \in K$ and hence $$\pi(rx + sy) = r\pi(x) + s\pi(y) \in K$$ because $K$ is a submodule of $N$. From this, it follows that $rx + sy \in \pi^{-1}(K)$ and we are done.
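For reference, this fact is already formalized in Lean's mathlib as `Submodule.comap`; a sketch (assuming a recent Mathlib) of how the statement reads there:

```lean
import Mathlib

-- The preimage of a submodule under a linear map is a submodule;
-- mathlib packages this construction as `Submodule.comap`.
example {R M N : Type*} [Ring R] [AddCommGroup M] [AddCommGroup N]
    [Module R M] [Module R N] (π : M →ₗ[R] N) (K : Submodule R N) :
    Submodule R M :=
  K.comap π
```

The closure conditions checked in the proof above are exactly the fields that `Submodule.comap` discharges internally.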
What you need to know A recurring decimal is a decimal number which has a pattern that repeats over and over after the decimal point. Every recurring decimal can also be written as a fraction. In this topic, we’ll look at how to go from a recurring decimal to a fraction and vice versa. Two examples of recurring decimals, one quite common and one less common, are \dfrac{1}{3} = 0.\dot{3} = 0.33333... \dfrac{6}{11} = 0.\dot{5}\dot{4} = 0.54545454... Notice the dot on top of some of the digits; this tells us what is repeated. The first dot denotes the start of the repeated section, and the second dot denotes the end of it. Convert Fractions to Recurring Decimals To convert a fraction to a recurring decimal we must treat the fraction like it is a division and use some method of division to divide the numerator by the denominator. Here we will use short division, also known as the “bus stop method” (I highly recommend this method of division for all purposes). In this example, we’ll see how it soon becomes obvious that the result of your division is a recurring decimal. Example: Write \dfrac{7}{33} as a decimal. So, we’re going to try dividing 7 by 33. When setting up the bus stop method, you should put in a whole lot of zeros after the decimal place – chances are you’ll only need a few, but it’s better to put more than you need. So, the set-up short division should look like Doing the short division (which you can brush up on in the multiplying and dividing revision), we get Since the last remainder was 7, we looked for how many times 33 goes into 70, but we already did that once. The answer is twice, and then the remainder is 4, which means we’re now going to be asking how many times 33 goes into 40, but we’ve already done that, too. Carrying this on, the pattern becomes obvious. Therefore, we must have that \frac{7}{33} = 0.\dot{2}\dot{1}. Practice a few of these and you’ll quickly get used to spotting when they start repeating.
Convert Recurring Decimals to Fractions This is really what this topic is about. Let’s dive into an example to see how it goes. Example: Write 0.\dot{1}\dot{4} as a fraction. Firstly, set x = 0.\dot{1}\dot{4}, the thing we want to convert to a fraction. Then, 100x = 14.\dot{1}\dot{4}. Now that we have two numbers, x and 100x, with the same digits after the decimal point, if we subtract one from the other, the numbers after the decimal point will cancel. 100x - x = 99x = 14.\dot{1}\dot{4} - 0.\dot{1}\dot{4} = 14 Removing the working out steps from this line, we have 99x = 14 Then, if we divide both sides of this by 99, we get x = \dfrac{14}{99}. Okay, so how does this method work? Firstly, always assign the thing you’re converting to be x. Once you’ve done this, the aim is to end up with two numbers (both will be some multiple of ten times x) that have exactly the same recurring digits after the decimal point. This really is key, because then when you subtract one from the other (in the main step of the process), the digits after the decimal point will cancel and you’ll be left with a nice whole number. At that point, it’s just a case of solving a straightforward linear equation by doing one division, and you’ve got your answer. Let’s see another example – remember, the aim is to get two multiples of x both with the same thing after the decimal place. Example: Write 0.8\dot{3} as a fraction. Firstly, notice how the 8 has no dot above it so isn’t repeating. This will make a difference. So, let x = 0.8\dot{3}. This time, if we multiply this by 10, 100, 1,000 etc, we’re not going to end up with something that has the same digits after the decimal point as x. However, if instead we take 10x = 8.\dot{3} and 100x = 83.\dot{3}, then we have two multiples of x that do have the same digits after the decimal point.
So, subtracting one from the other, we get 100x - 10x = 90x = 83.\dot{3} - 8.\dot{3} = 83 - 8 = 75 Removing the working-out steps from this line, we have 90x = 75 Dividing both sides by 90, we get that x = \dfrac{75}{90} = \dfrac{5}{6}. This time we were unable to subtract x from anything to get the desired outcome, so we had to be clever and multiply x by 10 and 100 before doing the subtraction. You will need to do something like this any time there’s a decimal digit in your number that isn’t involved in the recurring part. Example Questions 1) Write \dfrac{10}{11} as a recurring decimal. Treating this fraction like a division, we will use short division (or otherwise) to find the result of dividing 10 by 11. The result of the short division should look like this, giving \dfrac{10}{11} = 0.\dot{9}\dot{0}. 2) Write 0.\dot{3}9\dot{0} as a fraction. Set x = 0.\dot{3}9\dot{0}. Then, we get 1,000x = 390.\dot{3}9\dot{0} Both x and 1,000x have the same thing after the decimal place, so they are suitable to subtract from one another. Doing so, we get 1,000x - x = 999x = 390.\dot{3}9\dot{0} - 0.\dot{3}9\dot{0} = 390 Removing the working-out steps, we have 999x = 390 Then finally, divide both sides by 999 to get x = \dfrac{390}{999}. 3) Write 1.5\dot{4} as a fraction. (HINT: don’t be put off by the fact that it’s a number bigger than 1 – carry out the method as usual and you should be fine) Set x = 1.5\dot{4}. Then, we get 10x = 15.\dot{4} This does not have the same thing after the decimal place as x so we cannot subtract yet. Instead, consider 100x = 154.\dot{4} Then, both 10x and 100x have the same thing after the decimal place, so they are suitable to subtract from one another. Doing so, we get 100x - 10x = 90x = 154.\dot{4} - 15.\dot{4} = 154 - 15 = 139 Removing the working-out steps, we have 90x = 139 Then finally, divide both sides by 90 to get x = \dfrac{139}{90}.
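The subtraction trick generalizes neatly; here is a sketch in Python using the exact Fraction type (the function name and the digit-string interface are my own choices):

```python
from fractions import Fraction

def recurring_to_fraction(pre, rep):
    """Fraction for 0.pre(rep), e.g. ("8", "3") -> 0.8333... = 5/6.

    pre: non-repeating digits after the point (may be ""); rep: the
    repeating block. Multiplying by 10^(p+r) and by 10^p and then
    subtracting cancels the recurring tail, exactly as in the
    worked examples above.
    """
    p, r = len(pre), len(rep)
    big = int((pre + rep) or "0")   # digits of 10^(p+r) * x
    small = int(pre or "0")         # digits of 10^p * x
    return Fraction(big - small, 10**(p + r) - 10**p)
```

For a number with an integer part, such as 1.5 with a recurring 4, convert the fractional part and add the integer back on at the end.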
Convert Between Larger and Smaller Units KS2 Revision What you need to know Things to remember: We do the opposite of what you think we might do. To get from bigger units to smaller units, we have to multiply. The numbers move to the left. To get from smaller units to bigger units, we have to divide. The numbers move to the right. We have seen kilometres before, but let’s have a quick recap. 1km = 1000m How do we turn 1 into 1000? We multiply by 1000! So, to turn km into m, we multiply by 1000. \text{km to m} \rightarrow \times1000 But if we want to go back from m to km, we have to do the opposite… divide by 1000! \text{m to km} \rightarrow \div1000 What is 3.25km in m? \text{km to m} \rightarrow \times1000 Hint: To multiply by 1000, we move the numbers to the left three place values. 3.25\times1000=3250 3.25km = 3250m What is 267m in km? \text{m to km} \rightarrow \div1000 Hint: To divide by 1000, we move the numbers to the right three place values. 267\div1000=0.267 267m = 0.267km If we’re looking at smaller units, like m and cm, we need to remember that: 1m = 100cm How do we turn 1 into 100? We multiply by 100! So, to turn m into cm, we multiply by 100. \text{m to cm} \rightarrow \times100 But if we want to go back from cm to m, we have to do the opposite… divide by 100! \text{cm to m} \rightarrow \div100 What is 2.77m in cm? \text{m to cm} \rightarrow \times100 Hint: To multiply by 100, we move the numbers to the left two place values. 2.77\times100=277 2.77m = 277cm What is 57cm in m? \text{cm to m} \rightarrow \div100 Hint: To divide by 100, we move the numbers to the right two place values. 57\div100=0.57 57cm = 0.57m Converting between units of volume should be a lot easier now, if we remember that 1L = 1000ml. This is the same conversion as with km and m! \text{L to ml} \rightarrow \times1000 But if we want to go back from ml to L, we have to do the opposite… divide by 1000! \text{ml to L} \rightarrow \div1000 What is 5.896L in ml?
\text{L to ml} \rightarrow \times1000 Hint: To multiply by 1000, we move the numbers to the left three place values. 5.896\times1000=5896 5.896L = 5896ml What is 753ml in L? \text{ml to L} \rightarrow \div1000 Hint: To divide by 1000, we move the numbers to the right three place values. 753\div1000=0.753 753ml = 0.753L Luckily for us, 1kg=1000g, so we can use the same conversions… AGAIN! \text{kg to g} \rightarrow \times1000 But if we want to go back from g to kg, we have to do the opposite… divide by 1000! \text{g to kg} \rightarrow \div1000 What is 3.201kg in g? \text{kg to g} \rightarrow \times1000 Hint: To multiply by 1000, we move the numbers to the left three place values. 3.201\times1000=3201 3.201kg = 3201g What is 15g in kg? \text{g to kg} \rightarrow \div1000 Hint: To divide by 1000, we move the numbers to the right three place values. We will need to add some extra 0s! 15\div1000=0.015 15g = 0.015kg Example Questions Question 1: What is 4.46km in m? \text{km to m} \rightarrow \times1000 Hint: To multiply by 1000, we move the numbers to the left three place values. 4.46\times1000=4460 4.46km = 4460m Question 2: What is 23ml in L? \text{ml to L} \rightarrow \div1000 Hint: To divide by 1000, we move the numbers to the right three place values. We’ll need an extra 0 here! 23\div1000=0.023 23ml = 0.023L
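The whole multiply/divide rule fits in two tiny helper functions (a sketch; the names and the default factor of 1000 are my own choices):

```python
def to_smaller(value, factor=1000):
    """Bigger unit -> smaller unit (km -> m, kg -> g, L -> ml): multiply."""
    return value * factor

def to_bigger(value, factor=1000):
    """Smaller unit -> bigger unit (m -> km, g -> kg, ml -> L): divide."""
    return value / factor
```

Passing factor=100 handles the m and cm pair in exactly the same way.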
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y| < 0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
Friedrich, Tobias; Kötzing, Timo; Lagodzinski, J. A. Gregor; Neumann, Frank; Schirneck, Martin: Analysis of the (1+1) EA on Subclasses of Linear Functions under Uniform and Linear Constraints. Theoretical Computer Science 2019 Linear functions have gained great attention in the run time analysis of evolutionary computation methods. The corresponding investigations have provided many effective tools for analyzing more complex problems. So far, the runtime analysis of evolutionary algorithms has mainly focused on unconstrained problems, but problems occurring in applications frequently involve constraints. Therefore, there is a strong need to extend the methods for analyzing unconstrained problems to a setting involving constraints. In this paper, we consider the behavior of the classical (1+1) evolutionary algorithm on linear functions under linear constraint. We show tight bounds in the case where the constraint is given by the OneMax function and the objective function is given by either the OneMax or the BinVal function. For the general case we present upper and lower bounds. Many important graph theoretic notions can be encoded as counting graph homomorphism problems, such as partition functions in statistical physics, in particular independent sets and colourings. In this article we study the complexity of $\#_p\textsc{HomsTo}H$, the problem of counting graph homomorphisms from an input graph to a graph $H$ modulo a prime number $p$. Dyer and Greenhill proved a dichotomy stating that the tractability of non-modular counting graph homomorphisms depends on the structure of the target graph. Many intractable cases in non-modular counting become tractable in modular counting due to the common phenomenon of cancellation. In subsequent studies on counting modulo 2, however, the influence of the structure of $H$ on the tractability was shown to persist, which yields similar dichotomies.
Our main result states that for every tree $H$ and every prime $p$ the problem $\#_p\textsc{HomsTo}H$ is either polynomial time computable or $\#_p\mathsf{P}$-complete. This relates to the conjecture of Faben and Jerrum stating that this dichotomy holds for every graph $H$ when counting modulo 2. In contrast to previous results on modular counting, the tractable cases of $\#_p\textsc{HomsTo}H$ are essentially the same for all values of the modulo when $H$ is a tree. To prove this result, we study the structural properties of a homomorphism. As an important interim result, our study yields a dichotomy for the problem of counting weighted independent sets in a bipartite graph modulo some prime $p$. These results are the first suggesting that such dichotomies hold not only for the one-bit functions of the modulo 2 case but also for the modular counting functions of all primes $p$. Bläsius, Thomas; Eube, Jan; Feldtkeller, Thomas; Friedrich, Tobias; Krejca, Martin S.; Lagodzinski, J. A. Gregor; Rothenberger, Ralf; Severin, Julius; Sommer, Fabian; Trautmann, Justin: Memory-restricted Routing With Tiled Map Data. IEEE International Conference on Systems, Man, and Cybernetics (SMC) 2018: 3347-3354 Modern routing algorithms reduce query time by depending heavily on preprocessed data. The recently developed Navigation Data Standard (NDS) enforces a separation between algorithms and map data, rendering preprocessing inapplicable. Furthermore, map data is partitioned into tiles with respect to their geographic coordinates. With the limited memory found in portable devices, the number of tiles loaded becomes the major factor for run time. We study routing under these restrictions and present new algorithms as well as empirical evaluations. Our results show that, on average, the most efficient algorithm presented uses more than 20 times fewer tile loads than a normal A*. Kötzing, Timo; Lagodzinski, J. A.
Gregor; Lengler, Johannes; Melnichenko, Anna: Destructiveness of Lexicographic Parsimony Pressure and Alleviation by a Concatenation Crossover in Genetic Programming. Parallel Problem Solving From Nature (PPSN) 2018: 42-54 For theoretical analyses there are two specifics distinguishing GP from many other areas of evolutionary computation. First, the variable size representations, in particular yielding a possible bloat (i.e. the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had a surprisingly small share in this work. We analyze a simple crossover operator in combination with local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); the resulting algorithm is denoted Concatenation Crossover GP. For this purpose three variants of the well-studied Majority test function with large plateaus are considered. We show that the Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants independent of employing bloat control. While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations. A naturally occurring problem is that of bloat (unnecessary growth of solutions) slowing down optimization. Theoretical analyses could so far not bound bloat and required explicit assumptions on the magnitude of bloat. In this paper we analyze bloat in mutation-based genetic programming for the two test functions ORDER and MAJORITY. We overcome previous assumptions on the magnitude of bloat and give matching or close-to-matching upper and lower bounds for the expected optimization time.
In particular, we show that the \((1+1)\) GP takes (i) \(\Theta(T_{\text{init}} + n \log n)\) iterations with bloat control on ORDER as well as MAJORITY; and (ii) \(O(T_{\text{init}} \log T_{\text{init}} + n (\log n)^3)\) and \(\Omega(T_{\text{init}} + n \log n)\) (and \(\Omega(T_{\text{init}} \log T_{\text{init}})\) for \(n = 1\)) iterations without bloat control on MAJORITY. Friedrich, Tobias; Kötzing, Timo; Lagodzinski, J. A. Gregor; Neumann, Frank; Schirneck, Martin: Analysis of the (1+1) EA on Subclasses of Linear Functions under Uniform and Linear Constraints. Foundations of Genetic Algorithms (FOGA) 2017: 45-54 Linear functions have gained a lot of attention in the area of run time analysis of evolutionary computation methods and the corresponding analyses have provided many effective tools for analyzing more complex problems. In this paper, we consider the behavior of the classical (1+1) Evolutionary Algorithm for linear functions under linear constraint. We show tight bounds in the case where both the objective function and the constraint is given by the OneMax function and present upper bounds as well as lower bounds for the general case. Furthermore, we also consider the LeadingOnes fitness function. Algorithm Engineering Our research focus is on theoretical computer science and algorithm engineering. We are equally interested in the mathematical foundations of algorithms and developing efficient algorithms in practice. A special focus is on random structures and methods.
The amsmath package provides a handful of options for displaying equations. You can choose the layout that best suits your document, even if the equations are really long, or if you have to include several equations in the same line. Contents The standard LaTeX tools for equations may lack some flexibility, causing overlapping or even trimming part of the equation when it's too long. We can overcome these difficulties with amsmath. Let's check an example: \begin{equation} \label{eq1} \begin{split} A & = \frac{\pi r^2}{2} \\ & = \frac{1}{2} \pi r^2 \end{split} \end{equation} You have to wrap your equation in the equation environment if you want it to be numbered; use equation* (with an asterisk) otherwise. Inside the equation environment, use the split environment to split the equation into smaller pieces, which will be aligned accordingly. The double backslash works as a newline character. Use the ampersand character &, to set the points where the equations are vertically aligned. This is a simple step; if you use LaTeX frequently surely you already know this. In the preamble of the document include the code: \usepackage{amsmath} To display a single equation, as mentioned in the introduction, you have to use the equation* or equation environment, depending on whether you want the equation to be numbered or not. Additionally, you might add a label for future reference within the document. \begin{equation} \label{eu_eqn} e^{\pi i} + 1 = 0 \end{equation} The beautiful equation \ref{eu_eqn} is known as the Euler equation. For equations longer than a line use the multline environment. Insert a double backslash to set a point for the equation to be broken. The first part will be aligned to the left and the second part will be displayed in the next line and aligned to the right. Again, the use of an asterisk * in the environment name determines whether the equation is numbered or not.
\begin{multline*} p(x) = 3x^6 + 14x^5y + 590x^4y^2 + 19x^3y^3\\ - 12x^2y^4 - 12xy^5 + 2y^6 - a^3b^3 \end{multline*} Split is very similar to multline. Use the split environment to break an equation and to align it in columns, just as if the parts of the equation were in a table. This environment must be used inside an equation environment. For an example check the introduction of this document. If there are several equations that you need to align vertically, the align environment will do it: Usually the binary operators (>, < and =) are the ones aligned for a nice-looking document. As mentioned before, the ampersand character & determines where the equations align. Let's check a more complex example: \begin{align*} x&=y & w &=z & a&=b+c\\ 2x&=-y & 3w&=\frac{1}{2}z & a&=b\\ -4 + 5x&=2+y & w+2&=-1+w & ab&=cb \end{align*} Here we arrange the equations in three columns. LaTeX assumes that each equation consists of two parts separated by a &; also that each equation is separated from the one before by an &. Again, use * to toggle the equation numbering. When numbering is allowed, you can label each row individually. If you just need to display a set of consecutive equations, centered and with no alignment whatsoever, use the gather environment. The asterisk trick to set/unset the numbering of equations also works here. For more information see
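As a minimal illustration of the gather environment mentioned above (an example fragment, assuming amsmath is loaded as shown earlier), each equation is centered on its own line with no alignment between them:

```latex
\begin{gather*}
2x - 5y = 8 \\
3x^2 + 9y = 3a + c
\end{gather*}
```

Dropping the asterisk (gather instead of gather*) numbers each line individually, just as with the other environments.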
The principle of least action in classical mechanics (with the Lagrangian formalism) states: Theorem. A path $\gamma$ between configurations $q_{(1)}$ at time $t_1$ and $q_{(2)}$ at time $t_2$ is a solution to the Euler-Lagrange equations associated to a Lagrangian $\mathcal L$ if and only if it is a stationary point of the action functional $$I_\mathcal L[\gamma] := \int_{t_1}^{t_2}\mathcal L(\gamma,\gamma',t)\ dt$$ associated with the Lagrangian $\mathcal L$. When illustrating this principle, people tend to make drawings like the following: All of these synchronous paths do go from configuration $q_{(1)}$ at time $t_1$ to configuration $q_{(2)}$ at time $t_2$. However, shouldn't the self-intersecting ones not be allowed? If a curve passes through configuration $\bar q$ at two different times $t_a$ and $t_b$, it will do so in general at two different generalized velocities $\dot q_a$ and $\dot q_b$. But the principle does not guarantee the existence (nor the uniqueness) of a curve $\gamma$, and instead assumes its existence by hypothesis: to know that the principle may be applied, i.e. that a solution $\gamma$ of the Euler-Lagrange equations exists and is unique, we need to assume that the two configurations $q_{(1)}$ and $q_{(2)}$ are very (infinitesimally) close in time, so that the usual initial-value existence theorems from the theory of ordinary differential equations may be applied. This means that, in case the self-intersecting trajectory $\gamma$ were to be a solution in this setting, the generalized velocities $\dot q_a$ and $\dot q_b$ would be achieved within an infinitesimally long span of time, that is, the system would be at configuration $\bar q$ and have two different generalized velocities – a very un-physical situation! Is my reasoning correct?
I took Stanford’s machine learning class, CS 229, this past quarter. For my final project, I worked with Daniel Perry to apply a few different machine learning algorithms to the problem of recommending heroes for Dota 2 matches. TL;DR: We achieved about 70% accuracy for predicting match outcomes based on hero selection alone using logistic regression, and we made a small improvement on that result using K-nearest neighbors. Download final report Code on GitHub Introduction Dota 2, the sequel to a Warcraft III mod called Defense of the Ancients (DotA), is an online multiplayer computer game that has attracted professional players and international tournaments. Each match consists of two teams of five players controlling “heroes” with the objective of destroying the opposing team’s stronghold. There are 106 unique heroes in Dota 2 (circa December 2013). Why work on a hero recommendation engine for Dota 2? A widely-believed conjecture is that the heroes chosen by each team drastically influence the outcome of the match. The positive and negative relationships between heroes can give one team an implicit advantage over the other team. Valve’s annual Dota 2 tournament, The International, had a prize pool of over $2.8 million in August 2013. With that much money on the line, professional teams recognize the importance of hero selection. Some matches take up to ten minutes for hero selection by both teams. Professional players spend thousands of hours practicing the game, so they tend to gain an intuition for picking heroes. This “gut instinct” is not as accessible to more casual players, so we thought it would be interesting to explore how machine learning could help solve the problem of recommending heroes. Related Work In researching prior work, we discovered a web application called Dota 2 Counter-Pick (no longer available) that uses machine learning to recommend heroes for Dota 2 matches.
From the website’s about page: We model hero picking as a zero-sum-game and learn the game matrix by logistic regression. When suggesting picks and bans we assume that teams are mini-max agents that take turns picking one hero at a time. Our growing database includes over 1 million matches from the high skill bracket where players did not disconnect or leave. For 63% of these matches, our method predicted the winning team correctly based on picks alone. We were very inspired by the accuracy of Dota 2 Counter-Pick, and we set out to improve on their results. Data Collection We used the Steam Web API for collecting data about public Dota 2 matches. We wrote a Python script (called dotabot2.py in the GitHub repository) and set it up on a cron job to record data from the 500 most recent public matches every 20 minutes. We only considered matches that satisfied the following requirements: The game mode is either all pick, single draft, all random, random draft, captain’s draft, captain’s mode, or least played. These game modes are the closest to the true vision of Dota 2, and every hero has the potential to show up in a match. The skill level of the players is “very-high,” which corresponds to roughly the top 8% of players. We believe utilizing only very-high skill level matches allows us to best represent heroes at their full potential. No players leave the match before the game is completed. Such matches do not capture how the absent players’ heroes affect the outcome of the match. We filtered our dataset on some of these requirements after running dotabot2.py, so that script’s is_valid_match() function does not account for all three of these requirements. The data for each match is structured as JSON and includes which heroes were chosen for each team, how those heroes performed over the course of the game, and which team ultimately won the game. We stored the JSON for each match in a MongoDB database during data collection. 
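The match filter described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the project's actual dotabot2.py: the field names, game-mode IDs, and skill encoding are all assumptions for illustration, since the real Steam Web API schema is not reproduced in the post.

```python
# Hypothetical sketch of a match filter like the one described above.
# Field names, mode IDs, and the skill encoding are assumptions.
VALID_GAME_MODES = {1, 2, 3, 4, 5, 12, 16}  # assumed IDs for the seven modes listed

def is_valid_match(match):
    """Return True if a match dict satisfies the three requirements above."""
    if match.get("game_mode") not in VALID_GAME_MODES:
        return False
    # "very-high" skill is assumed here to be encoded as skill == 3
    if match.get("skill") != 3:
        return False
    # reject matches where any player left before the game completed
    if any(p.get("leaver_status", 0) != 0 for p in match["players"]):
        return False
    return True
```

A filter like this can be applied either at collection time or as a post-processing pass over the stored JSON, which is what the post says was done for some of the requirements.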
We collected data for 56,691 matches between November 5, 2013 and December 7, 2013. We exported 90% of the matches from our database to form a training set of 51,022 matches. We exported the remaining 10% of our database to form a test set of 5,669 matches. Picking a Feature Vector Machine learning algorithms typically require an input query to be described as a vector of features. For our feature vector, we came up with the following scheme to describe which heroes were chosen by each team in a match: In Dota 2 matches, one team is called the “radiant” and the other team is called the “dire.” These terms are roughly analogous to “home” and “away,” as they only determine the starting point of each team on the game world map, which is roughly symmetric. There are 106 heroes in Dota 2 (circa December 2013), but the web API uses hero ID numbers that range from 1 to 108 (two hero IDs are not used), so for our algorithms we used a 216-element feature vector, x, such that: \begin{aligned}x_i &= \begin{dcases} 1, & \text{if a radiant player played as the hero with id } i\\ 0, & \text{otherwise} \end{dcases}\\x_{108+i} &= \begin{dcases} 1, & \text{if a dire player played as the hero with id } i\\ 0, & \text{otherwise} \end{dcases}\end{aligned} We also defined our label, y, to be: \begin{aligned}y = \begin{dcases} 1, & \text{if the radiant team won}\\ 0, & \text{otherwise} \end{dcases}\end{aligned} Making Predictions Since our dataset contains information about heroes on teams in specific radiant vs. dire configurations, simply running our algorithms on each match in our dataset using the feature vector described above does not fully utilize all of the data. Instead, we make predictions using the following procedure. Given a match feature vector, which we call radiant_query: Run the algorithm on radiant_query to get radiant_prob, the probability that the radiant team in radiant_query wins the match.
Construct dire_query by swapping the radiant and dire teams in radiant_query so that the radiant team is now the bottom half of the feature vector and the dire team is now the top half of the feature vector. Run the algorithm on dire_query to get dire_prob, the probability that the radiant team in radiant_query loses the match if it was actually the dire team instead. Calculate the overall probability overall_prob as the average of radiant_prob and (1 - dire_prob). Predict the outcome of the match specified by radiant_query as the radiant team winning if overall_prob > 0.5 and as the dire team winning otherwise. This procedure accounts for matches in our dataset that might not have the team configuration of a given query in one direction (e.g. radiant vs. dire) but may have the configuration in the other direction (e.g. dire vs. radiant). Validating the Importance of Hero Selection Using Logistic Regression Logistic regression is a model that predicts a binary output using a weighted sum of predictor variables. We first trained a simple logistic regression model with an intercept term to predict the outcome of a match. A plot of our learning curve for logistic regression is shown below. The test accuracy of our model asymptotically approaches 69.8% at about an 18,000-match training set size, indicating that 18,000 matches is an optimal training set size for our logistic regression model. We believe that this logistic regression model shows that hero selection alone is an important indicator of the outcome of a Dota 2 match. However, since logistic regression is purely a weighted sum of our feature vector (which only indicates which heroes are on either team), logistic regression fails to capture the synergistic and antagonistic relationships between heroes.
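The feature-vector encoding and the symmetrised prediction procedure above fit together as in the following sketch, where predict_prob stands in for any trained model that returns P(radiant wins | feature vector):

```python
def make_feature_vector(radiant_ids, dire_ids):
    # Indices 0..107 encode radiant hero IDs 1..108; 108..215 encode dire.
    x = [0] * 216
    for hero_id in radiant_ids:
        x[hero_id - 1] = 1
    for hero_id in dire_ids:
        x[108 + hero_id - 1] = 1
    return x

def swap_teams(radiant_query):
    # The dire half becomes the top half and the radiant half the bottom half.
    return radiant_query[108:] + radiant_query[:108]

def predict_match(predict_prob, radiant_query):
    radiant_prob = predict_prob(radiant_query)
    dire_prob = predict_prob(swap_teams(radiant_query))
    # Average P(radiant wins) with 1 - P(radiant-as-dire wins).
    overall_prob = (radiant_prob + (1 - dire_prob)) / 2
    return "radiant" if overall_prob > 0.5 else "dire"
```

Averaging the two orientations means a query benefits from training matches seen in either radiant-vs-dire or dire-vs-radiant configuration.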
Predicting Match Outcome Using K-Nearest Neighbors K-nearest neighbors (KNN) is a non-parametric method for classification and regression that predicts objects’ class memberships based on the k closest training examples in the feature space. We chose to implement KNN in order to better model the relationships between heroes instead of simply taking into account wins when a hero is present. At a high level, we continue to focus on wins and the hero composition of teams; however, with KNN, we have an avenue to weight matches according to how similar they are to a query match we are interested in. For example, if we are interested in projecting who will win a specific five-on-five matchup (our query match), a match with nine of the heroes from the query match present will give us more information on who will win the query match than a match with only one hero from the query match present. We used a custom weight and distance function and chose to utilize all training examples as “nearest neighbors.” Our polynomial weight function described below aggressively gives less weight to dissimilar training examples: \begin{aligned}w_i &= \left(\frac{ \sum_{j=1}^{216} AND(q_j, x^{(i)}_j)}{NUM\_IN\_QUERY}\right)^d\end{aligned} Here, x^{(i)} represents the feature vector for training match i and is compared by the logical AND operator to query vector q. Index j represents the hero ID index of each respective vector. NUM\_IN\_QUERY represents the number of heroes present in the query vector. For example, NUM\_IN\_QUERY is 10 if the query contains all 5 heroes for each team. The function is normalized to be between 0 and 1, and it gives more weight to matches that more closely resemble the query match. To do this, the function compares the query match vector to the training match vector and counts every instance where a hero is present in both vectors. A larger d parameter will result in similar matches getting much more weight than dissimilar matches.
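The weight function above, written out directly on the 216-element 0/1 encodings (a sketch of the formula, not the project's actual implementation):

```python
def knn_weight(query, example, d):
    # Count hero slots present in both the query and the training example,
    # normalise by the number of heroes in the query, then raise to the power d.
    overlap = sum(1 for q_j, x_j in zip(query, example) if q_j and x_j)
    num_in_query = sum(query)
    return (overlap / num_in_query) ** d
```

With d = 4, a training match sharing 5 of the 10 query heroes gets weight (5/10)^4 ≈ 0.063, so dissimilar matches contribute very little.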
Alternatively, a low d, for example d = 1, will result in each match being weighted solely by how many heroes it has in common with the query match. Stated another way, a high d will put more emphasis on the synergistic and antagonistic relationships between heroes, while a lower d will put more emphasis on the independent ability of a hero. To choose the optimal weight dimension d described above, we used k-fold cross validation with k = 2 on 20,000 matches from our training set and varied d across otherwise identical KNN models. Since KNN must compare the query match to every match in the training set and compute weights and probabilities, this process was quite slow and took about ten hours. Due to time constraints, using more folds or more matches would have taken too long to finish. A graph of the accuracies achieved when varying the weight dimension for a KNN model trained on our training set and evaluated on our test set is shown below. We found the optimal weight dimension to be d = 4, which achieved a mean accuracy of 67.43% during the k-fold cross validation. A plot of our learning curve for KNN is shown below. The test accuracy of our model monotonically increases with training set size up to nearly 70% for around 50,000 training matches. Because we do not see the learning curve level off, this may imply that more data could further improve our accuracy. Recommending Heroes Based on Predicted Match Outcome We recommend heroes for a team using a greedy search that considers every possible hero that could be added to the team and ranks the candidates by the probability of the team winning against the opposing team if the candidate were added. Our recommendation engine is modular so that either of our algorithms could be used to recommend heroes. Given the ID numbers of the heroes on both teams, the recommendation engine proceeds as follows: Create a feature vector for the match.
Create a set of new feature vectors, each with a different candidate hero added to the original feature vector. Run an algorithm to compute the probability of victory with each feature vector from step 2, averaging the probabilities of the team winning in both radiant and dire configurations as described earlier. Sort the candidates by probability of victory to give ranked recommendations. Future Work Despite the high accuracy of our K-nearest neighbor model, its performance was quite slow. The five data points we used in our learning curve took about four hours total to calculate, and the k-fold cross validation used to find the optimal weight dimension took over 12 hours. We believe that the performance of our K-nearest neighbor model could be improved in a number of ways. First, the binary feature vector used could be stored as an integer in binary representation to improve memory usage. Second, the calculation of weighted distances could be parallelized across multiple CPU cores. Third, a GPU could be utilized to vectorize the weighted distance function calculations. Although the version of the game did not change during the time we collected match data, a new patch is released for Dota 2 every few months that dramatically changes the balance of the game. Therefore, we believe that a sliding window of match history data that resets when a new patch is released could help maintain data relevancy. For this reason, it would be useful to find whether there is a training set size at which performance levels off, so that we could collect only as much data as is needed. Finally, we could experiment with a different search algorithm for our recommendation engine, such as A*, which would account for the opposing team picking heroes that could counter our initial recommendations. However, due to time constraints we were not able to implement A* search for our recommendation engine.
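The greedy search from the "Recommending Heroes" section can be sketched as follows; win_prob stands in for either model's symmetrised probability that the proposed team wins (it is a placeholder, not a function from the project's code):

```python
ALL_HERO_IDS = range(1, 109)  # two IDs in this range are unused

def recommend(win_prob, team_ids, opponent_ids):
    """Rank every unpicked hero by win probability if added to team_ids."""
    taken = set(team_ids) | set(opponent_ids)
    candidates = []
    for hero_id in ALL_HERO_IDS:
        if hero_id in taken:
            continue
        prob = win_prob(list(team_ids) + [hero_id], opponent_ids)
        candidates.append((prob, hero_id))
    # sort by probability of victory, best candidate first
    candidates.sort(reverse=True)
    return [hero_id for prob, hero_id in candidates]
```

Because the engine only needs a win-probability callable, swapping logistic regression for KNN (or any other model) leaves this loop unchanged.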
Ultimately, we believe there are many promising directions which future explorations in this area could take. Conclusion We don’t claim to be experts on Dota 2 or machine learning, but we are encouraged by our results that machine learning could be an effective tool in recommending heroes for Dota 2 matches. Feel free to contribute to the project on GitHub and contact us with any questions.
I have $m$ points $D = \{x_1, \dots, x_m\}$ with $x_i \in \mathbb{R}^n$. After some preprocessing / building up data structures for those points, I get $T$ queries $y_j \in \mathbb{R}^n$ with $j=1, \dots, T$. Each query arrives one at a time and independently. The data structures should not be modified by past queries. After receiving a query $y_j$, the $k$ nearest neighbors $$NN_k(y_j) \subset D \text{ with } |NN_k(y_j)| = k$$ should be returned, where $$\forall x_a \in NN_k(y_j)\ \forall x_b \in D \setminus NN_k(y_j) : \|x_a - y_j\|_2 \leq \|x_b - y_j\|_2.$$ All points $x_i$ and all queries $y_j$ are on the unit hypersphere, so $\|x_i\|_2 = \|y_j\|_2 = 1$. How do I process those queries fast? Limits To get a feeling for what matters, here are some orders of magnitude: $100 \leq n \leq 1000$: Dimension of points $m \geq 100\,000$: Candidates $T \geq 10\,000\,000$: Number of queries $3 \leq k \leq 20$: Expected number of returned points Reasonable memory consumption (e.g. less than 2GB for the data structure) The query time of a single brute-force query is in $\mathcal{O}(m \cdot n)$, as it just compares every candidate point in $D$ with the query and stores the closest $k$ of them. Misc I've just implemented this with Python (see code). k=5, n=128, m=100000, T=100: 0.56s per query (brute-force approach). That is actually much better than I expected. However, I guess this could be an order of magnitude faster with a good data structure / smarter algorithm. I wrote the Python script in a way which should make it easy to run your own experiments on your tests, if you like.
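A minimal NumPy sketch of the brute-force baseline (not the asker's linked script) that exploits the unit-norm property: on the unit sphere, $\|x - y\|_2^2 = 2 - 2\langle x, y\rangle$, so the $k$ nearest neighbors are exactly the $k$ candidates with the largest dot product, and each query reduces to one matrix-vector product plus an $\mathcal{O}(m)$ partial selection.

```python
import numpy as np

def nn_k(D, y, k):
    """k nearest neighbors of unit-norm query y among unit-norm rows of D.

    D: (m, n) array, y: (n,) array; assumes k < m.
    """
    sims = D @ y                           # larger dot product = smaller distance
    idx = np.argpartition(-sims, k)[:k]    # k best candidates, unordered, O(m)
    return idx[np.argsort(-sims[idx])]     # order them by similarity, O(k log k)
```

The partial selection via argpartition avoids a full O(m log m) sort; batching many queries into a single matrix-matrix product would amortise memory traffic further.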
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76 TeV is reported.
Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02 TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN = 5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... Measurement of quarkonium production at forward rapidity in pp collisions at √s = 7 TeV (Springer, 2014-08) The inclusive production cross sections at forward rapidity of J/ψ, ψ(2S), Υ(1S) and Υ(2S) are measured in pp collisions at √s = 7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ... Measurement of prompt D-meson production in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2014-12-05) The pT-differential production cross sections of the prompt charmed mesons $D^0, D^+, D^{*+}$ and $D_s^+$ and their charge conjugates in the rapidity interval -0.96 < $y_{cms}$ < 0.04 were measured in p-Pb collisions at a ...
What you need to know If we have collected a lot of data, we might display it with a frequency table. These contain two rows/columns: one with all the values that appeared when we collected the data, and one containing the number of times each value showed up. You should know how to read a frequency table and know how to use a frequency table to calculate the mean, median, and mode of the data. Example: Below is a frequency table of data based on a survey in which 89 women were asked what their shoe size was. Calculate the mean, median, and mode of the data. This frequency table tells us: 5 of the people asked had size 4 feet, 12 had size 4.5 feet, 18 had size 5 feet, and so on. If this data were written as a list, it would begin like 4, 4, 4, 4, 4, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 5, 5, 5, 5, ... 89 people is a lot of data points, so you can see why we put it into a table. That said, keeping in mind that it can be written as a list like this is useful. Now let’s find those averages. – Mode: simply identify the shoe size with the highest frequency: this is 5.5, which appears 19 times. – Median: we’re still looking for the middle data point. In this case, the number of people is 89, so the median is the \frac{89 + 1}{2} = 45\text{th} term. Frequency tables present all the data to us in order, so to find the 45th term we add the frequencies as we go along: 5 + 12 + 18 = 35, so the 35th person is the last one with size 5 feet. 5 + 12 + 18 + 19 = 54, so the 54th person is the last one with size 5.5 feet. Clearly, the 45th person is somewhere between these two, and all the people between these two data points fall into the size 5.5 category, thus the median is 5.5. – Mean: to calculate the mean, we add up all the data points and divide by 89. Our frequency table tells us that there are 5 people who are size 4 and 12 who are size 4.5 etc, so rather than adding them up 1-by-1, we can do (5 \times 4) + (12 \times 4.5) etc.
So: \begin{aligned}\text{Mean }=&[(5\times 4)+(12\times 4.5)+(18\times 5)+(19\times 5.5)+(11\times 6)+(4\times 6.5)\\&+(8\times 7)+(5\times 7.5)+(5\times 8)+(0\times 8.5)+(2\times 9)]\div 89 \\ &= 5.8 \text{ (1dp)}\end{aligned} Now, we must also see how to work with a grouped frequency table – a method often used for displaying continuous data, e.g. length or weight. In a grouped frequency table, values aren’t all recorded/displayed individually. Instead, the range of all possible values is split into groups or classes, and we write down how many data points appeared in each class. Example: The grouped frequency table below shows data on the weights of 117 cats. Find: the modal class, the class containing the median, and an estimate for the mean. Additionally, construct a frequency polygon from this table. You can see that we aren’t explicitly asked for the mean, median, or mode. This is because we don’t know the individual weights of the cats; we only know how many fell between certain values. As a result, we can only estimate these measures or say whereabouts they lie. – The modal class: we have no idea what number was most common, but we can see which class has the highest frequency – this is the modal class. Here, this is 3.5 < w \leq 4. – The class containing the median: there are 117 cats in total, so the median is the \frac{117 + 1}{2} = 59\text{th} cat. We don’t know what this is, but we can say which class it is in. We know that 22+14=36 cats weighed less than or equal to 3.5kg, and we also know that 22+14+39=75 cats weighed less than or equal to 4kg, so the 59th cat (the median) must be somewhere in the 3.5 < w \leq 4 class. – The estimate for the mean: we can’t find the exact mean, so we must estimate. To do this, we pretend that every data point in each class is situated right at its midpoint (the easiest way to find the midpoint is to add up the lower bound and the upper bound and divide by 2).
For example, the midpoint of the first class is 2.5kg, so we pretend that the first 22 cats all weighed 2.5kg. At this point, it’s a good idea to add another column to your table containing the midpoint values. Then, treat these values like the actual values, and calculate the mean like in the first example. \text{Mean } = \dfrac{(22\times 2.5)+(14\times3.25)+(39\times 3.75)+(12\times 4.25)+(13\times 5.25)}{117} = 3.7 \text{ kg (1dp)} The final part of the question asks us to construct a frequency polygon. This is nothing to worry about: all one must do is plot the frequency on the y-axis with the midpoint of each class on the x-axis, plot the smallest/largest values of weight on the x-axis at zero on the y-axis, and then join the points with straight lines. In this case, the result looks like the graph on the right. Example Questions The most common number of bathrooms is 1, so the mode is 1. There are 30+21+5+7+3=66 data points in total, so the median is the \dfrac{66+1}{2} = 33.5\text{th} term. The first 30 terms are 1, and the next 21 terms are 2, so clearly both the 33rd and 34th terms are 2, therefore the median is 2. It is impossible to find the mean because we don’t know how many bathrooms the three people in the “5 or more” category actually have in their home. 2) Below is a grouped frequency table of data collected on the lengths of students’ journeys to school. a) Find an estimate for the mean of this data. b) Construct a frequency polygon from this data. Then we add together the results of multiplying each frequency by its associated midpoint and divide the answer by 12+18+34+33+19=116. \text{Mean } = \dfrac{(12\times 5)+(18\times 12)+(34\times 17)+(33\times 26)+(19\times 38.5)}{116} = 21.1 \text{mins (3sf)} Now, construct a frequency polygon by plotting the midpoints on the x-axis with their associated frequencies on the y-axis, and joining the points with straight lines.
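Both worked examples can be checked with a short script; the (value, frequency) pairs below are taken from the mean formulas in the examples above.

```python
# Shoe sizes: (value, frequency) pairs for the 89 women surveyed.
shoes = [(4, 5), (4.5, 12), (5, 18), (5.5, 19), (6, 11), (6.5, 4),
         (7, 8), (7.5, 5), (8, 5), (8.5, 0), (9, 2)]

n = sum(f for _, f in shoes)                      # 89 people
mode = max(shoes, key=lambda vf: vf[1])[0]        # value with highest frequency
mean = sum(v * f for v, f in shoes) / n

# Median: walk the cumulative frequencies up to the (n + 1) / 2 = 45th term.
target, running, median = (n + 1) / 2, 0, None
for value, freq in shoes:
    running += freq
    if running >= target:
        median = value
        break

# Journey times: estimate the mean from class midpoints and frequencies.
midpoints = [5, 12, 17, 26, 38.5]
freqs = [12, 18, 34, 33, 19]
est_mean = sum(m * f for m, f in zip(midpoints, freqs)) / sum(freqs)
```

The cumulative walk for the median is exactly the "add the frequencies as you go" method from the first example.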
%BEGINLATEXPREAMBLE%\usepackage{amsfonts}\usepackage{amsmath} \usepackage{amssymb} \usepackage[numbers]{natbib}%ENDLATEXPREAMBLE% The Hybrid Approach In many hybrid approaches the main challenge resides in coupling the two different velocity fields arising from RANS (statistically averaged) and LES (filtered). This is often done by applying a matching criterion, e.g. assuming the same turbulent viscosity at the interface, the same kinetic energy or dissipation, etc. This poses a problem since these values represent totally different properties of different velocity fields. Instead of a single velocity field used to couple RANS and LES, the model presented here allows an overlap of both fields, with the RANS model driving the near-wall RANS velocity field without damping in any way the dissipation of resolved fluctuations. Many sub-grid models assume that the flow contains an inertial sub-range and hence that the sub-grid motions can be assumed to be isotropic. This is true only if the grid is fine enough for the anisotropy introduced by the mean shear to be neglected. At high Reynolds numbers, the refinement of the grid becomes too costly, therefore restricting the LES method to low Reynolds number flows. As the solid boundary is approached, the mean shear becomes high enough to introduce anisotropy across a range of diminishing scales. It is then necessary for the model to represent at the same time subgrid-scale contributions to the mean shear stress and isotropic dissipation effects. Modelling The instantaneous velocity can be decomposed as %BEGINLATEX% \begin{equation*} U = \left\langle U \right\rangle +u' \end{equation*} %ENDLATEX% where %$\left\langle U\right\rangle$% is the averaged velocity and %$u'$% is the fluctuating component. Schumann %REFLATEX{Sch75}% proposed to split the residual stress tensor into a "locally isotropic" part and an "inhomogeneous" part.
The isotropic part is proportional to the fluctuating strain and does not affect the mean flow equations but determines the rate of energy dissipation. The inhomogeneous part is proportional to the mean strain and controls the shear stress and mean velocity profile: %BEGINLATEX% \begin{equation*} \tau^r_{ij}- \frac{2}{3} \tau_{kk}\delta_{ij} = -\underbrace{ 2\nu_r ( \overline{S}_{ij}-\langle \overline{S}_{ij} \rangle)}_{\mbox{\small locally isotropic}} -\underbrace{ 2\nu_a \langle \overline{S}_{ij} \rangle}_{\mbox{\small inhomogeneous}} \end{equation*} %ENDLATEX% where %$\left\langle .\right\rangle $% denotes ensemble averaging of the filtered equations. The viscosities %$\nu_r$% and %$\nu_a$% are based on the fluctuating and mean strains respectively. The isotropic part of the residual stress tensor has a zero time-mean value. As the grid is refined the residual stresses must tend to zero, so the inhomogeneous part must include a grid-dependent parameter in the turbulent viscosity %$\nu_a$%. Schumann %REFLATEX{Sch75}% used a mixing length model for %$\nu_a$% with the length scale computed as %$L=\min(\kappa y , C_{10} \Delta )$%, where %$C_{10}$% is a constant that is difficult to prescribe for all types of flows. %REFLATEX{Sch75}% and %REFLATEX{GroSch77}% tried to derive a theoretical value for the constant but were forced to introduce corrective constants to agree with a range of experiments. %REFLATEX{MoiKim82}% used the same principle of splitting the residual stress, but in their mixing length model they use the spanwise size of the cell as the length scale. They argue that in the near-wall region of a channel flow, the important structures are streaks that are finely spaced in the spanwise direction. Therefore a coarse resolution in the spanwise direction would lead to larger eddies and a thicker viscous sublayer. Sullivan et al.
%REFLATEX{SulMcwMoe94}% developed a similar approach for planetary boundary layer flows but chose %$\nu_a$% to match the Monin-Obukhov similarity theory %REFLATEX{BusWynIzuBra71}%. Baggett %REFLATEX{Bag98}% used a similar approach to compare two hybrid models, one "Schumann-like" and one "DES-like", but found excessive streamwise fluctuations leading to streaks that were much too large. In the context of hybrid LES-RANS, a blending function, %$f_b$%, can be used to introduce a smooth transition between the resolved and the ensemble-averaged turbulence parts. In the present study the total residual stress is written as: %BEGINLATEX{label="eq:tau_hyb"}% \begin{equation*} \tau^r_{ij} - \frac{2}{3} \tau_{kk}\delta_{ij}= -2\nu_r f_b ( \overline{S}_{ij}-\langle \overline{S}_{ij} \rangle) - 2(1-f_b)\nu_a \langle \overline{S}_{ij} \rangle \label{eq:tau_hyb} \end{equation*} %ENDLATEX% In this way the averaged stress would be: %BEGINLATEX% \begin{equation*} \left\langle \tau^r_{ij} - \frac{2}{3} \tau_{kk}\delta_{ij} \right\rangle = -2(1-f_b)\nu_a \langle \overline{S}_{ij} \rangle \end{equation*} %ENDLATEX% which is just the RANS stress, and the total shear stress would be %$2(1-f_b)\nu_a \langle \overline{S}_{ij} \rangle + \left\langle u'v'\right\rangle $%. It is therefore necessary that the blending function %$f_b$% tends to one in the region where %$\left\langle u'v'\right\rangle $% is resolved correctly and to zero in the region near the wall where the shear stress is under-resolved due to the coarse grid.
The total rate of transfer of energy from the filtered motions to the residual scales is given by (assuming that %$\left\langle \nu_r \overline{S}_{ij}\overline{S}_{ij}\right\rangle \approx \nu_r\left\langle \overline{S}_{ij}\overline{S}_{ij}\right\rangle$% %REFLATEX{NicBagMoiCab01}%) %BEGINLATEX% \begin{eqnarray*} -\left\langle \tau_{ij}\overline{S}_{ij} \right\rangle =& 2\left\langle \nu_r f_b (\overline{S}_{ij}-\left\langle \overline{S}_{ij} \right\rangle )\overline{S}_{ij} \right\rangle +2 (1-f_b)\left\langle \nu_a\left\langle \overline{S}_{ij}\right\rangle \overline{S}_{ij}\right\rangle \\ =& 2f_b\nu_r(\left\langle \overline{S}_{ij}\overline{S}_{ij}\right\rangle -\left\langle \overline{S}_{ij}\right\rangle \left\langle \overline{S}_{ij}\right\rangle )+2(1-f_b)\nu_a\left\langle \overline{S}_{ij}\right\rangle \left\langle \overline{S}_{ij}\right\rangle \label{eq:tauave} \end{eqnarray*} %ENDLATEX% which shows how the RANS viscosity contributes to dissipation in association with the mean velocity only, i.e. the resolved turbulent stresses are free to develop independently of the RANS viscosity. The turbulent viscosity models: for the isotropic viscosity %$\nu_r$%, %REFLATEX{Sch75}% used a model based on the sub-grid energy. %REFLATEX{MoiKim82}% used the standard %REFLATEX{Sma63}% model based on the fluctuating strain. Here the latter approach is used: %BEGINLATEX% \begin{eqnarray*} \nu_r = (C_s \Delta)^2 \sqrt{2s'_{ij}s'_{ij}} \end{eqnarray*} %ENDLATEX% with %$s'_{ij} = \overline{S}_{ij} - \langle \overline{S}_{ij} \rangle$%. In the context of unstructured codes, the filter width is taken as twice the cell volume (%$\Delta = 2 Vol$%). In this study, the elliptic relaxation model %$\varphi-f$% of %REFLATEX{LauUriUty04}% is used to calculate the RANS viscosity.
This model solves for the ratio %$\varphi = \overline{v^2}/k$% used in the turbulent viscosity as: %BEGINLATEX{label="eq:nuphi"}% \begin{equation*} \nu_a = C_{\mu}\varphi k T \label{eq:nuphi} \end{equation*} %ENDLATEX% where %$T = \max\left(\frac{k}{\varepsilon},C_T\sqrt{\frac{\nu}{\varepsilon}}\right)$%. For the channel flow calculations presented here, the choice of RANS model hardly makes any difference, but the elliptic relaxation method has been shown to perform well on separating and impinging flows, and the aim of the present case is only to show the robustness of the coupling even with a sophisticated RANS model. The blending function has been parametrised by the ratio of the turbulent length scale to the filter width: %BEGINLATEX{label="eq:f_blen"}% \begin{equation*} f_b = \tanh \left( C_l \frac{L_t}{\Delta} \right)^n \label{eq:f_blen} \end{equation*} %ENDLATEX% Here %$C_l=1$% and %$n=1.5$% are empirical constants. These values were chosen by matching the shear stress profile from channel flow results at %$Re_{\tau}=395$% against DNS data. When using the %$\varphi-f$% model (equation %REFLATEX{eq:nuphi}%), the wall distance is not a desirable parameter and the blending function can be formulated using %$L_t = \varphi k^{3/2}/\varepsilon$%. The blending function has been devised to connect the two length scales smoothly, so its value is close to zero near the wall and unity far from it. Similar functions have been used in other hybrid approaches (see %REFLATEX{Abe05}%, %REFLATEX{Ham01}% or %REFLATEX{Spe98}%). Although the function in equation %REFLATEX{eq:f_blen}% is entirely empirical, it has been tested for a range of Reynolds numbers and grids and gave satisfactory results (not presented here). The function allows a higher contribution from the LES part as the grid is refined.
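As a quick numerical sanity check of the blending behaviour, the function can be evaluated directly. This is only a sketch: the typeset exponent is read here as applying to the hyperbolic tangent, i.e. %$f_b = (\tanh(C_l L_t/\Delta))^n$%, which is one of the two possible readings of the formula; either reading rises monotonically from 0 at the wall to 1 far from it.

```python
import math

def f_b(L_t, delta, C_l=1.0, n=1.5):
    # Blending function: ~0 near the wall (L_t << delta), ~1 far from it.
    return math.tanh(C_l * L_t / delta) ** n
```

With the default %$C_l=1$%, %$n=1.5$%, the LES contribution grows both with wall distance (through %$L_t$%) and with grid refinement (through %$\Delta$%).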
Different coefficients have been tried in the optimisation of the blending function; the results are not greatly affected, but are always better than standard LES on the same mesh. In equation (%REFLATEX{eq:tau_hyb}%) the averaged velocity has been calculated as a running average with an averaging window of about 10 times the eddy turnover time. Although it is also possible to use plane averaging in the case of the channel flow, this was not done, in order to keep the formulation applicable to 3D flows where no plane averaging is possible. References %BEGINLATEX{label="Sch75"}% \bibitem[Schumann(1975)]{Sch75} U.~Schumann. \newblock Subgrid scale model for finite difference simulations of turbulent flows in plane channels and annuli. \newblock \emph{Journal of Computational Physics}, 18:\penalty0 376--404, 1975. %ENDLATEX% %BEGINLATEX{label="GroSch77"}% \bibitem[Grotzbach and Schumann(1977)]{GroSch77} G.~Grotzbach and U.~Schumann. \newblock Direct numerical simulation of turbulent velocity, pressure, and temperature fields in channel flows. \newblock In \emph{Symposium on turbulent shear flow}, pages 18--20, 1977. %ENDLATEX% %BEGINLATEX{label="MoiKim82"}% \bibitem[Moin and Kim(1982)]{MoiKim82} P.~Moin and J.~Kim. \newblock Numerical investigation of turbulent channel flow. \newblock \emph{Journal of Fluid Mechanics}, 118:\penalty0 341--377, 1982. %ENDLATEX% %BEGINLATEX{label="SulMcwMoe94"}% \bibitem[Sullivan et~al.(1994)Sullivan, McWilliams, and Moeng]{SulMcwMoe94} P.~Sullivan, J.~McWilliams, and C.~Moeng. \newblock A subgrid-scale model for large-eddy simulation of planetary boundary-layer flows. \newblock \emph{Boundary-Layer Meteorology}, 71:\penalty0 247--276, 1994. %ENDLATEX% %BEGINLATEX{label="BusWynIzuBra71"}% \bibitem[Businger et~al.(1971)Businger, Wyngaard, Izumi, and Bradley]{BusWynIzuBra71} J.~A. Businger, J.~C. Wyngaard, Y.~Izumi, and E.~E. Bradley. \newblock Flux-profile relationships in the atmospheric surface layer.
\newblock \emph{Journal of the Atmospheric Sciences}, 28:\penalty0 181--189, 1971. %ENDLATEX% %BEGINLATEX{label="Bag98"}% \bibitem[Baggett(1998)]{Bag98} J.~S. Baggett. \newblock On the feasibility of merging {LES} with {RANS} for the near-wall regions of attached turbulent flows. \newblock In \emph{Annual Research Briefs}, pages 267--276. Center for Turbulence Research, Stanford, CA, 1998. %ENDLATEX% %BEGINLATEX{label="NicBagMoiCab01"}% \bibitem[Nicoud et~al.(2001)Nicoud, Baggett, Moin, and Cabot]{NicBagMoiCab01} F.~Nicoud, J.~S. Baggett, P.~Moin, and W.~Cabot. \newblock Large eddy simulation wall-modeling based on suboptimal control theory and linear stochastic estimation. \newblock \emph{Physics of Fluids}, 13:\penalty0 2968--2984, 2001. %ENDLATEX% %BEGINLATEX{label="LauUriUty04"}% \bibitem[Laurence et~al.(2004)Laurence, Uribe, and Utyuzhnikov]{LauUriUty04} D.~Laurence, J.~C. Uribe, and S.~Utyuzhnikov. \newblock A robust formulation of the v2-f model. \newblock \emph{Flow, Turbulence and Combustion}, 73:\penalty0 169--185, 2004. %ENDLATEX% %BEGINLATEX{label="Sma63"}% \bibitem[Smagorinsky(1963)]{Sma63} J.~Smagorinsky. \newblock General circulation experiments with the primitive equations: {I}. {T}he basic equations. \newblock \emph{Monthly Weather Review}, 91:\penalty0 99--164, 1963. %ENDLATEX% %BEGINLATEX{label="Abe05"}% \bibitem[Abe(2005)]{Abe05} K.~Abe. \newblock A hybrid {LES}/{RANS} approach using an anisotropy-resolving algebraic turbulence model. \newblock \emph{International Journal of Heat and Fluid Flow}, 26:\penalty0 204--222, 2005. %ENDLATEX% %BEGINLATEX{label="Ham01"}% \bibitem[Hamba(2001)]{Ham01} F.~Hamba. \newblock An attempt to combine large eddy simulation with the $k-\varepsilon$ model in a channel flow calculation. \newblock \emph{Theoretical and Computational Fluid Dynamics}, 14:\penalty0 323--336, 2001. %ENDLATEX% %BEGINLATEX{label="Spe98"}% \bibitem[Speziale(1998)]{Spe98} C.~G. Speziale.
\newblock Turbulence modeling for time-dependent {RANS} and {VLES}: a review. \newblock \emph{AIAA Journal}, 36:\penalty0 173--184, 1998. %ENDLATEX%
Revision as of 18:56, 11 February 2009

The Problem

Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math].
A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Unsolved questions Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, "Can an approach of the following kind work?") and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose. IP-Szemeredi (a weaker problem than DHJ) Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn't work here, then it won't work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums.
So, if the d numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^d[/math].) Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any c-dense subset of the Cartesian product of an IP_d set (it is a two dimensional pointset) has a corner. The statement is true. One can even prove that a dense subset of a Cartesian product contains a square, by using the density HJ for k=4. (I will sketch the simple proof later.) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines that intersect the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our c-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma. Finally, let me prove that there is a square if d is large enough compared to c. Every point of the Cartesian product has two coordinates, each a 0,1 sequence of length d. It has a one-to-one mapping to [4]^d: given a point ((x_1,…,x_d),(y_1,…,y_d)) where x_i,y_j are 0 or 1, it maps to (z_1,…,z_d), where z_i=0 if x_i=y_i=0, z_i=1 if x_i=1 and y_i=0, z_i=2 if x_i=0 and y_i=1, and finally z_i=3 if x_i=y_i=1. Any combinatorial line in [4]^d defines a square in the Cartesian product, so the density HJ implies the statement. Gowers.7: With reference to Jozsef's comment, if we suppose that the d numbers used to generate the set are indeed independent, then it's natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d.
Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product. This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do. I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler. Gowers.22: A slight variant of the problem you propose is this. Let's take as our ground set the set of all pairs (U,V) of subsets of \null [n], and let's take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner's theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U,(V\cup D)\setminus C), where D is disjoint from both U and V and C is contained in both U and V. That is your original problem, I think. I think I now understand better why your problem could be a good toy problem to look at first. Let's quickly work out what triangle-removal statement would be needed to solve it.
(You've already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A. Ah, I see now that there's a problem with what I'm suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it's not really a set-theoretic reformulation after all. O'Donnell.35: Just to confirm I have the question right… There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x'), (y,y'), (z,z') such that for each i = 1…n, the 6 bits

[ x_i x'_i ]
[ y_i y'_i ]
[ z_i z'_i ]

are equal to one of the following six columns:

[ 0 0 ]  [ 0 0 ]  [ 0 1 ]  [ 1 0 ]  [ 1 1 ]  [ 1 1 ]
[ 0 0 ]  [ 0 1 ]  [ 0 1 ]  [ 1 0 ]  [ 1 0 ]  [ 1 1 ]
[ 0 0 ]  [ 1 0 ]  [ 0 1 ]  [ 1 0 ]  [ 0 1 ]  [ 1 1 ]

? McCutcheon.469: IP Roth: Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$. Presumably, this should be (perhaps much) simpler than DHJ, k=3. High-dimensional Sperner Kalai.29: There is an analogue for Sperner but with high-dimensional combinatorial spaces instead of "lines", but I do not remember the details (Kleitman? Katona? those are the usual suspects.)
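O'Donnell's six column patterns are exactly the coordinate-wise possibilities for the corner triples (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta) described by Gowers. A small brute-force check of that equivalence (illustrative Python, not part of the original discussion):

```python
from itertools import product

# The six allowed column patterns, written as ((x_i,x'_i),(y_i,y'_i),(z_i,z'_i))
ALLOWED = {
    ((0, 0), (0, 0), (0, 0)),
    ((0, 0), (0, 1), (1, 0)),   # delta_i = +1 (only possible from (0,0))
    ((0, 1), (0, 1), (0, 1)),
    ((1, 0), (1, 0), (1, 0)),
    ((1, 1), (1, 0), (0, 1)),   # delta_i = -1 (only possible from (1,1))
    ((1, 1), (1, 1), (1, 1)),
}

def corners(n):
    """Corner triples (eps,eta), (eps,eta+delta), (eps+delta,eta) in
    {0,1}^n x {0,1}^n, with delta a nonzero {-1,0,1}-sequence that keeps
    every coordinate in {0,1}."""
    bits = list(product((0, 1), repeat=n))
    for eps, eta in product(bits, bits):
        for delta in product((-1, 0, 1), repeat=n):
            if not any(delta):
                continue
            if all(0 <= e + d <= 1 and 0 <= h + d <= 1
                   for e, h, d in zip(eps, eta, delta)):
                yield ((eps, eta),
                       (eps, tuple(h + d for h, d in zip(eta, delta))),
                       (tuple(e + d for e, d in zip(eps, delta)), eta))
```

Enumerating all corners for small n confirms that every coordinate column falls into one of the six patterns.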
Fourier approach Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express using the Fourier expansion of f the expression \int f(x)f(y)1_{x<y}, where x<y is the partial order (=containment) for 0-1 vectors. Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random, and otherwise we can raise the density by passing to subspaces. (OK, you can try it directly for the k=3 density HJ problem too, but Sperner would be easier;) This is not unrelated to the regularity philosophy. Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again. The problem was that the natural Fourier basis in \null [3]^n was the basis you get by thinking of \null [3]^n as the group \mathbb{Z}_3^n. And if that's what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that's a multiple of 7, from which it follows that the third point of the line automatically lies in A as well. So this set A has too many combinatorial lines. But I'm fairly sure (perhaps you can confirm this) that A has no large Fourier coefficient. You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of \null[n] and just ask that the numbers of 1s, 2s and 3s inside W are multiples of 7.
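Gowers' obstruction can be verified mechanically on a small scale. The sketch below uses modulus 2 in place of 7 (and n=4) so the enumeration stays tiny, but the argument is the same: if two points of a line lie in A, the wildcard set has even size, so the third point lies in A as well.

```python
from itertools import product

def in_A(s, m=2):
    # Gowers' set A with m in place of 7: counts of 1s, 2s, 3s all = 0 (mod m)
    return all(s.count(c) % m == 0 for c in '123')

def line_counts(n=4, m=2):
    """For each combinatorial line in [3]^n, count how many of its three
    points lie inside A."""
    counts = []
    for tpl in product('123x', repeat=n):
        if 'x' not in tpl:
            continue
        pts = [''.join(str(v) if c == 'x' else c for c in tpl)
               for v in (1, 2, 3)]
        counts.append(sum(in_A(p, m) for p in pts))
    return counts
```

No line ever has exactly two of its points in A, while plenty of lines lie entirely inside A: the "too many combinatorial lines" behaviour described above.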
DHJ for dense subsets of a random set Tao.18: A sufficiently good Varnavides-type theorem for DHJ may have a separate application from the one in this project, namely to obtain a "relative" DHJ for dense subsets of a sufficiently pseudorandom subset of [3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of "dual functions" for Hales-Jewett), and so this is probably a bit off-topic.
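As a concrete check of the definitions at the top of the page, the first values of c_n can be computed by exhaustive search (illustrative Python; only feasible for tiny n):

```python
from itertools import product, combinations

def lines(n):
    """All combinatorial lines in [3]^n: a template over {1,2,3,x} with at
    least one wildcard, instantiated with x -> 1, 2, 3."""
    out = []
    for tpl in product('123x', repeat=n):
        if 'x' in tpl:
            out.append(frozenset(''.join(str(v) if c == 'x' else c for c in tpl)
                                 for v in (1, 2, 3)))
    return out

def c(n):
    """Largest line-free subset of [3]^n by exhaustive search (tiny n only)."""
    pts = [''.join(p) for p in product('123', repeat=n)]
    lns = lines(n)
    for k in range(len(pts), -1, -1):
        for sub in combinations(pts, k):
            s = set(sub)
            if not any(l <= s for l in lns):
                return k
```

For n = 2, for instance, removing the diagonal {11, 22, 33} leaves a line-free set of size 6, and the search confirms no larger one exists.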
Please forgive me if this is naive. I am only an aspiring educator after all. Why do we still teach estimation when there are easily accessible/teachable exact techniques? Why do statisticians teach $p$-value estimation instead of exact computation? Why do calculus professors teach Riemann sums alongside integration? Number Sense: At the elementary level, estimation helps students to develop number sense. As Daniel R. Collins notes, order of magnitude estimates can be quite important. Anecdotally, I once rented an apartment. I moved out of the apartment about halfway through the month, and the landlord offered to prorate the rent. I said "Great!" However, when she operated her calculator, she determined that the prorated rent was more than the monthly rent. When I asked how this could be right, she conceded that it was confusing, but the numbers on the calculator don't lie. Better number sense could have saved us all some time. Emphasizing Uncertainty: At least one of the "exact" techniques that you mention is rarely exact. To wit, most of the probability functions used in statistics don't have nice antiderivatives in terms of elementary functions. This means that any attempt to integrate them is almost certainly going to have to rely on numerical integration. Getting estimates for $p$-values (for example, via the "empirical rule") once again provides a check against technology. It can also be used to emphasize that we lack certainty. For example, if I have a table for a $t$ distribution that only gives entries for multiples of ten degrees of freedom, then I am going to have difficulty coming up with an "exact" $p$-value for a hypothesis test involving 47 degrees of freedom. I have to make a choice: should I use 40 or 50 degrees of freedom? In this context, I want to use the number that gives me less certainty, so I work with 40 d.f. This is still an issue with computers.
If a computer gives me a $p$-value that rounds to my level of significance, what should I do? Understanding that there are estimates throughout the process can help me to make that decision. Emphasizing Theory: I believe that we should be teaching mathematics not just in service to other fields, but also in service to the field of mathematics itself. You may not have that many future pure math Ph.D.s in your class, but you still need to make sure that you are preparing those students as well as all of the others. If integration or linear algebra is taught as a bunch of computational techniques, the math majors are going to be in a very poor position when they start taking "real" math classes. To use your example, Riemann sums are emphatically not an estimation technique, but the actual tool that is necessary to build the Riemann integral. If you can't work with Riemann sums, how are you ever going to work through a proof of the fundamental theorem of calculus? I'd say a good approximation is often better than an exact result. This may sound counterintuitive, but as the phrase is vague anyway, here is a longer explanation of what I mean: An "exact result" is often a formula, and often it's a complicated formula, and one needs at least a calculator to evaluate it. Since calculators use finite arithmetic there is, most likely, some approximation going on in the background (imagine that the answer is $\sqrt{\pi}$…). An approximation often comes as a sequence of approximations, and thus as a kind of algorithm, along with an error bound. For example the series representation $e = \tfrac{1}{0!} + \tfrac{1}{1!} + \tfrac{1}{2!}+\cdots$ 1) gives you a way to compute an approximation of $e$, 2) gives an idea that $e$ is somewhere near $2.5$, 3) can be augmented with an error bound to prove that $e\approx 2.7183 \pm 0.0001$. Some further aspects: Statistics (in contrast to stochastics) is not about calculating certain probabilities, but inferring them from experiments.
And inference is approximation/estimation. In analysis, a good estimate is often good enough for an exact result. An example is the "epsilon of room" strategy: instead of proving $a=0$, prove that $a\leq \epsilon$ and $a\geq -\epsilon$ for any $\epsilon>0$ (by using estimates) and then conclude $a=0$. In applications (real-world applications, I mean), approximations are much more important than exact computations. This is due to many reasons: one is that data is often uncertain, and an exact result based on uncertain data is worse than a good approximation (no one would do an exact data fit for noisy data); another is that approximations are often much quicker to get and still good enough for all practical purposes. Most real-world problems are only approximately described by nice mathematical formulas. Depending on the situation, it can be either silly or dangerous to assume that an "exact" result of a mathematical calculation exactly describes a real-world situation that caused you to perform the calculation. For example, to what extent does an "exact" result of a p-value calculation predict what would happen the next time you randomly tried to use the baseline hypothesis to get a value? When performing real-world integrations, often an approximate answer (which resembles a Riemann sum) is both more practical to determine, and more accurate, than an "exact" calculation based on a function that more-or-less describes the real-world situation. Furthermore, teaching the approximate method teaches an easy-to-remember rule of thumb, which has several advantages: Some people find it easier to remember. It might help the student know when the exact method is likely to be useful. It might help the student remember the logic of how the exact method was derived. It provides a way to sanity check results from the exact method.
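The series for $e$ mentioned above illustrates the "approximation plus error bound" point; here is a small Python sketch (the helper name and the crude tail bound $2/(n+1)!$ are mine, not from the answer):

```python
import math

# Hypothetical helper: partial sum of e = sum 1/k!, with the crude tail
# bound  sum_{k>n} 1/k!  <  2/(n+1)!  (each later term shrinks by >= 1/2).
def e_approx(n):
    s = sum(1.0 / math.factorial(k) for k in range(n + 1))
    return s, 2.0 / math.factorial(n + 1)
```

Already at n = 7 the bound certifies four correct decimal places, which is the kind of statement an "exact" formula alone does not give you.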
From the perspective of a professional statistician, I often find myself asking the opposite question: why do we spend so much time learning about closed-form solutions in our calculus classes? The reason I say this is that for the majority of applied statistics problems that require any sort of integration (such as 95% of the field of Bayesian statistics), there are no closed-form solutions and numerical methods are required. Similarly for optimization problems (e.g. maximum likelihood estimation): some simple problems have closed-form solutions (e.g. linear regression), but the vast majority of problems are solved iteratively. So from my professional perspective, we spent a lot of time learning closed-form solutions to various problems...but when it comes time to do the math behind any new statistical method, 99.9% of the time we end up using iterative methods instead.
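To make the "solved iteratively" point concrete: the maximum-likelihood estimate of a Cauchy location parameter has no closed form, so one solves the score equation numerically. A minimal sketch (bisection chosen for brevity; the function names are illustrative, not from any statistics library):

```python
# Cauchy location MLE: the score equation sum 2(x-t)/(1+(x-t)^2) = 0 has no
# closed-form solution, so it is solved iteratively.
def cauchy_score(theta, data):
    return sum(2 * (x - theta) / (1 + (x - theta) ** 2) for x in data)

def mle_location(data, lo=-10.0, hi=10.0, iters=200):
    # bisection on the score; assumes a sign change on [lo, hi]
    for _ in range(iters):
        mid = (lo + hi) / 2
        if cauchy_score(lo, data) * cauchy_score(mid, data) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For symmetric data the iterate converges to the centre of symmetry, exactly as the (unavailable) closed form would.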
Chemical reaction rates and how they relate to changes in temperature. I use this example when teaching chemical engineering students about integrals. The Maxwell-Boltzmann distribution describes the speed distribution of molecules in a gas. $$f(v) = \sqrt{\frac{2}{\pi} \left( \frac{m}{k_B \cdot T} \right)^3} \cdot v^2 \cdot \exp\left( \frac{-m v^2}{2 k_B T} \right)$$ The integral $\int_a^b f(v) dv$ is then the fraction of the gas molecules with a velocity in the range $[a, b]$. For a certain reaction to occur there must be a minimal amount of kinetic energy present (the activation energy, $E_a$); from that, the lowest possible velocity of a gas molecule with this kinetic energy can be determined by $$v_\textrm{min} = \sqrt{\frac{2 E_a}{m}}$$ The fraction of gas molecules that can participate in the reaction is then given by $$\int_{v_\textrm{min}}^{\infty} f(v) \, dv$$ This can be used to investigate how changes in temperature alter the fraction of gas molecules that have sufficient kinetic energy to overcome the activation energy. One application is to test the rule of thumb which states that an increase in temperature of 10 kelvin will roughly double the reaction rate. Below is Mathematica code for computing this:

assump = Assumptions -> {k > 0, T > 0, m > 0};
MaxwellBoltzmannSpeedDistribution = Sqrt[2/Pi (m/(k T))^3] v^2 Exp[-m v^2/(2 k T)];
FractionOfParticlesWithEnergyAbove = Integrate[MaxwellBoltzmannSpeedDistribution, {v, minVelocity, Infinity}, assump];
TemperatureDependency = FractionOfParticlesWithEnergyAbove //. {minVelocity -> Sqrt[2 10^-19/m], k -> 1.3806488*10^(-23), m -> 1.6726 10^-27}
res = Table[{T, TemperatureDependency} /. {T -> Tval}, {Tval, 293, 343, 10}];
res // TableForm

The above code gives the following output:

293 1.05132*10^-10
303 2.33909*10^-10
313 4.94258*10^-10
323 9.96656*10^-10
333 1.926*10^-9
343 3.58024*10^-9
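The same computation can be cross-checked without a symbolic integrator. The Python sketch below numerically integrates the distribution with the same constants (E_a = 10^-19 J, proton-like mass) and reproduces the roughly-doubling-per-10-K behaviour; the truncation at 3 v_min and the step count are my choices, not from the original answer:

```python
import math

def fraction_above(T, Ea=1e-19, m=1.6726e-27, kB=1.3806488e-23, steps=20000):
    """Fraction of molecules faster than v_min = sqrt(2*Ea/m), by trapezoidal
    integration of the Maxwell-Boltzmann speed distribution. Truncating the
    integral at 3*v_min is safe here: the integrand is already negligible."""
    vmin = math.sqrt(2 * Ea / m)
    vmax = 3 * vmin
    pref = math.sqrt(2 / math.pi * (m / (kB * T)) ** 3)
    f = lambda v: pref * v * v * math.exp(-m * v * v / (2 * kB * T))
    h = (vmax - vmin) / steps
    return h * (f(vmin) / 2 + sum(f(vmin + i * h) for i in range(1, steps))
                + f(vmax) / 2)
```

Comparing 293 K and 303 K gives a ratio slightly above 2, in line with the Mathematica table.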
I have a (possibly-) unfair coin, which lands heads with probability $p$ and lands tails with probability $1-p$. I toss it twice; the tosses are independent and identically distributed. The probabilities of the various outcomes will vary depending on $p$. For example, if I know that $p = \frac12$, then the probability of tossing two heads, which I'll abbreviate $\mathbb P(HH)$, can be calculated as $p^2 = \frac14$. Symmetrically, the probability of tossing two tails $\mathbb P(TT) = \frac14$. On the other hand, if I know that $p > \frac12$, e.g. $p = \frac1{\sqrt 3}$, then $HH$ becomes more likely: $\mathbb P(HH) = \frac13$. Correspondingly, $TT$ becomes less likely: $\mathbb P(TT) = (1 - \frac1{\sqrt 3})^2 \approx 0.18$. It seems like I can push $HH$ up above $\frac14$, but only by pushing $TT$ down below that same number. With that in mind, the following question might seem surprising: Under what circumstances does $\mathbb P(HH) = \mathbb P(TT) = \frac13$?
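One hint at an answer (an observation of mine, not part of the question): if $p$ is not fixed but itself uniformly distributed on $[0,1]$, the two tosses are exchangeable but no longer independent, and $\mathbb P(HH) = E[p^2] = \frac13$ while $\mathbb P(TT) = E[(1-p)^2] = \frac13$. A Monte Carlo sketch:

```python
import random

# Monte Carlo sketch: p ~ Uniform[0,1], then two conditionally independent
# tosses; marginally P(HH) and P(TT) both come out near 1/3.
random.seed(0)
N = 200_000
hh = tt = 0
for _ in range(N):
    p = random.random()
    a = random.random() < p   # first toss lands heads?
    b = random.random() < p   # second toss lands heads?
    hh += a and b
    tt += (not a) and (not b)
```

With the tosses i.i.d. for a single known $p$, by contrast, $p^2 = (1-p)^2 = \frac13$ has no solution, which is what makes the question surprising.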
Usually I set equations using the align environment together with \nonumber for those few lines that shall not have a number, or the align* environment if no equation shall be numbered. But what if ... I have a series of equation steps, about 20 lines or so. I would like to have it automatically decide page breaks when necessary, but it resists and always tries to stay on a single page, rendering the ... I am trying to get the following formula on two lines like this:\begin{equation}W \in \mathbf{M}_N \rightarrow W' \in \mathbf{M}_{N+1}\\(w_{ij})_{1\leq i,j\leq N} \mapsto (w_{ij})_{1\leq i,j\leq ... Maybe what I want is sinful, but let me explain: I use aligned* environments, because I usually don't need equation numbers. When I need one, I use \addtocounter{equation}{1}\tag{\theequation} from ...
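For the page-break question above, one standard remedy (assuming the amsmath package, which also provides align) is \allowdisplaybreaks; a minimal sketch:

```latex
\documentclass{article}
\usepackage{amsmath}
\allowdisplaybreaks[4] % 4 = most permissive; omit the argument for the default
\begin{document}
\begin{align}
  a &= b \nonumber \\ % \nonumber still suppresses individual numbers
  c &= d \\
  e &= f            % a 20-line chain may now break across pages
\end{align}
\end{document}
```

A page break at one particular line can also be encouraged with \displaybreak just before the \\ in question.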
EDIT The first version of my question has not worked as I expected, so I will try to be a little bit more specific. The final goal I am trying to achieve is the generation of a ten-minute time series: to achieve this I have to perform an FFT operation, and that is the point I have been stumbling upon. Generally the aimed time series will be assigned as the sum of two terms: a steady component $U(t)$ and a fluctuating component $u^{'}(t)$. That is $$u(t) = U(t) + u^{'}(t);$$ So generally, my code follows this procedure: 1) Given data $time = 600 [s];$ $Nfft = 4096;$ $L = 340.2 [m];$ $U = 10 [m/s];$ $df = 1/600 = 0.00167 Hz;$ $f_{n} = Nfft/(2*time) = 3.4133 Hz;$ This means that my frequency array should be laid out as follows: $$ f = (-f_{n}+df):df:f_{n} $$ But, instead of using the whole $f$ array, I am only making use of the positive half: $$ f_{+} = df:df:f_{n} = 0.00167:3.4133 Hz; $$ 2) Spectrum definition I define a certain spectrum shape, applying the following relationship $$ S_{u} = \frac{6L/U}{(1 + 6f_{+}L/U)^{5/3}}; $$ 3) Random phase generation I then have to generate a set of complex samples with a determined distribution: in my case, the random phase will approach a standard Gaussian distribution $(\mu = 0, \sigma = 1)$. In MATLAB I call nn = complex(normrnd(0,1,Nfft/2),normrnd(0,1,Nfft/2)); 4) Apply random phase To apply the random phase, I just do this $$ H_{u} = S_{u}*nn; $$ At this point my pains start! So far, I have only generated $Nfft/2 = 2048$ complex samples accounting for the $f_{+}$ content. Therefore, the content accounting for the negative half of $f$ is still missing. To overcome this issue, I was thinking of merging the $real$ and $imaginary$ parts of $H_{u}$, in order to get a signal $H_{uu}$ with $Nfft = 4096$ samples and with all real values. But, by using this merging process, the $0-th$ frequency order would not be represented, since the $complex$ part of $H_{u}$ is defined for $f_{+}$.
Thus, how to account for the $0-th$ order by keeping a procedure as the one I have been proposing so far?
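A common way around this (a sketch of my own, not necessarily the poster's intended scheme) is to keep only the one-sided spectrum, prepend a real 0th-order (DC) term, and let an inverse real FFT supply the conjugate-symmetric negative frequencies. The sqrt(S) amplitude convention and the omitted frequency-step scaling are assumptions here, and NumPy stands in for MATLAB:

```python
import numpy as np

rng = np.random.default_rng(1)
Nfft, time_s, L, U = 4096, 600.0, 340.2, 10.0
df = 1.0 / time_s
f_pos = df * np.arange(1, Nfft // 2 + 1)            # df .. f_n, as above

Su = (6 * L / U) / (1 + 6 * f_pos * L / U) ** (5.0 / 3.0)
nn = rng.standard_normal(Nfft // 2) + 1j * rng.standard_normal(Nfft // 2)
Hu = np.sqrt(Su) * nn        # amplitude ~ sqrt(S): an assumed convention

# One-sided array for irfft: entry 0 is the 0th-order (DC) term -- zero for
# a zero-mean fluctuation -- and the last (Nyquist) entry must be real.
spec = np.concatenate(([0.0 + 0.0j], Hu))
spec[-1] = spec[-1].real
u_fluct = np.fft.irfft(spec, n=Nfft)                # real series, length Nfft
u = U + u_fluct                                     # u(t) = U + u'(t)
```

Because irfft assumes Hermitian symmetry, the negative-frequency half never has to be stored, and the 0th order is represented explicitly by the first entry.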
Interested in the following function: $$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$ where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following: $$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ whose $x$-th bit is set to 1, in which $y$ bits are set to 1 (including the $x$-th bit) and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need '$\overline F$ is algebraic over $F$'? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition of algebraic closure, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme in the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length.
Is that often part of the definition, or obtained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $a_n z^n$ is absolutely summable, so $\sum a_n z^n$ converges. Let $g : [0,\frac{ 1} {2} ] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{ 1} {2} ] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s) ds,$ for all $n ≥ 1.$ Show that $\lim_{n→∞} n!g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$. Can you give some hint? My attempt: $t\in [0,1/2]$. Consider the sequence $a_n(t)=n!g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
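As a hint for the iterated-integral question: from $|g|\le M$ one gets $|g_n(t)|\le M\,t^{n-1}/(n-1)!$ by induction, so $|n!\,g_n(t)|\le M\,n\,t^{n-1}\to 0$ on $[0,\frac12]$. The bound can be watched numerically (illustrative Python, trapezoidal rule; the helper name is mine):

```python
import math

def iterate_integrals(g, n, t=0.5, steps=2000):
    """g_n(t) with g_1 = g and g_{k+1}(t) = integral of g_k from 0 to t,
    computed on a uniform grid with the trapezoidal rule."""
    h = t / steps
    vals = [g(i * h) for i in range(steps + 1)]
    for _ in range(n - 1):
        out = [0.0]
        for i in range(steps):
            out.append(out[-1] + h * (vals[i] + vals[i + 1]) / 2)
        vals = out
    return vals[-1]
```

For $g=\cos$ (so $M=1$) and $n=15$, the value $n!\,g_n(\tfrac12)$ already sits below the bound $15\cdot(\tfrac12)^{14}\approx 9.2\times10^{-4}$.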
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of $n$ independent functions from the proper function space, so I obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional with respect to the coefficients equal to zero), I get a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve these equations for the coefficients, I look at the secular determinant, which must be zero, since otherwise no nontrivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values of the functional for my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional are instead obtained directly from the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle.

If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies $\operatorname{digitsum}(z)=\operatorname{digitsum}(x)$.

> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.

(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
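The variational procedure in the first snippet above is the Rayleigh-Ritz method, and the "more general principle" behind it: the stationarity conditions form a generalized eigenvalue problem $Hc=\lambda Sc$ ($H$ the functional's matrix in the ansatz basis, $S$ the overlap matrix), so the roots of the secular determinant $\det(H-\lambda S)=0$ are precisely the stationary values of the functional, and by the min-max principle they bound the true eigenvalues from above. A sketch for a two-function ansatz applied to $-u''=\lambda u$ on $(0,1)$, $u(0)=u(1)=0$ (exact lowest eigenvalue $\pi^2$); the matrix entries are integrals computed by hand, and the function name is mine:

```python
import math

def secular_roots_2x2(H, S):
    """Roots of det(H - lam*S) = 0 for symmetric 2x2 matrices:
    the stationary (Ritz) values, obtained without ever solving
    for the ansatz coefficients."""
    # det(H - lam*S) expands to a*lam^2 + b*lam + c
    a = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    b = -(H[0][0] * S[1][1] + H[1][1] * S[0][0]
          - H[0][1] * S[1][0] - H[1][0] * S[0][1])
    c = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    disc = math.sqrt(b * b - 4.0 * a * c)
    return sorted([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])

# Ansatz phi1 = x(1-x), phi2 = x^2(1-x)^2:
# H_ij = integral of phi_i' phi_j',  S_ij = integral of phi_i phi_j.
H = [[1 / 3, 1 / 15], [1 / 15, 2 / 105]]
S = [[1 / 30, 1 / 140], [1 / 140, 1 / 630]]
```

The smaller root approximates $\pi^2 \approx 9.8696$ from above to about four significant figures, exactly the behavior the question describes: the admissible values of the functional fall out of the determinant condition directly.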
It's discussed very carefully (but with no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!
Advances in Differential Equations, Volume 14, Number 7/8 (2009), 663-684.

Pointwise decay for the solutions of degenerate and singular parabolic equations

Abstract: We study the asymptotic behavior, as $t\to\infty$, of the solutions to the evolutionary $p$-Laplace equation \[ v_t=\operatorname{div}( |\nabla v|^{p-2}\nabla v), \] with time-independent lateral boundary values. We obtain the sharp decay rate of $\max_{x\in\Omega} |v(x,t)-u(x)|$, where $u$ is the stationary solution, both in the degenerate case $p > 2$ and in the singular case $1 < p < 2$. A key tool in the proofs is the Moser iteration, which is applied to the difference $v(x,t)-u(x)$. In the singular case, we construct an example proving that the celebrated phenomenon of finite extinction time, valid for $v(x,t)$ when $u\equiv 0$, does not have a counterpart for $v(x,t)-u(x)$.

Citation: Juutinen, Petri; Lindqvist, Peter. Pointwise decay for the solutions of degenerate and singular parabolic equations. Adv. Differential Equations 14 (2009), no. 7/8, 663-684. MR2527689; Zbl 1182.35036. https://projecteuclid.org/euclid.ade/1355867230
Let $H$ be a separable infinite-dimensional Hilbert space. Denote the Calkin algebra by $Q(H)=B(H)/K(H)$, and by $U(Q(H))$ the group of unitaries in $Q(H)$. I'm trying to show that the map $F: U(Q(H))/U(Q(H))_0 \to \mathbb{Z}$ (where $U(Q(H))_0$ denotes the identity component), defined by $F((A+K(H))\,U(Q(H))_0)=\operatorname{index}(A)$, where $A+K(H)$ is unitary in $Q(H)$, is well defined, surjective and injective.

First, as $A+K(H)\in U(Q(H))$, in particular it is invertible in $Q(H)$, so by Atkinson's theorem $A$ is a Fredholm operator, and for all $k\in K(H)$, $\operatorname{index}(A)=\operatorname{index}(A+k)$; hence the index is well defined. For well-definedness of $F$ it is sufficient to check that if $A+K(H)$ is in the connected component of the identity, i.e. there is a path of unitaries in $Q(H)$ between $A+K(H)$ and $1_{B(H)}+K(H)$, then $\operatorname{index}(A)=0$. I couldn't show that. I know that if $F(H)$ denotes the Fredholm operators in $B(H)$, then $F_1,F_2 \in F(H)$ are in the same connected component iff they have the same index, but that is only in $B(H)$.

Also, I couldn't show $F$ is injective, i.e. that if $\operatorname{index}(A)=0$ then $A+K(H)\in U(Q(H))_0$. I know that if $\operatorname{index}(A)=0$ then $A$ is a perturbation of an invertible operator in $B(H)$ by a compact operator. I also know the polar decomposition, and that $GL(B(H))$ is path-connected. If I could take a unitary from the class of $A$ in $Q(H)$, then I could construct a path of invertible elements between $1$ and that unitary; by the polar decomposition we can even construct a path of unitaries between them, which could help.

Surjectivity is OK: just take the class of the unilateral shift in the Calkin algebra. There it is unitary and its index equals $-1$, so we can reach any element of $\mathbb{Z}$. Any help will be appreciated.
I've been trying to read Gross' paper on Heegner points on $X_0(N)$ and I am stuck on a few details. The definition he is working with is that a Heegner point is a pair $y=(E,E')$, where $E$ and $E'$ are elliptic curves admitting an isogeny with cyclic kernel of order $N$, and where $E$ and $E'$ both have complex multiplication by the order $\mathcal{O}$ of discriminant $D$ in an imaginary quadratic field $K$.

Gross goes on to explain that we may assume the lattice for $E$ is a fractional ideal $\mathfrak{a}$ and the lattice for $E'$ is $\mathfrak{b}$, such that $\mathfrak{n}=\mathfrak{a}\mathfrak{b}^{-1}$ is a proper ideal of $\mathcal{O}$ whose quotient $\mathcal{O}/\mathfrak{n}$ is cyclic of order $N$. It is the next line that I don't understand: "Such an ideal will exist if and only if there is a primitive binary quadratic form of discriminant $D$ which properly represents $N$...". The line goes on, but this is one of the things I'm stuck on. I've tried googling some notes/papers on binary quadratic forms, but I can't find anything that helps me understand what a binary quadratic form representing $N$ has to say about an order admitting a cyclic quotient. An explanation or a good reference would be much appreciated.

The second and, I think, more important part of my confusion comes a bit later in the same section: Gross explains that if we have such an $\mathfrak{n}$, we can construct a Heegner point as follows. Let $\mathfrak{a}$ be an invertible $\mathcal{O}$-submodule of $K$ and let $[\mathfrak{a}]$ denote its class in $Pic(\mathcal{O})$. Let $\mathfrak{n}$ be a proper $\mathcal{O}$-ideal with cyclic quotient of order $N$, and put $E=\mathbf{C}/\mathfrak{a}$, $E'=\mathbf{C}/\mathfrak{a}\mathfrak{n}^{-1}$. They are related by an obvious isogeny and thus determine a Heegner point, denoted $(\mathcal{O},\mathfrak{n},[\mathfrak{a}])$.
Next, given $y=(\mathcal{O},\mathfrak{n},[\mathfrak{a}])$, we can find its image in the upper half-plane by picking an oriented basis $\langle\omega_1,\omega_2\rangle$ of $\mathfrak{a}$ such that $\mathfrak{a}\mathfrak{n}^{-1}=\langle\omega_1,\omega_2/N\rangle$. Then $y$ corresponds to the orbit of $\tau=\omega_1/\omega_2$ under $\Gamma_0(N)$. Lastly, since $\tau\in K$, it satisfies $A\tau^2+B\tau+C=0$ for some integers $A,B,C$ with $\gcd(A,B,C)=1$. What I don't understand is that Gross claims that $D=B^2-4AC$, $A=NA'$ for some $A'$, and $\gcd(A',B,NC)=1$. I don't see what the $\tau$ we cooked up has to do with the discriminant of our order.

I have read a paper that defined a Heegner point to be an imaginary quadratic point in the half-plane such that $\Delta(\tau)=\Delta(N\tau)$. I have seen how this would help with part of the claim above, but I don't see why, in this situation, $\Delta(\tau)=\Delta(N\tau)$. In fact, it seems that everything I'm confused about here is the claim that $$D=\Delta(\tau)=\Delta(N\tau),$$ where $\Delta$ denotes discriminant. Any insight into these two questions would be very appreciated.
Let $G$ be a compact Lie group and $a\in\mathfrak{g}^*$ (the dual of the Lie algebra of $G$), and let $\mathcal O_a$ be a coadjoint orbit. Every coadjoint orbit is a Kähler manifold and also a projective variety. How can we compute the Kodaira dimension of a coadjoint orbit as a projective variety?

Motivation: The Kodaira dimension of coadjoint orbits is important, because we can classify these projective varieties by the Kodaira dimension, which is a birational invariant. In fact I am looking for $$\kappa(\mathcal O_a)=\limsup_{m\to \infty}\frac{\log\dim H^0(\mathcal O_a, K_{\mathcal O_a}^{\otimes m})}{\log m}.$$
Communications on Pure & Applied Analysis, May 2017, Volume 16, Issue 3. ISSN: 1534-0392; eISSN: 1553-5258.

Abstract: For the 2-D quasilinear wave equation $\sum\nolimits_{i,j = 0}^2 g_{ij}(\nabla u)\partial _{ij}^2u = 0$, whose coefficients are independent of the solution $u$, the blowup result for small data solutions has been established in [1,2] when the null condition does not hold and a generic nondegeneracy condition on the initial data is assumed. In this paper, we are concerned with the more general 2-D quasilinear wave equation $\sum\nolimits_{i,j = 0}^2 g_{ij}(u,\nabla u)\partial _{ij}^2u = 0$, whose coefficients depend on $u$ and $\nabla u$ simultaneously. When the first weak null condition is not fulfilled and a suitable nondegeneracy condition on the initial data is assumed, we show that the small data smooth solution $u$ blows up in finite time; moreover, an explicit expression for the lifespan and the blowup mechanism are also established.

Abstract: In this paper, we investigate the large time decay and stability of any given global smooth solution of the 3D incompressible inhomogeneous MHD system. We prove that, given a solution $(a, u, B)$ of (2), the velocity field and the magnetic field decay to zero with an explicit rate; for $u$ this coincides with the incompressible inhomogeneous Navier-Stokes equations [1]. In particular, we give the decay rate of higher order derivatives of $u$ and $B$, which is useful for proving our main stability result. For a large solution of (2) denoted by $(a, u, B)$, we show that a small perturbation of the initial data still generates a unique global smooth solution which stays close to the reference solution $(a, u, B)$. Finally, we should mention that the main results of this paper concern large solutions.
Abstract: This paper is concerned with a nonlocal dispersal susceptible-infected-susceptible (SIS) epidemic model with Dirichlet boundary condition, where the rates of disease transmission and recovery are assumed to be spatially heterogeneous. We introduce a basic reproduction number $R_0$ and establish threshold-type results on the global dynamics in terms of $R_0$. More specifically, we show that if the basic reproduction number is less than one, then the disease will go extinct, and if the basic reproduction number is larger than one, then the disease will persist. In particular, our results imply that the nonlocal dispersal of the infected individuals may suppress the spread of the disease even in a high-risk domain.

Abstract: We consider the Neumann and Dirichlet problems for second-order linear elliptic equations in a bounded Lipschitz domain $\Omega \subset \mathbb{R}^n$, $n \geq 2$, where $A: \mathbb{R}^n \to \mathbb{R}^{n^2}$, $b: \Omega \to \mathbb{R}^n$ and $\lambda \geq 0$ are given. Some $W^{1,2}$-estimates were already known, provided that $A \in L^\infty(\Omega)^{n^2}$ and $b \in L^r(\Omega)^n$, where $n \leq r < \infty$ if $n \geq 3$ and $2 < r < \infty$ if $n=2$. Under more regularity assumptions on $A$ and $\Omega$, we establish the existence and uniqueness of weak solutions satisfying $W^{1,p}$-estimates. Our $W^{1,p}$-estimates are uniform in $\lambda \geq 0$ in the case of the Dirichlet problems. For the Neumann problems, the $W^{1,p}$-estimates are uniform with respect to $\lambda \geq 0$ if $f$ and $g$ satisfy some compatibility conditions. These uniform estimates allow us to obtain strong stability results in $W^{1,p}$ with respect to $\lambda$ for the Neumann and Dirichlet problems.

Abstract: In this paper, we consider a perturbed nonlocal elliptic equation, where $\Omega$ is a smooth bounded domain in $\mathbb{R}^{N}$, $\lambda$ is a real parameter and $g$ is a non-odd perturbation term.
If $f$ is odd in $u$ and satisfies various superlinear growth conditions at infinity in $u$, infinitely many solutions are obtained, in spite of the lack of symmetry of this problem, for any $\lambda\in \mathbb{R}$. The results obtained in this paper may be seen as natural extensions of some classical theorems to the case of nonlocal operators. Moreover, the methods used in this paper can also be applied to obtain some new results for the classical Laplace equation with Dirichlet boundary conditions.

Abstract: We obtain sufficient conditions for the nonexistence of positive solutions to some nonlinear parabolic inequalities with coefficients possessing singularities on unbounded sets.

Abstract: Novel global weighted parabolic Sobolev estimates, weighted mixed-norm estimates and a.e. convergence results for singular integrals associated with evolution equations are obtained. Our results include the classical heat equation, the harmonic oscillator evolution equation and their corresponding Cauchy problems. We also show weighted mixed-norm estimates for solutions to degenerate parabolic extension problems arising in connection with the fractional space-time nonlocal equations $(\partial_t-\Delta)^su=f$ and $(\partial_t-\Delta+|x|^2)^su=f$, for $0 < s < 1$.

Abstract: In this paper, we establish the $L^p$ estimates for the maximal functions associated with multilinear pseudo-differential operators. Our main result is Theorem 1.2. There are several major different ingredients and extra difficulties in our proof compared with those in Grafakos, Honzík and Seeger [15] and Honzík, for maximal functions generated by multipliers. First, in order to eliminate the variable $x$ in the symbols, we adapt a non-smooth modification of the smooth localization method developed by Muscalu in [22].
Then, by applying the inhomogeneous Littlewood-Paley dyadic decomposition and a discretization procedure, we can reduce the proof of Theorem 1.2 to proving localized estimates for localized maximal functions generated by discrete paraproducts. The non-smooth cut-off functions in the localization procedure are essential in establishing the localized estimates. Finally, by proving a key localized square function estimate (Lemma 4.3) and applying the good-$\lambda$ inequality, we derive the desired localized estimates.

Abstract: We prove weighted Lorentz estimates of the Hessian of strong solutions of nondivergence linear elliptic equations $a_{ij}(x)D_{ij}u(x)=f(x)$. The leading coefficients are assumed to be measurable with respect to one variable and to have small BMO semi-norms with respect to the other variables. Here, an approximation method, Lorentz boundedness of the Hardy-Littlewood maximal operators and an equivalent representation of the Lorentz norm are employed.

Abstract: We study a two-player zero-sum tug-of-war game with varying probabilities that depend on the game location $x$. In particular, we show that the value of the game is locally asymptotically Hölder continuous. The main difficulty is the loss of translation invariance. We also show the existence and uniqueness of values of the game. As an application, we prove that the value function of the game converges to a viscosity solution of the normalized $p(x)$-Laplacian.

Abstract: In this paper, we study a fully nonlinear inverse curvature flow in Euclidean space, and prove a non-collapsing property for this flow using the maximum principle. Precisely, we show that, under some conditions on the speed function, the curvature of the largest touching interior ball is bounded by a multiple of the speed.

Abstract: Motivated by relevant physical applications, we study Schrödinger equations with state-dependent potentials.
Existence, localization and multiplicity results are established for positive standing wave solutions in the case of oscillating potentials. To this aim, a localized Pucci-Serrin type critical point theorem is first obtained. Two examples are then given to illustrate the new theory.

Abstract: Improved Trudinger-Moser-Adams type inequalities in the spirit of Lions were recently studied in [21]. The main purpose of this paper is to prove the equivalence of these versions of the Trudinger-Moser-Adams type inequalities and to set up the relations between the corresponding best constants. Moreover, using these identities, we investigate the existence and nonexistence of optimizers for some Trudinger-Moser-Adams type inequalities.

Abstract: In this paper, we study a quadratically coupled Schrödinger system, where $\Omega\subset\mathbb{R}^6$ is a smooth bounded domain, $-\lambda (\Omega) < \lambda_1, \lambda_2 < 0$, $\mu_1, \mu_2, \alpha, \gamma>0$, and $\lambda (\Omega)$ is the first eigenvalue of $-\Delta$ with the Dirichlet boundary condition. The main difficulty in investigating this kind of equation is caused by the fact that all the quadratic nonlinearities, including the coupling terms, are of critical growth. By the methods used in [Zhenyu Guo, Positive ground state solutions of a nonlinearly coupled Schrödinger system with critical exponents in $\mathbb{R}^4$, J. Math. Anal. Appl., 430(2):950-970, 2015], the existence of positive ground state solutions of the system is established under more ingenious hypotheses.

Abstract: This article is devoted to the discussion of the boundary layer which arises in passing from the one-dimensional parabolic-elliptic Keller-Segel system to the corresponding aggregation system on the half space.
The characteristic boundary layer is shown to be stable for small diffusion coefficients by using asymptotic analysis and detailed error estimates between the Keller-Segel solution and the approximate solution. In the end, numerical simulations for this boundary layer problem are provided.

Abstract: An attraction-repulsion chemotaxis model with nonlinear chemotactic sensitivity functions and a growth source is considered. The global-in-time existence and boundedness of solutions are proved under some conditions on the nonlinear sensitivity functions and the growth source function. Our results improve earlier ones for linear sensitivity functions.

Abstract: In this paper, we consider the critical nonlinear Schrödinger equations in ${\mathbb{R}^2}$ with an oscillating nonlinearity, in a radial geometry. We numerically investigate the influence of the oscillations on the time of existence of the corresponding solution, in the spirit of the recent result of Cazenave and Scialom. It can be observed that the solution converges to the solution of a limit equation obtained with the weak limit of the oscillatory term, starting either from Gaussian data or from standing wave solutions.
Abstract: We study a second order nonlinear differential equation, where $\alpha_{i}, \beta_{j}>0$, $a_{i}(x), b_{j}(x)$ are non-negative Lebesgue integrable functions defined on $[0, L]$, and the nonlinearities $g_{i}(s), k_{j}(s)$ are continuous, positive and satisfy suitable growth conditions, so as to cover the classical superlinear equation $u''+a(x)u^{p} = 0$, with $p>1$. When the positive parameters $\beta_{j}$ are sufficiently large, we prove the existence of at least $2^{m}-1$ positive solutions for the Sturm-Liouville boundary value problems associated with the equation. The proof is based on the Leray-Schauder topological degree for locally compact operators on open and possibly unbounded sets. Finally, we deal with radially symmetric positive solutions for the Dirichlet problems associated with elliptic PDEs.
Discrete & Continuous Dynamical Systems - S, February 2012, Volume 5, Issue 1. ISSN: 1937-1632; eISSN: 1937-1179. Issue on fast reaction - slow diffusion scenarios: PDE approximations and free boundaries.

Abstract: This issue is focussed on the modeling, analysis and simulation of fast reaction-slow transport scenarios as well as the corresponding fast-reaction limits. Within this framework, internal sharp and thin reaction layers form and travel through the spatial domain, often producing unexpected effects. Such situations appear in a variety of significant applications; for example, flame propagation in combustion, segregation and aggregation of biological individuals, chemical attack on reactive porous materials (such as concrete or natural stone), dissolution and precipitation reactions in minerals, tumor growth, grain boundary motion, and temperature-induced phase transitions in shape-memory alloys represent typical cases in which the fast process is localized within an a priori unknown internal active layer.

Abstract: In our previous works we proposed and studied a mathematical model for the position of the joint of a shape memory alloy and a bias spring in the case where the temperature is known. The purpose of this paper is to establish a mathematical model with unknown temperature and to show local-in-time existence of a solution to the model.

Abstract: We study a one-dimensional model describing the motion of a shape-memory alloy spring at a small characteristic time scale, called here the fast-temperature-activation limit. At this level, the standard Falk model reduces to a nonlinear elliptic partial differential equation (PDE) with Newton boundary condition. We show existence and uniqueness of a bounded weak solution and approximate it numerically. Interestingly, in spite of the nonlinearity of the model, the approximate solution exhibits a nearly linear profile.
Finally, we extend the reduced model to the simplest PDE system for shape memory alloys that can capture oscillations, and then damp out these oscillations numerically. The numerical results for both limiting cases show excellent agreement. The graphs show that the valve opens in an instant, which is realistic behavior of the free boundary.

Abstract: We consider the elastic theory of single crystals at constant temperature, where the free energy density depends on the local concentration of one or more species of particles in such a way that, for a given local concentration vector, certain lattice geometries (phases) are preferred. Furthermore, we consider possibly large deformations of the crystal lattice. After deriving the physical model, we indicate, by means of a suitable implicit time discretization, an existence result for measure-valued solutions that relies on a new existence theorem for Young measures in infinite settings. This article is an overview of [2].

Abstract: We consider reaction-diffusion systems which, in addition to certain slow reactions, contain a fast irreversible reaction in which chemical components A and B form a product P. In this situation, and under natural assumptions on the RD system, we prove the convergence of weak solutions, as the reaction speed of the irreversible reaction tends to infinity, to a weak solution of a limiting system. The limiting system is a Stefan-type problem with a moving interface at which the chemical reaction front is localized.

Abstract: In this paper we introduce a novel generic destabilization mechanism for (reversible) spatially periodic patterns in reaction-diffusion equations in one spatial dimension. This Hopf dance mechanism occurs for long wavelength patterns near the homoclinic tip of the associated Busse balloon ($=$ the region in (wave number, parameter) space for which stable periodic patterns exist).
It shows that the boundary of the Busse balloon locally has a fine-structure of two intertwining 'dancing' (or 'snaking') Hopf destabilization curves (or manifolds) that limit on the Hopf bifurcation value of the associated homoclinic limit pulse and that have infinitely many, accumulating, intersections. The Hopf dance is first recovered by a detailed numerical analysis of the full Busse balloon in an explicit Gray-Scott model. The structure, and its generic nature, is confirmed by a rigorous analysis of singular long wave length patterns in a normal form model for pulse-type solutions in two component, singularly perturbed, reaction-diffusion equations. Abstract: In this paper we consider a two-phase flow problem in porous media and study its singular limit as the viscosity of the air tends to zero; more precisely, we prove the convergence of subsequences to solutions of a generalized Richards model. Abstract: We formulate a reaction diffusion equation with non-local term as a mean field equation of the master equation where the particle density is defined continuously in space and time. In the case of the constant mean waiting time, this limit equation is associated with the diffusion coefficient of A. Einstein, the reaction rate in phenomenology, and the Debye term under the presence of potential. Abstract: We consider a model for grain boundary motion with constraint. In composite material science it is very important to investigate the grain boundary formation and its dynamics. In this paper we study a phase-filed model of grain boundaries, which is a modified version of the one proposed by R. Kobayashi, J.A. Warren and W.C. Carter [18]. The model is described as a system of a nonlinear parabolic partial differential equation and a nonlinear parabolic variational inequality. The main objective of this paper is to show the global existence of a solution for our model, employing some subdifferential techniques in the convex analysis. 
Abstract: Reaction-diffusion system approximations to a cross-diffusion system are investigated. Iida and Ninomiya [Recent Advances on Elliptic and Parabolic Issues, 145-164 (2006)] proposed a semilinear reaction-diffusion system with a small parameter and showed that the limit equation takes the form of a weakly coupled cross-diffusion system, provided that the solutions of both the reaction-diffusion and the cross-diffusion systems are sufficiently smooth. In this paper, the results are extended to a more general cross-diffusion problem involving strongly coupled systems. It is shown that a solution of the problem can be approximated by that of a semilinear reaction-diffusion system without any assumptions on the solutions. This indicates that the mechanism of cross-diffusion might be captured by reaction-diffusion interaction.

Abstract: In this paper we study an optimal control problem for a singular diffusion equation associated with the total variation energy. The singular diffusion equation is derived as an Allen-Cahn type equation, and the optimal control problem under consideration corresponds to a temperature control problem in the solid-liquid phase transition. We show the existence of an optimal control for our singular diffusion equation by applying the abstract theory. Next we consider our optimal control problem from the viewpoint of numerical analysis. In fact, we consider an approximating problem for our equation, and we show the relationship between the original control problem and its approximating one. Moreover, we derive the necessary condition for an approximating optimal pair, and give a numerical experiment for our approximating control problem.

Abstract: We study a new formulation for the Eikonal equation $|\nabla u| =1$ on a bounded subset of $\mathbb{R}^2$. Considering a field $P$ of orthogonal projections onto $1$-dimensional subspaces, with $\operatorname{div} P \in L^2$, we prove existence and uniqueness for solutions of the equation $P\operatorname{div} P=0$.
We give a geometric description, comparable with the classical case, and we prove that such solutions exist only if the domain is a tubular neighbourhood of a regular closed curve. This formulation provides a useful approach to the analysis of stripe patterns. It is specifically suited to systems where the physical properties of the pattern are invariant under rotation over 180 degrees, such as systems of block copolymers or liquid crystals.

Abstract: In this paper we discuss two models involving protein binding. The first model describes a system involving a drug, a receptor and a protein, and the question is to what extent the affinity of the drug to the protein affects the drug-receptor binding and thereby the efficiency of the drug. The second model is the basic model underlying Target-Mediated Drug Disposition, which describes the pharmacokinetics of a drug in the presence of a target, often a receptor.

Abstract: This paper studies the dynamical stability of the steady state for some thermoelastic and thermoviscoelastic systems in a multi-dimensional space domain. More general nonlinear terms can be treated here than in [6], which studied the stability for the one-dimensional system called the Falk model system. We also give applications to the thermoviscoelastic systems treated in [8] and [9].

Abstract: Consider the reaction front formed when two initially separated reactants A and B are brought into contact and react at a rate proportional to $A^n B^m$ when the concentrations $A$ and $B$ are positive. Further, suppose that both $n$ and $m$ are less than unity. Then the leading order large time asymptotic reaction rate has compact support, i.e. the reaction zone where the reaction takes place has a finite width and the reaction rate is identically zero outside of this region. In the large time asymptotic limit an analytical approximate solution for the reactant concentrations is constructed in the vicinity of the reaction zone.
The approximate solution is found to be in good agreement with numerically obtained solutions. For $n \ne m$ the location of the maximum reaction rate does not coincide with the centre of mass of the reaction; further, for $n>m$ this local maximum is shifted slightly closer to the zone that initially contained species A, with the reverse holding when $m>n$. The three limits $m\rightarrow 0$, $n\rightarrow 1$ and $m,n\rightarrow 1$ are given special attention.

Abstract: We consider a relation between the proliferation of solid tumor cells and the time-changes of the quantities of heat shock proteins in them. To do so, in the present paper we first obtain experimental data on the proliferation curves of solid tumor cells, namely A549 and HepG2, as well as the time-changes of proteins, especially HSP90 and HSP72, in them. We then propose a mathematical model, described by ODE systems, to reproduce the experimental data on the proliferation curves and the time-changes of the quantities of heat shock proteins. Finally, we discuss a problem concerning mitosis of solid tumor cells and the time-changes of the quantities of heat shock proteins, from the viewpoint of biotechnology.
Statistics – A subject which most statisticians find difficult, but in which nearly all physicians are expert. – Stephen S. Senn Introduction We will regard probability theory as a way of logically reasoning about uncertainty. I realize that this is not a precise mathematical definition, but neither is ‘probability theory is the mathematics arising from studying non-negative numbers which add up to 1’, which is at least partially accurate. Some additional material is covered elsewhere: * Statistical inference. To get well grounded, let's begin with a sequence of definitions. First definitions Definition A probability space is a measure space $D$ with measure $P$ such that $P(D)=1$. The space $D$ is also sometimes called the sample space and the measurable subsets of $D$ are called events. Remark The definition of probability space is sufficiently general to include lots of degenerate examples. For example, we can take any set $S$ and make it into a probability space by decreeing that the only measurable subsets are $S$ and $\emptyset$ with $P(S)=1$. Although we will try to make this explicit, we will almost always want singleton sets, i.e., sets with just a single element, to be measurable. When a probability space has this property and every measurable subset is a countable union of singleton sets, we will call the probability space discrete. Exercise Make the positive integers into a discrete probability space where every point has non-zero probability. Definition The probability of an event $E$ is $P(E):=\int_E dP$. For discrete probability spaces we can also write $\int_E dP=\sum_{x\in E} P(x)$. Construction Given two probability spaces $D_1$ and $D_2$ with respective probability measures $P_1$ and $P_2$, we can define a probability space $D_1\times D_2$ by: The underlying set is the cartesian product $D_1\times D_2$.
The measurable subsets are generated under countable unions and complements by the product sets $I_1\times I_2$, where $I_1\subseteq D_1$ and $I_2\subseteq D_2$ are measurable subsets. The probability measure is determined by $P(I_1\times I_2)=P_1(I_1)\cdot P_2(I_2)$, where $I_1$ and $I_2$ are as in the previous statement. Example Suppose we have a fair coin that we flip twice. The four possible outcomes $D=\{HH,HT,TH,TT\}$ are equally likely and form a discrete probability space such that $P(x)=1/4$ for all $x\in D$. The probability of the event $E$ where we get precisely one head is $P(E)=P(HT)+P(TH)=1/2$. Definition A random variable $X\colon D\to T$ is a measurable function from a probability space $D$ to a measure space $T$. We can associate to each such $X$ a probability measure $P_X$ on $T$ by assigning to each measurable subset $U\subseteq T$, $P_X(U)=P(X^{-1}(U))$. Indeed, it is clear that $P_X(T)=1$ and that the measure of a countable disjoint union is $$P_X(\coprod U_i)=P(X^{-1}(\coprod U_i))=P(\coprod X^{-1}(U_i))=\sum P(X^{-1}(U_i))=\sum P_X(U_i).$$ Remark There is an unfortunate clash between the language of probability theory and standard English usage. For example, imagine that we have a box with a single button on it and a numerical display. Every time we push the button the screen displays a number between 1 and 10. In common usage we say that these values are random if there is no way to know which number will appear on the screen each time we push the button. It is important to know that mathematics/probability theory/statistics do not provide any such mechanism. There is no function whose values are “randomly” chosen given a particular input. In particular, mathematics does not provide a method of randomly choosing objects. One should keep this in mind when talking about random variables. Random variables are not objects with random values; they are functions.
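These discrete definitions are easy to check in code. Below is a minimal sketch (the helper names `prob` and `expectation` are ours, not a library API): the masses $P(n)=2^{-n}$ give one answer to the exercise about the positive integers, and the two-flip coin space reproduces $P(E)=1/2$ for the event "exactly one head".

```python
from fractions import Fraction
from itertools import product

# Exercise: make the positive integers a discrete probability space with
# every point of non-zero probability, e.g. P(n) = 2^(-n).
def P(n):
    return Fraction(1, 2 ** n)

# The masses sum to 1: a truncated sum plus the exact geometric tail.
partial = sum(P(n) for n in range(1, 50))
assert partial + Fraction(1, 2 ** 49) == 1

# The fair two-flip coin space: four equally likely outcomes HH, HT, TH, TT.
coin_space = {a + b: Fraction(1, 4) for a, b in product("HT", repeat=2)}

def prob(space, event):
    """P(E) = sum over x in E of P(x), as in the discrete-case formula."""
    return sum(space[x] for x in event)

def expectation(space, X):
    """E(X) = sum over d of X(d) * P(d)."""
    return sum(X(d) * p for d, p in space.items())

one_head = {d for d in coin_space if d.count("H") == 1}
print(prob(coin_space, one_head))                       # 1/2
print(expectation(coin_space, lambda d: d.count("H")))  # 1
```

The head-count expectation evaluating to 1 anticipates the expected-value example computed later in these notes.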
The additional data that a random variable $X$ does define are the numbers associated with the preimages $X^{-1}(I)$ (for measurable subsets $I$), which we can use to weight the values of $X$. This can also be used to shed light on statistical mechanics, which uses probability theory to model situations arising in physics. The fact that such models have been extremely successful in the field of quantum mechanics does not necessarily mean there is something random, in the common-usage sense, about the universe; we are not claiming that “God plays dice with the universe”. It is just that our best mathematical models for these phenomena are constructed using the language of probability theory. Finally, we should remark that the closest mathematical object to a random number generator in the everyday sense is a pseudorandom number generator. These are deterministic functions which output sequences of numbers that attempt to model our intuition of what a random number generator should produce. Although not truly random, they are heavily used in simulations and Monte Carlo methods. Conventions If we are regarding $\Bbb R$ as a measure space and do not specify an alternative measure, we will mean that it is equipped with its standard Borel measurable subsets and the Borel measure $E\mapsto \int_E dx$. If we regard a discrete finite set $S$ or any interval $[a,b]$ (with $a<b$) as a probability space and do not specify the measure, then we will mean that it is equipped with a uniform measure. In other words, $P(s)=1/|S|$ for all $s\in S$, and for all measurable $E\subseteq [a,b]$ we have $P(E)=P_{\Bbb R}(E)/(b-a)$.
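To make the remark on pseudorandom number generators concrete, here is a small sketch showing that Python's `random` module (a Mersenne Twister) is a deterministic function of its seed: reseeding reproduces the exact same "button presses".

```python
import random

def draws(seed, k=5):
    """Deterministically produce k 'button presses' showing 1..10."""
    rng = random.Random(seed)  # fully determined by the seed
    return [rng.randint(1, 10) for _ in range(k)]

# The same seed always yields the same sequence: nothing here is random
# in the everyday sense, even though the output "looks" random.
assert draws(42) == draws(42)
print(draws(42), draws(43))  # different seeds typically give different runs
```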
Remarks: If the measure $P_X$ from the previous definition is absolutely continuous with respect to the standard Borel measure (i.e., every set of Borel measure 0 also has $P_X$-measure 0), then there is a measurable function $dP_X/dx \colon T\to \Bbb R$ such that for all measurable $E\subseteq T$, $$P_X(E) := \int_{X^{-1} E} dP := \int_E dP_X = \int_{E} \frac{dP_X}{dx} dx.$$ All of these integrals are Lebesgue integrals. The measurable function $dP_X/dx$ is called a Radon-Nikodym derivative, and any two such derivatives disagree on a set of measure 0, i.e., they agree almost everywhere. Without the absolute continuity hypothesis there is only a distribution satisfying this property. Having a measure defined in such a way obviously implies absolute continuity, so the first sentence can be formulated as an if and only if statement. This is the Radon-Nikodym theorem. Definition For a discrete probability space $D$ the function $p\colon D\to [0,1]$, defined by $d\mapsto p(d):=P(d)$, is called the probability mass function (PMF) of $D$. Note that the measure $P$ on $D$ is uniquely determined by the associated probability mass function. Definition Suppose that $\Bbb R$ is equipped with a probability measure $P$ and the cumulative distribution function (cdf) $F(a)=P(x\leq a)$ is a continuously differentiable function of $a$. Then $F(x)=\int_{-\infty}^x F'(t)\, dt$ and $F'$ is called the probability density function (pdf) of $F$ (or $P$). Note that the probability measure $P$ is determined by $F$ and hence by the probability density function $F'$. This can lead to some confusing abuses of language. Example Let $D$ be the probability space from the first example. Let $X\colon D\to \mathbb{R}$ be the random variable which counts the number of occurrences of heads in a given outcome. Then the cumulative distribution function of $P_X$ is $F_X(x)=0$ if $x<0$, $F_X(x)=1/4$ if $0\leq x < 1$, $F_X(x)=3/4$ if $1\leq x < 2$ and $F_X(x)=1$ if $x\geq 2$.
This function is discontinuous and hence the probability density function is not defined. Moments of distributions Typically when we are handed a probability space $D$, we analyze it by constructing a random variable $X\colon D\to T$ where $T$ is either a countable subset of $\Bbb R$ or $\Bbb R$ itself. Using the procedure of the previous section we obtain a probability measure $P_X$ on $T$, and we now study this probability space. Usually a great deal of information about $D$ is lost during this process, but it allows us to focus our energies and work in the more tractable and explicit space $T\subseteq\Bbb R$. So, we now focus on such probability spaces. This is usually decomposed into two cases: when $T$ is discrete (e.g., a subset of $\Bbb N$) and when $T$ is $\Bbb R$ (or some interval in $\Bbb R$). We could study the first case as a special case of the latter by just studying probability measures on $\Bbb R$, but that would require throwing in a lot of Dirac delta distributions at some point and I sense that you may not like that. We will seek a compromise and still use the integral notation to cover both cases, although integrals in the discrete case can be expressed as sums. There are three special properties of this situation that we will end up using: 1. It makes sense to multiply elements of $T$ with real valued functions. 2. There is a natural ordering on $T$ (so we can define a cdf). 3. We can now meaningfully compare random variables with values in $\Bbb R$ which are defined on different probability spaces, by comparing their associated probability measures on $\Bbb R$ (or their cdfs/pdfs when these exist). For example, the first property allows us to make sense of: Definition The expected value or mean of a random variable $X\colon D\to T\subseteq \Bbb R$ is $$\mu_X:=E(X)= \int_{x\in T} x\cdot dP_X = \int_{d\in D} X(d)\, dP.$$ Let $F_X$ denote the cdf of $X$. A median of $X$ is any $t\in \Bbb R$ such that $F_X(t)=0.5$. Suppose that $X$ admits a pdf $f_X$.
The modes of $X$ are those $t\in \Bbb R$ at which $f_X(t)$ is maximal. Example In our coin flipping example, the expected value of the random variable $X$ which counts the heads is $$\int_D X\, dP = \sum_{d\in D} X(d)p(d) = 2/4+1/4+1/4+0/4=1,$$ as expected. The third property lets us make sense of: Definition Two random variables $X\colon D_1\to T$ and $Y\colon D_2\to T$ are identically distributed if they define the same probability measure on $T$, i.e., $P_X(I)=P_Y(I)$ for all measurable subsets $I\subseteq T$. In this case, we write $X\sim Y$. Definition We associate to two random variables $X,Y\colon D\to T$ a random variable $X\times Y\colon D \to T^2$ by $X\times Y(x)=(X(x),Y(x))$. This induces a probability measure $P_{X,Y}$ on $T^2$. When $T=\Bbb R$ we can then define an associated joint cdf $F_{X,Y}\colon \Bbb R^2\to [0,1]$ defined by $F_{X,Y}(a,b)=P_{X,Y}(x\leq a, y\leq b)$, which, when $P_{X,Y}$ is absolutely continuous with respect to the Lebesgue measure, admits a joint pdf. Similarly, we can extend this to joint probability distributions of any number of random variables. Definition Two random variables $X,Y\colon D\to T$ with corresponding probability measures $P_X$ and $P_Y$ on $T$ are independent if the associated joint probability measure $P_{X,Y}$ on $T^2$ satisfies $P_{X,Y}(I_1\times I_2)=P_X(I_1)P_Y(I_2)$ for all measurable subsets $I_1, I_2\subseteq T$. When two random variables are both independent and identically distributed, we abbreviate this to iid. Definition Suppose that $T$ is a probability space whose singleton sets are measurable. A random sample of size $n$ from $T$ is any point of the associated product probability space $T^n$. Exercise Show that if $X$ and $Y$ are two $\Bbb R$-valued random variables then they are independent if and only if their joint cdf is the product of their individual cdfs.
Suppose moreover that $X$, $Y$, and the joint distribution admit pdfs $f_X$, $f_Y$, and $f_{X,Y}$ respectively; then show that $f_{X,Y}=f_X f_Y$ if and only if $X$ and $Y$ are independent. Definition The $k$th moment of a random variable $X\colon D \to \Bbb R$ is $E(X^k)$. The variance of a random variable $X\colon D\to \Bbb R$ is $$\sigma_X^2=E((X-\mu_X)^2)=\int_D (X-\mu_X)^2\, dP.$$ The standard deviation of $X$ is $\sigma_X=\sqrt{\sigma_X^2}$. The covariance of a pair of random variables $X,Y\colon D\to \Bbb R$ is $$ Cov(X,Y) = E((X-\mu_X)(Y-\mu_Y)). $$ The correlation coefficient of a pair of random variables $X,Y\colon D\to \Bbb R$ is $$\frac{Cov(X,Y)}{\sigma_X \sigma_Y}.$$ Exercise Suppose that $X$ and $Y$ are two independent $\Bbb R$-valued random variables with finite means $\mu_X$ and $\mu_Y$ and finite variances $\sigma^2_X$ and $\sigma^2_Y$ respectively. 1. Show that, for $a,b\in \Bbb R$, the mean of $aX+bY$ is $a\mu_X + b\mu_Y$. 2. Show that the variance of $aX+bY$ is $a^2 \sigma^2_X + b^2 \sigma^2_Y$. 3. Show that $E(XY)$ is $\mu_X\cdot \mu_Y$. 4. Show that $E(X^2)=\sigma_X^2+\mu_X^2$. Definition The characteristic function of a random variable $X\colon D\to \Bbb R$ is the complex function $$\varphi_X(t)=E(e^{itX})=\int_{x\in \Bbb R} e^{itx} dP_X = \int_{d\in D} e^{itX(d)} dP.$$ Remarks The characteristic function is always defined (because we are integrating an absolutely bounded function over a finite measure space). When $X$ admits a pdf $p_X$, then up to a reparametrization the characteristic function is the Fourier transform of $p_X$: $\mathcal{F}(p_X)(t)=\varphi_X(-2\pi t)$. Two random variables have the same characteristic function if and only if they are identically distributed. Some Important Results The Law of Large Numbers, which essentially says that the average $S_n$ of $n$ iid random variables with finite mean $\mu$ “converges” to the common mean.
The Central Limit Theorem, which says that under the above hypotheses plus the assumption that the random variables have a finite variance $\sigma^2$, the random variable $\sqrt{n}(S_n-\mu)$ converges in distribution to the normal distribution with mean $0$ and variance $\sigma^2$. This result is the basis behind many normality assumptions and is critical to hypothesis testing, which is used throughout the sciences. Conditional probability Suppose we have a probability space $P\colon D\to [0,1]$ and two events $A,B\subseteq D$. Then we write $P(A,B)=P(A\cap B)$. Suppose that $P(B)>0$; then define the conditional probability of $A$ given $B$ as $$P(A|B)=P(A,B)/P(B).$$ A similar definition is also given for the conditional pdf of two random variables $X$ and $Y$: $f_{X,Y}(x|y)=f_{X,Y}(x,y)/f_Y(y)$, where $f_Y(y)=\int_{x \in \Bbb R} f_{X,Y}(x,y)\, dx$ is the marginal density. Bayes Rule Let $A$ be an event in a probability space $P\colon D\to [0,1]$ and let $\{B_i\}_{i=1}^n$ be pairwise disjoint events which cover $D$, all of which have non-zero probability. Then $$ P(B_i|A)=\frac{P(A|B_i)P(B_i)}{\sum_{j=1}^n P(A|B_j)P(B_j)}. $$ There is also the pdf form: $$ f_{X,Y}(x|y)=\frac{f_{X,Y}(y|x)f_X(x)}{\int f_{X,Y}(y|x) f_X(x)\, dx}. $$ The usefulness of Bayes rule is that it allows us to write a conditional probability that we do not understand (the dependence of $X$ on $Y$) in terms that we might understand (the dependence of $Y$ on $X$).
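Bayes' rule for a finite partition can be checked numerically. The sketch below uses a made-up example (two urns $B_1$, $B_2$ chosen with probability 1/2 each, and $A$ the event "drew a white ball"); the function name is ours.

```python
from fractions import Fraction

def bayes(prior, likelihood, i):
    """P(B_i | A) = P(A|B_i) P(B_i) / sum_j P(A|B_j) P(B_j),
    for a partition {B_j} with the given priors and likelihoods."""
    total = sum(p * l for p, l in zip(prior, likelihood))  # P(A)
    return prior[i] * likelihood[i] / total

# Hypothetical setup: urn 1 is 3/4 white, urn 2 is 1/4 white,
# and each urn is picked with probability 1/2.
prior = [Fraction(1, 2), Fraction(1, 2)]
likelihood = [Fraction(3, 4), Fraction(1, 4)]  # P(white | urn)

print(bayes(prior, likelihood, 0))  # 3/4: a white ball favours urn 1
```

Note how the posterior probabilities over the partition sum to 1, as they must.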
SageMath SageMath (formerly Sage) is a program for numerical and symbolic mathematical computation that uses Python as its main language. It is meant to provide an alternative to commercial programs such as Maple, Matlab, and Mathematica. SageMath provides support for the following: Calculus: using Maxima and SymPy. Linear Algebra: using the GSL, SciPy and NumPy. Statistics: using R (through RPy) and SciPy. Graphs: using matplotlib. An interactive shell: using IPython. Access to Python modules: such as PIL, SQLAlchemy, etc. Contents Installation The sagemath package contains the command-line version; an additional package provides HTML documentation and inline help from the command line, and a separate package includes a kernel for the Jupyter notebook interface. Note: Most of the standard Sage packages are available as optional dependencies of the package or in the AUR, so they have to be installed additionally as normal Arch packages in order to take advantage of their features. Note that there is no need to install them with sage -i; in fact this command will not work if you installed SageMath with pacman. Usage SageMath mainly uses Python as a scripting language, with a few modifications to make it better suited for mathematical computations. SageMath command-line SageMath can be started from the command-line: $ sage For information on the SageMath command-line see this page. Note, however, that it is not very comfortable for some uses such as plotting. When you try to plot something, for example: sage: plot(sin,(x,0,10)) SageMath opens the plot in an external application. Sage Notebook Note: The SageMath Flask notebook is deprecated in favour of the Jupyter notebook. The Jupyter notebook is recommended for all new worksheets; a conversion application is available to convert your Flask notebooks to Jupyter. A better-suited interface for advanced usage of SageMath is the Notebook.
To start the Notebook server from the command-line, execute: $ sage -n The notebook will be accessible in the browser from http://localhost:8080 and will require you to log in. However, if you only run the server for personal use, and not across the internet, the login will be an annoyance. You can instead start the Notebook without requiring login, and have it automatically pop up in a browser, with the following command: $ sage -c "notebook(automatic_login=True)" Jupyter Notebook SageMath also provides a kernel for the Jupyter notebook in a separate package. To use it, launch the notebook with the command $ jupyter notebook and choose "SageMath" in the drop-down "New..." menu. The SageMath Jupyter notebook supports LaTeX output via the %display latex command, and 3D plots if the corresponding optional package is installed. Cantor Cantor is an application included in the KDE Edu Project. It acts as a front-end for various mathematical applications such as Maxima, SageMath, Octave, Scilab, etc. See the Cantor page on the Sage wiki for more information on how to use it with SageMath. Cantor can be installed from the official repositories, either as a standalone package or as part of the KDE Edu group. Optional additions SageTeX If you have TeX Live installed on your system, you may be interested in using SageTeX, a package that makes the inclusion of SageMath code in LaTeX files possible. TeX Live is made aware of SageTeX automatically, so you can start using it straight away.
As a simple example, here is how you include a Sage 2D plot in your TeX document (assuming you use pdflatex): include the sagetex package in the preamble of your document with the usual \usepackage{sagetex} create a sagesilent environment in which you insert your code: \begin{sagesilent} dob(x) = sqrt(x^2 - 1) / (x * arctan(sqrt(x^2 - 1))) dpr(x) = sqrt(x^2 - 1) / (x * log( x + sqrt(x^2 - 1))) p1 = plot(dob,(x, 1, 10), color='blue') p2 = plot(dpr,(x, 1, 10), color='red') ptot = p1 + p2 ptot.axes_labels(['$\\xi$','$\\frac{R_h}{\\max(a,b)}$']) \end{sagesilent} create the plot, e.g. inside a float environment: \begin{figure} \begin{center} \sageplot[width=\linewidth]{ptot} \end{center} \end{figure} compile your document with the following procedure: $ pdflatex <doc.tex> $ sage <doc.sagetex.sage> $ pdflatex <doc.tex> then you can have a look at your output document. The full documentation of SageTeX is available on CTAN. Troubleshooting TeX Live does not recognize SageTeX If your TeX Live installation does not find the SageTeX package, you can try the following procedure (as root, or use a local folder): Copy the files to the texmf directory: # cp /opt/sage/local/share/texmf/tex/* /usr/share/texmf/tex/ Refresh TeX Live: # texhash /usr/share/texmf/ texhash: Updating /usr/share/texmf/.//ls-R... texhash: Done.
Using the Kahan summation algorithm should suffice for your purpose. Having $$R=\frac{\sum\limits_{i=1}^{N}a_i}{\sum\limits_{i=1}^{N}b_i}=\frac{A}{B},$$ if $A$ and $B$ are computed accurately enough, the final ratio $R$ should not be a problem. If $A$ and $B$ are computed using Kahan summation, you can expect the relative error to be $\mathcal{O}(\epsilon_\text{mach}+N\epsilon_\text{mach}^2)$. For double precision and $N\approx 10^{10}$ you are safe ($N\ll 1/\epsilon_\text{mach}$), and your error is practically independent of $N$: $\mathcal{O}(\epsilon_\text{mach})$. It's worth mentioning that the constant in front of the error estimate (the condition number of the summation, $\kappa_{\sum}$) can still be relatively large. If the summation condition number $$\kappa_{\sum_A}=\frac{\sum\limits_{i=1}^{N}|a_i|}{\left|\sum\limits_{i=1}^{N}a_i\right|}$$ is large, your best bet would probably be Shewchuk's algorithm. With that, you will be able to calculate your numerator and denominator to full double precision. If $\tilde{A}$ and $\tilde{B}$ are calculated in full double precision (say using Shewchuk's algorithm), $$\tilde{A}=A(1+\delta_A),\quad \tilde{B}=B(1+\delta_B),\quad |\delta_A|,|\delta_B|\le\epsilon_\text{mach},$$ then $$\tilde{R}=\tilde{A}\oslash\tilde{B}=\frac{A(1+\delta_A)}{B(1+\delta_B)}(1+\delta_{\oslash})=\frac{A}{B}(1+\theta),\qquad |\theta|\leq\frac{3\epsilon_\text{mach}}{1-3\epsilon_\text{mach}}.$$ Here, $\tilde{A},\tilde{B},\tilde{R}$ are the floating point (IEEE-754) representations of $A$, $B$, and $R$, respectively, and $\oslash$ is the floating point analog of division. Now, even if $A$ and $B$ are not calculated to full precision (Kahan), it will just increase $\delta_A$ and $\delta_B$ (not allowing approximation by $\theta$ with such a tight bound). However, I still do not see how the division can be the problem here, especially since the true expected values of $A$ and $B$ are in $[10^{-1}, 10^{1}]$.
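For reference, here is a minimal Kahan (compensated) summation in Python; a sketch, not tuned for production. The example input, a 1 followed by many terms of $10^{-16}$, is constructed so that naive left-to-right summation loses every small term while the compensated sum retains them.

```python
def kahan_sum(xs):
    """Compensated summation: track the low-order bits lost at each step."""
    s = 0.0
    c = 0.0              # running compensation for lost low-order bits
    for x in xs:
        y = x - c        # subtract the error carried from the previous step
        t = s + y        # big + small: low-order bits of y may be lost...
        c = (t - s) - y  # ...and are recovered here
        s = t
    return s

# 1 followed by 100000 tiny terms: naive summation returns exactly 1.0,
# Kahan summation recovers the true value 1 + 1e-11.
xs = [1.0] + [1e-16] * 100000
print(sum(xs), kahan_sum(xs))
```

Incidentally, Python's built-in `math.fsum` implements a Shewchuk-style exact summation, which matches the "full double precision" suggestion above.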
Tool to compute resistor color code. Electronic components, such as resistors, have their values designated by a standardized color code. Resistors' Color Code - dCode Tag(s): Electronics To know the value of a resistance, use an ohm-meter or read the color code on the resistor. The international standard CEI 60757 (1983) defines a color code to write the value of a resistor (and also of capacitors and some other electronic components). Colors are associated with digits: 0 Black, 1 Brown, 2 Red, 3 Orange, 4 Yellow, 5 Green, 6 Blue, 7 Violet, 8 Grey, 9 White, -1 Gold, -2 Silver. Most often, a resistor has 4 bands: The first two bands (or the first three) each indicate a digit (each digit corresponds to a color). The next band indicates the power-of-ten multiplier. The last one (fourth, sometimes fifth) indicates the tolerance or precision of the calculated value. When this band is absent, it means the largest tolerance: 20%. Sometimes an additional band is coded for precision resistors; it indicates a temperature coefficient (in ppm/Kelvin or ppm/°C). Example: A resistor Yellow, Orange, Red: the digits are 4, 3, 2. The first 2 digits make the number 43. The 3rd digit, 2, is the power of 10 factor. The calculation is \( 43 \times 10^2 = 4300 \Omega \). Example: A resistor Blue, Yellow, Red, Brown, Brown: the digits are 6, 4, 2, 1, 1. The value is given by \( 642 \times 10^1 \pm 1 \% = 6420 \Omega \pm 1 \% \). Example: If the multiplier band gives the number 3, then multiply the value given by the first bands by \( 10 ^ 3 = 1000 \). If this band is of gold color, the value is divided by 10, and for the color silver, divide by 100.
The measured value is never exact but must lie in the tolerance interval of the resistor. Example: A resistor of 100 Ω with a tolerance of 5% could be measured between 95 Ω and 105 Ω. Example: A resistor of 220 Ω with a tolerance of 10%. The value of the tolerance is \( 220 \times 10\% = 22 \). The tolerance interval is therefore \( 220 \pm 22 \): the value is between 198 and 242, sometimes noted \( [198, 242] \). Most often, the first band is the one closest to the edge. The tolerance band is sometimes more widely spaced than the previous ones. Generally, prefixes are used for values in ohms: k for kilo (10^3) and M for mega (10^6). Example: 12000 Ω = 12 kΩ Example: 3400000 Ω = 3.4 MΩ A resistor has a minimum of 4 bands, but sometimes the last band is absent. As it only concerns the tolerance of the value found with the first 3 bands, take the highest tolerance value: 20%. Some mnemonic sentences can help to remember the colors and their values. (Some include the tolerance bands Gold, Silver or None.) Example: B.B. ROY Goes Bombay Via Gateway With Genelia and Susanne. B. (BLACK) B. (BROWN) ROY (RED-ORANGE-YELLOW) Goes (GREEN) Bombay (BLUE) Via (VIOLET) Gateway (GREY) With (WHITE) Genelia (GOLD) and Susanne (SILVER). Example: Bad Beer Rots Our Young Guts But Vodka Goes Well – Get Some Now. Bad (BLACK) Beer (BROWN) Rots (RED) Our (ORANGE) Young (YELLOW) Guts (GREEN) But (BLUE) Vodka (VIOLET) Goes (GREY) Well (WHITE) – Get (GOLD) Some (SILVER) Now (NONE). Example: Big Boys Race Our Young Girls But Violet Generally Wins. Big (BLACK) Boys (BROWN) Race (RED) Our (ORANGE) Young (YELLOW) Girls (GREEN) But (BLUE) Violet (VIOLET) Generally (GREY) Wins (WHITE). Example: Better Be Right Or Your Great Big Venture Goes West. Better (BLACK) Be (BROWN) Right (RED) Or (ORANGE) Your (YELLOW) Great (GREEN) Big (BLUE) Venture (VIOLET) Goes (GREY) West (WHITE). Example: Better Be Right Or Your Great Big Vacation Goes Wrong.
Better (BLACK) Be (BROWN) Right (RED) Or (ORANGE) Your (YELLOW) Great (GREEN) Big (BLUE) Vacation (VIOLET) Goes (GREY) Wrong (WHITE). Example: Big Brown Rabbits Often Yield Great Big Vocal Groans When Gingerly Slapped Needlessly. Big (BLACK) Brown (BROWN) Rabbits (RED) Often (ORANGE) Yield (YELLOW) Great (GREEN) Big (BLUE) Vocal (VIOLET) Groans (GREY) When (WHITE) Gingerly (GOLD) Slapped (SILVER) Needlessly (NONE).
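The decoding rules above can be mechanized directly. This is an illustrative sketch (the function and table names are ours, not dCode's script): all bands except the last are digits, the last is the power-of-ten multiplier, and the tolerance band is passed separately.

```python
# Color -> digit mapping from the CEI 60757 table; gold/silver act as
# divide-by-10 / divide-by-100 multipliers (exponents -1 and -2).
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9,
          "gold": -1, "silver": -2}
# Common tolerance colors; a missing band means the largest tolerance, 20%.
TOLERANCE = {"brown": 1, "red": 2, "gold": 5, "silver": 10, None: 20}

def decode(bands, tolerance=None):
    """Value in ohms and tolerance interval: all bands but the last are
    digits, the last band is the power-of-ten multiplier."""
    *digit_bands, mult = (DIGITS[b.lower()] for b in bands)
    value = 0
    for d in digit_bands:
        value = value * 10 + d
    value *= 10 ** mult
    tol = TOLERANCE[tolerance]
    return value, (value * (100 - tol) / 100, value * (100 + tol) / 100)

# Examples from the text:
print(decode(["yellow", "orange", "red"])[0])                  # 4300
print(decode(["blue", "yellow", "red", "brown"], "brown")[0])  # 6420
```

The 220 Ω / 10% example above comes out as the interval (198.0, 242.0), matching the text.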
Using Partial Fractions to Find Inverse Laplace Transforms concept When using the Laplace Transform to solve differential equations we won't get problems as nice as \(\frac{5}{s} + \frac{2}{s-3}\). We'll get the delightful \(\frac{7s-15}{s^2-3s}\), so it's important that we can take a rational expression and reduce it to terms that exist in our Laplace tables. Partial fraction expansion allows us to do just this. It's a process that requires quite a bit of algebra but will give us terms that can be directly inverted using our table. Which is neat. Partial fractions is clunky to talk about in words but it's straightforward to show how the process works. So we'll use a few examples to go over the method. example Expand \(\frac{2s + 10}{s(s + 10)}\) using partial fractions. Our first step is to factor the denominator (which I've already done, you're welcome). Then we write a fraction for each factor with that factor as the denominator: \(\frac{2s + 10}{s(s + 10)} = \frac{A}{s} + \frac{B}{s + 10}\) where \(A\) and \(B\) are both constants. Later on I'll explain how you know what kind of numerator to use (it's pretty straightforward). Now we need to find \(A\) and \(B\) by multiplying both sides by \(s(s + 10)\) and equating equal powers of \(s\) on both sides. \(2s + 10 = As + 10A + Bs\) So we can see that \(A = 1\) (since \(10A = 10\)) and \(A + B = 2 \implies B = 1\). So \(\frac{2s + 10}{s(s + 10)} = \frac{1}{s} + \frac{1}{s+10}\) example Expand \(\frac{3s^2 - 12s + 11}{(s-3)(s-2)(s-1)}\) using partial fractions. Again we write out a fraction for each denominator factor and stick an unknown numerator on top. \(\frac{3s^2 - 12s + 11}{(s-3)(s-2)(s-1)} = \frac{A}{s-3} + \frac{B}{s-2} + \frac{C}{s-1}\) Now we need to find the values of \(A\), \(B\) and \(C\) (which are all constants). We do that by multiplying both sides by \((s-3)(s-2)(s-1)\) and then equating terms with equal powers of \(s\).
\(\begin{align} 3s^2 - 12s + 11 & = A(s-2)(s-1) + B(s-3)(s-1) + C(s-3)(s-2) \\ & = As^2 - 3As + 2A + Bs^2 - 4Bs + 3B + Cs^2 - 5Cs + 6C \\ & = s^2[A + B + C] + s[-3A - 4B - 5C] + [2A + 3B + 6C] \\ \end{align}\) So we get the simultaneous equations: \(A + B + C = 3\) \(-3A - 4B - 5C = -12\) \(2A + 3B + 6C = 11\) I'll assume you can solve that, and we get \(A = B = C = 1\). So \(\frac{3s^2 - 12s + 11}{(s-3)(s-2)(s-1)} = \frac{1}{s-3} + \frac{1}{s-2} + \frac{1}{s-1}\) And we can see that the inverse Laplace transform is going to be: \(f(t) = e^{3t} + e^{2t} + e^{t}\) Now in both the above examples the denominator has factored into linear functions of \(s\); that's because I'm super nice to you and wanted to ease you into this. The terms we use in partial fraction decomposition depend on the factors in the denominator. fact The following is a table of terms to be used for corresponding denominators in partial fraction expansions Factor in denominator Term in partial fraction decomposition \(as + b\) \(\frac{A}{as + b}\) \((as+b)^k\) \(\frac{A_1}{as+b} + \frac{A_2}{(as+b)^2} + \cdots + \frac{A_k}{(as+b)^k}\) \(as^2 + bs + c\) \(\frac{As + B}{as^2 + bs + c}\) \((as^2 + bs + c)^k\) \(\frac{A_1s + B_1}{as^2 + bs + c} + \frac{A_2s + B_2}{(as^2 + bs + c)^2} + \cdots + \frac{A_ks + B_k}{(as^2 + bs + c)^k}\) So let's take our new terms out for a spin. example Use partial fractions to expand \(\frac{2-5s}{(s-6)(s^2+11)}\). From our table we can see that: \(\frac{2-5s}{(s-6)(s^2+11)} = \frac{A}{s-6} + \frac{Bs + C}{s^2 + 11}\) Now we do what we've always done and multiply both sides by the denominator and equate equal powers of \(s\).
\(2 - 5s = As^2 + 11A + Bs^2 -6Bs + Cs -6C\) \( = s^2[A + B] + s[-6B + C] + [11A - 6C]\) From this we can see that \(A + B = 0 \implies A = -B\) \(-6B + C = -5\) \(11A - 6C = 2\) This gives: \(A = -\frac{28}{47}\), \(B = \frac{28}{47}\), \(C = -\frac{67}{47}\) So we can rewrite our problem as: \( \frac{1}{47}\left( -\frac{28}{s-6} + \frac{28s - 67}{s^2 + 11} \right) \) \( = \frac{1}{47}\left( -\frac{28}{s-6} + \frac{28s}{s^2 + 11} - \frac{67}{s^2 + 11} \right) \) And the inverse transform can be easily seen to be: \(f(t) = \frac{1}{47}\left( -28e^{6t} + 28\cos(\sqrt{11}t)-\frac{67}{\sqrt{11}}\sin(\sqrt{11}t) \right) \) Yep, it's a right mess, when you solve real differential equations with this method it almost certainly will be, with square roots and awkward fractions all over the place. practice problems
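For denominators that split into distinct linear factors, as in the first two examples, the constants can also be found mechanically by the "cover-up" rule: evaluate the numerator at each root and divide by the product of the differences with the other roots. A sketch in exact arithmetic (the helper names are ours):

```python
from fractions import Fraction

def coverup(num_coeffs, roots):
    """Partial-fraction constants for num(s) / prod(s - r), distinct roots r.
    num_coeffs: numerator coefficients, highest power first."""
    def horner(coeffs, x):
        acc = Fraction(0)
        for c in coeffs:
            acc = acc * x + c
        return acc
    out = {}
    for r in roots:
        denom = Fraction(1)
        for s in roots:
            if s != r:
                denom *= r - s          # product of (r - other roots)
        out[r] = horner(num_coeffs, Fraction(r)) / denom
    return out

# Second example: (3s^2 - 12s + 11) / ((s-3)(s-2)(s-1))
print(coverup([3, -12, 11], [3, 2, 1]))  # all three constants equal 1
```

The first example works the same way: `coverup([2, 10], [0, -10])` gives the constants 1 and 1 found above.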
Theory Notes¶ Computation of cell dry mass¶ The concept of cell dry mass computation was first introduced by Barer [Bar52]. The dry mass \(m\) of a biological cell is defined by its non-aqueous fraction \(f(x,y,z)\) (concentration or density in g/L), i.e. the number of grams of protein and DNA within the cell volume (excluding salts): \[m = \iiint f(x,y,z)\, dx\, dy\, dz.\] The assumption of dry mass computation in QPI is that \(f(x,y,z)\) is proportional to the RI of the cell \(n(x,y,z)\) with a proportionality constant called the refraction increment \(\alpha\) (units [mL/g]), \[n(x,y,z) = n_\text{intra} + \alpha f(x,y,z),\] with the RI of the intracellular fluid \(n_\text{intra}\), a dilute salt solution. These two equations can be combined to \[m = \frac{1}{\alpha} \iiint \left( n(x,y,z) - n_\text{intra} \right) dx\, dy\, dz.\] In QPI, the RI is measured indirectly as a projected quantitative phase retardation image \(\phi(x,y)\), \[\phi(x,y) = \frac{2\pi}{\lambda} \int \left( n(x,y,z) - n_\text{med} \right) dz,\] with the vacuum wavelength \(\lambda\) of the imaging light and the refractive index of the cell-embedding medium \(n_\text{med}\). Integrating the above equation over the detector area \((x,y)\) yields (for \(n_\text{med} = n_\text{intra}\)) \[m = \frac{\lambda}{2 \pi \alpha} \iint \phi(x,y)\, dx\, dy. \tag{1}\] For a discrete image, this formula simplifies to \[m = \frac{\lambda \, \Delta A}{2 \pi \alpha} \sum_{i,j} \phi(i,j)\] with the pixel area \(\Delta A\) and a pixel-wise summation of the phase data. Relative and absolute dry mass¶ If however the medium surrounding the cell has a different refractive index (\(n_\text{med} \neq n_\text{intra}\)), then the phase \(\phi\) is measured relative to the RI of the medium \(n_\text{med}\), which causes an underestimation of the dry mass if \(n_\text{med} > n_\text{intra}\). For instance, a cell could be immersed in a protein solution or embedded in a hydrogel with a refractive index of \(n_\text{med}\) = \(n_\text{intra}\) + 0.002. For a spherical cell with a radius of 10µm, the resulting dry mass is underestimated by 46pg. Therefore, it is called “relative dry mass” \(m_\text{rel}\). If the imaged phase object is spherical with the radius \(R\), then the “absolute dry mass” \(m_\text{abs}\) can be computed by splitting equation (1) into relative mass and suppressed spherical mass.
For a visualization of the deviation of the relative dry mass from the actual dry mass for spherical objects, please have a look at the relative vs. absolute dry mass example.

Range of validity

Variations in the refraction increment may occur, and thus the above considerations are not always valid. For a detailed discussion of the variables that affect the refraction increment, please see [BJ54].

Dependency on imaging wavelength

Barer and Joseph measured the refraction increment of several proteins as a function of wavelength. In general, short wavelengths (366nm) yield values close to 0.200mL/g while long wavelengths (656nm) yield smaller values close to 0.180mL/g (table 3 in [BJ54]).

Dependency on protein concentration

The refraction increment has been reported to be linear over a wide range of protein concentrations. Barer and Joseph found that bovine serum albumin exhibits a linear refraction increment up to its limit of solubility (figure 2 in [BJ54]). They additionally received a personal communication stating that this is also the case for gelatin.

Dependency on pH, temperature, and salts

The refraction increment depends only weakly on pH, temperature, and salts [BJ54].

Refraction increment and the mass of cells

Dry mass and actual mass of a cell differ by the weight of the intracellular fluid. This weight difference is defined by the volume of the cell minus the volume of the protein and DNA content. While it seems to be difficult to define a partial specific volume (PSV) for DNA, there appears to be a consensus regarding the PSV of proteins, yielding approximately 0.73mL/g (see e.g. reference [Bar57] as well as [HGC94] and question 843 of the O-manual referring to it). For example, the protein and DNA of a cell with a radius of 10µm and a dry mass of 350pg (cell volume 4.19pL, average refractive index 1.35) occupy approximately 0.73mL/g · 350pg = 0.256pL (assuming the PSVs of protein and DNA are similar).
Therefore, the actual volume of the intracellular fluid is 3.93pL (94% of the cell volume), which is equivalent to a mass of 3.93ng, resulting in a total (actual) cell mass of 4.28ng. Thus, the dry mass of this cell makes up approximately 10% of its actual mass, which leads to a total mass that is about 2% heavier than the equivalent volume of pure water (4.19ng).

Default parameters in DryMass

The default refraction increment is \(\alpha\) = 0.18mL/g, as suggested for cells based on the refraction increment of cellular constituents by references [BJ54] and [Bar53]. The refraction increment can be manually set using the configuration key “refraction increment” in the “sphere” section. The default refractive index of the intracellular fluid in DryMass is assumed to be \(n_\text{intra}\) = 1.335, an educated guess based on the refractive index of phosphate buffered saline (PBS), whose osmolarity and ion concentrations match those of the human body.
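The arithmetic of the worked example above can be reproduced directly. A small sketch; the only assumptions are the ones already stated in the text (PSV 0.73 mL/g, intracellular fluid density ≈ 1 g/mL, so 1 pL of fluid weighs about 1 ng):

```python
import math

# r = 10 um cell, 350 pg dry mass
cell_volume_pL = (4.0 / 3.0) * math.pi * 10.0**3 / 1000.0  # 1 pL = 1000 um^3
dry_volume_pL = 0.73 * 350e-12 * 1e9                       # mL/g * g -> mL -> pL
fluid_volume_pL = cell_volume_pL - dry_volume_pL
fluid_mass_ng = fluid_volume_pL * 1.0                      # ~1 g/mL fluid density
total_mass_ng = fluid_mass_ng + 0.35                       # add 350 pg dry mass

print(cell_volume_pL, fluid_volume_pL / cell_volume_pL, total_mass_ng)
```

With these inputs the cell volume comes out near 4.19 pL, the fluid fraction near 94%, and the total mass near 4.28 ng, matching the numbers in the text.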
Fujimura's problem

Revision as of 22:33, 12 February 2009

Let [math]\overline{c}^\mu_n[/math] be the size of the largest subset of the triangular grid [math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math] which contains no equilateral triangles. Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain "hyper-optimistic" version of DHJ(3).

n=0

It is clear that [math]\overline{c}^\mu_0 = 1[/math].

n=1

It is clear that [math]\overline{c}^\mu_1 = 2[/math].

n=2

It is clear that [math]\overline{c}^\mu_2 = 4[/math] (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]).

n=3

Deleting (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math] shows that [math]\overline{c}^\mu_3 \geq 6[/math]. In fact [math]\overline{c}^\mu_3 = 6[/math]: just note that (3,0,0) or something symmetrical has to be removed, leaving 3 triangles which do not intersect, so 3 more removals are required.
n=4

[Cleanup required here]

[math]\overline{c}^\mu_4=9:[/math] The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c equal to 0 has 9 elements and doesn’t contain any equilateral triangles.

Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles. If [math](0,0,4)\in S[/math], there can only be one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] in S for [math]x=1,2,3,4[/math]. Thus there can only be 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in this set. So [math]|S|\leq 4+5=9[/math]. Similarly if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S doesn’t contain any of these. Also, S can’t contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]. Similarly for [math](3,0,1), (1,0,3), (1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So now we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math].

Could you give your explicit list of removals for [math]\overline{c}^\mu_4[/math]? I am unable to reproduce a triangle-free configuration from your description. For example, removing (4,0,0) (0,4,0) (0,0,4) (2,1,1) (1,1,2) (1,2,1) leaves the triangle (2,2,0) (0,2,2) (2,0,2).

n=5

[Cleanup required here]

[math]\overline{c}^\mu_5=12[/math]

The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c=0 has 12 elements and doesn’t contain any equilateral triangles.

Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], there can only be one of (0,x,5-x) and (x,0,5-x) in S for x=1,2,3,4,5. Thus there can only be 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in this set. So [math]|S|\leq 6+6=12[/math]. Similarly if S contains (0,5,0) or (5,0,0). So if |S|>12, S doesn’t contain any of these.
S can only contain 2 points in each of the following equilateral triangles:

(3,1,1),(0,4,1),(0,1,4)
(4,1,0),(1,4,0),(1,1,3)
(4,0,1),(1,3,1),(1,0,4)
(1,2,2),(0,3,2),(0,2,3)
(3,2,0),(2,3,0),(2,2,1)
(3,0,2),(2,1,2),(2,0,3)

So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]|S|\leq 21-9=12[/math].

General n

[Cleanup required here]

A lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), given by all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero. A trivial upper bound is [math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math], since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set.
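The small cases can be sanity-checked by brute force. The sketch below is not from the wiki; it takes triangles to be triples {(a+r,b,c), (a,b+r,c), (a,b,c+r)} with r ≠ 0, so negative r gives the inverted triangles such as (2,2,0),(0,2,2),(2,0,2) mentioned in the n=4 comment. With that convention it reproduces the values claimed for n = 0..3:

```python
from itertools import combinations

def delta(n):
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def triangles(n):
    tris = set()
    for r in range(1, n + 1):
        for a in range(n - r + 1):            # upward: base sums to n - r
            for b in range(n - r + 1 - a):
                c = n - r - a - b
                tris.add(frozenset([(a + r, b, c), (a, b + r, c), (a, b, c + r)]))
        for a in range(n - 2 * r + 1):        # inverted: base sums to n - 2r
            for b in range(n - 2 * r + 1 - a):
                c = n - 2 * r - a - b
                tris.add(frozenset([(a, b + r, c + r), (a + r, b, c + r),
                                    (a + r, b + r, c)]))
    return tris

def fujimura(n):
    # size of the largest triangle-free subset of Delta_n, by exhaustion
    pts, tris = delta(n), triangles(n)
    for k in range(len(pts), 0, -1):
        for s in combinations(pts, k):
            ss = set(s)
            if not any(t <= ss for t in tris):
                return k
    return 0

print([fujimura(n) for n in range(4)])  # → [1, 2, 4, 6]
```

`fujimura(4)` is also feasible, but its value may depend on whether inverted triangles are admitted, which is exactly the point disputed in the n=4 comment above.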
I am reading Probabilistic counting algorithms for database applications. In the introduction an algorithm for finding an intersection is specified: sort A, search for each element of B in A and retain it if it appears in A. It is claimed that if a, b are the numbers of elements in A and B, and $\alpha, \beta$ are the numbers of distinct elements in A, B, then the complexity of this algorithm is $O(a\log\alpha + b\log\alpha)$. My question is: why does the sorting of A depend only on the number of distinct elements? Is there some kind of algorithm I am not aware of? If so, why couldn't the same algorithm be used for the second strategy? The second strategy is: sort A and B, use a merge-like operation to discard duplicates. For this algorithm the complexity is $O(a\log a + b\log b + a + b)$, which makes sense to me.
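For concreteness, the first strategy might look like the sketch below (my reading, not the paper's code). Note that the sort here costs O(a log a) on the raw list; the paper's α would enter if duplicates were collapsed first, which is part of what the question is asking about.

```python
from bisect import bisect_left

def intersect(A, B):
    """Sort A, then binary-search each element of B in A."""
    A_sorted = sorted(A)                 # O(a log a) on the raw list
    out = []
    for x in B:                          # b searches, O(log a) each
        i = bisect_left(A_sorted, x)
        if i < len(A_sorted) and A_sorted[i] == x:
            out.append(x)
    return out

print(intersect([3, 1, 2, 3], [2, 3, 5]))  # → [2, 3]
```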
It is said: do not use the derivative operator in a control system, because it will amplify the noise effect. Can someone explain it and give some mathematical reference about it please? Also I heard we shouldn't use the logarithm operator either, for the same reason. How come? Thanks in advance.

The derivative operator is linear and time-invariant and can be described by its frequency response $$H(j\omega)=j\omega\tag{1}$$ From (1) it is obvious that its gain increases linearly with frequency, and high frequencies (where the SNR is usually bad) are greatly amplified (as already pointed out in endolith's comment). The logarithm is a memoryless nonlinearity and it compresses the input signal. This operation is used in many signal processing algorithms, such as feature extraction for speech recognition. In these algorithms this operation is essential to the performance and it definitely does not deteriorate the signal. What is important, however, is to deal with the singularity of the logarithm as its argument approaches zero, because: $$\lim_{x\rightarrow 0+}\log(x)=-\infty$$ If nothing is done about it, the logarithm will produce very high negative peaks during periods where the signal is almost zero (i.e. when there's only noise). This problem is easily handled by defining a threshold, either on the value of the logarithm or on the input signal. Note that the logarithm can only be applied to non-negative signals (usually magnitudes or squared magnitudes of frequency-domain data).
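The high-frequency amplification can be illustrated numerically. A toy sketch (not part of the original answer): a slow sine plus a little additive noise, before and after a finite-difference "derivative". The noise barely perturbs the signal itself, but its sample-to-sample jumps are divided by the small time step, so the derivative's error explodes.

```python
import math, random

random.seed(0)
dt = 1e-3
t = [i * dt for i in range(2000)]
clean = [math.sin(2.0 * math.pi * 1.0 * ti) for ti in t]   # 1 Hz signal
noisy = [s + random.gauss(0.0, 0.01) for s in clean]       # small noise, high SNR

def derivative(x, dt):
    # forward difference; its gain grows with frequency, like H(jw) = jw
    return [(x[i + 1] - x[i]) / dt for i in range(len(x) - 1)]

err_before = max(abs(a - b) for a, b in zip(noisy, clean))
err_after = max(abs(a - b) for a, b in
                zip(derivative(noisy, dt), derivative(clean, dt)))
print(err_before, err_after)  # the error after differentiation is far larger
```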
Search Now showing items 1-9 of 9 Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb$^{-1}$. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_T$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The $p_T$-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-$p_T$ trigger particles (8 < $p_T$ < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Conclude that $(p\land q)\rightarrow p$ is a tautology without using truth tables. Here's what I did: $$(p\land q)\rightarrow p \equiv \lnot(p\land q)\lor p$$ By De Morgan's Law: $$\lnot(p\land q)\lor p\equiv \lnot p\lor \lnot q\lor p$$ Then By the Associative Property: $$\lnot p\lor \lnot q\lor p\equiv \lnot q \lor(\lnot p\lor p)$$ After this, I specified that $\lnot p\lor p$ is always true, so by the definition of a disjunction $\lnot q \lor(\lnot p\lor p)$ must also be true. Therefore, since $(p\land q)\rightarrow p\equiv \lnot q \lor(\lnot p\lor p)$ and $\lnot q \lor(\lnot p\lor p)$ is always true, it stands to reason that $(p\land q)\rightarrow p$ is also always true. However, my textbook simply says "If the hypothesis $p\land q$ is true, then by the definition of conjunction, the conclusion $p$ must also be true." I don't understand how this concludes the statement is a tautology. Is my proof not correct? Did I go overboard?
I just do not understand how the spherical co-ordinates conversion system works. I understand the concept, but finding the limits for p, φ, θ does not work for me (I study part-time by myself). The question is: "Let D be the 3-dimensional region inside the sphere $x^2 + y^2 + z^2 = 4$ above the cone $z= \sqrt{4x^2 + 4y^2}$."

"Attempt at answer":

Function conversion: $$\iiint p^2 \,dp \,dφ \,dθ$$

The limits of θ: It is a fully enclosed circle, thus $0 \le θ \le 2\pi$.

The limits of p: r = 2 and therefore z = 2; z = pcosφ, so 2 = pcosφ, 2/cosφ = p, p = 2secφ. Therefore the limit is from 0 to 2secφ, but a website I consulted had a different value. Is this because p = r, and from the formula of the sphere it is r = 2, therefore p = 2?

The limits of φ: from $z = \sqrt{4x^2 + 4y^2}$, $$p^2\cos^2φ = 4p^2\sin^2φ\cos^2θ + 4p^2\sin^2φ\sin^2θ$$ $$\tan^2φ = \frac{1}{4}$$ $$\tanφ = \frac{1}{2}$$ However what now? I don't believe that you have a tan = 1/2 radians angle. I also saw a website that said that the angle is π/4? Thank you!
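For reference, here is one consistent way to assemble the limits discussed above, assuming the task is the volume of D. The outer boundary of the region is the sphere itself, so p runs from 0 to 2 (p = 2secφ would be correct for a flat lid z = 2, which is not a boundary here), the cone gives φ from 0 to arctan(1/2) rather than π/4, and note the volume element contributes a factor sinφ that is easy to drop:

```latex
V=\int_{0}^{2\pi}\int_{0}^{\arctan(1/2)}\int_{0}^{2}\rho^{2}\sin\varphi\,d\rho\,d\varphi\,d\theta
 =2\pi\cdot\frac{8}{3}\Bigl(1-\cos\bigl(\arctan\tfrac{1}{2}\bigr)\Bigr)
 =\frac{16\pi}{3}\Bigl(1-\frac{2}{\sqrt{5}}\Bigr)
```

Here $\cos(\arctan\tfrac12)=\tfrac{2}{\sqrt5}$ from a right triangle with legs 1 and 2.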
Orthogonal group An orthogonal group is a group of all linear transformations of an $n$-dimensional vector space $V$ over a field $k$ which preserve a fixed non-singular quadratic form $Q$ on $V$ (i.e. linear transformations $\def\phi{\varphi}\phi$ such that $Q(\phi(v))=Q(v)$ for all $v\in V$). An orthogonal group is a classical group. The elements of an orthogonal group are called orthogonal transformations of $V$ (with respect to $Q$), or also automorphisms of the form $Q$. Furthermore, let ${\rm char\;} k\ne 2$ (for orthogonal groups over fields with characteristic 2 see [Di], [2]) and let $f$ be the non-singular symmetric bilinear form on $V$ related to $Q$ by the formula $$f(u,v)=\frac{1}{2}(Q(u+v) - Q(u) - Q(v)).$$ The orthogonal group then consists of those linear transformations of $V$ that preserve $f$, and is denoted by $\def\O{ {\rm O} }\O_n(k,f)$, or (when one is talking of a specific field $k$ and a specific form $f$) simply by $\O_n$. If $B$ is the matrix of $f$ with respect to some basis of $V$, then the orthogonal group can be identified with the group of all $(n\times n)$-matrices $A$ with coefficients in $k$ such that $A^TBA = B$ (${}^T$ is transposition). The description of the algebraic structure of an orthogonal group is a classical problem. The determinant of any element from $\O_n$ is equal to 1 or $-1$. Elements with determinant 1 are called rotations; they form a normal subgroup $\O_n^+(k,f)$ (or simply $\O_n^+$) of index 2 in the orthogonal group, called the rotation group. Elements from $\O_n\setminus \O_n^+$ are called inversions. Every rotation (inversion) is the product of an even (odd) number of reflections from $\O_n$. Let $Z_n$ be the group of all homotheties $\def\a{\alpha}\phi_\a : v\mapsto \a v$, $\a\in k$, $\a\ne 0$, of the space $V$. Then $\O_n\cap Z_n$ is the centre of $\O_n$; it consists of two elements: $\phi_1$ and $\phi_{-1}$. If $n$ is odd, then $\O_n$ is the direct product of its centre and $\O_n^+$. 
If $n\ge 3$, the centre of $\O_n^+$ is trivial if $n$ is odd, and coincides with the centre of $\O_n$ if $n$ is even. If $n=2$, the group $\O_n^+$ is commutative and is isomorphic either to the multiplicative group $k^*$ of $k$ (when the Witt index $\nu$ of $f$ is equal to 1), or to the group of elements with norm 1 in $k(\sqrt{-\Delta})$, where $\Delta$ is the discriminant of $f$ (when $\nu=0$). The commutator subgroup of $\O_n(k,f)$ is denoted by $\def\Om{\Omega}\Om_n(k,f)$, or simply by $\Om_n$; it is generated by the squares of the elements from $\O_n$. When $n\ge 3$, the commutator subgroup of $\O_n^+$ coincides with $\Om_n$. The centre of $\Om_n$ is $\Om_n\cap Z_n$. Other classical groups related to orthogonal groups include the canonical images of $\O_n^+$ and $\Om_n$ in the projective group; they are denoted by ${\rm P}\O_n^+(k,f)$ and ${\rm P}\Om_n(k,f)$ (or simply by ${\rm P}\O_n^+$ and ${\rm P}\Om_n$) and are isomorphic to $\O_n^+/(\O_n^+\cap Z_n)$ and $\Om_n/(\Om_n\cap Z_n)$, respectively. The basic classical facts about the algebraic structure describe the successive factors of the following series of normal subgroups of an orthogonal group: $$\O_n\supset \O_n^+\supset \Om_n\supset \Om_n\cap Z_n \supset \{e\}.$$ The group $\O_n/\O_n^+$ has order 2. Every element in $\O_n/\Om_n$ has order 2, thus this group is defined completely by its cardinal number, and this number can be either infinite or finite of the form $2^\a$ where $\a$ is an integer. The description of the remaining factors depends essentially on the Witt index $\nu$ of the form $f$. First, let $\nu\ge 1$. Then $\O_n^+/\Om_n \simeq k^*/{k^*}^2$ when $n>2$. This isomorphism is defined by the spinor norm, which defines an epimorphism from $\O_n^+$ on $k^*/{k^*}^2$ with kernel $\Om_n$. The group $\Om_n\cap Z_n$ is non-trivial (and consists of the transformations $\phi_1$ and $\phi_{-1}$) if and only if $n$ is even and $\Delta\in {k^*}^2$.
If $n\ge 5$, then the group ${\rm P}\Om_n = \Om_n/(\Om_n\cap Z_n)$ is simple. The cases where $n=3,4$ are studied separately. Namely, ${\rm P}\Om_3 = \Om_3$ is isomorphic to $\def\PSL{ {\rm PSL}}\PSL_2(k)$ (see Special linear group) and is also simple if $k$ has at least 4 elements (the group $\O_3^+$ is isomorphic to the projective group $\def\PGL{ {\rm PGL}}\PGL_2(k)$). When $\nu=1$, the group ${\rm P}\Om_4 = \Om_4$ is isomorphic to the group $\PSL_2(k(\sqrt{\Delta}))$ and is simple (in this case $\Delta\notin k^2$), while when $\nu=2$, the group ${\rm P}\Om_4$ is isomorphic to $\PSL_2(k)\times \PSL_2(k)$ and is not simple. In the particular case when $k = \R$ and $Q$ is a form of signature $(3,1)$, the group ${\rm P}\Om_4 = \Om_4\simeq \PSL_2(\C)$ is called the Lorentz group. When $\nu = 0$ (i.e. $Q$ is an anisotropic form), these results are not generally true. For example, if $k=\R$ and $Q$ is a positive-definite form, then $\Om_n = \O_n^+$, although $\R^*/{\R^*}^2$ consists of two elements; when $k=\Q$, $n=4$, one can have $\Delta\in k^2$, but $\phi_{-1}\notin \Om_4$. When $\nu=0$, the structures of an orthogonal group and its related groups depend essentially on $k$. For example, if $k=\R$, then ${\rm P}\O_n^+$, $n\ge 3$, $n\ne 4$, $\nu=0$, is simple (and ${\rm P}\O_4^+$ is isomorphic to the direct product $\O_3^+ \times \O_3^+$ of two simple groups); if $k$ is the field of $p$-adic numbers and $\nu=0$, there exists in $\O_3$ (and $\O_4$) an infinite normal series with Abelian quotients. Important special cases are when $k$ is a locally compact field or an algebraic number field. If $k$ is the field of $p$-adic numbers, then $\nu=0$ is impossible when $n\ge 5$. If $k$ is an algebraic number field, then there is no such restriction, and one of the basic results is that ${\rm P}\Om_n$, when $\nu=0$ and $n\ge 5$, is simple.
In this case, the study of orthogonal groups is closely connected with the theory of equivalence of quadratic forms, where one needs the forms obtained from $Q$ by extension of coefficients to the local fields defined by valuations of $k$ (the Hasse principle). If $k$ is the finite field $\F_q$ of $q$ elements, then an orthogonal group is finite. The order of $\O_n^+$ for $n$ odd is equal to $$(q^{n-1}-1)q^{n-2}(q^{n-3}-1)q^{n-4}\cdots (q^2-1)q,$$ while when $n=2m$ it is equal to $$\def\e{\epsilon}(q^{2m-1}-\e q^{m-1})(q^{2m-2}-1)q^{2m-3}\cdots(q^2-1)q,$$ where $\e=1$ if $(-1)^m\Delta\in \F_q^2$ and $\e=-1$ otherwise. These formulas and general facts about orthogonal groups when $\nu\ge 1$ also allow one to calculate the orders of $\Om_n$ and ${\rm P}\Om_n$, since $\nu\ge 1$ when $n\ge 3$, while the order of $k^*/{k^*}^2$ is equal to 2. The group ${\rm P}\Om_n$, $n\ge 5$, is one of the classical simple finite groups (see also Chevalley group). One of the basic results on automorphisms of orthogonal groups is the following: If $n\ge 3$, then every automorphism $\phi$ of $\O_n$ has the form $\phi(u)=\chi(u)gug^{-1}$, $u\in \O_n$, where $\chi$ is a fixed homomorphism of $\O_n$ into its centre and $g$ is a fixed bijective semi-linear mapping of $V$ onto itself satisfying $Q(g(v))=r_gQ^\sigma(v)$ for all $v\in V$, where $r_g\in k^*$ while $\sigma$ is an automorphism of $k$. If $\nu\ge 1$ and $n\ge 6$, then every automorphism of $\O_n^+$ is induced by an automorphism of $\O_n$ (see [Di], [Di2]). Like the other classical groups, an orthogonal group has a geometric characterization (under certain hypotheses). Indeed, let $Q$ be an anisotropic form such that $Q(v)\in k^2$ for all $v\in V$. In this case $k$ is a Pythagorean orderable field.
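The odd-$n$ order formula above can be checked by brute force in a tiny case: for $n=3$ it gives $(q^2-1)q$, so over $\F_3$ the rotation group $\O_3^+$ should have $8\cdot 3 = 24$ elements. The sketch below is illustrative (it fixes the standard form $x^2+y^2+z^2$, i.e. $B = I$; for odd $n$ the order does not depend on the choice of form):

```python
from itertools import product

q = 3  # the field F_3

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % q
                       for j in range(3)) for i in range(3))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(3)) for i in range(3))

def det(A):
    # 3x3 determinant, reduced mod q
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0])) % q

I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
count = 0
for entries in product(range(q), repeat=9):   # all 3x3 matrices over F_3
    A = (entries[0:3], entries[3:6], entries[6:9])
    if matmul(transpose(A), A) == I and det(A) == 1:   # A^T A = I, det A = 1
        count += 1
print(count)  # → 24
```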
For a fixed order of the field $k$, any sequence $(H_s)_{1\le s\le n}$ constructed from a linearly independent basis $(h_s)_{1\le s\le n}$, where $H_s$ is the set of all linear combinations of the form $\def\l{\lambda}\sum_{j=1}^s\l_jh_j$, $\l_s\ge 0$, is called an $n$-dimensional chain of incident half-spaces in $V$. The group $\O_n$ has the property of free mobility, i.e. for any two $n$-dimensional chains of half-spaces there exists a unique transformation from $\O_n$ which transforms the first chain into the second. This property characterizes an orthogonal group: If $L$ is any ordered skew-field and $G$ is a subgroup in ${\rm GL}_n(L)$, $n\ge 3$, having the property of free mobility, then $L$ is a Pythagorean field, while $G=\O_n(L,f)$, where $f$ is an anisotropic symmetric bilinear form such that $f(v,v)\in L_1^2$ for any vector $v$. Let $\def\bk{ {\bar k}}\bk$ be a fixed algebraic closure of the field $k$. The form $f$ extends naturally to a non-singular symmetric bilinear form ${\bar f}$ on $V\otimes_k \bk$, and the orthogonal group $\O_n(\bk,{\bar f})$ is a linear algebraic group defined over $k$ with $\O_n(k,f)$ as group of $k$-points. The linear algebraic groups thus defined (for various $f$) are isomorphic over $\bk$ (but in general not over $k$); the corresponding linear algebraic group over $\bk$ is called the orthogonal algebraic group $\O_n(\bk)$. Its subgroup $\O_n^+(\bk,{\bar f})$ is also a linear algebraic group over $\bk$, and is called a properly orthogonal, or special orthogonal, algebraic group (notation: $\def\SO{ {\rm SO}}\SO_n(\bk)$); it is the connected component of the identity of $\O_n(\bk)$. The group $\SO_n(\bk)$ is an almost-simple algebraic group (i.e. does not contain infinite algebraic normal subgroups) of type $B_s$ when $n=2s+1$, $s\ge 1$, and of type $D_s$ when $n=2s$, $s\ge 3$. The universal covering group of $\SO_n$ is a spinor group.
If $k=\R,\C$ or a $p$-adic field, then $\O_n(k,f)$ has a canonical structure of a real, complex or $p$-adic analytic group. The Lie group $\O_n(\R,f)$ is defined up to isomorphism by the signature of the form $f$; if this signature is $(p,q)$, $p+q=n$, then $\O_n(\R,f)$ is denoted by $\O(p,q)$ and is called a pseudo-orthogonal group. It can be identified with the Lie group of all real $(n\times n)$-matrices $A$ which satisfy $$A^TI_{p,q}A = I_{p,q}\qquad\textrm{ where }I_{p,q} = \begin{pmatrix}1_p & 0 \\ 0 & -1_q\end{pmatrix}$$ ($1_s$ denotes the unit $(s\times s)$-matrix). The Lie algebra of this group is the Lie algebra of all real $(n\times n)$-matrices $X$ that satisfy the condition $X^TI_{p,q} = -I_{p,q}X$. In the particular case $q=0$, the group $\O(p,q)$ is denoted by $\O(n)$ and is called a real orthogonal group; its Lie algebra consists of all skew-symmetric real $(n\times n)$-matrices. The Lie group $\O(p,q)$ has four connected components when $q\ne 0$, and two connected components when $q=0$. The connected component of the identity is its commutator subgroup, which, when $q=0$, coincides with the subgroup $\def\SO{ {\rm SO}}\SO(n)$ in $\O(n)$ consisting of all transformations with determinant 1. The group $\O(p,q)$ is compact only when $q=0$. The topological invariants of $\SO(n)$ have been studied. One of the classical results is the calculation of the Betti numbers of the manifold $\SO(n)$: Its Poincaré polynomial has the form $$\prod_{s=1}^m(1+t^{4s-1})$$ when $n=2m+1$, and the form $$(1+t^{2m-1})\prod_{s=1}^{m-1}(1+t^{4s-1})$$ when $n=2m$. The fundamental group of the manifold $\SO(n)$ is $\Z_2$. The calculation of the higher homotopy groups $\pi_l(\SO(n))$ is directly related to the classification of locally trivial principal $\SO(n)$-fibrations over spheres.
An important part in topological $K$-theory is played by the periodicity theorem, according to which, when $N\gg n$, there are the isomorphisms $$\pi_{n+8}(\O(N)) \simeq \pi_{n}(\O(N));$$ further, $$\pi_n(\O(N)) \simeq \Z_2$$ if $n=0,1$; $$\pi_n(\O(N)) \simeq \Z$$ if $n=3,7$; and $$\pi_n(\O(N)) = 0$$ if $n=2,4,5,6$. The study of the topology of the group $\O(p,q)$ reduces in essence to the previous case, since the connected component of the identity of $\O(p,q)$ is diffeomorphic to the product of $\SO(p)\times \SO(q)$ with a Euclidean space.

Comments

A Pythagorean field is a field in which the sum of two squares is again a square.

References

[Ar] E. Artin, "Geometric algebra", Interscience (1957) MR1529733 MR0082463 Zbl 0077.02101
[Bo] N. Bourbaki, "Elements of mathematics. Algebra: Modules. Rings. Forms", 2, Addison-Wesley (1975) pp. Chapt. 4;5;6 (Translated from French) MR0354207
[Di] J.A. Dieudonné, "La géométrie des groupes classiques", Springer (1955) Zbl 0221.20056
[Di2] J. Dieudonné, "On the automorphisms of the classical groups", Mem. Amer. Math. Soc., 2, Amer. Math. Soc. (1951) MR0045125 Zbl 0042.25603
[Hu] D. Husemoller, "Fibre bundles", McGraw-Hill (1966) MR0229247 Zbl 0144.44804
[OM] O.T. O'Meara, "Introduction to quadratic forms", Springer (1973) Zbl 0259.10018
[We] H. Weyl, "The classical groups, their invariants and representations", Princeton Univ. Press (1946) MR0000255 Zbl 1024.20502
[Zh] D.P. Zhelobenko, "Compact Lie groups and their representations", Amer. Math. Soc. (1973) (Translated from Russian) MR0473097 Zbl 0228.22013

How to Cite This Entry: Orthogonal group. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Orthogonal_group&oldid=34320
At $25~\mathrm{^\circ C}$, $10.24~\mathrm{mg}$ of $\ce{Cr(OH)2}$ are dissolved in enough water to make $125~\mathrm{mL}$ of solution. When equilibrium is established, the solution has a $\mathrm{pH}$ of $8.49$. Estimate $K_\text{sp}$ for $\ce{Cr(OH)2}$. (Ans: $1.47\times 10^{-17}$) I have calculated it but do not get the same answer. I tried it this way: $\mathrm{pH}$ is $8.49$, then $\mathrm{pOH}$ is $5.51$. I use this formula: $\mathrm{pOH} = -\log[\ce{OH-}]$. The $[\ce{OH-}]$ is $10^{-5.51}$. Find the molarity of $\ce{Cr}$: $(10.24\cdot(1/1000)\cdot(1/86))/(0.125)=9.525\times 10^{-4}$. Substitute into $K_\text{sp} = [\ce{Cr}][\ce{OH}]^2$. But after I substitute the value of $[\ce{Cr}] = 9.525\times 10^{-4}$ and $[\ce{OH}]^2$ of $(2\times 10^{-5.51})^2$, I get an answer of $3.6\times 10^{-14}$. Where have I gone wrong? Any suggestions?
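For what it's worth, the arithmetic of the attempt as written can be reproduced exactly; the sketch below only re-checks the numbers in the post (total dissolved moles, molar mass 86 g/mol), not the chemical reasoning being asked about:

```python
# Reproduce the numbers in the attempt above.
pOH = 14.0 - 8.49                        # 5.51
OH = 10 ** (-pOH)                        # ~3.1e-6 M
cr_molar = (10.24e-3 / 86.0) / 0.125     # total dissolved Cr(OH)2, mol/L
ksp_attempt = cr_molar * (2 * OH) ** 2   # the substitution made in the post
print(cr_molar, ksp_attempt)             # ~9.53e-4 and ~3.6e-14, as stated
```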
I would like to know the answer to the question "why aren't materials that prevent heterogeneous nucleation of $CO_2$ used for soda bottles and glasses?". Two possible answers I have thought of so far: either it is too hard to make such a material or polish it "near perfectly", or it would not be worth it. By "worth it" I mean the motivation below would be false. The motivation to use such materials would be that bubbles of $CO_2$ would not form at all because, if I'm not wrong, bubbles form on the walls of the bottle or glass thanks to heterogeneous nucleation due to microscopic cracks or "imperfections" (i.e. the crystal isn't plane; in a common glass the atoms aren't ordered like in a crystal and this favors heterogeneous nucleation), while homogeneous nucleation never occurs because the radius of the bubble required for it to occur is too big, and so the probability that it's created spontaneously is almost nil. Therefore the bottle or glass would, I believe, only slowly lose $CO_2$ gas through the liquid/air interface, which is due to diffusion and occurs because the chemical potential "$\mu$" of the soda is higher than the chemical potential of the air. So the process will end when there is no more $CO_2$ in the soda, if I assume that there's no $CO_2$ in the air, which is a good approximation. To sum up the motivation: no bubbles formed. The loss of $CO_2$ would be very slow and so we could drink soda with plenty of "gas" even if the bottle had been open for a long while, compared to what we're currently used to. Now I would like to use some maths to show how much slower the rate of decrease of $CO_2$ would be if we were to use such a bottle or glass, compared to a normal bottle or glass. More precisely: let $c(t)$ be the concentration of $CO_2$ as a function of time. To settle numbers I'll assume that when $c(t)=0.1\cdot c(0)$ there is too little "gas" for the soda to be drinkable.
I want to estimate, via a model, how much time it takes until this low concentration threshold is reached in both cases. Let's assume the bottle or glass is a cylinder of 20 cm height and 3.5 cm radius. This means the area of the soda/air interface is $A_\text{interface soda/air} \approx 38.5 \text{ cm}^2$. Model 1 (no heterogeneous nucleation occurs): In this case we only have a diffusion equation for $c(t)$: $\frac{dc(t)}{dt}=-rc(t)$, where $r$ is a positive constant proportional to $A_\text{interface soda/air}$, yielding a solution of the form $c(t)=c(0)e^{-rt}$. I can now solve for $t_c$ so that $c(t_c)=0.1\cdot c(0)$. Model 2 (heterogeneous nucleation occurs): In this case I have the same diffusion equation for $c(t)$ except that I have a new term. I am unsure how to write it. I'll assume that there are 2 bubbles per $\text{cm}^2$. The total area of the container is $A_\text{bottle}=2 \pi R \cdot 20 \text{ cm} + A_\text{interface soda/air} \approx 478 \text{ cm}^2$, so there are about 957 bubbles in total. Again, to simplify things, I'll assume that a bubble spends most of its existence stuck on the glass rather than rising through the soda, so that bubble growth only occurs on the glass. I'll also assume that all bubbles start with zero radius and all have the same maximum radius, say $0.1 \text{ cm}$, before they detach from the wall and get replaced instantly by a $0 \text{ cm}$ radius bubble. Now I believe the bubbles' growth rate depends on the current value of $c(t)$, but I am not sure whether it is the rate of change of the volume, area, or radius that depends linearly on $c(t)$. I'd appreciate a comment here. So that I can set up the differential equation for $c(t)$, solve it and compare it with the first model.
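A hedged sketch of both models follows. All rate constants are invented placeholders, and for Model 2 I assume, as one possible closure, that each bubble's volume growth rate is proportional to c(t); under that assumption the total bubble loss is itself proportional to c(t), so the bubble term is just another first-order loss and both models stay exponential:

```python
import math

r = 1e-6            # surface diffusion loss rate, 1/s (assumed placeholder)
k = 1e-9            # CO2 loss rate per bubble, 1/s (assumed placeholder)
n_bubbles = 957     # from the estimate above
c0 = 1.0            # initial CO2 concentration (arbitrary units)

def t_until(frac, rate):
    # c(t) = c0 * exp(-rate * t); solve c(t_c) = frac * c0 for t_c
    return math.log(1.0 / frac) / rate

t1 = t_until(0.1, r)                    # Model 1: surface diffusion only
t2 = t_until(0.1, r + k * n_bubbles)    # Model 2: diffusion + bubble losses
print(t1 / 86400.0, t2 / 86400.0, t1 / t2)  # days to reach 10%, and the ratio
```

If instead the radius (rather than the volume) grew at a rate proportional to c(t), the bubble term would no longer be linear in c and the equation would need numerical integration rather than this closed form.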
I'm a bit stuck on this question (which is homework, so hints are more welcome than outright answers). The question is: a very long wire carrying a current $I$ is moving with speed $v$ towards a small circular wire loop of radius $r$. The long wire is in the plane of the loop and is too long to be entirely shown in the diagram. The strength of the magnetic field a distance $x$ from a long wire is $$|B|=\frac{\mu_{0}I}{2\pi x}$$ What is the equation for the rate of change of the strength of the magnetic field at the centre of the loop? Now, I can see that the equation for the field strength comes from Ampère's law: it is essentially the line integral of the magnetic field around a loop enclosing the wire, divided by the circumference of that loop. So it makes sense to me that, since everything else in the equation is constant, the change in $B$ should come straight from the change in the circumference of a circle as its radius shrinks. Now since $$\frac{dC}{dt}=2\pi\frac{dr}{dt}$$ and since in this case $\displaystyle{\frac{dr}{dt}}$ is simply the velocity of the wire, it seems to me that the rate of change of the field strength at the centre of the loop is simply $$\frac{d|B|}{dt}=\frac{\mu_{0}I}{2\pi v}$$ However, I'm told this answer is wrong. Can anyone explain where I've made a mistake? Thanks for your help.
There are two other possibilities not mentioned in the above answers. Consider the time series $x_{t+1}=\beta{x}_t+\epsilon_{t+1}$, and assume $t\in\{81,82\}$ are missing. The question is why they are missing. Consider three possible cases. The first is that it was a holiday or a similar day with no activity. The second is omission through a recording error. The third is that the data were hidden because they might embarrass someone. The first two are simple to handle: you alter your likelihood function in this one place. Note for this example that $$x_{81}=\beta{x}_{80}+\epsilon_{81}$$ and $$x_{82}=\beta^2{x}_{80}+\beta\epsilon_{81}+\epsilon_{82}$$ and $$x_{83}=\beta^3x_{80}+\beta^2\epsilon_{81}+\beta\epsilon_{82}+\epsilon_{83}.$$ To keep the example simple, assume $\epsilon_t\sim\mathcal{N}(0,\sigma^2)$ for all $t$. The holiday case is easy: no shock could have happened on those days, so $\epsilon_{81}=\epsilon_{82}=0$, and you update the posterior probability of $(\beta,\sigma^2)$ by simply cubing $\beta$ in the corresponding likelihood factor. You can do this because $$\pi(\beta,\sigma^2|x_1\dots{x}_{80};x_{83})$$ follows from $$\pi(\beta,\sigma^2|x_1\dots{x}_{80})$$ via Bayes' theorem, where $$\pi(\beta,\sigma^2|x_1\dots{x}_{80})=\frac{\prod_{t=1}^{80}f(x_t|\beta;\sigma^2)\pi(\beta;\sigma^2)}{\int_0^\infty\int_{-\infty}^\infty\prod_{t=1}^{80}f(x_t|\beta;\sigma^2)\pi(\beta;\sigma^2)\mathrm{d}\sigma^2\mathrm{d}\beta}$$ and $$\pi(\beta,\sigma^2|x_1\dots{x}_{80};x_{83})=\frac{f(x_{83}|\beta;\sigma^2)\pi(\beta,\sigma^2|x_1\dots{x}_{80})}{\int_0^\infty\int_{-\infty}^\infty{f}(x_{83}|\beta;\sigma^2)\pi(\beta,\sigma^2|x_1\dots{x}_{80})\mathrm{d}\sigma^2\mathrm{d}\beta}.$$ In the case of an omission, there are two choices.
The first is to treat it like a holiday, because the alternative is to estimate the posterior of $\{\beta,\sigma^2,\epsilon_{81},\epsilon_{82}\}$. That will end up with an answer no different from omitting them, unless you have other variables that are correlated with $x_t$. If there is at least one associated variable, you can improve the estimation of $\beta$ and $\sigma^2$ by treating the missing values as if they were parameters and then marginalizing them out via Bayes' rule. In the third case, where the omission is purposeful and assuming there is no correlated data, you should estimate the missing values by inserting a prior density for them that models the size of the hidden values.
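The holiday case above can be sketched in a few lines. This is an illustration, not a full Bayesian treatment: it shows only how the likelihood changes when $\epsilon_{81}=\epsilon_{82}=0$, namely that the factor for $t=83$ conditions on $x_{80}$ through $\beta^3$ with unchanged variance. The function names and data are illustrative.

```python
import math

def normal_logpdf(x, mu, sigma):
    """Log density of N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def ar1_loglik_with_holiday(x, beta, sigma, missing=(81, 82)):
    """Log-likelihood of an AR(1) series where the 'missing' indices were
    holidays (no shock occurred): those points are skipped, and the next
    observed point is conditioned on the last one before the gap via
    beta raised to the gap length (beta**3 across a two-day gap)."""
    ll = 0.0
    prev_t = 0
    for t in range(1, len(x)):
        if t in missing:
            continue
        gap = t - prev_t  # 1 normally, 3 across the two-day holiday
        ll += normal_logpdf(x[t], beta**gap * x[prev_t], sigma)
        prev_t = t
    return ll
```

For the recording-error case one would instead treat $\epsilon_{81},\epsilon_{82}$ as parameters and integrate them out, as described above.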
I'm trying to figure out how to estimate, given a transition matrix for a stream of distinct items, the p-value for the hypothesis that the underlying stream is memoryless. To simplify the problem, I'm constraining my view of the world so that the alternative hypothesis is "the next item depends on the previous item"; I'm okay with implicitly assuming that streams with more exotic dependencies between items are impossible. Consider a random stream of integers in the range $\{1, \cdots, n \}$, call it $X$. If the entries of $X$ are independent and drawn from a fixed distribution $P$, then the transition matrix $T$ has constant rows, since the previous state is irrelevant: $$ \left[ \begin{array}{ccc} p_1 & p_1 & p_1 \\ p_2 & p_2 & p_2 \\ p_3 & p_3 & p_3 \end{array} \right] \left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \end{array} \right) $$ For the sake of example, here's a test statistic I've come up with on the spot; it probably doesn't have the right behavior, but it serves to illustrate the goal. Let $w$ be the sum, over the rows of the matrix, of the variance of each row divided by its mean squared; this is zero when the rows are constant, and the normalization accounts for the number of rows and for how probable each item is to begin with: $$ w \stackrel{df}{=} \sum_{i=1}^n \frac{\mathrm{Var}(A_{i*})}{\left(\mathbb{E}A_{i*}\right)^2} $$ If I have an empirical estimate of $T$, computed by tallying frequencies, and then compare it with a distribution for $w$ (possibly parameterized by the number of items in the matrix), I can get back a p-value. I don't know what that distribution would be, but I'm okay with computing an EDF and using that to get the p-value instead of a closed-form solution.
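The proposed statistic is a one-liner to compute. A minimal sketch, with an illustrative matrix whose rows are constant (so the memoryless case should give $w=0$):

```python
# Sketch of the proposed statistic w: sum over rows of Var(row) / mean(row)^2.
# A memoryless stream makes each row constant, so w is (numerically) zero.
def row_stat(row):
    n = len(row)
    mean = sum(row) / n
    var = sum((p - mean) ** 2 for p in row) / n
    return var / mean ** 2

def w_statistic(T):
    return sum(row_stat(row) for row in T)

# Illustrative memoryless matrix: row i is p_i everywhere, columns sum to 1.
T_memoryless = [[0.2, 0.2, 0.2], [0.5, 0.5, 0.5], [0.3, 0.3, 0.3]]
print(w_statistic(T_memoryless))
```

An EDF for $w$ could then be built by computing this on transition matrices tallied from simulated i.i.d. streams.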
Under the assumptions that: the probability of getting a girl or a boy is equal (50%); each male has exactly one female partner (no multiple wives); neither rule will produce significantly more boys than girls in the long run. A Priori Proof: Barbarian King. In regex: (G*)B. (The original branching tree diagram, read left to right from the initial state Z with G = girl and B = boy, is omitted here.) The probability of getting 1 boy and 0 girls is $1/2$. The probability of getting 1 boy and 1 girl is $1/4$. The probability of getting 1 boy and 2 girls is $1/8$. The probability of getting 1 boy and $N$ girls is $1/2^{N+1}$. To find the average difference between boys and girls under the Barbarian King's rule, we need the following: $$\lim_{N\to \infty} \frac{\sum_{0}^{N}(\mbox{BoyCount}-\mbox{GirlCount})\cdot\mbox{probability}}{N}$$ $$\lim_{N\to \infty} \frac{\sum_{0}^{N}\frac{1-N}{2^{N+1}}}{N} = 0$$ Check this Wolfram solution to this equation. (Sorry, Barbarian King.) This means that in the long run (approaching $\infty$), there is no ($0$) significant difference between the numbers of boys and girls under the Barbarian King's rule. Councillor. In regex: (B*)G. (Tree diagram omitted, as above.) The probability of getting 0 boys and 1 girl is $1/2$. The probability of getting 1 boy and 1 girl is $1/4$. The probability of getting 2 boys and 1 girl is $1/8$. The probability of getting $N$ boys and 1 girl is $1/2^{N+1}$. To find the average difference between boys and girls under the Councillor's rule, we need the following: $$\lim_{N\to \infty} \frac{\sum_{0}^{N}(\mbox{BoyCount}-\mbox{GirlCount})\cdot\mbox{probability}}{N}$$ $$\lim_{N\to \infty} \frac{\sum_{0}^{N}\frac{N-1}{2^{N+1}}}{N} = 0$$ Check this Wolfram solution to this equation. (Sorry, Councillor.) This means that in the long run (approaching $\infty$), there is no ($0$) significant difference between the numbers of boys and girls under the Councillor's rule.
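The two series can also be checked directly by partial sums (a quick sketch; 60 terms is far beyond double-precision resolution):

```python
# Per family, the expected value of (boys - girls) is
# sum_{N>=0} (1 - N)/2^(N+1) under the Barbarian King and
# sum_{N>=0} (N - 1)/2^(N+1) under the Councillor.  Both converge to 0,
# since sum 1/2^(N+1) = 1 and sum N/2^(N+1) = 1.
king_expected = sum((1 - N) / 2 ** (N + 1) for N in range(60))
councillor_expected = sum((N - 1) / 2 ** (N + 1) for N in range(60))
print(king_expected, councillor_expected)
```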
A Posteriori Proof: Open your JavaScript console (F12 → Console, or Ctrl+Shift+J) and copy and paste the code below.

function makeBabies(couples, simulations) {
    var kingAdvantage = 0;
    var councillorAdvantage = 0;
    var kingBabies = {boys: 0, girls: 0};
    var councillorBabies = {boys: 0, girls: 0};
    for (var j = 0; j < simulations; j++) {
        for (var i = 0; i < couples; i++) {
            // Barbarian King: girls until the first boy
            while (Math.random() < 0.5) {
                kingBabies.girls++;
            }
            kingBabies.boys++;
            // Councillor: boys until the first girl
            while (Math.random() < 0.5) {
                councillorBabies.boys++;
            }
            councillorBabies.girls++;
        }
        kingAdvantage += (kingBabies.boys - kingBabies.girls);
        councillorAdvantage += (councillorBabies.boys - councillorBabies.girls);
        // reset simulation
        kingBabies = {boys: 0, girls: 0};
        councillorBabies = {boys: 0, girls: 0};
    }
    // average over simulations
    kingAdvantage /= simulations;
    councillorAdvantage /= simulations;
    console.log("Barbarian King advantage : " + kingAdvantage);
    console.log("Councillor advantage : " + councillorAdvantage);
}

Type makeBabies(<couples>, <simulations>), replacing <couples> with the number of couples you want in your simulation and <simulations> with the number of times you want to run the same test. Results:

makeBabies(100, 10000) = 0.0488, -0.0228
makeBabies(1000, 10000) = 0.3019, -0.1117
makeBabies(10000, 10000) = -2.47, -0.4471

That is 10000 simulations each time, with an increasing number of couples (100, 1000, 10000). As you can see, the average advantage of boys over girls under both rules is very insignificant (most of the time < 1). You can try out the code and run more simulations if you want.
Defining parameters
Level: \( N \) = \( 4000 = 2^{5} \cdot 5^{3} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 4000.cu (of order \(100\) and degree \(40\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 2000 \)
Character field: \(\Q(\zeta_{100})\)
Newforms: \( 0 \)
Sturm bound: \(600\)
Trace bound: \(0\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{1}(4000, [\chi])\).

                   Total   New   Old
Modular forms       160     0    160
Cusp forms            0     0      0
Eisenstein series   160     0    160

The following table gives the dimensions of subspaces with specified projective image type.

            \(D_n\)   \(A_4\)   \(S_4\)   \(A_5\)
Dimension      0         0         0         0
I'm not sure where I could pose a challenge to find the best $f(n)$ so that people will join in. $n\ge 5$ will probably never be proven optimal, but some lucky computations or out-of-the-box analysis might give nice results. (Given $n$ fixed digits and the operations $(+,-,\times,\div)$, what's the highest $N\in\mathbb N$ such that all numbers $1\dots N$ can be built? $f(n)=N$.) @TheSimpliFire You mentioned base; is it true that using digits $\lt b$ means we can represent some number $N$ using $\le (b+1)\log_b N$ digits, if only $+,\times$ are allowed? For $b=2$, the $3\log_2 N$ bound is given in https://arxiv.org/pdf/1310.2894.pdf and explained there: "The upper bound can be obtained by writing $N$ in binary and finding a representation using Horner's algorithm." So if we actually allow digits $\le b$, we have $\log_b N$ digits and that many bases, so the bound would be $2\log_b N$? https://en.wikipedia.org/wiki/Horner%27s_method @TheSimpliFire The problem is inverting the bound, which is not trivial if $b\ne 2$. For example, we can build $1=2-1$ using the digits $1,2$, but adding $5$ so that the set becomes $1,2,5$ does NOT allow us to rebuild $1$, since all digits must be used. So keeping the consecutive integers from the $(n-1)$-digit case is not guaranteed; this is the issue. The number of digits is fixed at $n$ and all need to be used. That's why I took the digit sets $d_i=2^{i-1}$, $i=1,\dots,n$: we can divide the two largest digits to get back the $(n-1)$-digit case, and this eventually gives the bound $f(n)\ge2^n-1$, inductively. This is also not an issue if all digits are $1$'s, for which they give the bound $3\log_2 N\ge a(N)$; it translates to $f(n)\ge 2^{n/3}$, since multiplying two $1$'s reduces to the $(n-1)$-digit case and allows induction. We need to build the digits $d_i$ inductively so that the next set can achieve at least what the previous one did. Otherwise, it is hard to prove that the next step is better when adding more digits.
For example, we can add $d_0,d_0/2,d_0/2$, where $d_0$ can be anything, since $d_0-d_0/2-d_0/2$ reduces us to the $(n-3)$-digit case. The comments (on my last question) discuss setting better bounds using similar constructions. I'm not sure if you have the full context of the question or if this makes sense, so sorry for clogging up the chat :P
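Small cases of $f(n)$ can be brute-forced. A sketch (exponential in the number of digits, so only practical for small $n$; exact rationals avoid floating-point issues with $\div$):

```python
from fractions import Fraction
from itertools import combinations

def reachable(digits):
    """All exact values buildable from ALL of `digits` with +, -, *, /,
    with any parenthesization.  Brute force over binary splits."""
    digits = tuple(sorted(digits))
    memo = {}
    def go(ds):
        if ds in memo:
            return memo[ds]
        if len(ds) == 1:
            res = {Fraction(ds[0])}
        else:
            res = set()
            seen_splits = set()
            for r in range(1, len(ds)):
                for idx in combinations(range(len(ds)), r):
                    left = tuple(ds[i] for i in idx)
                    right = tuple(ds[i] for i in range(len(ds)) if i not in idx)
                    if (left, right) in seen_splits:
                        continue
                    seen_splits.add((left, right))
                    for a in go(left):
                        for b in go(right):
                            res.add(a + b)
                            res.add(a - b)
                            res.add(a * b)
                            if b != 0:
                                res.add(a / b)
        memo[ds] = res
        return res
    return go(digits)

def longest_run(digits):
    """For this particular digit set: largest N with 1..N all reachable."""
    vals = reachable(digits)
    N = 0
    while Fraction(N + 1) in vals:
        N += 1
    return N
```

Maximizing `longest_run` over digit choices then gives $f(n)$ for small $n$; for example, `longest_run((1, 2))` is 3 (from $2-1$, $1\times 2$, $1+2$).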
How can I prove equation $(4)$? I can't understand why there is a $2/N$ factor in $(4)$, or why only the fundamental term of the DFT is kept. Consider a sinusoidal input signal of frequency $\omega$ given by $$ x(t) = \sqrt 2 X \sin\left(\omega t + \phi\right)\tag{1} $$ This signal is conventionally represented by a phasor (a complex number) $\bar X$: $$ \bar X = Xe^{j\phi}=X\cos\phi+jX\sin\phi\tag{2} $$ Assuming that $x(t)$ is sampled $N$ times per cycle of the $60\textrm{ Hz}$ waveform to produce the sampled set $\left\{x_k\right\}$, $$ x_k = \sqrt 2 X\sin\left(\frac{2\pi}{N}k + \phi\right)\tag{3} $$ the Discrete Fourier Transform of $\left\{x_k\right\}$ contains a fundamental frequency component given by $$ \color{red}{\boxed{\color{black}{\bar X_1= \frac 2N \sum_{k=0}^{N-1} x_k e^{-j\frac{2\pi}{N}k}}}}\tag{4}\\ $$
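A quick numerical check of the boxed sum (a sketch with illustrative values $N=16$, $X=1$, $\phi=0.3$): evaluating it on the samples of (3) gives a complex number of magnitude $\sqrt 2\,X$ and phase $\phi-\pi/2$. So the $2/N$ is the normalization that makes the sum return the peak amplitude $\sqrt 2 X$ of the fundamental; conventions that want the RMS phasor magnitude $X$ divide by a further $\sqrt 2$ (i.e. use $\sqrt 2/N$), and the constant phase offset comes from using a sine rather than cosine reference.

```python
import cmath
import math

N, X, phi = 16, 1.0, 0.3  # illustrative values
x = [math.sqrt(2) * X * math.sin(2 * math.pi * k / N + phi) for k in range(N)]
# Fundamental-frequency DFT component with the 2/N factor from the text:
X1 = (2 / N) * sum(x[k] * cmath.exp(-1j * 2 * math.pi * k / N) for k in range(N))
print(abs(X1), cmath.phase(X1))  # magnitude sqrt(2)*X, phase phi - pi/2
```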
Differential and Integral Equations Differential Integral Equations Volume 27, Number 9/10 (2014), 949-976. Degenerate parabolic equations with singular lower order terms Abstract In this paper, we give existence and regularity results for nonlinear parabolic problems with degenerate coercivity and singular lower order terms, whose simplest example is \begin{eqnarray*} \begin{cases} u_t-\Delta_p u= {\frac{f(x,t)}{u^\gamma}} & \mbox{in}\;\Omega\times (0,T)\\ u(x,t)=0 & \mbox{on}\;\partial\Omega\times(0,T)\\ u(x,0)=u_0 (x) & \mbox{in}\;\Omega\; \end{cases} \end{eqnarray*} with $\gamma>0$, $p\geq 2$, $\Omega$ a bounded open set of $\mathbb{R}^{\mathrm{N}}$ ($N\geq 2$), $0 < T < +\infty$, $f\geq 0$, $f\in L^m(Q_T)$, $m\geq 1$ and $u_0\in L^\infty(\Omega)$ such that $$ \forall \, \omega\subset\subset\Omega\; \exists\;d_{\omega} > 0\,:\,u_{0}\geq d_{\omega}\;\mbox{in}\,\;\omega\,. $$ The aim of the paper is to extend the existence and regularity results recently obtained for the associated singular stationary problem. One of the main difficulties that arises in the parabolic case is the proof of the strict positivity of the solution in the interior of the parabolic cylinder, in order to give sense to the weak formulation of the problem. The proof of this property uses Harnack's inequality. Article information Source Differential Integral Equations, Volume 27, Number 9/10 (2014), 949-976. Dates First available in Project Euclid: 1 July 2014 Permanent link to this document https://projecteuclid.org/euclid.die/1404230052 Mathematical Reviews number (MathSciNet) MR3229098 Zentralblatt MATH identifier 1340.35175 Citation de Bonis, Ida; De Cave, Linda Maria. Degenerate parabolic equations with singular lower order terms. Differential Integral Equations 27 (2014), no. 9/10, 949--976. https://projecteuclid.org/euclid.die/1404230052
Taiwanese Journal of Mathematics Taiwanese J. Math. Advance publication (2019), 24 pages. On Inverse Eigenvalue Problems of Quadratic Palindromic Systems with Partially Prescribed Eigenstructure Abstract The palindromic inverse eigenvalue problem (PIEP) of constructing matrices $A$ and $Q$ of size $n \times n$ for the quadratic palindromic polynomial $P(\lambda) = \lambda^2 A^{\star} + \lambda Q + A$ so that $P(\lambda)$ has $p$ prescribed eigenpairs is considered. This paper provides two different methods to solve PIEP, and it is shown via construction that PIEP is always solvable for any $p$ ($1 \leq p \leq (3n+1)/2$) prescribed eigenpairs. The eigenstructure of the resulting $P(\lambda)$ is completely analyzed. Article information Source Taiwanese J. Math., Advance publication (2019), 24 pages. Dates First available in Project Euclid: 4 March 2019 Permanent link to this document https://projecteuclid.org/euclid.twjm/1551690151 Digital Object Identifier doi:10.11650/tjm/190203 Citation Zhao, Kang; Cheng, Lizhi; Liao, Anping; Li, Shengguo. On Inverse Eigenvalue Problems of Quadratic Palindromic Systems with Partially Prescribed Eigenstructure. Taiwanese J. Math., advance publication, 4 March 2019. doi:10.11650/tjm/190203. https://projecteuclid.org/euclid.twjm/1551690151
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Difference between revisions of "Linear representation theory of symmetric group:S5" (diff markup from the revision-comparison view omitted; the intermediate edits were minor formatting changes to the family-contexts and degrees tables). Latest revision as of 05:41, 16 January 2013.

This article gives specific information, namely, linear representation theory, about a particular group, namely: symmetric group:S5. View linear representation theory of particular groups | View other specific information about symmetric group:S5

This article describes the linear representation theory of symmetric group:S5, a group of order $120 = 5!$. We take this to be the group of permutations on the set $\{1,2,3,4,5\}$.

Summary

Item | Value
Degrees of irreducible representations over a splitting field (such as $\mathbb{Q}$ or $\mathbb{C}$) | 1,1,4,4,5,5,6 (maximum: 6, lcm: 60, number: 7, sum of squares: 120)
Schur index values of irreducible representations | 1,1,1,1,1,1,1 (maximum: 1, lcm: 1)
Smallest ring of realization for all irreducible representations (characteristic zero) | $\mathbb{Z}$, the ring of integers
Smallest field of realization for all irreducible representations, i.e., smallest splitting field (characteristic zero) | $\mathbb{Q}$; hence it is a rational representation group
Criterion for a field to be a splitting field | any field of characteristic not equal to 2, 3, or 5
Smallest size splitting field | field:F7, i.e., the field of 7 elements
Family contexts

Family name | Parameter values | General discussion of linear representation theory of family
symmetric group | degree 5 | linear representation theory of symmetric groups
projective general linear group of degree two over a finite field | field of size 5, i.e., field:F5, so the group is $PGL(2,5)$ | linear representation theory of projective general linear group of degree two over a finite field

Degrees of irreducible representations

FACTS TO CHECK AGAINST FOR DEGREES OF IRREDUCIBLE REPRESENTATIONS OVER SPLITTING FIELD:
Divisibility facts: degree of irreducible representation divides group order | degree of irreducible representation divides index of abelian normal subgroup
Size bounds: order of inner automorphism group bounds square of degree of irreducible representation | degree of irreducible representation is bounded by index of abelian subgroup | maximum degree of irreducible representation of group is less than or equal to product of maximum degree of irreducible representation of subgroup and index of subgroup
Cumulative facts: sum of squares of degrees of irreducible representations equals order of group | number of irreducible representations equals number of conjugacy classes | number of one-dimensional representations equals order of abelianization

Note that the linear representation theory of the symmetric group of degree four works over any field of characteristic not equal to two or three, and the list of degrees is 1,1,2,3,3.
Interpretation as symmetric group

Common name of representation | Degree | Partition | Conjugate partition | Representation for conjugate partition
trivial representation | 1 | 5 | 1 + 1 + 1 + 1 + 1 | sign representation
sign representation | 1 | 1 + 1 + 1 + 1 + 1 | 5 | trivial representation
standard representation | 4 | 4 + 1 | 2 + 1 + 1 + 1 | product of standard and sign representation
product of standard and sign representation | 4 | 2 + 1 + 1 + 1 | 4 + 1 | standard representation
irreducible five-dimensional representation | 5 | 3 + 2 | 2 + 2 + 1 | other irreducible five-dimensional representation
irreducible five-dimensional representation | 5 | 2 + 2 + 1 | 3 + 2 | other irreducible five-dimensional representation
exterior square of standard representation | 6 | 3 + 1 + 1 | 3 + 1 + 1 | the same representation, because the partition is self-conjugate
(The Young-diagram and hook-length-formula columns of the original table did not survive extraction.)

Interpretation as projective general linear group of degree two

Compare and contrast with linear representation theory of projective general linear group of degree two over a finite field. The numerical columns below are for $q = 5$; the general-$q$ entries of the original table are given where they survived.

Description of collection of representations | Degree | Number | Sum of squares of degrees | Symmetric group name
Trivial | 1 | 1 | 1 | trivial
Sign representation (kernel is the projective special linear group of degree two, in this case alternating group:A5) | 1 | 1 | 1 | sign
Nontrivial component of permutation representation on the projective line | 5 | 1 | 25 | irreducible 5D
Tensor product of sign representation and nontrivial component of permutation representation on projective line | 5 | 1 | 25 | other irreducible 5D
Induced from one-dimensional representation of Borel subgroup | 6 | 1 | 36 | exterior square of standard representation
Unclear: a nontrivial homomorphism $\varphi:\mathbb{F}_{q^2}^\ast \to \mathbb{C}^\ast$ with $\varphi(x)^{q+1} = 1$ for all $x$, taking values other than $\pm 1$; identify $\varphi$ and $\varphi^q$ (general degree $q-1$, count $(q-1)/2$) | 4 | 2 | 32 | standard representation, product of standard and sign
Total | NA | 7 | 120 | NA

Character table

FACTS TO CHECK AGAINST (for characters of irreducible linear representations over a splitting field):
Orthogonality relations: character orthogonality theorem | column orthogonality theorem
Separation results (basically saying rows are independent, columns are independent): splitting implies characters form a basis for space of class functions | character determines representation in characteristic zero
Numerical facts: characters are cyclotomic integers | size-degree-weighted characters are algebraic integers
Character value facts: irreducible character of degree greater than one takes value zero on some conjugacy class | conjugacy class of more than average size has character value zero for some irreducible character | zero-or-scalar lemma

Representation \ conjugacy class (representative, size): $()$ (1) | $(1\,2)$ (10) | $(1\,2)(3\,4)$ (15) | $(1\,2\,3)$ (20) | $(1\,2\,3)(4\,5)$ (20) | $(1\,2\,3\,4\,5)$ (24) | $(1\,2\,3\,4)$ (30)
trivial representation | 1 | 1 | 1 | 1 | 1 | 1 | 1
sign representation | 1 | -1 | 1 | 1 | -1 | 1 | -1
standard representation | 4 | 2 | 0 | 1 | -1 | -1 | 0
product of standard and sign representation | 4 | -2 | 0 | 1 | 1 | -1 | 0
irreducible five-dimensional representation | 5 | 1 | 1 | -1 | 1 | 0 | -1
irreducible five-dimensional representation | 5 | -1 | 1 | -1 | -1 | 0 | 1
exterior square of standard representation | 6 | 0 | -2 | 0 | 0 | 1 | 0

Below are the size-degree-weighted characters, i.e., those obtained by multiplying each character value by the size of its conjugacy class and then dividing by the degree of the representation. Note that size-degree-weighted characters are algebraic integers.
Representation \ conjugacy class (sizes 1, 10, 15, 20, 20, 24, 30, in the same order as above):
trivial representation | 1 | 10 | 15 | 20 | 20 | 24 | 30
sign representation | 1 | -10 | 15 | 20 | -20 | 24 | -30
standard representation | 1 | 5 | 0 | 5 | -5 | -6 | 0
product of standard and sign representation | 1 | -5 | 0 | 5 | 5 | -6 | 0
irreducible five-dimensional representation | 1 | 2 | 3 | -4 | 4 | 0 | -6
irreducible five-dimensional representation | 1 | -2 | 3 | -4 | -4 | 0 | 6
exterior square of standard representation | 1 | 0 | -5 | 0 | 0 | 4 | 0

GAP implementation

The degrees of irreducible representations can be computed using GAP's CharacterDegrees function:

gap> CharacterDegrees(SymmetricGroup(5));
[ [ 1, 2 ], [ 4, 2 ], [ 5, 2 ], [ 6, 1 ] ]

This means that there are 2 irreducible representations of degree 1, 2 of degree 4, 2 of degree 5, and 1 of degree 6. The characters of all irreducible representations can be computed in full using GAP's CharacterTable function:

gap> Irr(CharacterTable(SymmetricGroup(5)));
[ Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 1, -1, 1, 1, -1, -1, 1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 4, -2, 0, 1, 1, 0, -1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 5, -1, 1, -1, -1, 1, 0 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 6, 0, -2, 0, 0, 0, 1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 5, 1, 1, -1, 1, -1, 0 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 4, 2, 0, 1, -1, 0, -1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 1, 1, 1, 1, 1, 1, 1 ] ) ]
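The cumulative facts quoted above are easy to verify in a couple of lines (a sketch in Python rather than GAP):

```python
# Check the cumulative facts for S5: the degrees 1,1,4,4,5,5,6 have squares
# summing to |S5| = 120, and there are 7 of them, matching the 7 conjugacy
# classes, whose sizes also sum to 120.
degrees = [1, 1, 4, 4, 5, 5, 6]
class_sizes = [1, 10, 15, 20, 20, 24, 30]
print(sum(d * d for d in degrees), sum(class_sizes), len(degrees))
```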
Indeed, nothing is wrong with Noether's theorem here: $J^\mu = F^{\mu \nu} \partial_\nu \Lambda$ is a conserved current for every choice of the smooth scalar function $\Lambda$. It can be proved by direct inspection, since $$\partial_\mu J^\mu = \partial_\mu (F^{\mu \nu} \partial_\nu \Lambda)=(\partial_\mu F^{\mu \nu}) \partial_\nu \Lambda+ F^{\mu \nu} \partial_\mu\partial_\nu \Lambda = 0 + 0 =0\:.$$ Above, $\partial_\mu F^{\mu \nu}=0$ holds due to the field equations, and $F^{\mu \nu} \partial_\mu\partial_\nu \Lambda=0$ because $F^{\mu \nu}=-F^{\nu \mu}$ whereas $\partial_\mu\partial_\nu \Lambda =\partial_\nu\partial_\mu \Lambda$. ADDENDUM. I show here that $J^\mu$ arises from the standard Noether theorem. The relevant symmetry transformation, for every fixed $\Lambda$, is $$A_\mu \to A'_\mu = A_\mu + \epsilon \partial_\mu \Lambda\:.$$ One immediately sees that $$\int_\Omega {\cal L}(A', \partial A') d^4x = \int_\Omega {\cal L}(A, \partial A) d^4x\tag{0}$$ since ${\cal L}$ itself is invariant. Hence, $$\frac{d}{d\epsilon}\Big|_{\epsilon=0} \int_\Omega {\cal L}(A', \partial A') d^4x=0\:.\tag{1}$$ Swapping the derivative and the integral (assuming $\Omega$ bounded) and exploiting the Euler-Lagrange equations, (1) can be rewritten as $$\int_\Omega \partial_\nu \left(\frac{\partial {\cal L}}{\partial \partial_\nu A_\mu} \partial_\mu \Lambda\right) \: d^4 x =0\:.\tag{2}$$ Since the integrand is continuous and $\Omega$ arbitrary, (2) is equivalent to $$\partial_\nu \left(\frac{\partial {\cal L}}{\partial \partial_\nu A_\mu} \partial_\mu \Lambda\right) =0\:,$$ which is the identity discussed by the OP (I omit a constant factor): $$\partial_\mu (F^{\mu \nu} \partial_\nu \Lambda)=0\:.$$ ADDENDUM2. The charge associated to any of these currents is related to the electric flux at spatial infinity.
Indeed one has: $$Q = \int_{t=t_0} J^0 d^3x = \int_{t=t_0} \sum_{i=1}^3 F^{0i}\partial_i \Lambda \,d^3x = \int_{t=t_0} \sum_{i=1}^3\partial_i\left(F^{0i} \Lambda\right) d^3x -\int_{t=t_0} \left( \sum_{i=1}^3 \partial_i F^{0i}\right) \Lambda \,d^3x \:.$$ As $\sum_{i=1}^3 \partial_i F^{0i} = -\partial_\mu F^{\mu 0}=0$, the last integral does not give any contribution and we have $$Q = \int_{t=t_0} \sum_{i=1}^3\partial_i\left(\Lambda F^{0i} \right) d^3x = \lim_{R\to +\infty}\oint_{t=t_0, |\vec{x}| =R} \Lambda \vec{E} \cdot \vec{n} \: dS\:.$$ If $\Lambda$ becomes constant in space outside a bounded region $\Omega_0$ and if, for instance, that constant does not vanish, $Q$ is just the flux of $\vec{E}$ at infinity up to a constant factor. In this case $Q$ is the electric charge up to a constant factor (as stressed by ramanujan_dirac in a comment below). In that case, however, $Q=0$, since we are dealing with the free EM field.
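The key cancellation used above, $F^{\mu\nu}\partial_\mu\partial_\nu\Lambda = 0$ for antisymmetric $F$ and symmetric Hessian, can be illustrated numerically. A sketch with random matrices standing in for an actual field configuration:

```python
import random

# For any antisymmetric F (F[i][j] = -F[j][i]) and any symmetric H
# (the Hessian d_mu d_nu Lambda), the full contraction
# sum_{mu,nu} F^{mu nu} H_{mu nu} vanishes: the (i,j) and (j,i) terms cancel.
random.seed(0)
A = [[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
F = [[A[i][j] - A[j][i] for j in range(4)] for i in range(4)]  # antisymmetric
H = [[A[i][j] + A[j][i] for j in range(4)] for i in range(4)]  # symmetric
contraction = sum(F[i][j] * H[i][j] for i in range(4) for j in range(4))
print(contraction)  # zero up to floating-point rounding
```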
What is the advantage for rockets of having multiple stages? Wouldn't a single stage with the same amount of fuel weigh less? Note: I would like a quantitative answer, if possible :-) Edited a little now that I better understand your question. Short: in a multi-stage rocket, the weight of the parts dropped along the ride compensates for the fact that the extra engines make it heavier at the beginning. This works partly because a rocket's engine isn't that heavy compared to its fuel tank: the engine mostly just ignites and controls the combustion, while the fuel tank needs to be huge. Long: I'll analyse the very optimal case, in which the energy spent is the minimum energy needed to get your rocket into orbit. Real cases go way above this, and it's not even practical to do an energy balance; Thornton & Marion does a great (undergraduate-level) analysis using linear momentum. For a single-stage rocket, there is a specific size that minimizes the needed energy. The weight of the rocket (just the hull, not the fuel) increases with size, $P_R=g\rho V$, and the minimum amount of energy necessary to get the rocket into orbit obviously increases with its weight: $$W_{min} = hP_R + W',$$ where $W'$ is the work you do to lift the fuel. $W'$ is not important here but, for the record, it goes something like $$W' = g\int_0^h m_{fuel}(z)\,dz.$$ So, if the rocket is too small, the little amount of fuel you can fit into it does not contain enough energy to lift the hull; if the rocket is too big, you're just wasting energy because it's unnecessarily heavy. That means there is an optimal size which minimizes the energy of a single-stage rocket. Once you find it, the amount of fuel associated with that size is the very minimum you'll need to get your rocket into orbit.
Now consider a rocket of that same size, but with 2 stages. Assume it has an extra engine of weight $p_e$, and that by the time the rocket reaches a height $h/2$, more than $1/3$ of its tank will be empty (which is true). Since that part of the tank is empty, I might as well leave it behind (along with the extra engine) and drop the dead weight (call it $p_t$). If I drop it at $h/2$, the energy I'll need to reach orbit is $$W_{min2} = \frac{h}{2}(P_R + p_e) + \frac{h}{2}(P_R - p_t) + W'_2 = hP_R + W'_2 - \frac{h}{2}(p_t - p_e).$$ If we slightly reduce the amount of initial fuel we can make $W'_2\leq W'$. Therefore $W_{min2}$ is clearly less than $W_{min}$, as long as the engine you had to add is lighter than the tank you dropped. Launch weight would be lower if you had a fixed fuel load and only one engine and fuel tank. However, the specific impulse applied to the payload would be lower: even when the fuel is, say, 90% exhausted, the rocket is still trying to accelerate the now grossly oversized fuel tank and engine. So the trick is to reduce the deadweight (structural mass) as the fuel is consumed. Another compromise system is to have jettisonable external fuel tanks, like the Shuttle's, which are thrown away once their fuel is consumed. The easiest way to think of it is this: imagine all the mass left over when a rocket has burned 85% of its fuel. The mass of most of the tank and structure is now overkill and waste. It would be nice to jettison that extra mass so that the remaining fuel accelerates only the payload. That's what a multi-stage rocket does: it jettisons the mass of the earlier stages so that the remaining fuel and thrust can accelerate a much smaller mass to a much higher velocity than would be possible with a single stage. Remember that acceleration is inversely proportional to mass, so if you can get rid of, say, 80% of the mass, then the same remaining fuel can accelerate the payload 5 times harder.
Another benefit is that you can use rocket motors that are tuned for different velocities. In the initial stage you need maximum thrust, and the rocket is not moving as fast. In the later stages you want high-efficiency motors, not necessarily high thrust. To reach very high velocities, multiple stages require less overall fuel and mass. This comes at the price of greater complexity and cost. Another aspect to consider is the burn characteristics of the rocket motors. This is especially important in solid rocket motors because once lit, they are self-oxidizing and are not easy to turn off. At low altitudes, the rocket should not accelerate too rapidly because the air is very dense and the power required to overcome drag is proportional to velocity cubed. So you just want to get the thing going until you reach an altitude where the air is less dense and it's more economical to go fast. So you may have a first stage that burns relatively slowly. Once that burns out and you reach a higher altitude where you can go faster, you drop your "slow burn" motor and kick on your powerful motor. Now the air density is much lower so you can accelerate as quickly as you want and reach whatever speed is needed. A third stage might be used to fine-tune the speed and position the craft in whatever orbit or trajectory is needed. Furthermore, because the air pressure decreases with altitude, the ideal nozzle shape is not the same at low altitude as at high altitude. An overexpanded nozzle at sea level can be an underexpanded nozzle at altitude, resulting in a very narrow range where it is operating at maximum efficiency. You could design adaptive nozzles, but they are very heavy and expensive, and they can't really be made to service the full range. Or, you could have stages with fixed nozzles that are designed to be as efficient as possible over the range of altitudes serviced.
So, in addition to the answers above about dropping weight as you go resulting in less fuel use, each stage can also be designed for its operating regime: by selecting the fuel for appropriate thrust and burn rates, and by designing the nozzle for the nominal operating conditions at the altitudes serviced by the motor. Both lead to a much more efficient motor and thus less fuel.
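The mass argument in the answers above can be made concrete with the Tsiolkovsky rocket equation. A minimal sketch; the masses and specific impulse below are illustrative assumptions, not figures from the answers:

```python
import math

def delta_v(isp_s, m0, mf, g0=9.81):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_s * g0 * math.log(m0 / mf)

# Illustrative masses in tonnes: a 1 t payload; each stage carries
# 9 t of propellant in a 1 t tank-plus-engine structure.
payload = 1.0

# Single stage: all 18 t of propellant in one 2 t structure.
single = delta_v(300, payload + 2.0 + 18.0, payload + 2.0)

# Two stages: jettison the first stage's 1 t structure once its 9 t is spent.
after_stage1 = delta_v(300, payload + 2.0 + 18.0, payload + 2.0 + 9.0)
after_stage2 = delta_v(300, payload + 1.0 + 9.0, payload + 1.0)
two_stage = after_stage1 + after_stage2

# Same propellant load, same engines, but the staged rocket ends up faster
# because the final burn no longer drags the empty first-stage tank along.
print(round(single), round(two_stage))
```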
Ultimately, the quantities that you have to calculate and compare to reality are probabilities or transition probabilities, which are the squares of amplitudes or transition amplitudes. The path integral formalism represents transition amplitudes directly. In quantum mechanics, you may recover the Schrödinger equation from the expression of the path integral representing the transition amplitude $\langle x',t'|x,t\rangle$. You have, for "wavefunctions", the integral equation $\langle x',t'|\psi \rangle = \int dx \langle x',t'|x,t\rangle \langle x,t|\psi \rangle$, with the expression $\langle x',t'|x,t\rangle = \int [dC] e^{iS(C)}$, where $C$ is a path from $(x,t)$ to $(x',t')$ and $S(C)$ is the action for this path; taking the limit $t' \to t$ will give you the Schrödinger equation. The quantum formalism that you may use (path integral formalism, operator formalism, Schrödinger/Heisenberg representation, etc.) is secondary, in the sense that these formalisms are equivalent. It is very interesting to look at different formalisms, but, practically, depending on your problem, you will choose the simplest one. Let's take the example of the quantum harmonic oscillator. You have different eigenstates $\psi_n(x,t)$ corresponding to energies $(n+ \dfrac{1}{2})\hbar \omega$. Suppose you want to calculate the transition probability $|\langle\psi_{n}|X| \psi_{n+1}\rangle|^2$. You may imagine doing the integral on $x$ with the expressions of $\psi_n(x,t)$ and $\psi_{n+1}(x,t)$. You would then be working with the "wavefunctions" in the Schrödinger representation. Now it's far more interesting, in this particular case, to work in the Heisenberg representation, with an operator $X(t)$. The equation for the operator $X(t)$ is simply $\ddot X(t) + \omega^2 X(t)=0$, with the Heisenberg constraint $[X(t), P(t')]_{t=t'} = i \hbar$.
A solution is $X(t) = \sqrt{\dfrac{\hbar}{m \omega}}(a e^{i \omega t} + a^+e^{-i \omega t})$, and, in the energy basis, the non-null terms of $a$ are $a_{n,n+1} = \sqrt{n+1}$. Now, the probability that we are looking for is simply $|X_{n,n+1}|^2 = \dfrac{\hbar}{m \omega}(n+1)$. [In fact, to correctly understand quantum mechanics, it is better to think in terms of the Heisenberg representation than the Schrödinger representation. For instance, in quantum field theory, you are working in a "Heisenberg representation", that is: you work with operators $\Phi(x,t)$ depending on space and time; you are not usually working with wavefunctions $\psi(\Phi,x,t)$, even if this formalism is possible.] Now, turning back to path integrals, it is the same logic. If you consider for instance QFT, depending on your problem, it could be more interesting to use the path integral formalism or the operator (canonical) formalism. For instance, you may want to calculate the vacuum energy for a bosonic field or a fermionic field. It is simpler to use the operator formalism, but you may also use the path integral formalism (see for instance Zee, QFT in a Nutshell, Chapter II.5, pp. 121-126, first edition), and you will find the same result. If you want to calculate Green functions and propagators, it is completely natural to use the path integral formalism, for instance in a perturbative field theory; this leads naturally to Feynman diagrams, which are what you need to calculate transition amplitudes and transition probabilities in particle collision processes.
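The ladder-operator result above is easy to check numerically. A small sketch in a truncated Fock basis, setting $\hbar = m = \omega = 1$ so the prefactor drops out (the truncation size N is an arbitrary choice):

```python
import numpy as np

N = 8  # truncated Fock space dimension; hbar = m = omega = 1

# Lowering operator a in the energy basis: a|n> = sqrt(n)|n-1>,
# so its only nonzero entries are a[n, n+1] = sqrt(n+1).
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

# Position operator at t = 0 in the convention of the text,
# with the prefactor sqrt(hbar/(m*omega)) set to 1: X = a + a^+.
X = a + a.T

# The transition probabilities |<n|X|n+1>|^2 should grow as n + 1.
probs = [abs(X[n, n + 1]) ** 2 for n in range(N - 1)]
print(probs)  # close to [1, 2, 3, ...]
```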
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling @heather well, there's a spectrum so, there's things like New Journal of Physics and Physical Review X which are the open-access branch of existing academic-society publishers As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di... Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." 
— tparker 3 mins ago > A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers "serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service" for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty > for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high impact factor, which is a term basically created by the journals. @BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(\sqrt g\, g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(\sqrt g\, g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work... @BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions. Alternative Plan: 1. Read Vol 1 of Hormander. 2.
Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley. I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea. @EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results... Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town... @EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Theory Notes¶ Computation of cell dry mass¶ The concept of cell dry mass computation was first introduced by Barer [Barer1952]. The dry mass \(m\) of a biological cell is defined by its non-aqueous fraction \(f(x,y,z)\) (concentration or density in g/L), i.e. the number of grams of protein and DNA within the cell volume (excluding salts), \[m = \int f(x,y,z)\, dV.\] The assumption of dry mass computation in QPI is that \(f(x,y,z)\) is proportional to the RI of the cell \(n(x,y,z)\) with a proportionality constant called the refraction increment \(\alpha\) (units [mL/g]), \[n(x,y,z) = n_\text{intra} + \alpha f(x,y,z),\] with the RI of the intracellular fluid \(n_\text{intra}\), a dilute salt solution. These two equations can be combined to \[m = \frac{1}{\alpha} \int \left( n(x,y,z) - n_\text{intra} \right) dV.\] In QPI, the RI is measured indirectly as a projected quantitative phase retardation image \(\phi(x,y)\), \[\phi(x,y) = \frac{2\pi}{\lambda} \int \left( n(x,y,z) - n_\text{med} \right) dz,\] with the vacuum wavelength \(\lambda\) of the imaging light and the refractive index of the cell-embedding medium \(n_\text{med}\). Integrating the above equation over the detector area \((x,y)\) yields (for \(n_\text{med} = n_\text{intra}\)) \[m = \frac{\lambda}{2 \pi \alpha} \iint \phi(x,y)\, dx\, dy. \qquad (1)\] For a discrete image, this formula simplifies to \[m = \frac{\lambda}{2 \pi \alpha}\, \Delta A \sum_{(x,y)} \phi(x,y),\] with the pixel area \(\Delta A\) and a pixel-wise summation of the phase data. Relative and absolute dry mass¶ If, however, the medium surrounding the cell has a different refractive index (\(n_\text{med} \neq n_\text{intra}\)), then the phase \(\phi\) is measured relative to the RI of the medium \(n_\text{med}\), which causes an underestimation of the dry mass if \(n_\text{med} > n_\text{intra}\). For instance, a cell could be immersed in a protein solution or embedded in a hydrogel with a refractive index of \(n_\text{med}\) = \(n_\text{intra}\) + 0.002. For a spherical cell with a radius of 10µm, the resulting dry mass is underestimated by 46pg. Therefore, it is called “relative dry mass” \(m_\text{rel}\). If the imaged phase object is spherical with the radius \(R\), then the “absolute dry mass” \(m_\text{abs}\) can be computed by splitting equation (1) into relative mass and suppressed spherical mass.
For a visualization of the deviation of the relative dry mass from the actual dry mass for spherical objects, please have a look at the relative vs. absolute dry mass example. Range of validity¶ Variations in the refraction increment may occur, and thus the above considerations are not always valid. For a detailed discussion of the variables that affect the refraction increment, please see [Barer1954]. Dependency on imaging wavelength¶ Barer and Joseph measured the refraction increment of several proteins as a function of wavelength. In general, short wavelengths (366nm) yield values close to 0.200mL/g while long wavelengths (656nm) yield smaller values close to 0.180mL/g (table 3 in [Barer1954]). Dependency on protein concentration¶ The refraction increment has been reported to be linear over a wide range of protein concentrations. Barer and Joseph found that bovine serum albumin exhibits a linear refraction increment up to its limit of solubility (figure 2 in [Barer1954]). They additionally received a personal communication stating that this is also the case for gelatin. Dependency on pH, temperature, and salts¶ The refraction increment depends only weakly on pH, temperature, and salts [Barer1954]. Refraction increment and the mass of cells¶ Dry mass and actual mass of a cell differ by the weight of the intracellular fluid. This weight difference is defined by the volume of the cell minus the volume of the protein and DNA content. While it seems to be difficult to define a partial specific volume (PSV) for DNA, there appears to be a consensus regarding the PSV of proteins, yielding approximately 0.73mL/g (see e.g. reference [Barer1957] as well as [Harpaz1994] and question 843 of the O-manual referring to it). For example, the protein and DNA of a cell with a radius of 10µm and a dry mass of 350pg (cell volume 4.19pL, average refractive index 1.35) occupy approximately 0.73mL/g · 350pg = 0.256pL (assuming the PSV of protein and DNA are similar).
Therefore, the actual volume of the intracellular fluid is 3.93pL (94% of the cell volume), which is equivalent to a mass of 3.93ng, resulting in a total (actual) cell mass of 4.28ng. Thus, the dry mass of this cell makes up approximately 10% of its actual mass, which leads to a total mass that is about 2% heavier than the equivalent volume of pure water (4.19ng). Default parameters in DryMass¶ The default refraction increment is \(\alpha\) = 0.18mL/g, as suggested for cells based on the refraction increment of cellular constituents by references [Barer1954] and [Barer1953]. The refraction increment can be manually set using the configuration key “refraction increment” in the “sphere” section. The default refractive index of the intracellular fluid in DryMass is assumed to be \(n_\text{intra}\) = 1.335, an educated guess based on the refractive index of phosphate buffered saline (PBS), whose osmolarity and ion concentrations match those of the human body.
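The discrete summation formula above amounts to a single weighted sum over the phase image. A minimal sketch (this is not the DryMass API; the function name and the wavelength, refraction increment, and pixel size values are illustrative assumptions):

```python
import numpy as np

def dry_mass(phase, wavelength, alpha, pixel_area):
    """Relative dry mass m = lambda / (2*pi*alpha) * dA * sum(phi), in kg.

    phase      -- 2D phase image in radians
    wavelength -- vacuum wavelength in m
    alpha      -- refraction increment in m^3/kg (0.18 mL/g = 0.18e-6 m^3/kg)
    pixel_area -- detector pixel area in m^2
    """
    return wavelength / (2 * np.pi * alpha) * pixel_area * phase.sum()

# Toy example: a uniform 1 rad phase over 10x10 pixels of (0.1 um)^2 each,
# imaged at 550 nm with the default refraction increment of 0.18 mL/g.
phi = np.ones((10, 10))
m = dry_mass(phi, wavelength=550e-9, alpha=0.18e-6, pixel_area=(0.1e-6) ** 2)
print(m * 1e15, "pg")  # roughly 486 pg, a plausible cell-scale mass
```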
There is a document from the cryptographic community called NIST Special Publication 800-90B, Recommendation for the Entropy Sources Used for Random Bit Generation, which is available here. Look to §5 and §6 and just come with me for a bit ☺... 90B has a test to determine whether a sequence is IID or not. By definition, if a sequence is non-IID, it's correlated. Correlation is proven by certain tests that include compression. I'm unimpressed by NIST's implementation, so I have my own take on it. There are compressors out there (like CMIX) that can compress data to within 0.1% of its theoretical Shannon entropy. Here the compressed size, standing in for entropy ($H$), is measured in bytes. So my test would work like this:- Compress sequence $x$, giving $H_x$. Compress sequence $y$, giving $H_y$. Interleave the values from both sequences as $\left[x_i, y_i, x_{i+1}, y_{i+1},x_{i+2}, y_{i+2},\ldots\right]$. Compress the new interleaved sequence, giving $H_{x|y}$. I posit that the correlation between $x$ and $y$ is related to:- $$ \frac{H_x + H_y}{H_{x|y}} -1 $$ with 0 as no correlation at all, and 1 if $x=y$. It works because correlation is a form of redundancy. As correlation and thus redundancy increase, and $y \rightarrow x$, the compression algorithm finds it easier and easier to encode the $x|y$ sequence. Examples:- Two totally IID sequences have absolutely no relationship between them. No matter how they're interleaved, at whatever lag, $H_{x|y} = H_x + H_y$. Test value = 0, no correlation. Two sequences where $y = f(x)$ and $f$ is the relationship/correlation between them. Imagine if $y = \frac{x}{n}$ and $n$ is some constant, suggesting a strict correlation within the data. This tight relationship will be detected by the compression algorithm simply by using a broader window. Rather than using an alphabet based on a single value, it will create a slightly bigger alphabet based on two sequential values $ \therefore H_{x|y} \approx H_x$. Test value $\approx$ 1, full correlation.
I can't prove it any further, I just invented it. It seems to kinda work though, at least in simple cases.
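For what it's worth, the test above can be sketched with zlib as a (much weaker) stand-in for a strong compressor like CMIX. Because DEFLATE's minimum match length hides byte-level duplication, this sketch interleaves chunk-wise rather than value-wise; the chunk size is my own tweak and an assumption not in the scheme above:

```python
import os
import zlib

def csize(data):
    """Compressed size in bytes, used as a crude entropy proxy."""
    return len(zlib.compress(data, 9))

def interleave(x, y, chunk=32):
    """Interleave x and y chunk-wise so the compressor can see repeats."""
    out = bytearray()
    for i in range(0, len(x), chunk):
        out += x[i:i + chunk] + y[i:i + chunk]
    return bytes(out)

def compression_corr(x, y):
    """(H_x + H_y) / H_{x|y} - 1: near 0 for independent data, near 1 for x = y."""
    hx, hy = csize(x), csize(y)
    hxy = csize(interleave(x, y))
    return (hx + hy) / hxy - 1

x = os.urandom(4096)
y = os.urandom(4096)           # independent of x
print(compression_corr(x, y))  # near 0: no shared structure to exploit
print(compression_corr(x, x))  # near 1: every chunk of the interleave repeats
```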
Convert mixed numbers to top heavy fractions KS2 Revision What you need to know Things to remember: Turn the whole number into a fraction with the same denominator as the fraction in the question. Add the fractions together. A mixed number is a whole number and a fraction put (added) together. For example, 3\dfrac{2}{6} means 3 wholes and \dfrac{2}{6} 3\dfrac{2}{6} If we count up all of the sixths, we can see that we have \dfrac{20}{6} We don’t have to draw the diagram though; we can split up the whole number into lots of 1s and turn them into fractions with the same denominator. 1+1+1+\frac{2}{6} \frac{6}{6}+\frac{6}{6}+\frac{6}{6}+\frac{2}{6} \frac{20}{6} We can also do this in a fancy way with 3 Steps: Step 1: Multiply the whole number by the bottom of the fraction 3\times6=18 Step 2: Add the number in Step 1 to the top of the fraction 18+2=20 Step 3: Put the number in Step 2 over the bottom of the fraction. 3\dfrac{2}{6}=\dfrac{20}{6} Example Questions Question 1: What is 2\dfrac{1}{5} as a top heavy fraction? 1+1+\frac{1}{5} \frac{5}{5}+\frac{5}{5} +\frac{1}{5} \frac{11}{5} Step 1: Multiply the whole number by the bottom of the fraction 2\times5=10 Step 2: Add the number in Step 1 to the top of the fraction 10+1=11 Step 3: Put the number in Step 2 over the bottom of the fraction. 2\dfrac{1}{5}=\dfrac{11}{5} Question 2: What is 4\dfrac{4}{7} as a top heavy fraction? 1+1+1+1+\dfrac{4}{7} \frac{7}{7}+\frac{7}{7}+\frac{7}{7}+\frac{7}{7}+\frac{4}{7} \frac{32}{7} Step 1: Multiply the whole number by the bottom of the fraction 4\times7=28 Step 2: Add the number in Step 1 to the top of the fraction 28+4=32 Step 3: Put the number in Step 2 over the bottom of the fraction. 4\dfrac{4}{7}=\dfrac{32}{7}
Jethva, Hiren and Satheesh, SK and Srinivasan, J (2007) Assessment of second-generation MODIS aerosol retrieval (Collection 005) at Kanpur, India. In: Geophysical Research Letters, 34 (L19802). pp. 1-5. PDF Assessment_of_second.pdf - Published Version Abstract The second-generation MODIS aerosol retrieval (Collection 005) from EOS-Aqua (2002–2005) was evaluated using ground-based AERONET measurements at Kanpur $(26.45^\circ N, 80.35^\circ E)$ in northern India. We found that the aerosol optical depth (AOD) retrievals are more accurate than those of the previous retrieval (Collection 004). About 70% of the total retrievals at $0.47\,\mu m$ and $0.55\,\mu m$, and 60% of the retrievals at the $0.66\,\mu m$ wavelength, in the new version fall within the pre-launch uncertainty ($\Delta \tau = \pm 0.05 \pm 0.15 \tau$, where $\tau$ is AOD), with better correlation $(R^2 \sim 0.83)$ at all three wavelengths. However, MODIS still tends to over-estimate AOD for a few retrievals in the presence of dust aerosols. The error in the fine-dominated AOD was large for most retrievals in the C005. However, the fine-dominated AOD of Collection 004 was better correlated with the equivalent AERONET data. This suggests that the fine-dominated AOD retrievals need to be re-examined further. Item Type: Journal Article Additional Information: Copyright of this article belongs to the American Geophysical Union. Keywords: Remote sensing; Aerosol; Fine mode fraction Department/Centre: Division of Mechanical Sciences > Centre for Atmospheric & Oceanic Sciences Depositing User: Satish MV Date Deposited: 10 Dec 2007 Last Modified: 08 Feb 2012 06:06 URI: http://eprints.iisc.ac.in/id/eprint/12486
Why All These Stresses and Strains? In structural mechanics you will come across a plethora of stress and strain definitions. It may be a Second Piola-Kirchhoff Stress or a Logarithmic Strain. In this blog post we will investigate these quantities, discuss why there is a need for so many variations of stresses and strains, and illuminate the consequences for you as a finite element analyst. The defining tensor expressions and transformations can be found in many textbooks, as well as through some web links at the end of this blog post, so they will not be given in detail here. The Tensile Test When evaluating the mechanical data of a material, it is common to perform a uniaxial tension test. What is actually measured is a force versus displacement curve, but in order to make these results independent of specimen size, the results are usually presented as stress versus strain. If the deformations are large enough, one question then is: do you compute the stress based on the original cross-sectional area of the specimen, or based on the current area? The answer is that both definitions are used, and are called Nominal stress and True stress, respectively. A second, and not so obvious, question is how to measure the relative elongation, i.e. the strain. The engineering strain is defined as the ratio between the elongation and the original length, \epsilon_{eng} = \frac{L-L_0}{L_0}. For larger stretches, however, it is more common to use either the stretch \lambda=\frac{L}{L_0} or the true strain (logarithmic strain) \epsilon_{true} = \log\frac{L}{L_0} = \log \lambda. The true strain is more common in metal testing, since it is a quantity suitable for many plasticity models. For materials with a very large possible elongation, like rubber, the stretch is a more common parameter. Note that for the undeformed material, the stretch is \lambda=1. 
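The three uniaxial measures above differ only in how they combine the same two lengths $L$ and $L_0$. A quick sketch:

```python
import math

def strain_measures(L, L0):
    """Uniaxial measures for a bar deformed from length L0 to L."""
    stretch = L / L0                 # lambda, equals 1 when undeformed
    eng = (L - L0) / L0              # engineering strain
    true = math.log(stretch)         # true (logarithmic) strain
    return eng, stretch, true

# A 100 mm specimen stretched to 110 mm:
eng, lam, true = strain_measures(110.0, 100.0)
print(eng, lam, round(true, 4))  # 0.1 1.1 0.0953
```

For small deformations all three agree to first order, which is why the distinction only matters once the stretches become large.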
In order to make use of the measured data in an analysis, you must make sure of the following two things: how the stress and strain are defined in the test, and in what form your analysis software expects them for a specific material model. The transformation of the uniaxial data is not difficult, but it must not be forgotten. Stress-strain curves for the same tensile test. Geometric Nonlinearity Most structural mechanics problems can be analyzed under the assumption that the deformations are so small compared to the dimensions of the structure that the equations of equilibrium can be formulated for the undeformed geometry. In this case, the distinctions between different stress and strain measures disappear. If displacements, rotations, or strains become large enough, then geometric nonlinearity must be taken into account. This is when we start to consider that area elements actually change, that there is a distinction between an original length and a deformed length, and that directions may change during the deformation. There are several mathematically equivalent ways of representing such finite deformations. For the uniaxial test above, the different representations are rather straightforward. In real life, however, geometries are three-dimensional, have multiaxial stress states, and might rotate in space. Even if we just consider the same tensile test, keep the stress and strain fixed at a certain level, and then rotate the specimen, questions arise. What results can we expect? Are the values of the stress and strain components expected to change or not? Stress Measures The most fundamental and commonly used stress quantity is the Cauchy stress, also known as the true stress. It is defined by studying the forces acting on an infinitesimal area element in the deformed body. Both the force components and the normal to the area have fixed directions in space.
This means that if a stressed body is subjected to a pure rotation, the actual values of the stress components will change. What was originally a uniaxial stress state might be transformed into a full tensor with both normal and shear stress components. In many cases, this is neither what you want to use nor what you would expect. Consider for example an orthotropic material with fibers having a certain orientation. It is much more plausible that you want to see the stress in the fiber direction, even if the component is rotated. The Second Piola-Kirchhoff stress has this property. It is defined along the material directions. In the figure below, an originally straight cantilever beam has been subjected to bending by a pure moment at the tip. The xx-component of the Cauchy stress (top) and Second Piola-Kirchhoff stress (below) are shown. Since the stress is physically directed along the beam, the xx-component of the Cauchy stress (which is related to the global x-direction) decreases with the deflection. The Second Piola-Kirchhoff stress however, has the same through-thickness distribution all along the beam, even in the deformed configuration. Cauchy and Second Piola-Kirchhoff stress for an initially straight beam with constant bending moment. Another stress measure that you may encounter is the First Piola-Kirchhoff stress. It is a multiaxial generalization of the nominal (or engineering) stress. The stress is defined as the force in the current configuration acting on the original area. The First Piola-Kirchhoff is an unsymmetric tensor, and is for that reason less attractive to work with. Sometimes you may also encounter the Kirchhoff stress. The Kirchhoff stress is just the Cauchy stress scaled by the volume change. It has little physical significance, but can be convenient in some mathematical and numerical operations. Unfortunately, even without a rotation, the actual values of all these stress representations are not the same. 
All of them scale differently with respect to local volume changes and stretches. This is illustrated in the graph below. The xx-component of several stress measures are plotted at the fixed end of the beam, where the beam axis coincides with the x-axis. In the center of the beam, where strains, and thereby volume changes are small, all values approach each other. So for a case with large rotation but small strains, the stress representations can be seen as pure rotations of the same stress tensor. The distribution of axial stress at the fixed end of the beam. If you want to compute the resulting force or a moment on a certain boundary, there are really only two possible choices: Either integrate the Cauchy stress over the deformed boundary, or integrate the First Piola-Kirchhoff stress over the same boundary in the undeformed configuration. In COMSOL Multiphysics this corresponds to selecting either “Spatial frame” or “Material frame” in the settings for the integration operator. Strain Measures When investigating the uniaxial tensile test above, three different representations of the strain were introduced. It is possible to generalize all of them to multiaxial cases, but for the true strain this is not trivial. It has to be done through a representation in the principal strain directions because that is the only way to take the logarithm of a tensor. The general tensor representation of the logarithmic strain is often called Hencky strain. There are also many other possible representations of the deformation. Any reasonable representation however, must be able to represent a rigid rotation of an unstrained body without producing any strain. The engineering strain fails here, thus it cannot be used for general geometrically nonlinear cases. One common choice for representing large strains is the Green-Lagrange strain. It contains derivatives of the displacements with respect to the original configuration. 
The values therefore represent strains in material directions, similar to the behavior of the Second Piola-Kirchhoff stress. This allows a physical interpretation, but it must be realized that even for a uniaxial case, the Green-Lagrange strain is strongly nonlinear with respect to the displacement. If an object is stretched to twice its original length, the Green-Lagrange strain is 1.5 in the stretching direction. If the object is compressed to half its length, the strain would read -0.375. An even more fundamental quantity is the deformation gradient, \mathbf F, which contains the derivatives of the deformed coordinates with respect to the original coordinates, \mathbf F = \frac{\partial \mathbf x}{\partial \mathbf X}. The deformation gradient contains all information about the local deformation in the solid, and can be used to form many other strain quantities. As an example, the Green-Lagrange strain is \frac{1}{2} (\mathbf{F}^T \mathbf F-\mathbf I). A similar strain tensor, but based on derivatives with respect to coordinates in the deformed configuration, is the Almansi strain tensor, \frac{1}{2} ( \mathbf I-( \mathbf{F} \mathbf F^T)^{-1}). The Almansi strain tensor will then refer to directions fixed in space. Conjugate Quantities A general way to express the continuum mechanics problem is by using a weak formulation. In mechanics this is known as the principle of virtual work, which states that the internal work done by an infinitesimal strain variation operating on the current stresses equals the external work done by a corresponding virtual displacement operating on the loads. The stress and strain measures must then be selected so that their product gives an accurate energy density. This energy density may be related either to the undeformed or deformed volume, depending on whether the internal virtual work is integrated over the original or the deformed geometry. 
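The tensor formulas above are easy to exercise numerically. A sketch checking the worked numbers in the text and the requirement that a rigid rotation produce no strain:

```python
import numpy as np

def green_lagrange(F):
    """E = (F^T F - I) / 2, strains referred to material directions."""
    return 0.5 * (F.T @ F - np.eye(3))

def almansi(F):
    """e = (I - (F F^T)^(-1)) / 2, strains referred to spatial directions."""
    return 0.5 * (np.eye(3) - np.linalg.inv(F @ F.T))

# Uniaxial stretch to twice the length (lambda = 2, other directions fixed):
F = np.diag([2.0, 1.0, 1.0])
print(green_lagrange(F)[0, 0])  # 1.5, as stated in the text
print(almansi(F)[0, 0])         # 0.375

# Compression to half the length gives the text's -0.375:
Fc = np.diag([0.5, 1.0, 1.0])
print(green_lagrange(Fc)[0, 0])  # -0.375

# A pure rotation must produce zero strain in both measures:
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
print(np.allclose(green_lagrange(R), 0))  # True
```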
In the table below, some corresponding conjugate stress-strain pairs are summarized:

Strain | Stress | Symmetry | Volume | Orientation
Engineering strain (based on deformed geometry); True strain; Almansi strain | Cauchy (True stress) | Symmetric | Deformed | Spatial
Engineering strain (based on deformed geometry); True strain; Almansi strain | Kirchhoff | Symmetric | Original | Spatial
Deformation gradient | First Piola-Kirchhoff (Nominal stress) | Non-symmetric | Original | Mixed
Green-Lagrange strain | Second Piola-Kirchhoff (Material stress) | Symmetric | Original | Material

In the Solid Mechanics interface in COMSOL Multiphysics, the principle of virtual work is always expressed in the undeformed geometry (the “Material frame”). Green-Lagrange strains and Second Piola-Kirchhoff stresses are then used. Such a formulation is sometimes called a “Total Lagrangian” formulation. A formulation that is instead based on quantities in the current configuration is called an “Updated Lagrangian” formulation. Additional Resources on Stresses and Strains
2019-10-16 07:36 Working Group 6 Summary: Spin and 3D Structure / Eyser, Oleg (Brookhaven) ; Parsamyan, Bakur (CERN ; INFN, Turin ; Turin U.) ; Rogers, Ted (Old Dominion U.) The spin and 3D structure session of the DIS2019 conference focused on recent efforts to understand nucleon structure using collinear factorization theorems, transverse momentum dependent correlation functions (TMDs), generalized parton distributions (GPDs) and similar objects. A large amount of progress in both theoretical and experimental directions was reported. We summarize some of the highlights here. SISSA, 2019 - 15 p. - Published in : PoS DIS2019 (2019) 284 Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019, pp.284 Detailed record - Similar records 2019-10-16 07:36 Future SIDIS measurements with a transversely polarized deuteron target at COMPASS / Martin, Anna (Trieste U. ; INFN, Trieste)/Compass Since 2005, measurements of Collins and Sivers asymmetries from the HERMES and COMPASS experiments have shown that both the transversity and the Sivers PDFs are different from zero and measurable in semi-inclusive DIS on transversely polarised targets. Most of the data were collected on proton targets, and only small event samples were collected in the early phase of the COMPASS experiment on a deuteron (6LiD) target and more recently at JLab, on 3He. [...] SISSA, 2019 - 6 p. - Published in : PoS DIS2019 (2019) 267 Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019, pp.267 Detailed record - Similar records 2019-10-16 07:36 Search for new physics in CP violation with beauty and charm decays at LHCb / Bartolini, Matteo (Genoa U.
; INFN, Genoa)/LHCb LHCb is one of the four big experiments operating at LHC and it's mainly dedicatedto measurements of $C\!P$ violation and to the search for new physics in the decays ofrare hadrons containing heavy quarks. The LHCb collaboration has recently published a result which shows for the first time a compelling 5.3 $\sigma$ evidence of $C\!P$ violation in the two-body meson $D^{0}\rightarrow K^{+}K^{-}$ and $D^{0}\rightarrow \pi^{+}\pi^{-}$ decays.$C\!P$ violation in the Cabibbo-suppressed decays $D^{+}_{s} \rightarrow K^{0}_{S}\pi^{+}$, $D^{+} \rightarrow K^{0}_{S}K^{+}$, $D^{+} \rightarrow \phi \pi^{+}$ is expected to be small ($\sim10^{-3}$) due to interference between tree and penguin diagrams and thus sensible to contributions Beyond Standard Model (BSM). [...] SISSA, 2019 - 6 p. - Published in : PoS DIS2019 (2019) 250 Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019, pp.250 Notice détaillée - Notices similaires 2019-10-16 07:36 Notice détaillée - Notices similaires 2019-10-16 07:36 Notice détaillée - Notices similaires 2019-10-16 07:36 Heavy flavour spectroscopy and exotic states at the LHC / Cardinale, Roberta (Genoa U. ; INFN, Genoa)/LHCb The LHC, producing huge amount of $b\bar{b}$ and $c\bar c$ pairs, is the ideal place for spectroscopy studies which are fundamental as tests and inputs for QCD models. Many of the recently observed states, which are not fitting the standard picture, are still lacking of interpretation. [...] SISSA, 2019 - 6 p. 
- Published in : PoS DIS2019 (2019) 146 Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019, pp.146 Notice détaillée - Notices similaires 2019-10-16 07:36 Notice détaillée - Notices similaires 2019-10-16 07:36 Z boson production in proton-lead collisions accounting for transverse momenta of initial partons / Kutak, Krzysztof (Cracow, INP) ; Blanco, Etienne (Cracow, INP) ; Jung, Hannes (CERN ; DESY) ; Kusina, Aleksander (Cracow, INP) ; van Hameren, Andreas (Cracow, INP) We report on a recent calculation of inclusive Z boson production in proton-lead collisions at theLHC taking into account the transverse momenta of the initial partons [1]. In the calculation the frameworkof $k_T$-factorization has been used. [...] SISSA, 2019 - 5 p. - Published in : PoS DIS2019 (2019) 126 Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019, pp.126 Notice détaillée - Notices similaires
There are a few points to discuss:

- Since there are $3N$ possible $\{X_{A}\}$, each term where an $X_{A}$ appears will result in $3N$ matrix elements. In your first equation, that will be $S$, $T$, $V$, and the 2-electron contribution, which should really be rewritten into a Fock matrix term $F$ where the ket is precontracted with the density.
- There is nothing wrong with this solution, though it is a little strange because it's in the MO basis, and we normally prefer to work in the AO basis to avoid any unnecessary transformations.
- I'm going to drop the $Ne$ from here on out since $V$ is understood in this context to only be the one-electron nuclear attraction term.
- I think you're asking what happens to the density matrix. The derivative of $V$ expands into multiple parts, as we will see below.

I will reference a bunch of equations that use $h$, where $\left( h = H = H^{\text{core}} \right) \equiv T_{e} + V_{Ne}$, and distributing the derivative over this sum should hopefully be clear. What follows should work for any real-valued one-electron operator. I will use the book by Yamaguchi with liberties taken. In many places I use $p,q$ rather than $i,j$, as these indices run over all MOs, not just occupied ones. Expanding $\frac{\partial V_{ij}}{\partial X_{A}}$ using the product rule gives $$\begin{align*}\frac{\partial V_{ij}}{\partial X_{A}} &= \frac{\partial}{\partial X_{A}} \left( \sum_{\mu\nu}^{\text{AO}} C_{\mu i} C_{\nu j} V_{\mu\nu} \right) \tag{3.80} \\&= \sum_{\mu\nu}^{\text{AO}} \left( \frac{\partial C_{\mu i}}{\partial X_{A}} C_{\nu j} V_{\mu\nu} + C_{\mu i} \frac{\partial C_{\nu j}}{\partial X_{A}} V_{\mu\nu} + C_{\mu i} C_{\nu j} \frac{\partial V_{\mu\nu}}{\partial X_{A}} \right), \label{3.81}\tag{3.81}\end{align*}$$ where the third/last term is the true AO integral derivative, and the first two terms, the MO coefficient derivatives, come from differentiating the density matrix.
The simplest term to write out, but the most difficult to implement, is the AO integral derivative. I will use $\mu,\nu$ rather than $\chi_{\mu},\chi_{\nu}$, so they refer to both AO basis functions and their matrix indices. $$\begin{align*}\frac{\partial V_{\mu\nu}}{\partial X_{A}} &= \frac{\partial}{\partial X_{A}} \left< \mu | \hat{V} | \nu \right> \tag{3.24} \\&= \left< \frac{\partial \mu}{\partial X_{A}} | \hat{V} | \nu \right> + \left< \mu | \frac{\partial \hat{V}}{\partial X_{A}} | \nu \right> + \left< \mu | \hat{V} | \frac{\partial \nu}{\partial X_{A}} \right> \label{3.25}\tag{3.25}\end{align*}$$ What you have presented, $$\frac{\partial \hat{V}}{\partial X_{A}} = -Z_{A} \sum_{i} \frac{X_{i} - X_{A}}{r_{iA}^{3}},$$ is only the derivative of the operator. The basis function derivatives are also required, as the AOs in this case depend on the nuclear positions. Contrast this with an electric field perturbation, where only the derivative of the operator is required, or magnetic field derivatives, where basis functions may be perturbation-dependent if gauge-including atomic orbitals (GIAOs) are used. Also important is that the index $i$ here refers to an electron, not an occupied MO. This is an electric field integral. Now for the derivative of the MO coefficients/density matrix. Clearly they don't appear in the final HF gradient expression. They disappear through the magic of Wigner's $2n + 1$ rule. From page 25: When the wavefunction is determined up to the $n$th order, the expectation value (electronic energy) of the system is resolved, according to the results of perturbation theory, up to the $(2n+1)$st order. This principle is called Wigner's $2n+1$ theorem [29-31]. More explicitly, we have the zeroth-order wavefunction, so we must be able to calculate the first-order correction to the energy.
Worded differently, any first derivative of the energy must be easily calculated without differentiating MO coefficients, which is only required for second derivatives, such as the molecular Hessian or the dipole polarizability. First, rewrite the MO coefficient derivatives $$\frac{\partial C_{\mu i}}{\partial X_{A}} = \sum_{m}^{\text{MO}} U_{mi}^{X_{A}} C_{\mu m}, \tag{3.7}$$ where the index $m$ runs over all occupied and unoccupied/virtual MOs. The same goes for $p,q$. I've replaced $a$ from the text with $X_{A}$, but this holds for any general perturbation (such as $\lambda$, pick your favorite unused index). The key insight is that we can write the effect of a perturbation on the MO coefficients as the contraction of the unmodified MO coefficients with a unitary matrix describing single-particle excitations from occupied to virtual MOs, as well as deexcitations from virtual to occupied MOs. In matrix form, this is $$\mathbf{C}^{(X_{A})} = \mathbf{C}^{(0)} \left( \mathbf{U}^{(X_{A})} \right)^{T},$$ where it is usually easiest to have the dimension of $\mathbf{U}$ be $[N_{\text{orb}}, N_{\text{orb}}]$, with only the occ-virt and virt-occ blocks being non-zero. The problem now becomes solving for $\mathbf{U}$ and eliminating it from the final gradient expression. I will walk through parts of section 4.3 to show how this is done.
Given the energy expression for an RHF wavefunction, $$E = 2 \sum_{i}^{\text{d.o.}} h_{ii} + \sum_{ij}^{\text{d.o.}} \left[ 2(ii|jj) - (ij|ij) \right], \tag{4.1}$$ the first derivative with respect to $X_{A}$ is $$\frac{\partial E_{\text{elec}}}{\partial X_{A}} = 2 \sum_{i}^{\text{d.o.}} h_{ii}^{X_{A}} + \sum_{ij}^{\text{d.o.}} \left[ 2(ii|jj)^{X_{A}} - (ij|ij)^{X_{A}} \right] + 4 \sum_{m}^{\text{all}} \sum_{i}^{\text{d.o.}} U_{mi}^{X_{A}} F_{im}, \label{4.16}\tag{4.16}$$ where the Fock matrix is defined as $$\begin{align*}F_{pq} &= h_{pq} + \sum_{k}^{\text{d.o.}} \left[ 2(pq|kk) - (pk|qk) \right] \tag{4.6} \\&= h_{pq} + 2J_{pq} - K_{pq},\end{align*}$$ and I've introduced the Coulomb and exchange matrices $\mathbf{J}$ and $\mathbf{K}$ as well. I skipped all the steps in expanding the first derivative, and $\eqref{4.16}$ is what results upon collecting terms with $\mathbf{U}$. Using the RHF variational conditions, the Fock matrix from a converged calculation is diagonal in the MO basis, corresponding to the MO energies $$F_{pq} = \delta_{pq} \epsilon_{pq}, \tag{4.7}$$ so $\eqref{4.16}$ simplifies to $$\frac{\partial E_{\text{elec}}}{\partial X_{A}} = 2 \sum_{i}^{\text{d.o.}} h_{ii}^{X_{A}} + \sum_{ij}^{\text{d.o.}} \left[ 2(ii|jj)^{X_{A}} - (ij|ij)^{X_{A}} \right] + 4 \sum_{m}^{\text{all}} \sum_{i}^{\text{d.o.}} U_{mi}^{X_{A}} \epsilon_{im}, \tag{4.17}$$ which can be further simplified as $$\frac{\partial E_{\text{elec}}}{\partial X_{A}} = 2 \sum_{i}^{\text{d.o.}} h_{ii}^{X_{A}} + \sum_{ij}^{\text{d.o.}} \left[ 2(ii|jj)^{X_{A}} - (ij|ij)^{X_{A}} \right] + 4 \sum_{i}^{\text{d.o.}} U_{ii}^{X_{A}} \epsilon_{ii}. \tag{actual 4.17}$$ Now we use one of the most important tricks in quantum chemistry. Given the orthonormality of the MOs, $$S_{pq} = \delta_{pq}, \tag{3.44}$$ we must have $$\frac{\partial S_{pq}}{\partial X_{A}} \overset{!}{=} 0. 
\tag{3.45}$$ This is where not using general notation in the above is a bit confusing because it seems to conflict with your original expression, but remember that similar to $\eqref{3.81}/\eqref{3.25}$, this is in fact multiple terms: two for the basis functions, and two for the MO coefficients, giving $$\begin{align*}\frac{\partial S_{pq}}{\partial X_{A}} &= \sum_{\mu\nu}^{\text{AO}} C_{\mu p} C_{\nu q} \frac{\partial S_{\mu\nu}}{\partial X_{A}} + \sum_{m}^{\text{all}} \left( U_{mp}^{X_{A}} S_{mq} + U_{mq}^{X_{A}} S_{pm} \right) \tag{3.40 + 3.43} \\ &= S_{pq}^{X_{A}} + \sum_{m}^{\text{all}} \left( U_{mp}^{X_{A}} S_{mq} + U_{mq}^{X_{A}} S_{pm} \right). \tag{3.43}\end{align*}$$ The sum over all MOs can be eliminated by reusing the orthonormality condition, so in the first term $m \overset{!}{=} q$ and for the second term $m \overset{!}{=} p$, and the overlap matrix in the MO basis is unity for those terms, giving $$\frac{\partial S_{pq}}{\partial X_{A}} = S_{pq}^{X_{A}} + U_{qp}^{X_{A}} + U_{pq}^{X_{A}} \overset{!}{=} 0. \tag{3.46}$$ Recognizing that we only need diagonal terms, this can be rewritten as $$U_{pp}^{X_{A}} = -\frac{1}{2} S_{pp}^{X_{A}}, \tag{4.20}$$ which is then plugged back into the first derivative expression to give $$\begin{align*}\frac{\partial E_{\text{elec}}}{\partial X_{A}} &= 2 \sum_{i}^{\text{d.o.}} h_{ii}^{X_{A}} + \sum_{ij}^{\text{d.o.}} \left[ 2(ii|jj)^{X_{A}} - (ij|ij)^{X_{A}} \right] + 4 \sum_{i}^{\text{d.o.}} \left( -\frac{1}{2} S_{ii}^{X_{A}} \right) \epsilon_{ii} \\ &= 2 \sum_{i}^{\text{d.o.}} h_{ii}^{X_{A}} + \sum_{ij}^{\text{d.o.}} \left[ 2(ii|jj)^{X_{A}} - (ij|ij)^{X_{A}} \right] - 2 \sum_{i}^{\text{d.o.}} S_{ii}^{X_{A}} \epsilon_{ii}.
\tag{4.21}\end{align*}$$ Rewrite the last term $$\begin{align*}\sum_{i}^{\text{d.o.}} S_{ii}^{X_{A}} \epsilon_{ii} &= \sum_{i}^{\text{d.o.}} \sum_{\mu\nu}^{\text{AO}} C_{\mu i} C_{\nu i} \frac{\partial S_{\mu\nu}}{\partial X_{A}} \epsilon_{ii} \\ &= \sum_{i}^{\text{d.o.}} \sum_{\mu\nu}^{\text{AO}} C_{\mu i} C_{\nu i} \epsilon_{ii} \frac{\partial S_{\mu\nu}}{\partial X_{A}} \\ &= \sum_{\mu\nu}^{\text{AO}} W_{\mu\nu} \frac{\partial S_{\mu\nu}}{\partial X_{A}} \tag{4.24}\end{align*}$$ to use the energy-weighted density matrix $\mathbf{W}$, which in your expression is called $\mathbf{Q}$. Again, the elimination of the $\mathbf{U}$ matrix is one of the most important results in quantum chemistry, as it means the coupled-perturbed SCF equations do not need to be solved for first derivatives of SCF wavefunctions. This is why you do not see any density or MO coefficient derivatives in the gradient expression.

References

- Yamaguchi, Yukio; Goddard, John D.; Osamura, Yoshihiro; Schaefer III, Henry F. A New Dimension to Quantum Chemistry: Analytic Derivative Methods in Ab Initio Molecular Electronic Structure Theory (International Series of Monographs on Chemistry); Oxford University Press: 1994.
- Epstein, Saul T. General Remainder Theorem. J. Chem. Phys. 1968, 48, 4725.
- Epstein, Saul T. Constraints and the $V^{2n+1}$ theorem. Chem. Phys. Lett. 1980, 70, 311.

If you don't have access to the Yamaguchi book, the seminal article on HF derivatives contains a condensed version of this derivation, but be aware of typos.
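As a numerical illustration of Eq. (4.24), here is a small NumPy sketch that builds the energy-weighted density matrix and contracts it with a mock overlap derivative. All matrices are random stand-ins rather than real integrals, and the dimensions `nao`/`nocc` are hypothetical; the point is only the index structure of the contraction:

```python
import numpy as np

rng = np.random.default_rng(0)
nao, nocc = 6, 3                        # toy dimensions (hypothetical)

# Stand-ins for converged RHF quantities: MO coefficients and orbital energies.
C = rng.standard_normal((nao, nao))     # columns = MOs (random, for illustration)
eps = np.sort(rng.standard_normal(nao))
Cocc, eps_occ = C[:, :nocc], eps[:nocc]

# Energy-weighted density matrix, Eq. (4.24): W_uv = sum_i C_ui C_vi eps_i
W = (Cocc * eps_occ) @ Cocc.T

# Stand-in for the skeleton overlap derivative dS/dX_A (symmetric toy matrix).
dS = rng.standard_normal((nao, nao))
dS = 0.5 * (dS + dS.T)

# Overlap (Pulay) contribution to the gradient: -2 * sum_uv W_uv dS_uv
grad_overlap = -2.0 * np.einsum("uv,uv->", W, dS)
```

Note that only occupied orbitals and their energies enter `W`, which is why no coupled-perturbed SCF solution is needed for this term.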
Impedance (Resistance) in AC Circuits concept In order to solve problems in AC circuits we can use Ohm's Law just like we did in DC; however, we need a new idea of "resistance" that takes into account how capacitors and inductors affect current when an AC voltage is applied to them. We call this new concept of resistance "impedance". In this topic we'll see how to calculate the impedance of resistors, capacitors and inductors. In the next topic we'll use these impedances to solve for circuit currents just like we did in DC, but this time we can find solutions to circuits with capacitors and inductors, opening a whole new world of circuit design and analysis. fact The common concept of resistance in DC circuits is replaced with impedance in AC circuits. When working on electrical circuits we use the symbol \(j\) to represent \(\sqrt{-1}\) instead of \(i\) to avoid any confusion with currents. Whenever you see a \(j\) remember that it's the imaginary unit. fact The impedance of a resistor with resistance \(R\) is given by: \(Z = R\) As we'll see in a moment, the impedance of capacitors and inductors changes based on the frequency of the applied voltage. This allows us to create circuits that react differently to different frequencies, like a circuit that removes high frequency sounds from a microphone or a filter for a radar that only lets in signals of the correct frequency and blocks outside noise. fact The impedance of a capacitor with capacitance \(C\) is given by: \(Z = \frac{1}{j\omega C} = \frac{1}{\omega C}\angle -\frac{\pi}{2}\) when operated at radian frequency \(\omega\). Since \(\frac{1}{j} = -j\) we can also say that \(Z = \frac{-j}{\omega C}\). This shows that the impedance of a capacitor decreases as the frequency increases. So at DC a capacitor acts like an open circuit, but at incredibly high frequencies the capacitor acts like a short circuit.
example Find the impedance of a capacitor with capacitance \(C = 16\mu\)F at frequency \(\omega = 120\pi\). We can just plug these values into our formula and find: \(Z = \frac{1}{\omega C}\angle -\frac{\pi}{2}\) \(Z = 166\angle -\frac{\pi}{2}\) fact The impedance of an inductor with inductance \(L\) is given by: \(Z = j\omega L = \omega L\angle \frac{\pi}{2}\) when operated at radian frequency \(\omega\). So at low frequencies the inductor acts like a short circuit, but at larger and larger frequencies the inductor acts more like an open circuit. So we can see that a resistor has purely real impedance but the inductor and capacitor have purely imaginary impedance. example Find the impedance of an inductor with inductance \(L = 10\)mH at frequency \(f = 50\) Hz. Now in this case we need to convert our frequency into radian frequency first. \(\omega = 2\pi f = 100\pi\) And now we can use our formula: \(Z = \omega L \angle \frac{\pi}{2}\) \(Z = 3.14 \angle \frac{\pi}{2}\) fact We call the imaginary part of an impedance the reactance and give it the symbol \(X\). So we can write the impedance as: \(Z = R + jX\) where \(R\) is the resistance and \(X\) is the reactance. fact The reactance of an inductor is given by: \(X_L = \omega L\) fact The reactance of a capacitor is given by: \(X_C = -\frac{1}{\omega C}\) fact Impedance, resistance and reactance are all measured in Ohms (\(\Omega\))
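Both worked examples can be reproduced with complex arithmetic in a few lines of Python (the function names are mine, not from the text):

```python
import cmath
import math

def z_resistor(R):
    return complex(R, 0.0)

def z_capacitor(C, w):
    return 1.0 / (1j * w * C)       # equals -j/(wC)

def z_inductor(L, w):
    return 1j * w * L

# Example 1: C = 16 uF at w = 120*pi rad/s
Zc = z_capacitor(16e-6, 120 * math.pi)
print(abs(Zc), cmath.phase(Zc))     # magnitude ~166 ohms, phase -pi/2

# Example 2: L = 10 mH at f = 50 Hz, so w = 2*pi*f = 100*pi rad/s
Zl = z_inductor(10e-3, 2 * math.pi * 50)
print(abs(Zl), cmath.phase(Zl))     # magnitude ~3.14 ohms, phase +pi/2
```

The magnitudes match the \(166\) and \(3.14\) computed above, and the phases are the \(-\frac{\pi}{2}\) and \(+\frac{\pi}{2}\) angles of the polar forms.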
The Method of Undetermined Coefficients fact Undetermined Coefficients is a method best suited to constant coefficient differential equations of the form: $$ y'' + A_1y' + A_2y = g(t) $$ where \(A_1\) and \(A_2\) are constants. fact Undetermined coefficients will work for only a small number of \(g(t)\)'s; however, these are commonly found, and when it can be used undetermined coefficients is an excellent, and simple, method. fact The idea behind undetermined coefficients is that the "form" of our solution \(y_p(t)\) will be similar to, and based off of, the form of \(g(t)\). By guessing a solution of the right "form" (to be defined later) we can solve for some constants and end up with a complete \(y_p\). fact The method works as follows:

1. Find the complementary solutions to the equation (set \(g(t) = 0\) and solve)
2. Use some simple rules to guess at \(y_p\) from \(g(t)\)
3. If our \(y_p\) is in our complementary solution, multiply it by \(t\)
4. Plug \(y_p\) into the equation and solve for the constants

example Find a particular solution to: $$ y'' - 4y' - 12y = 3e^{5t} $$ First we'll find the complementary solution, so: \(r^2 - 4r - 12 = 0 \implies (r-6)(r+2)=0 \implies r = 6, r = -2\) So \(y_c(t) = c_1e^{6t} + c_2e^{-2t}\) Now we figure that since \(g(t) = 3e^{5t}\) our solution will look a little something like \(y_p(t) = Ae^{5t}\) Plugging this in we'll get: $$ 25Ae^{5t} - 20Ae^{5t} - 12Ae^{5t} = 3e^{5t} $$ $$ -7A = 3 \implies A = \frac{-3}{7} $$ Which means: $$ y_p(t) = \frac{-3}{7}e^{5t} $$ And our total solution will be given by: \(y(t) = c_1e^{6t} + c_2e^{-2t} - \frac{3}{7}e^{5t}\) If we were given some initial conditions we would then use this equation to solve for \(c_1\) and \(c_2\). fact The following is a table of good guesses for \(y_p(t)\) given some \(g(t)\).
\(g(t)\) | \(y_p(t)\)
\(Ce^{at}\) | \(Ae^{at}\)
\(C\cos(at)\) or \(C\sin(at)\) | \(A\cos(at) + B\sin(at)\)
\(C_1\cos(at) + C_2\sin(at)\) | \(A\cos(at) + B\sin(at)\)
\(n^{th}\) degree polynomial | \(A_nt^n + A_{n-1}t^{n-1} + \cdots + A_1t + A_0\)

fact If \(g(t)\) is a linear combination of some of the above examples, take a combination of the example \(y_p(t)\)'s, e.g. for \(g(t) = t + e^t\) guess \(y_p(t) = At + B + Ce^t\). example Find a particular solution to $$ y'' - 4y' - 12y = 2t^3 - t + 3 $$ We found the homogeneous solutions previously; they are: $$ y_h = c_1e^{-2t} + c_2e^{6t} $$ Now with those standing by let's use our \(y_p\) table above to guess at our particular solution. Since \(g(t) = 2t^3 - t + 3\) we're dealing with a polynomial, so our solution will look like: $$ y_p(t) = At^3 + Bt^2 + Ct + D $$ For a polynomial our \(y_p(t)\) will have the same degree and will have all powers below that degree even if they aren't in \(g(t)\), which is why we have a \(t^2\) term in our \(y_p\) even though \(g\) doesn't have one. Now we need \(y_p'(t)\) and \(y_p''(t)\): $$y_p'(t) = 3At^2 + 2Bt + C$$ $$y_p''(t) = 6At + 2B$$ Plugging them into our differential equation: \(6At + 2B - 4(3At^2 + 2Bt + C) - 12(At^3 + Bt^2 + Ct + D) = 2t^3 - t + 3\) \(-12At^3 +(-12A -12B)t^2 + (6A -8B -12C)t + 2B - 4C - 12D = 2t^3 - t + 3\) Setting the coefficients from each side of the \(t^3\) term, \(t^2\) term, etc. equal, we get our system of equations: \(-12A = 2 \implies A = -\frac{1}{6}\) \(-12A - 12B = 0 \implies B = \frac{1}{6}\) \(6A - 8B - 12C = -1 \implies C = -\frac{1}{9}\) \(2B - 4C - 12D = 3 \implies D = -\frac{5}{27}\) Our particular solution is then: $$y_p(t) = -\frac{1}{6}t^3 + \frac{1}{6}t^2 - \frac{1}{9}t - \frac{5}{27}$$
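The particular solution from the example can be checked mechanically. The sketch below represents polynomials as coefficient lists (lowest degree first) and verifies that \(y_p(t)\) satisfies the equation with exact rational arithmetic; the helper names are mine:

```python
# Verify that y_p(t) = -t^3/6 + t^2/6 - t/9 - 5/27 solves
# y'' - 4y' - 12y = 2t^3 - t + 3, using exact fractions.
from fractions import Fraction as F

def deriv(p):
    """Derivative of a polynomial given as [c0, c1, c2, ...]."""
    return [F(k) * c for k, c in enumerate(p)][1:] or [F(0)]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [F(0)] * (n - len(p))
    q = q + [F(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    return [c * a for a in p]

yp = [F(-5, 27), F(-1, 9), F(1, 6), F(-1, 6)]   # -5/27 - t/9 + t^2/6 - t^3/6
yp1 = deriv(yp)
yp2 = deriv(yp1)

# Left-hand side y'' - 4y' - 12y, collected by power of t.
lhs = add(add(yp2, scale(F(-4), yp1)), scale(F(-12), yp))
print(lhs)    # coefficients 3, -1, 0, 2 -- i.e. 3 - t + 2t^3, matching g(t)
```

Reading the result lowest degree first gives exactly the right-hand side \(2t^3 - t + 3\), confirming the four solved constants.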
This question already has an answer here: Linear surjective isometry then unitary. Let $X$ be an inner product space such that $\dim X < \infty$, and let $T \colon X \to X$ be an isometric linear operator. Since $\dim X < \infty$, $X$ is complete and thus a Hilbert space; since $T$ is isometric, $T$ is injective, and hence also surjective and thus bijective, because $\dim X < \infty$. So $T^{-1}$ exists. How can I show that $T$ is unitary, that is, that the Hilbert adjoint operator $T^*$ of $T$ equals $T^{-1}$? Since $X$ is finite-dimensional, we can choose an orthonormal basis for $X$; let $n := \dim X$, and let $\{e_1, \ldots, e_n \}$ be an orthonormal basis for $X$. Then, for each $i, j = 1, \ldots, n$, we have $$\langle Te_i , e_j \rangle = \langle e_i, T^* e_j \rangle,$$ and $$\langle T^* T e_i, e_j \rangle = \langle T e_i , T e_j \rangle = \langle e_i , e_j \rangle = \begin{cases} 1 \ & \mbox{ if } \ i = j \\ 0 \ & \mbox{ if } \ i \neq j. \end{cases} $$ What next?
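Not a proof, but a quick numerical sanity check of the claim: below, a random unitary \(T\) on \(\mathbb{C}^4\) (built from a QR factorization, a standard way to sample one) satisfies both \(T^*T = I\) and \(T^* = T^{-1}\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A random unitary T: the Q factor of a random complex matrix is unitary.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T, _ = np.linalg.qr(A)

Tstar = T.conj().T                           # Hilbert adjoint = conjugate transpose

print(np.allclose(Tstar @ T, np.eye(n)))     # isometry condition T*T = I
print(np.allclose(Tstar, np.linalg.inv(T)))  # hence T* = T^{-1}, i.e. T is unitary
```

The computation in the question, \(\langle T^*Te_i, e_j\rangle = \delta_{ij}\), is exactly the entrywise statement of the first check.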
In the free scalar field theory in 2D conformal field theory, we consider the correlation functions of the derivatives of the fields, i.e. $$\langle \partial \phi(z) \partial \phi(w) \rangle, \tag{1}$$ where we use complex coordinates. Then, since $$:T(w): = - \frac{1}{2} \lim_{z\to w} \left(\partial\phi(z)\partial\phi(w) + \frac{1}{(z-w)^2} \right), \tag{2}$$ we can use the OPEs $T(z)T(w)$ and $\partial\phi(z)\partial\phi(w)$ in order to find the central charge, which turns out to be $$c=1. \tag{3}$$ Now, I am wondering why we look at the derivatives of $\phi$ in the first place. It seems to me that the reason why we look at them is that, without them, the correlation function looks like $$\langle \phi(z) \phi(w) \rangle = - \ln(z-w) - \ln(\bar{z}-\bar{w}), \tag{4}$$ and that such an expression diverges at $z\to\infty$. On the other hand, the correlator $\langle \partial\phi(z)\partial\phi(w)\rangle$ behaves nicely with a $(z-w)^{-2}$ dependency, which makes it vanish (or just converge maybe?) at $\infty$. Now the question is: why is it legitimate to say that the central charge of the free boson is $c=1$? It seems to me that I could take the correlation function of other fields nicely behaving, and I would get a different central charge. Or what am I missing that makes this the definitive choice? Thank you in advance, and sorry if this is a dumb question.
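For reference, the standard Wick-contraction computation with the conventions above, $\langle \partial\phi(z)\partial\phi(w)\rangle = -\frac{1}{(z-w)^2}$, can be sketched as follows (my arrangement of a textbook calculation, not a new result):

```latex
\begin{aligned}
T(z)T(w) &= \tfrac{1}{4}\,{:}\partial\phi\,\partial\phi{:}(z)\;{:}\partial\phi\,\partial\phi{:}(w)\\
&\sim \tfrac{1}{2}\,\big\langle \partial\phi(z)\partial\phi(w)\big\rangle^{2}
   - \big\langle \partial\phi(z)\partial\phi(w)\big\rangle\,{:}\partial\phi(z)\partial\phi(w){:}\\
&\sim \frac{1/2}{(z-w)^{4}} + \frac{2\,T(w)}{(z-w)^{2}} + \frac{\partial T(w)}{z-w},
\end{aligned}
```

where the double contraction (two pairings, each squared propagator) gives the first term, and Taylor expanding the singly contracted normal-ordered product produces the $T$ and $\partial T$ terms. Comparing the leading singularity with the general form $\frac{c/2}{(z-w)^4}$ gives $c = 1$ for the single free boson.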
Rather than thinking of planks as having lengths, think of them as defining certain sets of vectors. So in this case we have (1,0), (0,1), (2,0), (0,2). (Caution: if you have e.g. a plank of length 5 then you need to allow (3,4) and (4,3) as well as (5,0) and (0,5)! [EDITED to add:] No, as pointed out by another user in comments that's wrong because the question specifies orthogonal only. Though obviously you could also do it the other way if you wanted :-).) Now we have a recurrence relation: if we write $N(a,b)$ for the number of ways to span a pond of size $(a,b)$ then we have $N(0,0)=1$ and $N(a,b)=\sum N(a-x,b-y)$ where the sum is over plank-vectors $(x,y)$. For the particular case here, the table looks like this: $$\begin{array}{r}1 & 1 & 2 & 3 & 5 & 8 & 13 & 21 & 34 & 55 & 89 \\1 & 2 & 5 & 10 & 20 & 38 & 71 & 130 & 235 & 420 & 744 \\2 & 5 & 14 & 32 & 71 & 149 & 304 & 604 & 1177 & 2256 & 4266 \\3 & 10 & 32 & 84 & 207 & 478 & 1060 & 2272 & 4744 & 9692 & 19446 \\5 & 20 & 71 & 207 & 556 & 1390 & 3310 & 7576 & 16807 & 36331 & 76850 \\8 & 38 & 149 & 478 & 1390 & 3736 & 9496 & 23080 & 54127 & 123230 & 273653 \\13 & 71 & 304 & 1060 & 3310 & 9496 & 25612 & 65764 & 162310 & 387635 & 900448 \\21 & 130 & 604 & 2272 & 7576 & 23080 & 65764 & 177688 & 459889 & 1148442 & 2782432 \\34 & 235 & 1177 & 4744 & 16807 & 54127 & 162310 & 459889 & 1244398 & 3240364 & 8167642 \\55 & 420 & 2256 & 9692 & 36331 & 123230 & 387635 & 1148442 & 3240364 & 8777612 & 22968050 \\89 & 744 & 4266 & 19446 & 76850 & 273653 & 900448 & 2782432 & 8167642 & 22968050 & 62271384\end{array}$$ The number you want is in the bottom right of the array. This happens to be http://oeis.org/A036355. In general, the generating function for these things is $\frac{1}{1-\sum x^{d_x}y^{d_y}}$ where the sum is over plank-vectors $(d_x,d_y)$. I guess you can probably get a closed form out of that somehow.
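The recurrence is a few lines of dynamic programming; the sketch below rebuilds the table above (function and variable names are mine):

```python
# Count the ways to span an (a, b) pond with orthogonal planks of lengths 1 and 2,
# i.e. with step vectors (1,0), (0,1), (2,0), (0,2), via the recurrence in the text:
# N(0,0) = 1 and N(a,b) = sum over steps of N(a - dx, b - dy).
steps = [(1, 0), (0, 1), (2, 0), (0, 2)]

def span_counts(a, b):
    N = [[0] * (b + 1) for _ in range(a + 1)]
    N[0][0] = 1
    for i in range(a + 1):
        for j in range(b + 1):
            if i == j == 0:
                continue
            N[i][j] = sum(N[i - dx][j - dy]
                          for dx, dy in steps if i >= dx and j >= dy)
    return N

N = span_counts(10, 10)
print(N[10][10])   # bottom-right entry of the table: 62271384
```

The first row and column reproduce the Fibonacci numbers, since a 1-wide strip is just tiled by steps of length 1 and 2.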
Structure Tensor is a matrix in form: $S=\begin{pmatrix} W \ast I_x^2 & W \ast (I_xI_y)\\ W \ast (I_xI_y) & W \ast I_y^2 \end{pmatrix}$ where $W$ is a smoothing kernel (e.g. a Gaussian kernel), $I_x$ is the gradient in the direction of $x$, and so on. Therefore, the size of the structure tensor is $2N \times 2M$ (where $N$ is the image height and $M$ is its width). However, it is supposed to be a $2\times2$ matrix whose eigenvalue decomposition yields $\lambda_1$ and $\lambda_2$, as mentioned in many papers. So, how should the $S$ matrix be calculated?
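One way to see the resolution (offered as a sketch, not an authoritative answer): the maps $W \ast I_x^2$, $W \ast (I_xI_y)$, and $W \ast I_y^2$ are each full-size images, and the structure tensor is a separate $2 \times 2$ matrix at every pixel, assembled from their values there. A NumPy-only sketch, with a 3x3 box filter standing in for the Gaussian $W$ and made-up data:

```python
import numpy as np

def box_smooth(img):
    """3x3 box filter with edge padding -- a stand-in for a Gaussian W."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def structure_tensor_field(I):
    Iy, Ix = np.gradient(I.astype(float))   # image gradients along rows/cols
    Jxx = box_smooth(Ix * Ix)               # W * Ix^2   -- one value per pixel
    Jxy = box_smooth(Ix * Iy)               # W * (Ix Iy)
    Jyy = box_smooth(Iy * Iy)               # W * Iy^2
    # Assemble a 2x2 tensor at every pixel: result has shape (N, M, 2, 2).
    S = np.stack([np.stack([Jxx, Jxy], -1),
                  np.stack([Jxy, Jyy], -1)], -2)
    return S

I = np.random.default_rng(2).random((32, 48))
S = structure_tensor_field(I)
lam = np.linalg.eigvalsh(S)        # per-pixel eigenvalues, shape (N, M, 2)
print(S.shape, lam.shape)          # (32, 48, 2, 2) (32, 48, 2)
```

So the eigen-decomposition is done per pixel (here batched by `eigvalsh` over the trailing $2\times2$ axes), giving a $\lambda_1, \lambda_2$ pair at every image location rather than one decomposition of a $2N \times 2M$ matrix.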
ISSN: 1930-8337 eISSN: 1930-8345 Inverse Problems & Imaging, February 2015, Volume 9, Issue 1 Abstract: We consider the inverse problem of the simultaneous reconstruction of the dielectric permittivity and magnetic permeability functions of the Maxwell's system in 3D with limited boundary observations of the electric field. The theoretical stability for the problem is provided by the Carleman estimates. For the numerical computations the problem is formulated as an optimization problem and a hybrid finite element/difference method is used to solve the parameter identification problem. Abstract: We present a scalable solver for approximating the maximum a posteriori (MAP) point of Bayesian inverse problems with Besov priors based on wavelet expansions with random coefficients. It is a subspace trust region interior reflective Newton conjugate gradient method for bound constrained optimization problems. The method combines the rapid locally-quadratic convergence rate properties of Newton's method, the effectiveness of trust region globalization for treating ill-conditioned problems, and the Eisenstat--Walker idea of preventing oversolving. We demonstrate the scalability of the proposed method on two inverse problems: a deconvolution problem and a coefficient inverse problem governed by elliptic partial differential equations. The numerical results show that the number of Newton iterations is independent of the number of wavelet coefficients $n$ and the computation time scales linearly in $n$. It will be numerically shown, under our implementations, that the proposed solver is two times faster than the split Bregman approach, and it is an order of magnitude less expensive than the interior path following primal-dual method.
Our results also confirm the fact that the Besov $\mathbb{B}_{11}^1$ prior is sparsity promoting, discretization-invariant, and edge-preserving for both imaging and inverse problems governed by partial differential equations. Abstract: In this paper, we consider tomographic reconstruction for axially symmetric objects from a single radiograph formed by fan-beam X-rays. All contemporary methods are based on the assumption that the density is piecewise constant or linear. From a practical viewpoint, this is quite a restrictive approximation. The method we propose is based on high-order total variation regularization. Its main advantage is to reduce the staircase effect while keeping sharp edges and enable the recovery of smoothly varying regions. The optimization problem is solved using the augmented Lagrangian method which has been recently applied in image processing. Furthermore, we use a one-dimensional (1D) technique for fan-beam X-rays to approximate 2D tomographic reconstruction for cone-beam X-rays. For the 2D problem, we treat the cone beam as fan beams located at parallel planes perpendicular to the symmetric axis. Then the density of the whole object is recovered layer by layer. Numerical results in 1D show that the proposed method has improved the preservation of edge location and the accuracy of the density level when compared with several other contemporary methods. The 2D numerical tests show that cylindrically symmetric objects can be recovered rather accurately by our high-order regularization model. Abstract: A novel variational model for deformable multi-modal image registration is presented in this work. As an alternative to the models based on maximizing mutual information, Rényi's statistical dependence measure of two random variables is proposed as a measure of the goodness of matching in our objective functional. The proposed model does not require an estimation of the continuous joint probability density function.
Instead, it only needs observed independent instances. Moreover, the theory of reproducing kernel Hilbert space is used to simplify the computation. Experimental results and comparisons with several existing methods are provided to show the effectiveness of the model. Abstract: In this article, we are interested in the study of the asymptotic behavior, in terms of finite-dimensional attractors, of a generalization of the Cahn--Hilliard equation with a fidelity term (integrated over $\Omega\backslash D$ instead of the entire domain $\Omega$, $D \subset \subset \Omega$). Such a model has, in particular, applications in image inpainting. The difficulty here is that we no longer have the conservation of mass, i.e. of the spatial average of the order parameter $u$, as in the Cahn--Hilliard equation. Instead, we prove that the spatial average of $u$ is dissipative. We finally give some numerical simulations which confirm previous ones on the efficiency of the model. Abstract: Consider the two-dimensional inverse elastic scattering problem of recovering a piecewise linear rigid rough or periodic surface of rectangular type for which the neighboring line segments are always perpendicular. We prove the global uniqueness with at most two incident elastic plane waves by using near-field data. If the Lamé constants satisfy a certain condition, then the data of a single plane wave is sufficient to imply the uniqueness. Our proof is based on a transcendental equation for the Navier equation, which is derived from the expansion of analytic solutions to the Helmholtz equation. The uniqueness results apply also to an inverse scattering problem for non-convex bounded rigid bodies of rectangular type. Abstract: We study the broken ray transform on $n$-dimensional Euclidean domains where the reflecting parts of the boundary are flat and establish injectivity and stability under certain conditions. 
Given a subset $E$ of the boundary $\partial \Omega$ such that $\partial \Omega \setminus E$ is itself flat (contained in a union of hyperplanes), we measure the attenuation of all broken rays starting and ending at $E$ with the standard optical reflection rule applied to $\partial \Omega \setminus E$. By localizing the measurement operator around broken rays which reflect off a fixed sequence of flat hyperplanes, we can apply the analytic microlocal approach of Frigyik, Stefanov, and Uhlmann ([7]) for the ordinary ray transform by means of a local path unfolding. This generalizes the author's previous result in [9], although we can no longer treat reflections from corner points. Similar to the result for the two dimensional square, we show that the normal operator is a classical pseudo differential operator of order $-1$ plus a smoothing term with $C_{0}^{\infty}$ Schwartz kernel. Abstract: We shall derive and propose several efficient overlapping domain decomposition methods for solving some typical linear inverse problems, including the identification of the flux, the source strength and the initial temperature in second order elliptic and parabolic systems. The methods are iterative, and computationally very efficient: only local forward and adjoint problems need to be solved in each subdomain, and the local minimizations have explicit solutions. Numerical experiments are provided to demonstrate the robustness and efficiency of the methods: the algorithms converge globally, even with rather poor initial guesses; and their convergences do not deteriorate or deteriorate only slightly when the meshes are refined. Abstract: A novel method is developed for solving the inverse obstacle scattering problem in near-field imaging. The obstacle surface is assumed to be a small and smooth deformation of a circle. Using the transformed field expansion, the direct obstacle scattering problem is reduced to a successive sequence of two-point boundary value problems. 
Analytical solutions of these problems are derived by a Green's function method. The nonlinear inverse problem is linearized by dropping the higher-order terms in the power series expansion. Based on the linear model and analytical solutions, an explicit reconstruction formula is obtained. In addition, a nonlinear correction scheme is devised to improve the results dramatically when the deformation is large. The method requires only a single incident wave at a fixed frequency. Numerical tests show that the method is stable and effective for near-field imaging of obstacles with subwavelength resolution. Abstract: This paper proposes a novel approach to reconstruct changes in a target conductivity from electrical impedance tomography measurements. As in conventional difference imaging, the reconstruction of the conductivity change is based on electrical potential measurements from the exterior boundary of the target before and after the change. In this paper, however, images of the conductivity before and after the change are reconstructed simultaneously based on the two data sets. The key feature of the approach is that the conductivity after the change is parameterized as a linear combination of the initial state and the change. This allows the spatial characteristics of the background conductivity and of the conductivity change to be modeled independently, by separate regularization functionals. The approach also allows, in a straightforward way, restricting the conductivity change to a localized region of interest inside the domain. While conventional difference imaging reconstruction is based on a global linearization of the observation model, the proposed approach amounts to solving a non-linear inverse problem. The feasibility of the proposed reconstruction method is tested experimentally and with a simulation which demonstrates a potential new medical application of electrical impedance tomography: imaging of vocal folds in voice loading studies.
Abstract: Recently, many practical algorithms have been proposed to recover the sparse signal from fewer measurements. Orthogonal matching pursuit (OMP) is one of the most effective algorithms. In this paper, we use the restricted isometry property to analyze OMP. We show that, under certain conditions based on the restricted isometry property and the signals, OMP will recover the support of the sparse signal when the measurements are corrupted by additive noise. Abstract: The inverse spectral-scattering problems for the radial Schrödinger equation on the half-line are considered with a real-valued integrable potential with a finite moment. It is shown that if the potential is sufficiently smooth in a neighborhood of the origin and its derivatives are known, then it is uniquely determined on the half-line in terms of the amplitude or scattering phase of the Jost function without bound state data, that is, the bound state data is missing. Abstract: In this paper, we propose an iterative method for solving the $\ell_1$-regularized minimization problem $\min_{x\in\mathbb{R}^n} f(x)+\rho^T |x|$, which has important applications in the area of compressive sensing. The construction of our method consists of two main steps: (1) to reformulate an $\ell_1$-problem into a nonsmooth equation $h^\tau(x)={\bf 0}$, where ${\bf 0}\in \mathbb{R}^n$ is the zero vector; and (2) to use $-h^\tau(x)$ as a search direction. The proposed method can be regarded as a spectral residual method because we use the residual as a search direction at each iteration. Under mild conditions, we establish the global convergence of the method with a nonmonotone line search. Some numerical experiments are performed to compare the behavior of the proposed method with three other methods. The results positively support the proposed method. Abstract: The transmission eigenvalue problem on the interval $\left[ a,b\right]$ is considered.
We show that the isolated transmission eigenvalues are continuous functions of the coefficients of the problem and that the transmission eigenfunctions can be normalized so that they depend continuously on all coefficients in the uniform norm. Throughout this work, our results are established without assumptions on the sign of the contrasts.
Theory Notes

Computation of cell dry mass

Definition

The concept of cell dry mass computation was first introduced by Barer [Bar52]. The dry mass \(m\) of a biological cell is defined by its non-aqueous fraction \(f(x,y,z)\) (concentration or density in g/L), i.e. the number of grams of protein and DNA within the cell volume (excluding salts):

\[ m = \iiint f(x,y,z) \, dx \, dy \, dz. \]

The assumption of dry mass computation in QPI is that \(f(x,y,z)\) is proportional to the RI of the cell \(n(x,y,z)\), with a proportionality constant called the refraction increment \(\alpha\) (units [mL/g]) and with the RI of the intracellular fluid \(n_\text{intra}\), a dilute salt solution:

\[ n(x,y,z) = n_\text{intra} + \alpha f(x,y,z). \]

These two equations can be combined to

\[ m = \frac{1}{\alpha} \iiint \left( n(x,y,z) - n_\text{intra} \right) dx \, dy \, dz. \tag{1} \]

In QPI, the RI is measured indirectly as a projected quantitative phase retardation image \(\phi(x,y)\),

\[ \phi(x,y) = \frac{2 \pi}{\lambda} \int \left( n(x,y,z) - n_\text{med} \right) dz, \]

with the vacuum wavelength \(\lambda\) of the imaging light and the refractive index of the cell-embedding medium \(n_\text{med}\). Integrating the above equation over the detector area \((x,y)\) yields (for \(n_\text{med} = n_\text{intra}\))

\[ m = \frac{\lambda}{2 \pi \alpha} \iint \phi(x,y) \, dx \, dy. \]

For a discrete image, this formula simplifies to

\[ m = \frac{\lambda}{2 \pi \alpha} \, \Delta A \sum_i \phi_i \]

with the pixel area \(\Delta A\) and a pixel-wise summation of the phase data.

Relative and absolute dry mass

If however the medium surrounding the cell has a different refractive index (\(n_\text{med} \neq n_\text{intra}\)), then the phase \(\phi\) is measured relative to the RI of the medium \(n_\text{med}\), which causes an underestimation of the dry mass if \(n_\text{med} > n_\text{intra}\). For instance, a cell could be immersed in a protein solution or embedded in a hydrogel with a refractive index of \(n_\text{med}\) = \(n_\text{intra}\) + 0.002. For a spherical cell with a radius of 10µm, the resulting dry mass is underestimated by 46pg. Therefore, it is called “relative dry mass” \(m_\text{rel}\). If the imaged phase object is spherical with the radius \(R\), then the “absolute dry mass” \(m_\text{abs}\) can be computed by splitting equation (1) into relative mass and suppressed spherical mass.
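The discrete dry-mass formula \(m = \lambda \, \Delta A \sum_i \phi_i / (2 \pi \alpha)\) and the 46 pg underestimation figure above can be cross-checked numerically. The sketch below is not part of DryMass; the function name and the uniform example phase image are made up for illustration, and the suppressed mass is evaluated as \(\Delta n \cdot V / \alpha\) for a 10 µm sphere:

```python
import numpy as np

def relative_dry_mass(phase, px_area, wavelength, alpha=0.18e-6 / 1e-3):
    """Relative dry mass m = wavelength / (2*pi*alpha) * dA * sum(phi).

    phase      2D phase image [rad]
    px_area    area of one pixel [m^2]
    wavelength vacuum wavelength [m]
    alpha      refraction increment [m^3/kg] (default 0.18 mL/g)
    """
    return wavelength / (2 * np.pi * alpha) * px_area * phase.sum()

# Hypothetical example: uniform 1 rad phase over 100 pixels, imaged at 550 nm
phase = np.ones((10, 10))
m = relative_dry_mass(phase, px_area=1e-13, wavelength=550e-9)

# Mass suppressed when n_med = n_intra + 0.002 for a 10 um spherical cell:
# delta_m = delta_n * V / alpha, which should come out near 46 pg
V = 4 / 3 * np.pi * (10e-6) ** 3   # cell volume [m^3]
delta_m = 0.002 * V / 1.8e-4       # suppressed mass [kg]
print(m, delta_m)
```

With these (arbitrary) numbers the relative dry mass is a few picograms, and the suppressed spherical mass reproduces the roughly 46 pg quoted above.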
For a visualization of the deviation of the relative dry mass from the actual dry mass for spherical objects, please have a look at the relative vs. absolute dry mass example.

Notes and gotchas

- The default refraction increment in DryMass is \(\alpha\) = 0.18mL/g, as suggested for cells based on the refraction increment of cellular constituents by references [BJ54] and [Bar53]. The refraction increment can be manually set using the configuration key “refraction increment” in the “sphere” section.
- Variations in the refraction increment may occur, and thus the above considerations are not always valid. The refraction increment depends only little on pH and temperature, but may depend strongly on wavelength (e.g. serum albumin \(\alpha_\text{SA@366nm}\) = 0.198mL/g and \(\alpha_\text{SA@656nm}\) = 0.179mL/g) [BJ54].
- The refractive index of the intracellular fluid in DryMass is assumed to be \(n_\text{intra}\) = 1.335, an educated guess based on the refractive index of phosphate buffered saline (PBS), whose osmolarity and ion concentrations match those of the human body.
- Dry mass and actual mass of a cell differ by the weight of the intracellular fluid. This weight difference is defined by the volume of the cell minus the volume of the protein and DNA content. While it seems to be difficult to define a partial specific volume (PSV) for DNA, there appears to be a consensus regarding the PSV of proteins, yielding approximately 0.73mL/g (see e.g. reference [Bar57] as well as [HGC94] and question 843 of the O-manual referring to it). For example, the protein and DNA of a cell with a radius of 10µm and a dry mass of 350pg (cell volume 4.19pL, average refractive index 1.35) occupy approximately 0.73mL/g · 350pg = 0.256pL (assuming the PSVs of protein and DNA are similar). Therefore, the actual volume of the intracellular fluid is 3.93pL (94% of the cell volume), which is equivalent to a mass of 3.93ng, resulting in a total (actual) cell mass of 4.28ng.
Thus, the dry mass of this cell makes up approximately 10% of its actual mass, which leads to a total mass that is about 2% heavier than the equivalent volume of pure water (4.19ng).
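The arithmetic in the 350 pg example above can be reproduced in a few lines. This is a plain numerical cross-check (not DryMass code), using the stated radius, dry mass and protein PSV:

```python
import numpy as np

r = 10e-6        # cell radius [m]
m_dry = 350e-15  # dry mass [kg]
psv = 0.73e-3    # partial specific volume [m^3/kg], i.e. 0.73 mL/g

V_cell = 4 / 3 * np.pi * r**3  # cell volume, ~4.19 pL
V_dry = psv * m_dry            # volume of protein and DNA, ~0.256 pL
V_fluid = V_cell - V_dry       # intracellular fluid volume, ~3.93 pL
m_fluid = 1000.0 * V_fluid     # fluid mass at ~1000 kg/m^3, ~3.93 ng
m_total = m_fluid + m_dry      # total (actual) cell mass, ~4.28 ng
m_water = 1000.0 * V_cell      # same volume of pure water, ~4.19 ng
print(V_dry, m_total, m_total / m_water - 1)
```

Running this reproduces the numbers in the text: a dry volume of about 0.256 pL, a total mass of about 4.28 ng, and an excess of roughly 2% over the equivalent volume of pure water.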
(06-05-2016, 07:45 PM)Schmelzer Wrote: I don't understand your question. In my mathematics, +1 -(-1) + 1 + 1 is always 4, and this has nothing to do with CHSH and whatever ranges or expectation terms. So this sounds like a trick question. But so what, I will have a look at the trick. It's not a trick question. I just want to know if you think that result for CHSH is possible. Yes or no? Depends on what you consider. If you consider a local-realistic theory which fulfills the conditions of the theorem, and talk about the sum \[ S=E(a,b)-E(a,b^{\prime })+E(a^{\prime },b)+E(a^{\prime },b^{\prime }),\] then S=4 is not possible. If you talk about the sum +1 -(-1) + 1 + 1, then it is not only possible, but holds always. If you talk not about computing the sum for the same \(\lambda\), but for four different particular values, without any averaging, then it is possible. If you average enough so that you have \[E(a,b) = \int A(a,\lambda)B(b,\lambda) \rho(\lambda) d\lambda\] with sufficient accuracy and the same functions \(A, B, \rho\) for all four expressions, then not. So, a typical trick question, you do not give enough specification for a unique answer and hope nonetheless for an answer. 06-05-2016, 10:54 PM (This post was last modified: 06-06-2016, 03:55 AM by FrediFizzx.) (06-05-2016, 09:26 PM)Schmelzer Wrote: Depends on what you consider. If you consider a local-realistic theory which fulfills the conditions of the theorem, and talk about the sum \[ S=E(a,b)-E(a,b^{\prime })+E(a^{\prime },b)+E(a^{\prime },b^{\prime }),\] then S=4 is not possible. If you talk about the sum +1 -(-1) + 1 + 1, then it is not only possible, but holds always. If you talk not about computing the sum for the same \(\lambda\), but for four different particular values, without any averaging, then it is possible. 
If you average enough so that you have \[E(a,b) = \int A(a,\lambda)B(b,\lambda) \rho(\lambda) d\lambda\] with sufficient accuracy and the same functions \(A, B, \rho\) for all four expressions, then not. So, a typical trick question, you do not give enough specification for a unique answer and hope nonetheless for an answer. Ok, you say it "depends" so no real answer to the question. But I did perfectly specify what I meant. All that I specified was the CHSH string of expectation terms that can range from -1 to +1. Nothing else was specified. Now, it is pretty easy to see from what I did specify that the answer has to be yes that a result of +1 - (1) + 1 +1 = 4 is possible. What this boils down to is that if the CHSH string of terms are independent from each other, then we can have an inequality, \[ E(a,b)-E(a,b^{\prime })+E(a^{\prime },b)+E(a^{\prime },b^{\prime }) \leq 4.\] And that is the inequality that both QM and all experiments to date have used and have never violated. All one has to do is to look and see that all experiments and QM have always used this inequality with independent expectation terms. I have never seen a counter example to this simple demonstration by mathematical inspection. If what Bell says is right, then there ought to be one. And note that local realistic theories don't even need to enter into this demonstration. Just show how QM or the experiments have ever violated one of Bell's inequalities. You can't; it is impossible. Sorry, no, nobody has ever been interested in this trivial inequality. And therefore it is completely irrelevant what you tell us about this inequality. Name it FrediFizzx-inequality and be happy with it. 06-06-2016, 05:03 AM (This post was last modified: 06-06-2016, 06:15 AM by FrediFizzx.) (06-06-2016, 04:18 AM)Schmelzer Wrote: Sorry, no, nobody has ever been interested in this trivial inequality. And therefore it is completely irrelevant what you tell us about this inequality. 
Name it FrediFizzx-inequality and be happy with it. If nobody has been interested in that inequality, then why is it that it is the one that QM and the experiments all use? Anyways, you should just admit that you can't demonstrate exactly how QM or the experiments violate any of the Bell inequalities. It is quite simple math to show that it is impossible for anything including QM, experiments and LHV models to violate the Bell inequalities. 06-06-2016, 07:15 AM (This post was last modified: 06-06-2016, 07:15 AM by Schmelzer.) (06-06-2016, 05:03 AM)FrediFizzx Wrote: If nobody has been interested in that inequality, then why is it that it is the one that QM and the experiments all use? Nobody uses it. What is used is the CHSH inequality which is \(|S|\le 2\). Which can be derived for Einstein-causal realistic theories. (06-06-2016, 05:03 AM)FrediFizzx Wrote: Anyways, you should just admit that you can't demonstrate exactly how QM or the experiments violate any of the Bell inequalities. It is quite simple math to show that it is impossible for anything including QM, experiments and LHV models to violate the Bell inequalities. What? It is sufficient to compute the QM predictions for the particular experiment. Or to use the experimental results. Of course, they violate the CHSH inequality \(|S|\le 2\), not the trivial FrediFizzx inequality \(|S|\le 4\). (06-06-2016, 07:15 AM)Schmelzer Wrote: (06-06-2016, 05:03 AM)FrediFizzx Wrote: If nobody has been interested in that inequality, then why is it that it is the one that QM and the experiments all use? Nobody uses it. What is used is the CHSH inequality which is \(|S|\le 2\). Which can be derived for Einstein-causal realistic theories. (06-06-2016, 05:03 AM)FrediFizzx Wrote: Anyways, you should just admit that you can't demonstrate exactly how QM or the experiments violate any of the Bell inequalities.
It is quite simple math to show that it is impossible for anything including QM, experiments and LHV models to violate the Bell inequalities. What? It is sufficient to compute the QM predictions for the particular experiment. Or to use the experimental results. Of course, they violate the CHSH inequality \(|S|\le 2\), not the trivial FrediFizzx inequality \(|S|\le 4\). They do use the inequality with the bound of 4. And since it is impossible for anyone to show that Bell's inequalities have ever been violated, it is easy to prove that the inequality with a bound of 4 is being used. Just look at the Wikipedia entry, https://en.wikipedia.org/wiki/Bell%27s_t...redictions When they do the final calculation for the CHSH string, each expectation term is independent. Thus they are using the inequality with the bound of 4. If those expectation terms are dependent on each other like in the Bell-CHSH inequality with a bound of 2, it is impossible to get anything greater than 2. It really is just simple mathematics. Sorry, but this makes no sense at all. At least I'm unable to give this text a meaningful interpretation. (06-06-2016, 08:03 AM)Schmelzer Wrote: Sorry, but this makes no sense at all. At least I'm unable to give this text a meaningful interpretation. It is really quite simple. In the Wikipedia entry, when they calculated the CHSH string of expectation terms there was no dependency between the terms like there is in Bell-CHSH. Therefore they used the inequality with the bound of 4, which I easily demonstrated via simple mathematical inspection. IOW, they cheated when they claim there is a violation of Bell-CHSH. It's apples and oranges. Pure and simple. Believe me, it is mathematically impossible for anything to violate any of the Bell inequalities. You can't do it, which is why you are not even trying to demonstrate that it is possible. If you think you can do it, then let's see a rigorous demonstration mathematically. And then I will show you where you make your mistake.
No, they used no inequality at all, but computed what quantum theory predicts. The result was something greater than 2, and thus violated the Bell-CHSH inequality, which would hold in an Einstein-causal realistic theory. The whole point is that QT is not equivalent to any Einstein-causal realistic theory. So, of course, if they compute the QT prediction, they use the QT rules and not some inequality which holds for Einstein-causal realistic theories. Of course, nothing can violate mathematical proofs, which is a triviality. But you can have theories which violate the assumptions of the Bell-CHSH theorems, and thus can violate the resulting inequalities.
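Whatever one makes of the framing above, the numerical difference between the two bounds is easy to exhibit. The following sketch is mine, not from either poster: it evaluates \(S = E(a,b) - E(a,b') + E(a',b) + E(a',b')\) at the standard angles, once with the quantum singlet prediction \(E(a,b) = -\cos(a-b)\) and once with an illustrative local hidden-variable model in which the same \(\lambda\) enters all four correlations. The QM value reaches \(2\sqrt{2} \approx 2.83 > 2\), while the shared-\(\lambda\) model stays within \(|S| \le 2\):

```python
import numpy as np

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4  # standard CHSH angles

def S(E):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Quantum prediction for the singlet state
E_qm = lambda x, y: -np.cos(x - y)

# An illustrative local hidden-variable model: the same uniformly
# distributed lambda enters all four expectation values
lam = np.linspace(0, 2 * np.pi, 100000, endpoint=False)
A = lambda x: np.sign(np.cos(x - lam))    # outcome +/-1 at detector 1
B = lambda y: -np.sign(np.cos(y - lam))   # outcome +/-1 at detector 2
E_lhv = lambda x, y: np.mean(A(x) * B(y))

print(abs(S(E_qm)), abs(S(E_lhv)))  # ~2.828 vs at most ~2
```

The hidden-variable model here is just a textbook sign-correlation example; any other choice of \(A(a,\lambda)\), \(B(b,\lambda)\), \(\rho(\lambda)\) with shared \(\lambda\) would likewise respect \(|S| \le 2\).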
Consider the free real scalar field in (1+1)-dimensions. Suppose the (normal-ordered) Hamiltonian is $$ \hat{H}=\int_{-\infty}^{\infty} dk\ \sqrt{ k^2 + m^2 }\, \hat{a}_{k}^{\dagger} \hat{a}_{k} $$ where $\hat{a}_{k}$ and $\hat{a}_{k}^{\dagger}$ are the annihilation and creation operators for ordinary Minkowski particles. Consider the wave-packets $f_{MN}:\mathbb{R} \to \mathbb{C}$ defined for all $N,M \in \mathbb{Z}$ as $$ f_{MN}(k) = \begin{cases} \frac{1}{\sqrt{\mathcal{E}}} e^{- 2 \pi i N \frac{k}{\mathcal{E}}} \ \ \ \ \ \ \ , \ k\in\left[ (M-\frac{1}{2})\mathcal{E} , (M+\frac{1}{2})\mathcal{E} \right] \\ \ 0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ , \ \mathrm{otherwise} \end{cases} $$ Such a wave-packet concentrates the momentum around $\sim M\mathcal{E}$ with a width of $\sim \mathcal{E}$. These wave-packets are orthonormal and complete in the sense that $$ \int_{-\infty}^{\infty} dk \ f^{\ast}_{NM}(k) f_{IJ}(k) = \delta_{NI}\delta_{MJ} \\ \sum_{N,M} f^{\ast}_{NM}(k)f_{NM}(p) = \delta(k-p) $$ Consider the smeared operators $$ \hat{a}_{MN} \equiv \int_{-\infty}^{\infty} dk\ f^{\ast}_{MN}(k) \hat{a}_{k}, $$ which can be inverted via the orthonormality conditions stated above to give $$ \hat{a}_{k} = \sum_{M,N} f_{MN}(k) \hat{a}_{MN}. $$ MY QUESTION: Is it possible to put the Hamiltonian into the form $\sum_{N,M} E_{MN} \hat{a}^{\dagger}_{MN}\hat{a}_{MN}$ for some energies $E_{MN}$? I think that if this is possible at all, then it would have to rely on an approximation (maybe something like $\mathcal{E} \ll m$). So far I have been able to get the Hamiltonian into the form $$ \hat{H} = \sum_{ M N J} \hat{a}^{\dagger}_{MN} \hat{a}_{MJ} \int_{(M-\frac{1}{2})\mathcal{E}}^{(M+\frac{1}{2})\mathcal{E}} dk\ \sqrt{ k^2 + m^2 }\, \frac{1}{\mathcal{E}} e^{- 2 \pi i (N - J) k/\mathcal{E} }, $$ but this integral is too difficult for me.
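The stated orthonormality of the wave-packets can at least be sanity-checked numerically (this only checks the definitions, it does not answer the question; the choices \(\mathcal{E} = 1\), the finite grid and the plain Riemann sum are arbitrary):

```python
import numpy as np

EPS = 1.0  # the bin width (called E in the question); arbitrary for this check

def f(M, N, k):
    """Wave-packet f_MN(k): a Fourier mode windowed to momentum bin M."""
    inside = (k >= (M - 0.5) * EPS) & (k <= (M + 0.5) * EPS)
    return inside / np.sqrt(EPS) * np.exp(-2j * np.pi * N * k / EPS)

k = np.linspace(-2.0, 2.0, 40001)  # grid covering bins M = -1, 0, 1
h = k[1] - k[0]

def inner(M, N, I, J):
    """Riemann-sum approximation of the inner product of f_MN and f_IJ."""
    return np.sum(np.conj(f(M, N, k)) * f(I, J, k)) * h

print(inner(0, 0, 0, 0))  # ~1: normalization
print(inner(0, 0, 0, 1))  # ~0: same bin, different Fourier mode
print(inner(0, 0, 1, 0))  # ~0: disjoint bins
```

Up to quadrature error at the bin edges, the diagonal inner products come out near 1 and the off-diagonal ones near 0, consistent with \(\delta_{NI}\delta_{MJ}\).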
In the tutorial Integration by parts – indefinite integrals we have seen how to solve an integral symbolically. In this tutorial we are going to learn how to solve an integral numerically and what is the meaning of a simple integral.

Mathematical definition of the definite integral

In mathematics, the definition of a definite integral is the following: for a given continuous function f(x), of a real variable x, defined on an interval [a, b], the definite integral is:

\[\int_a^b f(x) \, dx = F(b) - F(a)\]

where F(x) is the antiderivative (the function we get after solving an integral). Numerically, an integration is an accumulation / summation. When we say that we are integrating a function, we are adding up the value of the function in several discrete intervals.

Graphical representation of the definite integral

Let’s imagine that the plot of a general function f(x), on the interval [a, b], is something as depicted in the image below. The value of the integral of the function f(x), on the interval [a, b], is the area between the plot of the function and the horizontal axis. The area can be positive or negative depending on the sign of y = f(x). The final result will be the sum of all positive and negative areas.

Properties of the definite integral

The value of a definite integral over an interval of length zero, a = b, is zero. If a is a real number then:

\[\int_a^a f(x) \, dx = 0\]

The value of a definite integral changes sign if the integration limits are reversed. If a > b we get:

\[\int_a^b f(x) \, dx = - \int_b^a f(x) \, dx\]

If a point c ∈ [a, b], the following relationship is true:

\[\int_a^b f(x) \, dx = \int_a^c f(x) \, dx + \int_c^b f(x) \, dx\]

How to calculate a definite integral

Step 1. Calculate the antiderivative F(x)
Step 2. Calculate the values of F(b) and F(a)
Step 3. Calculate F(b) - F(a).

Example 1. Let’s calculate the definite integral of the function f(x) = x - 1, on the interval [1, 10].

Step 1. \[F(x) = \int (x-1) \, dx = \frac{x^2}{2} - x\]

Step 2. \[ \begin{split} F(10) &= \frac{10^2}{2} - 10 = 40\\ F(1) &= \frac{1^2}{2} - 1 = -0.5 \end{split} \]

Step 3. \[F(10)-F(1)=40-(-0.5)=40.5\]
In order to demonstrate that the value of the definite integral is the area delimited by the plot of the function and the horizontal axis, we are going to use the following Scilab script:

deff('y=f(x)','y=x-1'); x=1:10; plot(x,f(x),'LineWidth',6); xgrid(); ha=gca(); ha.data_bounds=[0,f(x(1));x($),f(x($))]; xp = [x(1) x($) x($)]; yp = [f(x(1)) f(x(1)) f(x($))]; xfpoly(xp,yp,12); xlabel('x','FontSize',3); ylabel('f(x) = x-1','FontSize',3); title('x-engineer.org','FontSize',2,'Color','blue')

Running the script gives the following graphical window: As you can see, the area between the plot of the function and the horizontal axis, for the interval [1, 10], is a right triangle. The base of the triangle has length 9. The height of the triangle is also 9. Therefore, the area of the triangle is:

\[A = \frac{9 \cdot 9}{2} = 40.5\]

As you can see, the area of the triangle is the same as the value of the definite integral.

Example 2. Let’s calculate the definite integral of the function f(x) = sin(x), on the interval [0, 2π].

Step 1. \[F(x) = \int \sin(x) \, dx = -\cos(x)\]

Step 2. \[ \begin{split} F(2 \pi) &= -\cos(2 \pi) = -1\\ F(0) &= -\cos(0) = -1 \end{split} \]

Step 3. \[F(2 \pi)-F(0)=-1-(-1)=0\]

In order to demonstrate that the value of the definite integral is the area delimited by the plot of the function and the horizontal axis, we are going to use the following Scilab script:

deff('y=f(x)','y=sin(x)'); x=0:0.01:2*%pi; plot(x,f(x),'LineWidth',2); plot([0 2*%pi],[0 0],'r--','LineWidth',2) xgrid(); xlabel('x','FontSize',3); ylabel('f(x) = sin(x)','FontSize',3); title('x-engineer.org','FontSize',2,'Color','blue')

Running the script gives the following graphical window: The total area between the plot of the function and the horizontal axis is zero. The area from π to 2π is equal in magnitude but opposite in sign to the area from 0 to π. Therefore the sum of these two areas is zero. For any questions, observations and queries regarding this article, use the comment form below. Don’t forget to Like, Share and Subscribe!
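Since the point of this tutorial is solving integrals numerically, both examples can also be checked without computing the antiderivative at all, e.g. with a simple trapezoidal rule. The following Python sketch is a cross-check and is not part of the original Scilab scripts:

```python
import numpy as np

def trapezoid(f, a, b, n=10000):
    """Approximate the definite integral of f on [a, b] with n trapezoids."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# Example 1: integral of x - 1 on [1, 10] should give 40.5
print(trapezoid(lambda x: x - 1, 1, 10))

# Example 2: integral of sin(x) on [0, 2*pi] should give 0
print(trapezoid(np.sin, 0, 2 * np.pi))
```

The trapezoidal rule is exact for the linear function of Example 1, and for Example 2 the positive and negative areas cancel, so both results agree with the antiderivative calculation.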