Reversing a matrix

Ok I think this should be easy, if I can explain it properly. Let's say I had a 2*2 matrix = matrix A, and I was multiplying it by a 2*1 matrix = matrix B, and it would produce a 2*1 matrix = matrix C. Right? But let's say I want to go back the other way, reverse it... I have matrix C and I have matrix A, but I need to find matrix B.

I'd better explain this as well, as I'm not sure the rules are exactly the same. I'm using a nominal pi model to analyse a transmission line. The layout is like this:

Vs | A B | Vr
Is | C D | Ir

where Vs = A*Vr + B*Ir and Is = C*Vr + D*Ir. But now I need to find Vr and Ir. I have A, B, C, D and I have Vs and Is. At first I thought, well, I'll just rearrange the equation to find Vr. But 2 of the variables are missing when I do that... Any help would be great guys. Thx

So basically, you want to solve a matrix equation of the form AB = C for B? If A is invertible then $\displaystyle B= A^{-1}C$. If A is not invertible then there is no solution. Write $\displaystyle A= \begin{bmatrix}a & b \\ c & d \end{bmatrix}$, $\displaystyle B= \begin{bmatrix}x \\ y \end{bmatrix}$, and $\displaystyle C= \begin{bmatrix}p \\ q \end{bmatrix}$. A is invertible if and only if its determinant, ad - bc, is not 0 and, in that case, its inverse is $\displaystyle A^{-1}= \frac{1}{ad- bc}\begin{bmatrix}d & -b \\ -c & a\end{bmatrix}$ and $\displaystyle B= A^{-1}C= \frac{1}{ad- bc}\begin{bmatrix}d & -b \\ -c & a\end{bmatrix}\begin{bmatrix}p \\ q \end{bmatrix}= \frac{1}{ad- bc}\begin{bmatrix}dp- bq \\ aq- cp\end{bmatrix}$.

Or you could simply treat the matrix equation as a system of two equations in two unknowns. We have $\displaystyle \begin{bmatrix}a & b \\ c & d \end{bmatrix}\begin{bmatrix}x \\ y \end{bmatrix}= \begin{bmatrix}ax+ by \\ cx+ dy\end{bmatrix}= \begin{bmatrix}p \\ q\end{bmatrix}$. That is equivalent to the two equations ax + by = p and cx + dy = q. Multiply the first equation by d to get adx + bdy = pd and multiply the second equation by b to get bcx + bdy = bq. The coefficients of y are now the same, bd, so subtracting one equation from the other eliminates y: (ad - bc)x = pd - bq. If ad - bc is not 0, divide both sides by ad - bc to get $\displaystyle x= \frac{pd- bq}{ad- bc}$ as before. If ad - bc = 0 and pd - bq is not 0, there is no x that makes that equation true. If both ad - bc = 0 and pd - bq = 0 then any x works.

Much the same thing is true of $\displaystyle V_s = AV_r + BI_r$ and $\displaystyle I_s = CV_r + DI_r$. Multiply the first equation by D to get $\displaystyle DV_s= ADV_r+ BDI_r$ and multiply the second equation by B to get $\displaystyle BI_s= BCV_r+ BDI_r$. Now $\displaystyle I_r$ has the same coefficient in both equations, so subtracting one equation from the other eliminates $\displaystyle I_r$: $\displaystyle DV_s- BI_s= (AD- BC)V_r$ and $\displaystyle V_r= \frac{DV_s- BI_s}{AD- BC}$, assuming, of course, that AD - BC is non-zero. Then $\displaystyle V_s= AV_r+ BI_r= \frac{ADV_s- ABI_s}{AD- BC}+ BI_r$ so $\displaystyle BI_r= V_s- \frac{ADV_s- ABI_s}{AD- BC}= \frac{ADV_s- BCV_s- ADV_s+ ABI_s}{AD- BC}= \frac{ABI_s- BCV_s}{AD- BC}$.

Quote: Beautiful, Country Boy. I've written it all out, substituting in numbers on one side, with the equations on the other. I think I understand most of it now. This bit at the end, however, seems to be off by a factor of 2.
Quote: $\displaystyle V_s = AV_r + BI_r$ = $\displaystyle 92 = 12*3 + 14*4$ and $\displaystyle I_s = CV_r + DI_r$ = $\displaystyle 85 = 11*3 + 13*4$. Therefore: $\displaystyle BI_r = 28 $. But: $\displaystyle (ABI_s - BCV_s) / (AD-BC) $ gives $\displaystyle 112/2 = 56$.

Quote: How about my example, Country Boy... Those numbers don't meet any of your conditions. However: $\displaystyle BI_r ≠ (ABI_s−BCV_s)/(AD−BC)$. Although it is off by a factor of 2, which is the answer I'm looking for... Am I supposed to guess something here? My math is ok when I work on it, but it takes a lot of effort. I'm not naturally gifted with it, like your good self. My mathematical intuition = 0 :( Can you help me take this final step?

Quote: The fact is that if $A$ has full rank, then $AB = C$ may have a unique solution even if $A$ is not invertible. This is easily seen by noticing that $A^TA$ is always invertible, where $A^T$ is the conjugate transpose. Now suppose $Ax = b$ where $x,b$ are vectors; then you can easily see that a formula for $x$ is given by \[x = (A^TA)^{-1} A^T b. \] In this case, $(A^TA)^{-1}A^T$ is the weak inverse for $A$, which is also a left inverse (sometimes called a pseudo-inverse). Notice that if $A$ is not square, then it doesn't even have a determinant, but that has nothing to do with the question of invertibility.

While this highbrow mathematical discussion is, I can guess... absolutely fascinating, there's a n00b engineer over here with a transmission line to analyse. :rolleyes: Can one of you geniuses climb down from the lofty tower and explain to lil old me how to find $\displaystyle I_r$? Please...

Wait... Inspiration has come... I think... $\displaystyle I_r= (CV_s-AI_s)/(CB-AD)$

Well I think your math is great, Country Boy. Never mind the haters. :D
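For anyone wanting to check the algebra numerically, here is a minimal sketch (the class name is made up; the ABCD values are the ones from the thread):

```java
// Solve [Vs; Is] = [A B; C D] [Vr; Ir] for Vr and Ir by inverting the 2x2 system.
public class TwoPortSolve {
    public static void main(String[] args) {
        double A = 12, B = 14, C = 11, D = 13;   // example values from the thread
        double Vs = 92, Is = 85;
        double det = A * D - B * C;              // must be non-zero (here: 2)
        double Vr = (D * Vs - B * Is) / det;     // prints 3.0
        double Ir = (A * Is - C * Vs) / det;     // same as (C*Vs - A*Is)/(CB - AD); prints 4.0
        System.out.println("Vr = " + Vr + ", Ir = " + Ir);
    }
}
```

Note that with these numbers B*Ir = 14*4 = 56, which matches (AB*Is − BC*Vs)/(AD − BC) = 112/2 = 56 exactly; the factor-of-2 discrepancy in the thread comes from the hand computation of BI_r = 28, not from the formula.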
2.2 Libraries and Clients

Each program that you have written consists of Java code that resides in a single .java file. For large programs, keeping all the code in a single file is restrictive and unnecessary. Fortunately, it is very easy in Java to refer to a method in one file that is defined in another. This ability has two important consequences for our style of programming: it allows us to extend the Java language by developing libraries of static methods for use by any other program, keeping each library in its own file, and it enables modular programming, where we divide a program up into static methods, grouped together in some logical way.

Using static methods in other programs. To refer to a static method in one class that is defined in another, you must make both classes accessible to Java (for example, by putting them both in the same directory on your computer). Then, to call a method, prepend its class name and a period separator. For example, SAT.java calls the cdf() method in Gaussian.java, which calls the pdf() method, which calls the exp() and sqrt() methods in Math. We describe several details about the process.

The public keyword. The public modifier identifies the method as available for use by any other program. You can also identify methods as private, but you have no reason to do so at this point.

Each module is a class. We use the term module to refer to all the code that we keep in a single file. By convention, each module is a Java class that is kept in a file with the same name as the class but with a .java extension. In this chapter, each class is merely a set of static methods.

The .class file. When you compile the program, the Java compiler makes a file with the class name followed by a .class extension that has the code of your program in a language more suited to your computer.

Compile when necessary. When you compile a program, Java typically compiles everything that needs to be compiled in order to run the program. For example, when you type javac SAT.java, the compiler will also check whether you modified Gaussian.java since the last time it was compiled. If so, it will also compile Gaussian.

Multiple main() methods. Both SAT.java and Gaussian.java have their own main() methods. When you type java followed by a class name, Java transfers control to the machine code corresponding to the main() method defined in that class.

Libraries. We refer to a module whose methods are primarily intended for use by many other programs as a library.

Clients. We use the term client to refer to a program that calls a given library method.

APIs. Programmers normally think in terms of a contract between the client and the implementation that is a clear specification of what the method is to do.

Implementations. We use the term implementation to describe the Java code that implements the methods in an API. As an example, Gaussian.java is an implementation of the following API:

Random numbers. StdRandom.java is a library for generating random numbers from various distributions.

Input and output for arrays. StdArrayIO.java is a library for reading arrays of primitive types from standard input and printing them to standard output.
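To make the call pattern concrete, here is a stripped-down, hypothetical pair of files (much simpler than the booksite's actual Gaussian.java, which is only sketched here):

```java
// Gaussian.java (library): a set of static methods.
public class Gaussian {
    // standard Gaussian probability density function
    public static double pdf(double x) {
        return Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI);
    }
}
```

```java
// PdfClient.java (client, hypothetical name): calls the library method
// by prepending the class name and a period separator.
public class PdfClient {
    public static void main(String[] args) {
        double x = Double.parseDouble(args[0]);
        System.out.println(Gaussian.pdf(x));   // class name + period + method
    }
}
```

Compiling the client with javac PdfClient.java also compiles Gaussian.java if needed, as described above.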
Iterated function systems. An iterated function system (IFS) is a general way to produce fractals like the Sierpinski triangle or the Barnsley fern. As a first example, consider the following simple process: start by plotting a point at one of the vertices of an equilateral triangle. Then pick one of the three vertices at random and plot a new point halfway between the point just plotted and that vertex. Continue performing the same operation. Sierpinski.java simulates this process. Below are snapshots after 1,000, 10,000, and 100,000 steps. IFS.java is a data-driven program that simulates a generalization of this process. You can run it on the inputs sierpinski.txt, barnsley.txt, tree.txt, and coral.txt.

Standard statistics. StdStats.java is a library for statistical calculations and basic visualizations.

Bernoulli trials. Bernoulli.java counts the number of heads found when a fair coin is flipped n times and compares the result with the predicted Gaussian distribution function. According to the central limit theorem, the resulting histogram is extremely well approximated by the Gaussian distribution with mean n/2 and variance n/4.

Exercises

Add to Gaussian.java an implementation of the three-argument static method pdf(x, mu, sigma) specified in the API that computes the Gaussian probability density function with a given mean μ and standard deviation σ, based on the formula \(\phi(x, \mu, \sigma)\) = \(\phi((x - \mu) / \sigma) / \sigma\). Also add an implementation of the associated cumulative distribution function cdf(z, mu, sigma), based on the formula \(\Phi(z, \mu, \sigma)\) = \(\Phi((z - \mu) / \sigma)\).

Write a library of static methods Hyperbolic.java that implements the hyperbolic functions based on the definitions \(\sinh(x) = (e^x - e^{-x}) / 2\) and \(\cosh(x) = (e^x + e^{-x}) / 2\), with \(\tanh(x)\), \(\coth(x)\), \(\text{sech}(x)\), and \(\text{csch}(x)\) defined in a manner analogous to the standard trigonometric functions.

Add to StdRandom.java a method shuffle() that takes an array of double values as argument and rearranges them in random order. Implement a test client that checks that each permutation of the array is produced about the same number of times. Add overloaded methods that take arrays of integers and strings.

Develop a full implementation of StdArrayIO.java (implement all 12 methods indicated in the API).

Write a library Matrix.java that implements the following API:

Write a Matrix.java client MarkovSquaring.java that implements the version of Markov.java described in Section 1.6 but is based on squaring the matrix, instead of iterating the vector-matrix multiplication.

Creative Exercises

Sicherman dice. Suppose that you have two six-sided dice, one with faces labeled 1, 3, 4, 5, 6, and 8 and the other with faces labeled 1, 2, 2, 3, 3, and 4. Write a program Sicherman.java to compare the probabilities of occurrence of each of the values of the sum of the dice with those for a standard pair of dice. Use StdRandom and StdStats. Solution: dice with these properties are called Sicherman dice: they produce sums with the same frequency as regular dice (2 with probability 1/36, 3 with probability 2/36, and so on).

Web Exercises

Sample standard deviation. The sample standard deviation of a sequence of n observations is defined similarly to the standard deviation, except that we divide by n−1 instead of n. Add a method sampleStddev() that computes this quantity.

Barnsley fern. Write a program Barnsley.java that takes a command-line argument N and plots a sequence of N points according to the following rules. Set (x, y) = (0.5, 0). Then update (x, y) to one of the following four quantities according to the probabilities given.
PROBABILITY   NEW X                            NEW Y
2%            0.5                              0.27y
15%           -0.139x + 0.263y + 0.57          0.246x + 0.224y - 0.036
13%           0.170x - 0.215y + 0.408          0.222x + 0.176y + 0.0893
70%           0.781x + 0.034y + 0.1075         -0.032x + 0.739y + 0.27

The pictures below show the results after 500, 1000, and 10,000 iterations.

Black-Scholes. The Black-Scholes model predicts that the asset price at time t will be S' = S exp(rt - 0.5*sigma^2*t + sigma*epsilon*sqrt(t)), where epsilon is a standard Gaussian random variable. You can use Monte Carlo simulation to estimate the value of the option: to estimate its value at time T, compute max(S' - X, 0) and take the mean over many trials of epsilon. The value of the option today is e^(-rT) times that mean. A European put has payoff max(X - S', 0); reuse your function. Name your program BlackScholes.java. See Exercise 2.1.30 for an exact formula for this case.

Simulation. Application: a simulation that uses StdRandom and StdStats to flip coins and analyze mean/variance (for example, a physics simulation, or a financial one based on a Black-Scholes hedging simulation). Simulation is needed to price options whose payoff depends on the price path, not just the price at the maturity time T. Examples: an Asian average-price call pays max(0, S_bar - X), where S_bar is the average price of the asset from time 0 to T; a lookback option pays max(0, S(T) - min_t S(t)). Idea: discretize time into N periods, and break the simulation up into various pieces encapsulated as functions.

Flaming fractals. Implement a generalization of IFS to produce fractal flames like Water Lilies by Roger Johnston. Flaming fractals differ from classic IFS by using nonlinear update functions (sinusoidal, spherical, swirl, horseshoe), using a log-density display to color pixels according to how many times they occur in the process, and incorporating color based on which rule was applied to get to that point.

Random point on a sphere. Use StdRandom.gaussian() to generate a random point on the surface of a sphere or hypersphere using the following method: generate N random values from the Gaussian distribution, x[0], ..., x[N-1]. Then (x[0]/scale, ..., x[N-1]/scale) is a random point on the N-dimensional sphere, where scale = sqrt(x[0]^2 + ... + x[N-1]^2).

Coupon collector. Write a modular program CouponExperiment.java that runs experiments to estimate the value of the quantity of interest in the coupon collector problem. Compare the experimental results from your program with the mathematical analysis, which says that the expected number of coupons collected before all N values are found should be about N times the Nth harmonic number (1 + 1/2 + 1/3 + ... + 1/N) and the standard deviation should be about N π / sqrt(6).

Exponential distribution. Add a method exp() to StdRandom.java that takes an argument λ and returns a random number from the exponential distribution with rate λ. Hint: if x is a random number uniformly distributed between 0 and 1, then -ln(x) / λ is a random number from the exponential distribution with rate λ.
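A minimal sketch of that last exercise, written as a standalone class rather than inside the real StdRandom (inverse transform sampling, per the hint):

```java
public class ExpSampler {
    // Returns a draw from the exponential distribution with rate lambda.
    public static double exp(double lambda) {
        // If u ~ Uniform(0, 1), then -ln(u) / lambda ~ Exponential(lambda).
        return -Math.log(Math.random()) / lambda;
    }

    // Quick check: the sample mean should approach 1/lambda.
    public static void main(String[] args) {
        double lambda = 2.0, sum = 0;
        int trials = 1000000;
        for (int i = 0; i < trials; i++) sum += exp(lambda);
        System.out.println(sum / trials);   // should be close to 0.5
    }
}
```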
If $[\,\cdot\,]$ denotes the greatest integer function, then find the value of $\lim_{n \to \infty} \frac{[x] + [2x] + [3x] + \dots + [nx]}{n^2}$. What I did was, I wrote each greatest integer function $[x]$ as $x - \{x\}$, where $\{\cdot\}$ is the fractional part. Hence, you get $\lim_{n \to \infty} \frac{\frac{n(n+1)}{2}(x-\{x\})}{n^2}$. The limit should then evaluate to $\frac{x-\{x\}}{2}$. But the answer given is $\frac{x}{2}$. What am I missing here?
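One way to see where the rewrite breaks down: $[kx] = kx - \{kx\}$, which is not $k(x - \{x\})$ in general. A squeeze that sidesteps the fractional parts altogether uses $kx - 1 < [kx] \le kx$ for each $k$:

$$\frac{x}{2}\cdot\frac{n(n+1)}{n^2} - \frac{1}{n} \;<\; \frac{1}{n^2}\sum_{k=1}^{n} [kx] \;\le\; \frac{x}{2}\cdot\frac{n(n+1)}{n^2},$$

and both outer bounds tend to $\frac{x}{2}$ as $n \to \infty$, so the fractional parts contribute nothing in the limit.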
OpenCV 3.0.0 Open Source Computer Vision

class cv::ConjGradSolver: This class is used to perform the non-linear non-constrained minimization of a function with known gradient. More...
class cv::DownhillSolver: This class is used to perform the non-linear non-constrained minimization of a function. More...
class cv::MinProblemSolver: Basic interface for all solvers. More...
enum cv::SolveLPResult { cv::SOLVELP_UNBOUNDED = -2, cv::SOLVELP_UNFEASIBLE = -1, cv::SOLVELP_SINGLE = 0, cv::SOLVELP_MULTI = 1 }: return codes for the cv::solveLP() function. More...
int cv::solveLP (const Mat &Func, const Mat &Constr, Mat &z): Solve a given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method). More...

The algorithms in this section minimize or maximize function value within specified constraints or without any constraints.

enum cv::SolveLPResult: return codes for the cv::solveLP() function.

Enumerator:
SOLVELP_UNBOUNDED: problem is unbounded (target function can achieve arbitrarily high values)
SOLVELP_UNFEASIBLE: problem is unfeasible (there are no points that satisfy all the constraints imposed)
SOLVELP_SINGLE: there is only one maximum for the target function
SOLVELP_MULTI: there are multiple maxima for the target function; an arbitrary one is returned

int cv::solveLP(const Mat &Func, const Mat &Constr, Mat &z)

Solve a given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method). What we mean here by "linear programming problem" (or LP problem, for short) can be formulated as: \[\mbox{Maximize } c\cdot x\\ \mbox{Subject to:}\\ Ax\leq b\\ x\geq 0\] where \(c\) is a fixed 1-by-n row-vector, \(A\) is a fixed m-by-n matrix, \(b\) is a fixed m-by-1 column vector and \(x\) is an arbitrary n-by-1 column vector, which satisfies the constraints.

The simplex algorithm is one of many algorithms designed to handle this sort of problem efficiently. Although it is not optimal in the theoretical sense (there exist algorithms that can solve any problem written as above in polynomial time, while the simplex method degenerates to exponential time for some special cases), it is well studied, easy to implement and is shown to work well for real-life purposes. The particular implementation is taken almost verbatim from Introduction to Algorithms, third edition, by T. H. Cormen, C. E. Leiserson, R. L. Rivest and Clifford Stein. In particular, Bland's rule (http://en.wikipedia.org/wiki/Bland%27s_rule) is used to prevent cycling.

Func: This row-vector corresponds to \(c\) in the LP problem formulation (see above). It should contain 32- or 64-bit floating point numbers. As a convenience, a column-vector may also be submitted, in which case it is understood to correspond to \(c^T\).
Constr: m-by-n+1 matrix, whose rightmost column corresponds to \(b\) in the formulation above and the remaining columns to \(A\). It should contain 32- or 64-bit floating point numbers.
z: The solution will be returned here as a column-vector; it corresponds to \(x\) in the formulation above. It will contain 64-bit floating point numbers.
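As a worked illustration of this packing (this example is not taken from the OpenCV documentation): the problem "maximize $3x_1 + x_2$ subject to $2x_1 + x_2 \le 10$, $x_1 + 3x_2 \le 15$, $x \ge 0$" would be encoded as

$$\texttt{Func} = \begin{bmatrix} 3 & 1 \end{bmatrix}, \qquad \texttt{Constr} = \begin{bmatrix} 2 & 1 & 10 \\ 1 & 3 & 15 \end{bmatrix},$$

and solveLP should return SOLVELP_SINGLE with $z = (5, 0)^T$, since the objective value $3\cdot 5 + 0 = 15$ beats the other feasible vertices $(0, 5)$ and $(3, 4)$.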
Answer to the title question: absolutely not.

Experimental Touchstone

Before we explain in detail, let's begin by noticing that the Michelson-Morley interferometric experiment explicitly tests whether orientation affects the clocking behavior of a there-and-back light path. And quite famously the answer is "no". This must be true for any inertial observer. 1

So why do all the introductory materials use a transverse clock? It's actually a good question, and the answer (at least beyond "Well, that's what Einstein did!") requires taking a close look at the way the explanation would work with a longitudinal clock.

What is Going on, Then?

The short version is easy: the longitudinal light clock is affected by length contraction as well as time dilation. 2 It follows that from a didactic point of view you want to develop one of the rules (time dilation or length contraction) first, and address the second one separately, rather than trying to deal with them at the same time. That makes the transverse clock preferable for introducing relativity.

To show this the long way, we'll imagine two basically identical light-reflection clocks $\mathbb{c}$ and $\mathbb{C}$, where $\mathbb{c}$ is the traditional transverse clock and $\mathbb{C}$ is aligned longitudinally. 3 In their rest frame $S$, each clock has length $l = L$, and consequently identical periods $p = 2l/c$ and $P = 2L/c$. We then consider the behavior of the clocks as observed in frame $S'$ moving at speed $-v$ along the length of $\mathbb{C}$ with respect to $S$.

Transverse case

The analysis of the period $p'$ of the transverse clock is the traditional one: the time required to complete the trip (out and back) is\begin{align}p' &= \frac{\sqrt{(2l)^2 + (vp')^2}}{c}\\&= \sqrt{\left(\frac{2l}{c}\right)^2 + \left(\beta p'\right)^2} \\&= p \sqrt{1 + \left( \beta \frac{p'}{p}\right)^2} \;,\end{align}so that\begin{align}\left(\frac{p'}{p} \right)^2&= 1 + \left(\beta \frac{p'}{p}\right)^2\\\frac{p'}{p} &= \left(1 - \beta^2 \right)^{-1/2} \\&= \gamma\;.\end{align}

Longitudinal case

To find the period $P'$ of the longitudinal clock we have to do a bit more figuring. The elapsed time $T_f'$ for the forward-going half of the journey is$$ T_f' = \frac{L' + v T_f'}{c} \;,$$and for the backward-going half of the journey the time $T_b'$ required is$$ T_b' = \frac{L' - v T_b'}{c} \;.$$After a little figuring we get the period as\begin{align}P' &= \frac{L'}{c(1 - \beta)} + \frac{L'}{c(1 + \beta)}\\&= \frac{2L'}{c(1 - \beta^2)} \;.\end{align}Now, if $L' = L$ this would lead to$$ \frac{P'}{P} = \left( 1 - \beta^2 \right)^{-1} = \gamma^2 \;, \tag{wrong!}$$meaning the clocks would not agree; but as we said before, Michelson-Morley-style experiments rule that out, so $L'$ must not be the same as $L$. To get agreement it is required that$$ \frac{L'}{L} = \left(1 - \beta^2 \right)^{1/2} \;,$$the usual expression for length contraction.

Better Way

All that work is, quite frankly, nasty, and I would recommend a geometry-first approach as a better alternative to Einstein's version. Get Takeuchi's book, it's worth the money.

1 Because it tells us that two clocks set with their emit/receive ends coincident that beat in time with one another will still beat in time with one another when you swing them around. It doesn't mean that all observers will agree on the frequency of the clocks, just that the two clocks agree.
2 That's what the Lorentz-FitzGerald contraction is all about, after all: fixing up the classical theory to match the Michelson-Morley results. 3 We'll continue to use lower case for quantities related to the transverse clock and capitals for quantities related to the longitudinal clock throughout.
The beth numbers, $\beth_\alpha$

The beth numbers $\beth_\alpha$ are defined by transfinite recursion:

$\beth_0=\aleph_0$

$\beth_{\alpha+1}=2^{\beth_\alpha}$

$\beth_\lambda=\sup_{\alpha\lt\lambda}\beth_\alpha$, for limit ordinals $\lambda$

Thus, the beth numbers are the cardinalities arising from iterating the power set operation. It follows by a simple recursive argument that $|V_{\omega+\alpha}|=\beth_\alpha$.

Beth one

The number $\beth_1$ is $2^{\aleph_0}$, the cardinality of the power set $P(\aleph_0)$, which is the same as the continuum. The continuum hypothesis is equivalent to the assertion that $\aleph_1=\beth_1$. The generalized continuum hypothesis is equivalent to the assertion that $\beth_\alpha=\aleph_\alpha$ for all ordinals $\alpha$.

Beth omega

The cardinal $\beth_\omega$ is the smallest uncountable cardinal exhibiting the interesting property that whenever a set $X$ has cardinality less than $\beth_\omega$, then the power set $P(X)$ also has size less than $\beth_\omega$.

Strong limit cardinal

More generally, a cardinal $\kappa$ is a strong limit cardinal if whenever $\gamma\lt\kappa$, then $2^\gamma\lt\kappa$. Thus, the strong limit cardinals are those cardinals closed under the exponential operation. The strong limit cardinals are precisely the cardinals of the form $\beth_\lambda$ for a limit ordinal $\lambda$.

Beth fixed point

A cardinal $\kappa$ is a $\beth$-fixed point when $\kappa=\beth_\kappa$. Just as in the construction of aleph fixed points, we may similarly construct beth fixed points: begin with any cardinal $\beta_0$ and let $\beta_{n+1}=\beth_{\beta_n}$; it follows that $\kappa=\sup_n\beta_n$ is a $\beth$-fixed point, since $\beth_\kappa=\sup_n\beth_{\beta_n}=\sup_n\beta_{n+1}=\kappa$. One may similarly construct $\beth$-fixed points of any desired cardinality; indeed, the class of $\beth$-fixed points is precisely the class of closure points of the function $\alpha\mapsto\beth_\alpha$ and therefore forms a closed unbounded proper class of cardinals. Every $\beth$-fixed point is an $\aleph$-fixed point as well. Since every model of ZFC satisfies the existence of a $\beth$-fixed point, it follows that no model of ZFC satisfies $\forall\alpha >0\,(\beth_\alpha>\aleph_\alpha)$.
I have been trying to prove that the two definitions of $\limsup$ are equivalent. I would appreciate it if someone could verify my attempt! Thanks in advance! Here are the two definitions:

For any bounded sequence $(x_n)$, $\displaystyle \limsup_{n\to\infty} x_n$ is defined to be $\displaystyle\limsup_{n\to\infty}x_n\colon = \lim_{n\to\infty} \sup \{ x_{n}, x_{n+1} , x_{n+2} , \ldots \}$.

Let $(x_n)$ be a bounded sequence. Let $T$ be the set of all cluster points of $(x_n)$. Then we define $\displaystyle\limsup_{n\to\infty} x_n=\sup T$.

My attempt: Let $(x_n)$ be a bounded sequence. We define $(y_n)$ to be the sequence $y_n = \sup \{ x_{n}, x_{n+1} , x_{n+2} , \ldots \} $, and set $\alpha = \lim_{n\to\infty} y_n$ and $\beta = \sup T$, where $T$ is the set of all cluster points of $(x_n)$. We'll be done if we show that $\alpha = \beta $.

$\beta$ is a cluster point of $(x_n)$ (the set $T$ of cluster points of a bounded sequence is nonempty, closed, and bounded, so $\beta = \sup T$ belongs to $T$). Thus there must be a subsequence $(x_{n_k})$ of $(x_n)$ such that $\lim_{k\to \infty} x_{n_{k}} = \beta$. For each $k \in \mathbb{N}$, $n_k \ge k$ and thus $x_{n_k} \le y_k = \sup \{ x_{k}, x_{k+1} , x_{k+2} , \ldots \} $. Thus, taking limits on both sides of the previous inequality, we have that $\beta \le \alpha$.

To prove $\alpha \le \beta$, we show that $\alpha$ is a cluster point of $(x_n)$; then we will be done. We will construct a subsequence of $(x_n)$ which converges to $\alpha$, using the fact that $\alpha = \lim_{n\to\infty} y_n$. Let $\varepsilon =1$. Then there exists $N\in\mathbb{N}$ such that $\alpha -1 < \sup \{ x_{n}, x_{n+1} , \ldots \} < \alpha + 1 $ for all $n\ge N$. Let $N_1=N$. Hence, $\alpha -1 < \sup \{ x_{N_1}, x_{N_1+1} , \ldots \} < \alpha + 1 $. Since $\alpha -1$ is not an upper bound for the set $\{ x_{N_1}, x_{N_1+1} , \ldots \}$, there exists $n_1 \ge N_1$ such that $\alpha -1 < x_{n_1} \le \sup \{ x_{N_1}, x_{N_1+1} , \ldots \} < \alpha + 1 $.

Now, we do it for $\varepsilon = 1/2$. Yet again there exists $N\in\mathbb{N}$ such that $\alpha -\frac{1}{2} < \sup \{ x_{n}, x_{n+1} , \ldots \} < \alpha + \frac{1}{2} $ for all $n\ge N$. Let $N_2 =\max \{ N, n_1+1 \}$. Hence, $\alpha -\frac{1}{2} < \sup \{ x_{N_2}, x_{N_2 +1} , \ldots \} < \alpha + \frac{1}{2} $. Since $\alpha -1/2$ is not an upper bound for the set $\{ x_{N_2}, x_{N_2+1} , \ldots \}$, there exists $n_2 \ge N_2$ such that $\alpha -1/2 < x_{n_2} \le \sup \{ x_{N_2}, x_{N_2+1} , \ldots \} < \alpha + 1/2 $.

By induction, for each $k \in \mathbb{N}$ we pick $x_{n_k}$ such that $n_{k+1} > n_{k}$ and $|x_{n_{k}}-\alpha | < \frac{1}{k}$. By our construction, $\lim_{k \to \infty} x_{n_k} = \alpha$. Thus, $\alpha$ is a cluster point of $(x_n)$ and hence $\alpha \le \beta$. Thus we are done! (Do not mark this post as duplicate, as I am not seeking solutions!)
Consider the following linear model under classical Gauss-Markov assumptions: $$Y = X\beta + e$$ where $\mathbb{E}X'e = 0$. Consider the following estimator $$\tilde\beta = \left(\sum_{i=1}^{N}x_ix_i' + \lambda I_k\right)^{-1}\left(\sum_{i=1}^Nx_iy_i\right)$$ where $x_i$ is a $k\times1$ column vector from $X$, $\lambda > 0$ is a scalar, and $\mathbb{E}(x_ie_i) = 0$.

Define bias and show that $\tilde\beta$ is biased. Define consistency and show that $\tilde\beta$ is consistent. Define the conditional variance of $\tilde\beta$. Show that the conditional variance of $\tilde\beta$ is smaller than the conditional variance of the OLS estimator $\hat\beta$. Give two reasons why we might prefer using $\tilde\beta$ instead of $\hat\beta$. (Hint: think of collinearity.)

The first two questions are answered (with the help of Cross Validated). Define $\left(\sum_{i=1}^{N}x_ix_i' + \lambda I_k\right)^{-1} = (X'X + \lambda I)^{-1} = W$. Also note that under homoskedasticity $Var(\hat\beta) = \sigma^2(X'X)^{-1}$. For the third one I have \begin{equation} \begin{aligned} Var(\tilde\beta|X) &= Var(WX'Y|X) \\ & = WX'Var(Y|X)XW \\ & = WX'Var(X\beta + u|X)XW \\ & = WX'Var(u|X)XW \\ \text{(assuming homoskedasticity)}& = WX'\sigma^2IXW \\ & = \sigma^2WX'XW \end{aligned} \end{equation} Now, to finish question 3, I need to show that $(X'X)^{-1} - WX'XW$ is positive semidefinite. This is where I am stuck. I also have no ideas on question 4.

EDIT: please note that this is a question from last year's exam, which almost surely means that it can be solved using basic matrix algebra and not more advanced techniques like SVD etc.
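A sketch of the missing step, using only basic matrix algebra (so presumably the intended route): since $W^{-1} = X'X + \lambda I$ is symmetric,

$$W^{-1}(X'X)^{-1}W^{-1} = (X'X + \lambda I)(X'X)^{-1}(X'X + \lambda I) = X'X + 2\lambda I + \lambda^2 (X'X)^{-1} \succeq X'X.$$

Pre- and post-multiplying both sides by the symmetric matrix $W$ preserves the ordering (a congruence $M \mapsto WMW$ maps positive semidefinite matrices to positive semidefinite matrices), so $(X'X)^{-1} \succeq WX'XW$, and therefore $Var(\hat\beta|X) - Var(\tilde\beta|X) = \sigma^2\left[(X'X)^{-1} - WX'XW\right]$ is positive semidefinite. For question 4, the usual points are that $\tilde\beta$ has a strictly smaller conditional variance, and that $X'X + \lambda I$ is invertible even when $X'X$ is singular or nearly so (perfect or severe collinearity), where OLS breaks down.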
Foundations (Energy Balance) For the Room

Consider the system shown in the picture below. Air flows through a room volume (floor area times height) with a given residence time. Heat is provided by the people inside the room. The steady-state energy balance equation becomes $$\dot{m}_a\left( \tilde{H}_{a,out} - \tilde{H}_{a,in} \right) = \dot{q}_p \\\left(\frac{A h}{t_{TO}}\right) \left(\frac{M_a p}{RT_{in}}\right) \tilde{C}_{p,a}\left(T_{out} - T_{in} \right) = N_p\hat{\dot{q}}_p$$ This says that the heat generated causes an enthalpy change in the air flow (out - in). The enthalpy change of an ideal gas causes it to undergo a temperature change. You have the room area; you also need its height. This gives the volume. Dividing by the residence time gives the volumetric flow. With the ideal gas density you get the mass flow. The specific heat and molar mass of air are known. The pressure of the inlet air is known. The outlet temperature of your AC unit is the inlet temperature to the room. You might also adjust the flow rate of air into the room; this sets the residence time. You know the number of people and the heat per person. The remaining factor is the outlet temperature, which is the desired room temperature.

Your goal is to hold a set room temperature. As more people enter the room, you will need either to increase the flow rate (decrease the residence time) at the same inlet air temperature, or to decrease the inlet air temperature to the room (the outlet temperature of your AC unit) at the same air flow rate.

Sizing the AC Unit(s)

The minimum cooling load on the AC units is $\dot{q}_p$. You must remove at least the heat generated by the people in the room. The CoP of an AC is $CoP = \dot{q}/\dot{w}$, where $\dot{w}$ is the work required. This is, to first order, the electrical power input multiplied by an efficiency factor for the heat pump. Each AC takes $W$ watts of power at an efficiency of $\epsilon$. The net result to establish the minimum number of units is $$\dot{q}_p = N_u\ CoP\ \epsilon\ W = N_p \hat{\dot{q}}_p$$

Summary

The approach might be as follows: determine the minimum number of AC units to meet the cooling load based on the efficiency, CoP, and power load of a unit as well as the number of people in the room and their heat output. Then determine whether the AC units will meet the demand to maintain a desired room temperature based on the rated output temperature and air flow rate of the units.
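A rough sketch of the two steps in code. All numerical values here are placeholder assumptions; only the formulas come from the balances above:

```java
public class RoomCooling {
    public static void main(String[] args) {
        // Assumed inputs
        double qPerPerson = 100.0;                        // W of heat per person
        int people = 30;
        double cop = 3.0, eff = 0.9, unitPower = 1000.0;  // per AC unit (W electrical)

        // Step 1: minimum number of units from N_u * CoP * eps * W >= N_p * q_p
        double load = people * qPerPerson;                // total heat load, W
        int units = (int) Math.ceil(load / (cop * eff * unitPower));
        System.out.println("minimum AC units: " + units);

        // Step 2: steady-state balance mDot * Cp * (Tout - Tin) = load
        double area = 50.0, height = 3.0, residenceTime = 600.0; // m^2, m, s
        double pressure = 101325.0, molarMassAir = 0.029, R = 8.314;
        double cpAir = 1005.0;                            // J/(kg K), specific heat
        double tIn = 291.15;                              // K, AC outlet temperature
        // mass flow = volumetric flow * ideal-gas density
        double mDot = (area * height / residenceTime) * (molarMassAir * pressure / (R * tIn));
        double tOut = tIn + load / (mDot * cpAir);        // resulting room temperature
        System.out.println("room temperature (K): " + tOut);
    }
}
```

With these placeholder numbers the sketch gives 2 units and a room temperature of about 301 K; in practice you would iterate on the inlet temperature or residence time until the desired room temperature is met.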
$\textsf{DWCDM+}$: A BBB secure nonce based MAC

1. Indian Statistical Institute, Kolkata, India
2. NTT Secure Platform Laboratories, NTT Corporation, Japan

$\textsf{EWCDM}$, built from an $n$-bit block cipher $\textsf{E}$ and an $n$-bit hash function $\textsf{H}$, is the nonce-based MAC $$\textsf{E}_{K_2}\bigl(\textsf{E}_{K_1}(N)\oplus N\oplus \textsf{H}_{K_h}(M)\bigr),$$ where $N$ is the nonce and $M$ is the message; it achieves $2n/3$-bit security. $\textsf{DWCDM+}$ is a $2n/3$-bit secure variant built from an $n$-bit block cipher $\textsf{E}$ and a $k$-bit hash function $\textsf{H}$, $\forall k \leq n$, defined as $$ \textsf{E}^{-1}_{K}\bigl(\textsf{E}_{K}(N)\oplus N \oplus \textsf{H}_{K_h}(M)\bigr). $$ Compared with $\textsf{EWCDM}$, $\textsf{DWCDM+}$ reduces the number of block cipher keys from $2$ to $1$, with the hash key $K_h$ derived from the block cipher key using the constant $0^{n-2} \| 10$, and with the nonce space reduced from $n$ bits to $(n-1)$ bits. $\textsf{DWCDM+}$ is shown secure up to roughly $2^{2n/3}$ MAC queries (respectively $2^{n/2}$ in the nonce-misuse setting) and $2^n$ verification queries.

Keywords: $\textsf{EWCDM}$, $\textsf{DWCDM+}$, mirror theory, extended mirror theory, H-coefficient.

Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.

Citation: Nilanjan Datta, Avijit Dutta, Mridul Nandi, Kan Yasuda. $\textsf{DWCDM+}$: A BBB secure nonce based MAC. Advances in Mathematics of Communications, 2019, 13 (4): 705-732. doi: 10.3934/amc.2019042
In this MathStackExchange post the question in the title was asked without much outcome, I feel. Edit: As Douglas Zare kindly observes, there is one more answer on MathStackExchange now.

I am not used to basic probability, and I am trying to prepare a class that I need to teach this year. I feel I am unable to motivate the introduction of random variables. After spending some time speaking about Kolmogoroff's axioms, I can explain that they allow us to make the following sentence true and meaningful:

The probability that, tossing a coin $N$ times, I get $n\leq N$ tails equals $$\tag{$\ast$}{N \choose n}\cdot\Big(\frac{1}{2}\Big)^N.$$

But now people (i.e. books I can find) introduce the "random variable $X\colon \Omega\to\mathbb{R}$ which takes values $X(\text{tails})=1$ and $X(\text{heads})=0$" and say that it follows the binomial rule. To do this, they need a probability space $\Omega$: but once one has it, one can prove statement $(\ast)$ above. So, what is the usefulness of this $X$ (and of random variables, in general)?

Added: So far my question was admittedly too vague, and I try to emend. Given a discrete random variable $X\colon\Omega\to\mathbb{R}$ taking values $\{x_1,\dots,x_n\}$, I can define $A_k=X^{-1}(\{x_k\})$ for all $1\leq k\leq n$. The study of the random variable becomes then the study of the values $p(A_k)$, $p$ being the probability on $\Omega$. Therefore, it seems to me that we have not gone one step further in the understanding of $\Omega$ (or of the problem modelled by $\Omega$) thanks to the introduction of $X$. Often I read that there is the possibility of having a family $X_1,\dots,X_n$ of random variables on the same space $\Omega$, and some results (like the CLT) say something about them. But then I know no example (and would be happy to discover one) of a problem truly modelled by this, whereas in most examples that I read there is either a single random variable, or the understanding of $n$ of them requires the understanding of the power $\Omega^n$ of some previously-introduced measure space $\Omega$. It seems to me (but I admit to have no rigorous proof) that given the above $n$ random variables on $\Omega$ there should exist an $\Omega'$, probably much bigger, with a single $X\colon\Omega'\to\mathbb{R}$ "encoding" the same information as $\{X_1,\dots,X_n\}$. In this case, we are back to using "only" indicator functions. I understand that this process breaks down if we want to let $n\to \infty$, but I also suspect that there might be a deeper reason for studying random variables.

All in all, my doubts come from the fact that random variables still look to me like a poorer object than a measure (or, probably, than a $\sigma$-algebra $\mathcal{F}$ together with a measure whose generated $\sigma$-algebra is finer than $\mathcal{F}$, or something like this); though, they are introduced, studied, and look central in the theory. I wonder where I am wrong.

Caveat: For some reason, many people in comments below objected that "throwing random variables away is ridiculous" or that I "should try to come up with something more clever, then, if I think they are not good". That was not my point. I am sure they must be useful; otherwise textbooks would not introduce them. But I was unable to understand why: many useful and kind answers below helped much.
Introduction to Series

Let $\displaystyle\sum_{i=1}^\infty a_i$ be a series, and let $\displaystyle s_n = \sum_{i=1}^n a_i$ be its $n^{th}$ partial sum. We define $\displaystyle\sum_{i=1}^\infty a_i = \lim_{n \to \infty} s_n.$ If this limit exists and is finite and equal to $s$, we say the series is convergent and that $\displaystyle\sum_{i=1}^\infty a_i=s$. Otherwise, we say the series is divergent.
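For instance, for a geometric series with ratio $|r| < 1$, the partial sums have a closed form:

$$s_n = \sum_{i=1}^{n} r^{\,i-1} = \frac{1-r^{n}}{1-r} \;\longrightarrow\; \frac{1}{1-r} \quad (n \to \infty),$$

so the series converges to $\frac{1}{1-r}$; for $|r| \ge 1$ the partial sums have no finite limit, and the series diverges.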
We discussed this topic in previous posts:

Graph theory: connections in the market. To understand what a graph is and the elements it has.

World connections using financial indexes. In this other post, you can learn how to create a graph using the correlation matrix between financial indexes from different countries of the world.

Once the graph is created, there are different statistics and measures we can use to extract a lot of information and get a better understanding of our portfolio. In this first post, we are going to learn the theory behind these concepts and measures. Let's start by defining the different terms:

\(V\) is a set of vertices or nodes. \(v_i \in V\) is the i-th vertex.

\(E\) is a set of edges. \(e_{v_i, v_j} \in E\) is the edge between the vertices \(v_i, v_j \in V\).

The graph is represented by \(G(V,E)\).

\(n = |V|\) is the number of vertices of the graph.

\(m = |E|\) is the number of edges of the graph.

Adjacency matrix

The adjacency matrix, \(A_G\), represents the different connections between the vertices of the graph G(V,E). If the vertex \(v_i \in V\) is connected with the vertex \(v_j \in V\) through the edge \(e_{v_i, v_j} \in E\), then the positions (i, j) and (j, i) of the matrix are 1; if not, both are 0. The adjacency matrix is symmetric and it is defined as follows:

\(A_G = a_{i,j} = \left\{ \begin{matrix} 1 & if & e_{v_i, v_j} \in E \\ 0 & if & e_{v_i, v_j} \not\in E \end{matrix} \right. \quad \forall v_i,v_j \in V\)

In the next example we can see the adjacency matrix of a simple graph.

Neighbours of a vertex

The neighbours of a vertex \(v_i \in V\) form a subset of V, \(N_{v_i} \subseteq V\), containing all the vertices connected with it.

\(N_{v_i} = \left\{ v_j \; : \; v_j \in V \wedge e_{v_i, v_j} \in E \right\}\)

Degree of a vertex

The degree of a vertex \(v_i \in V\) is the number of connected vertices or neighbours this vertex has.

\(deg\left( v_i \right) = {\delta}_{v_i} = \left| N_{v_i} \right| \)

The maximum and minimum degree of a graph G are represented by \(\Delta(G)\) and \(\delta(G)\), respectively. They are defined as follows:

\(\Delta(G) = max\left({\delta}_{v_i}\right) \quad \forall v_i \in V\)

\(\delta(G) = min\left({\delta}_{v_i}\right) \quad \forall v_i \in V\)

Degree matrix

The degree matrix, \({\Delta}_G\), has 0 in all its elements except the diagonal, which shows the degree of each vertex.

\({\Delta}_G = d_{i,j} = \left\{ \begin{matrix} {\delta}_{v_i} & if & i=j \\ 0 & if & i \neq j \end{matrix} \right. \quad \forall v_i \in V \)

In the next example we can see the degree matrix of the same simple graph.

Clustering coefficient

\(C_{v_i}\) is the clustering coefficient of the vertex \(v_i \in V\) and it is a real number in the interval \(\left[0, 1\right]\). This measure quantifies the number of existing connections between the neighbours of the vertex \(v_i\) in comparison to the number of possible connections, i.e., it shows whether there are triple connections (triangles). It is said that there is strong clustering around the vertex \(v_i\) when this measure is close to 1.

\(C_{v_i}=\frac{2}{{\delta}_{v_i}\left({\delta}_{v_i}-1\right)}\left|t_{v_i}\right| \quad : \quad {\delta}_{v_i} > 1\)

where \(t_{v_i}\) is the set of edges \(e_{v_j, v_k} \in E\) such that both endpoints are neighbours of \(v_i\) (\(v_j,v_k \in N_{v_i}\)).

The clustering coefficient of a graph G, \(C(G)\), is an equally-weighted mean of the clustering coefficients of each vertex of the graph.
When the clustering coefficient \(C(G)\) is close to 1, it shows that all the vertices are inter-connected and all possible triple connections exist. A high clustering coefficient indicates high robustness.

\(C(G) = \frac{1}{n} \sum_{v_i \in V; {\delta}_{v_i}>1} C_{v_i}\)

Distance between two vertices

The distance between two vertices \(v_i, v_j \in V\) is the minimum number of edges necessary to go from one vertex to the other. It is represented by \(d_{v_i, v_j}\). The BFS or breadth-first search algorithm calculates this measure in complex graphs.

Distance matrix

The distance matrix of a graph G is symmetric and contains the minimum distances between each pair of vertices. It is represented by \(D_G\) and its diagonal is 0.

\(D_G = d_{i,j} = \left\{ \begin{matrix} d_{v_i, v_j} & if & i \neq j \\ 0 & if & i=j \end{matrix} \right. \quad \forall v_i,v_j \in V\)

Average distance

The average distance of a graph G, \(\bar{d}(G)\), is defined as follows:

\(\bar{d}(G) = \frac{2}{n(n-1)}\sum_{i=1}^{n} \sum_{j=i+1}^{n} d_{v_i, v_j} \quad \forall v_i, v_j \in V \)

Diameter

The diameter, \({d}^{max}(G)\), is the maximum of the minimum existing distances between two vertices of the graph G:

\(d^{max}(G) = max \left( \left\{d_{v_i, v_j}, \, \forall v_i, v_j \in V, \, v_i \neq v_j \right\} \right)\)

The lower the value of these measures, the greater the robustness of the graph.

Efficiency

The graph efficiency is a measure that shows the effectiveness of the information exchange between two vertices. The average efficiency of a graph is defined as follows:

\(E(G) = \frac{2}{n(n-1)}\sum_{i=1}^{n} \sum_{j=i+1}^{n} \frac{1}{d_{v_i, v_j}} \quad \forall v_i, v_j \in V\)

Connection density or cost

The connection density or cost is the number of existing edges, m, in the graph G relative to the total number of possible edges. It is the simplest estimator of the physical cost of a network.

\(D(G) = \frac{2m}{n(n-1)}\)

Intermediation

The intermediation of a vertex \(v_i\) or an edge \(e_{v_i, v_j}\) is the number of shortest paths between two vertices \(v_k, v_l \in V\) that include the vertex \(v_i\) or the edge \(e_{v_i, v_j}\).

Average intermediation of vertices

The average intermediation of a vertex is defined as follows:

\(\bar{b}_v (G) = \frac{1}{2}(n-1)\left(\bar{d}(G)+1\right)\)

Average intermediation of an edge

The average intermediation of an edge is defined as follows:

\(\bar{b}_e (G) = \frac{n(n-1)}{2m}\bar{d}(G)\)

In the next Graph Theory post we will see an example of how to apply these statistics and measures to a real case, as sketched in the code below. They will be useful to analyse the universe of our portfolio and check its diversification.
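To make the first few definitions concrete, here is a small illustrative sketch (the example graph is made up) that computes each vertex's degree and clustering coefficient straight from the adjacency matrix:

```java
public class GraphStats {
    public static void main(String[] args) {
        // adjacency matrix of a small example graph: edges 0-1, 0-2, 1-2, 1-3
        int[][] a = {
            {0, 1, 1, 0},
            {1, 0, 1, 1},
            {1, 1, 0, 0},
            {0, 1, 0, 0}
        };
        int n = a.length;
        for (int i = 0; i < n; i++) {
            int deg = 0;
            for (int j = 0; j < n; j++) deg += a[i][j];

            // count edges among the neighbours of i (triangles through i)
            int t = 0;
            for (int j = 0; j < n; j++)
                for (int k = j + 1; k < n; k++)
                    if (a[i][j] == 1 && a[i][k] == 1 && a[j][k] == 1) t++;

            // C_{v_i} = 2 |t_{v_i}| / (deg * (deg - 1)), defined for deg > 1
            double c = deg > 1 ? 2.0 * t / (deg * (deg - 1)) : 0.0;
            System.out.println("v" + i + ": degree " + deg + ", clustering " + c);
        }
    }
}
```

For this graph, vertex 1 has degree 3 and clustering coefficient 1/3: its neighbours (0, 2, 3) could form three edges, but only the edge between 0 and 2 exists.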
From Statistical Physics, 2nd Edition by F. Mandl:

Two vessels contain the same number $N$ of molecules of the same perfect gas. Initially the two vessels are isolated from each other, the gases being at the same temperature $T$ but at different pressures $P_1$ and $P_2$. The partition separating the two gases is removed. Find the change in entropy of the system when equilibrium has been re-established, in terms of the initial pressures $P_1$ and $P_2$. Show that this entropy change is non-negative.

I'm a little confused about a few things.

Is there a temperature change in this process? Intuitively, I would say no because $(T+T)/2=T$. My other guess would be that the temperature must change because we now have a third pressure, $P_3$, that is different from the pressure of the other two, and also because we have increased the volume.

I believe this is an irreversible process, correct? Because you can't realistically separate the gases into that which came from vessel A and that which came from vessel B.

Can the change in volume simply be called $V_A+V_B$? I thought it would be that simple, but thinking more about it I feel as though the change in pressure and possible change in temperature might change things.

When the partition is removed, is there a heat exchange between the 2 gases? My intuition says no, because heat can only flow when there is a temperature difference, and in this case both vessels are at temperature $T$.

My attempt: All in all I need to solve

$$\Delta S= \int \frac{dQ}{T} = \int \frac{dE+dW_{by}}{T}$$

We know that $dE=0$ and that $dW=PdV$, so

$$\Delta S = \frac{1}{T}\int PdV = \frac{P\Delta V}{T}$$

This is where I'm stuck - I don't think there is a valid thing to put in for $\Delta V$, because there were 2 systems that formed into 1 bigger system. If the final system is $V_1+V_2$, then what was its previous size? $V_1$ or $V_2$? Or can I say that it was $\frac{V_1+V_2}{2}$?
I grasp this answer, except for one identity. To quote: "$\sum_{d\mid n}\left[\Phi_d(X)\right]_{-1} = \left[\prod_{d\mid n} \Phi_d(X)\right]_{-1}$". It isn't so simple, I think, because you aren't taking the coefficients corresponding to the same place/x-power. What is my misunderstanding?

Update: I had simply missed reading the updates, which made it all reasonable. Thanks to https://math.stackexchange.com/a/69548/218659 I understand that "the coefficient in question is the negative sum of the roots". Though I feel somewhat silly, I honestly can't work out myself why "Even without considering roots, this follows from looking at how the product expands."
Analysis, linear algebra, topology, groups, rings, fields, ...

How do you prove this: $\displaystyle z\cot \left(z\right)=1-2\sum _{k=1}^{\infty}\frac{z^2}{k^2\pi ^2-z^2}$? I've seen this formula in a document about Bernoulli numbers and their relations with Riemann's zeta function (http://www.luciocadeddu.com/tesi/Aru_magistrale.pdf, corollario 2.5.2), but I really don't understand why it is true.

I haven't looked at the document you link to, but the standard way to treat this kind of problem is to combine the Cauchy integral formula and the Phragmén–Lindelöf principle to get an appropriate residue theorem for unbounded domains. For this specific identity I believe there is also a more ad hoc technique known as the Herglotz trick.
Let's start with the energy-momentum equation:$$E^2=p^2c^2+(m_0c^2)^2\tag{1}$$This can be derived from the Minkowski metric. This works because the inner product of the four-momentum, $\langle\mathbf{P},\mathbf{P}\rangle$, is equal to $|\mathbf{P}|^2=-(m_0c)^2$. We can also use $$\langle\mathbf{P},\mathbf{P}\rangle=P^\alpha\eta_{\alpha\beta}P^\beta=-\left(\frac{E}{c}\right)^2+p^2\tag{2}$$where $\eta_{\alpha\beta}$ is the Minkowski metric. Setting these two expressions equal yields $(1)$. We can then use this to derive an expression for $\gamma$.

Now let's do things in reverse, with your requirements. First, let us rewrite your $\gamma$ as$$\gamma=\frac{c^2+v^2}{c^2}$$Putting this into the expression$$E=\gamma m_0c^2$$we find$$\frac{E}{c^2+v^2}=m_0$$We then have$$E^2=p^2v^2+(m_0c^2)^2\tag{3}$$Note that we have $p^2v^2$, instead of $p^2c^2$. We now have$$-(m_0c)^2=-\left(\frac{E}{c}\right)^2+\frac{p^2v^2}{c^2}=\langle\mathbf{P},\mathbf{P}\rangle$$This implies that you have a metric that is nothing like the Minkowski metric, and a spacetime that is nothing like Minkowski spacetime. The final term now includes a dependence on $v$. Now you have a problem, because special relativity needs Minkowski spacetime to work. The postulates of special relativity, especially those concerning invariance, most likely will not hold.

The big problem with this - all of this - is that you haven't started from first principles. Instead of using some logic to make a derivation, you've done things the other way around, starting from a result you want and trying to work backwards. You're then left with results that might be described as disastrous.

This is an easy trap to fall into. You would think that changing one tiny thing about a universe wouldn't cause too many problems, but it can. Each equation, each law, each postulate that makes up our universe is finely woven together with every other to form a self-consistent framework that describes how things work. It's like making a jigsaw puzzle, where each piece is a different law of nature. You can change the shape of one piece, and change the shape of one of the neighboring pieces to compensate. But unless you modify all of the pieces that touch the modified piece, the puzzle won't be self-consistent.
This post contains notes taken from reading the following paper: I was also helped by the slides of Stanford's CS231b. Fast-RCNN was the state-of-the-art algorithm for object detection in 2015; its object proposal stage used Selective Search, which itself used Efficient Graph-Based Segmentation. The reason this segmentation was still useful almost 10 years later is that the algorithm is fast while remaining effective. Its goal is to segment the objects in an image.

A Graph-Based Algorithm

The algorithm sees an image as a graph, with every pixel as a vertex. Producing a good segmentation of an image is thus equivalent to finding communities in a graph. What separates two communities of pixels is a boundary, placed where similarity ends and dissimilarity begins. A segmentation that is too fine would result in communities separated without a real boundary between them; in a segmentation that is too coarse, communities are merged that should be split. The authors of the paper argue that their algorithm always finds the right segmentation, neither too fine nor too coarse.

Predicate Of A Boundary

The authors define their algorithm with a predicate $D$ that measures dissimilarity: the predicate takes two components and returns true if a boundary exists between them. A component is a set of one or more vertices. With $C_1$ and $C_2$ two components: $$ D(C_1, C_2) = \begin{cases} true & \text{if } \text{Dif}(C_1, C_2) > \text{MInt}(C_1, C_2)\newline false & \text{otherwise} \end{cases} $$ With: $$\text{Dif}(C_1, C_2) = \min_{\substack{v_i \in C_1, v_j \in C_2 \newline (v_i, v_j) \in E_{ij}}} w(v_i, v_j)$$ The function $\text{Dif}(C_1, C_2)$ returns the minimum weight $w(\cdot)$ over the edges that connect a vertex $v_i$ to a vertex $v_j$ lying in the two different components; $E_{ij}$ is the set of edges connecting vertices of $C_1$ to vertices of $C_2$. This function $\text{Dif}$ measures the difference between two components. And with: $$\text{MInt}(C_1, C_2) = \min (\text{Int}(C_1) + \tau(C_1), \text{Int}(C_2) + \tau(C_2))$$ $$\tau(C) = \frac{k}{|C|}$$ $$\text{Int}(C) = \max_{\substack{e \in \text{MST}(C, E)}} w(e)$$ The function $\text{Int}(C)$ returns the maximum weight over the edges connecting two vertices in the Minimum Spanning Tree (MST) of a single component. Looking only at the MST considerably reduces the number of edges to consider: a spanning tree has $n - 1$ edges instead of the $\frac{n(n - 1)}{2}$ total edges. Moreover, using the minimum spanning tree rather than an arbitrary spanning tree allows segmentations with high variability (while still being progressive). This function $\text{Int}$ measures the internal difference of a component: a low $\text{Int}$ means that the component is homogeneous. The function $\tau(C)$ is a threshold function that demands stronger evidence of a boundary for small components. A large $k$ produces a segmentation with large components. The authors set $k = 300$ for wide images and $k = 150$ for detailed images. Finally, $\text{MInt}(C_1, C_2)$ is the minimum internal difference of the two components. To summarize the predicate $D$: a large difference between two internally homogeneous components is evidence of a boundary between them. However, if the two components are internally heterogeneous, it is harder to prove a boundary. Therefore details are ignored in high-variability regions but preserved in low-variability regions: notice how the highly variable grass is correctly segmented while details like the numbers on the back of the first player are preserved.
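The definitions above translate almost directly into code. Below is a minimal, hypothetical Python sketch (my own names, not the authors' reference implementation) of how $D$ drives a Kruskal-style merging over a union-find structure; the formal algorithm is given in the next section. Because edges are processed in increasing weight order, the weight of the edge that merges two components is exactly the new maximum MST edge weight, so $\text{Int}(C)$ can be tracked in O(1) per merge.

```python
class DisjointSet:
    """Union-find tracking |C| and Int(C) for each component."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n        # Int(C): max MST edge weight so far

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b, w):
        # a and b must be roots; w is the weight of the merging edge.
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.internal[a] = w             # edges arrive sorted, so w is the new max


def segment(n_vertices, edges, k=300):
    """edges: iterable of (w, i, j); k: scale parameter in tau(C) = k/|C|."""
    ds = DisjointSet(n_vertices)
    for w, i, j in sorted(edges):        # pi: increasing weight order
        a, b = ds.find(i), ds.find(j)
        if a == b:
            continue
        # MInt(C1, C2) = min(Int(C1) + tau(C1), Int(C2) + tau(C2))
        mint = min(ds.internal[a] + k / ds.size[a],
                   ds.internal[b] + k / ds.size[b])
        if w <= mint:                    # predicate D is false: merge
            ds.union(a, b, w)
    return ds
```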
Different Weight Functions

The predicate uses a function $w(v_i, v_j)$ that measures the edge's weight between two vertices $v_i$ and $v_j$. The authors provide two alternatives for this weight function:

Grid Graph Weight

To use this weight function properly, the authors first smooth the image with a Gaussian filter with $\sigma = 0.8$. The Grid Graph Weight function is: $$w(v_i, v_j) = |I(p_i) - I(p_j)|$$ It is the absolute intensity difference between two pixels; the grid graph is built by connecting each pixel $p_i$ to its 8 neighbours. Using this weight function, they run the algorithm three times (once each for red, green, and blue) and take the intersection of the three segmentations as the result.

Nearest Neighbours Graph Weight

The second weight function is based on Approximate Nearest Neighbours Search. It tries to find a good approximation of what could be the closest pixel. The feature space combines the spatial coordinates and the pixel's RGB values: Feature Space = $(x, y, r, g, b)$.

The Actual Algorithm

Now that every sub-function of the algorithm has been defined, let's see the actual algorithm. For the graph $G = (V, E)$ composed of the vertices $V$ and the edges $E$, and a segmentation $S = (C_1, C_2, \dots)$: Sort $E$ into $\pi$ = ($o_1$, …, $o_m$) by increasing edge weight. Each vertex starts alone in its own component; this is the initial segmentation $S^0$. For $q = 1, …, m$: construct $S^q$ from $S^{q-1}$; let ($v_i$, $v_j$) $= o_q$; if $v_i$ and $v_j$ are not in the same component and the predicate $D(C_i^{q - 1}, C_j^{q - 1})$ is false, then merge $C_i$ and $C_j$ into a single component; otherwise $S^q = S^{q-1}$. Return $S^m$. The superscript $q$ in $S^q$ or $C_x^q$ simply denotes the version of the segmentation or of the component at step $q$ of the algorithm. Basically, the algorithm performs a bottom-up merging of initially individual pixels into larger and larger components. At the end, the segmentation $S^m$ is neither too fine nor too coarse.

Conclusion

As you have seen, the algorithm of this paper is quite simple. What makes it effective is the choice of metrics and the predicate defined beforehand. If you have read to the bottom of the page, congrats! To thank you, here are some demonstrations by the authors:
I'm not the person to understand everything in Geometric Endoscopy and Mirror Symmetry, but some parts of it are reasonably clear to me. In particular, one of the main objects, mathematically speaking, is the category of coherent sheaves on an orbifold point $\mathrm{pt}/\Gamma$, where $\Gamma$ is a finite group of automorphisms of some ${}^LG$-local system. This category is well known in algebraic geometry to be just $\mathrm{Rep}(\Gamma)$. The main point of the paper is that some other, less obvious, additive category happens to be equivalent to this well-known $\mathrm{Rep}(\Gamma)$. This means, in particular, that its objects are actually sums of things like $R\otimes V_R$ where $R$ goes over the irreps of $\Gamma$. But (9.5), (9.8) (numbers from version 3) are different: $${\mathcal F}_{\mathrm{Reg}(\Gamma)} = \bigoplus_{R \in \mathrm{Irrep}({}^LG)} R^* \otimes {\mathcal F}_R$$ Note the sum goes over $\mathrm{Irrep}({}^LG)$ where I thought $\mathrm{Irrep}(\Gamma)$ is appropriate. The source of the chain of equations seems to be on page 112, where, quote, the regular representation $$\mathrm{Reg}(\Gamma) = \bigoplus_{R \in \mathrm{Irrep}({}^LG)} R^* \otimes R,$$ unquote. Thus I've decided it's a typo in the formula for the regular representation, carried over the next several pages. Yet I feel out of place until I'm completely sure there is no other explanation; theoretically there could be some relationship between the representations of $\Gamma$ and those of ${}^LG$, after all. Question: do these two formulas have a typo, or is there a meaning I miss?
Suppose that $A$ has discrete spectrum consisting of eigenvalues with finite multiplicities $$ 0<\lambda_1 < \lambda_2<\cdots $$ with $\lambda_n\to\infty$ as $n\to \infty$. Denote by $(\psi_n)$ an orthonormal basis consisting of e-vectors of $A$. In this basis $A$ is an (infinite) diagonal matrix $\DeclareMathOperator{\diag}{diag}$ $$ A=\diag(\lambda_1,\lambda_2,\dotsc).$$ Then $\newcommand{\ve}{\varepsilon}$ $$f(A+\ve I)=\diag( f(\lambda_1+\ve), f(\lambda_2+\ve),\dotsc). $$ Assume that $\newcommand{\bR}{\mathbb{R}}$ $f:\bR\to\bR$ is $C^2$. Then $$ f(\lambda_n+\ve)=f(\lambda_n)+ f'(\lambda_n)\ve+ \frac{1}{2}f''(\xi_n)\ve^2,\;\;\xi_n\in (\lambda_n,\lambda_n+\ve). $$ Suppose more concretely that $f(x)=x^3$. The above shows that $f(A+\ve I)-f(A)$ is not bounded. This may not be so surprising, since $A^3$ is not a bounded operator. Here is a more interesting example. Suppose for simplicity that $\lambda_n=n$ and $f$ is a $C^2$ function such that for $|x-n|<0.1$ we have $f(x)=n^2\cos (x-n)-n^2+1$. Note that $$f(\lambda_n)=1,\;\;\forall n, $$ so $f(A)=I$. Consider the bounded operator $$ B=\diag( 1^{-1/2}, 2^{-1/2},\dotsc, n^{-1/2},\dotsc). $$ Then $$f(A+\ve B)-f(A)=\diag(\dotsc, f(n+\ve n^{-1/2})-f(n),\dotsc), $$ and we observe that, if $\ve<0.1$, then $$ f(n+\ve n^{-1/2})-f(n) =n^2\bigl(\cos(\ve n^{-1/2})-1\bigr). $$ We deduce that $f(A+\ve B)-f(A)$ is not bounded. Remark (a) Here is a possible reformulation. There are several natural topologies on the space of closed selfadjoint operators; see this paper. One of them, called the gap topology in the above paper, is defined by a certain metric $\gamma$ (distance between the graphs) and has the property $$\gamma(A_n,A)\to 0 \,\Longleftrightarrow\; \Vert f(A_n)- f(A)\Vert\to 0,\;\;\forall f\in C_0(\bR), $$ where $C_0(\bR)$ denotes the space of continuous functions $f:\bR\to\bR$ such that $$\lim_{t\to\pm\infty} f(t)=0. $$ It may be the case that if $f\in C^2(\bR)$ is such that $f,f', f''\in C_0(\bR)$, then a conclusion of the type you've formulated could be true. (b) Let me mention a closely related question. The set of closed selfadjoint Fredholm operators on $H$, equipped with the above gap topology, can be organized as a Banach manifold. More precisely it is an open dense subset $\newcommand{\eO}{\mathscr{O}}$ $\eO$ of $\DeclareMathOperator{\Lag}{Lag}$ $\Lag(H\oplus H)$, the Grassmannian of Lagrangian subspaces of $H\oplus H$; see this paper or this paper for more details. The above remark shows that any $f\in C_0(\bR)$ defines a continuous function $\hat{f}:\eO\to B(H)$, $A\mapsto f(A)$, and I am $52$% sure that it extends to $\Lag(H\oplus H)$. Is it true that if $f\in C_0(\bR)$ is such that $f'\in C_0(\bR)$, then $\hat{f}$ is a $C^1$-function on $\eO$?
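To make the growth rate concrete, a quick numeric check of the diagonal entries (my own sketch; $\ve$ is kept small enough that $\ve n^{-1/2}<0.1$ for all $n$ shown):

```python
# The n-th diagonal entry of f(A + eps*B) - f(A), computed from the local
# form f(x) = n^2*cos(x - n) - n^2 + 1, valid for |x - n| < 0.1.
import numpy as np

n = np.array([10.0, 100.0, 1000.0, 10000.0])
eps = 0.05                              # eps * n**-0.5 < 0.1 for all n above
entry = n**2 * (np.cos(eps * n**-0.5) - 1)
print(entry)                            # ~ -eps**2 * n / 2: unbounded in n
```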
For most of calculus, the word "function'' has meant a machine that inputs a number and outputs another number. Something like $f(x)=x^2$. But some functions have more than one input, like $f(x,y) = x^2 + y^2$, while others have more than one output. Every ordinary function with one input and one output is associated with a graph. However, some curves can't be written as graphs, since they don't pass the vertical line test. The simplest example is a circle. Instead, we usually describe the unit circle by an equation: $x^2+y^2=1$. This is an example of an implicitly defined curve. A third way to get a curve is to track the trajectory of a moving point. If we specify both $x$ and $y$ as functions of a third variable $t$ (the parameter), we get a parametrized curve. Every graph can be written as a parametrized curve. For instance, we can just take $x(t)=t$, $y(t)=f(t)$, moving from left to right at constant horizontal speed, and letting the graph do the rest. However, that's not the only way to write the graph as a parametrized curve. The parametrization $x(t)=2t$, $y(t)=f(2t)$ would work just as well. It would trace out the exact same set of points, only twice as fast. The unit circle is usually parametrized as $x(t)=\cos(t)$ and $y(t)=\sin(t)$. This traces the circle counter-clockwise, starting at $(1,0)$, going around once every $2\pi$ seconds. However, we could also start at a different point, go clockwise instead of counterclockwise, or go at a different speed. You should check that all of the following parametrizations also satisfy $x^2+y^2=1$, and so follow the unit circle:$$ x(t) = \sin(t); \qquad y(t)=\cos(t). $$ $$ x(t) = \cos(t); \qquad y(t)=-\sin(t). $$ $$ x(t) = \cos(2t); \qquad y(t)=\sin(2t). $$ The first starts at $(0,1)$ and goes clockwise with period $2\pi$. The second starts at $(1,0)$ and goes clockwise with period $2\pi$. The third starts at $(1,0)$ and goes counter-clockwise with period $\pi$. They're all different parametrizations, but they all trace the same curve. The following video describes the three kinds of functions, and the three kinds of curves:
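If you'd like to check the three alternative parametrizations numerically rather than by hand, here is a tiny numpy sketch (my own, purely illustrative): each one stays on the unit circle, and evaluating at $t=0$ recovers the starting points claimed above.

```python
# Verify that each parametrization satisfies x^2 + y^2 = 1 and find its start.
import numpy as np

t = np.linspace(0, 2 * np.pi, 100)
params = {
    "sin t,  cos t ": (np.sin, np.cos),                     # clockwise
    "cos t, -sin t ": (np.cos, lambda s: -np.sin(s)),       # clockwise
    "cos 2t, sin 2t": (lambda s: np.cos(2*s), lambda s: np.sin(2*s)),  # period pi
}
for name, (fx, fy) in params.items():
    assert np.allclose(fx(t)**2 + fy(t)**2, 1)              # on the circle
    print(name, "starts at", (round(float(fx(0.0)), 2), round(float(fy(0.0)), 2)))
```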
Research Open Access Published: Optimal control problem for a generalized sixth order Cahn-Hilliard type equation with nonlinear diffusion Boundary Value Problems volume 2015, Article number: 58 (2015) Article metrics 1631 Accesses Abstract In this paper, we study the initial-boundary-value problem for a generalized sixth order Cahn-Hilliard type equation, which describes the separation properties of oil-water mixtures when a substance enforcing the mixing of the phases is added. The optimal control under a boundary condition is given and the existence of an optimal solution is proved. Introduction We consider the equation in \(\Omega\times(0, T)\), where \(\Omega=(0,1)\), \(\gamma>0\), \(k>0\), and \(\gamma_{2}>0\), with the initial and boundary conditions The function \(f(u)\) stands for the derivative of a potential \(F(u)\), with \(F(u)\) and \(a(u)\) approximated, respectively, by a sixth- and a second-order polynomial where \(a_{2}>0\), with the density given by Here u is the scalar order parameter, which is proportional to the local difference between oil and water concentrations. The properties of the amphiphile and its concentration enter model (1.1) implicitly via (1.4) and (1.5). \(F(u)\) has three minima, at \(u=-1\), \(u=1\), and \(u=0\), which describe the oil, water, and disordered microemulsion phases. In [2–4], the coefficient \(a(u)\) is approximated by the quadratic function (1.5) with \(a_{0}\) of arbitrary sign and \(a_{2}\) positive. As in the classical Cahn-Hilliard theory, the order parameter u is a conserved quantity. Thus it satisfies the conservation law with the mass flux j given by the constitutive equation and μ representing the chemical potential where \(\mathcal{D}\geq0\), the dissipation potential, has the form and M is the mobility; k, \(\gamma_{2}\) are the viscosity coefficients corresponding to the rate of the order parameter and its spatial gradient. The first variation \(\frac{\delta\psi}{\delta u}\) is defined by the condition that must hold for all test functions \(\zeta\in C_{0}^{\infty}(\Omega)\). In the case of the free energy this leads to the following expressions: From the above discussion we know that where \(\Omega\subset\mathbb{R}^{3}\) is a bounded domain with boundary ∂Ω, occupied by the ternary mixture, and \((0, T)\) is the time interval. We endow this system with the initial and boundary conditions (1.2) and (1.3); in this paper we consider the one-dimensional case with \(M=1\). The authors of [5] investigated the behavior of the solutions to the sixth order system as the parameter γ tends to 0; the uniqueness and regularization properties of the solutions were also discussed. Pawłow and Zaja̧czkowski [6] proved that problem (1.1)-(1.5) with \(k=\gamma_{2}=0\) is well posed in the sense that it admits a unique global smooth solution which depends continuously on the initial datum. In past decades, the optimal control of distributed parameter systems has received much attention in the academic field. A wide spectrum of problems in applications can be solved by methods of optimal control, such as chemical engineering and vehicle dynamics. Modern optimal control theories and applied models are represented not only by ODEs, but also by PDEs. Kunisch and Volkwein solved open-loop and closed-loop optimal control problems for the Burgers equation [7], and Armaou and Christofides studied the feedback control of the Kuramoto-Sivashinsky equation [8].
In this paper, we consider the optimal control problem for the following equation: When \(y=u-kD^{2}u+\gamma_{2}D^{4}u\), we take the distributed optimal control problem For fixed \(T > 0\), we set \(\Omega= (0, 1)\) and \(Q =\Omega\times(0, T)\). Let \(Q_{0}\subset Q\) be an open set with positive measure. Let \(V=H_{0}^{1}(0, 1)\) and \(H=L^{2}(0, 1)\), with dual spaces \(V^{*}=H^{-1}(0, 1)\) and \(H^{*}=L^{2}(0, 1)\), respectively; we have The extension operator \(B^{*}\in L(L^{2}(0, T; Q_{0}), L^{2}(0, T; V^{*}))\) is given by The space \(W(0, T; V)\) is defined by which is a Hilbert space endowed with the usual inner product. The plan of the paper is as follows. In Section 2, we prove the existence of the weak solution in a suitable space. The optimal control is discussed in Section 3, where the existence of an optimal solution is proved. Existence of the weak solution Consider the following sixth order Cahn-Hilliard type equation: under the initial and boundary conditions where \(B^{*}\overline{\omega}\in L^{2}(0, T; V^{*})\) and the control item \(\overline{\omega}\in L^{2}(0, T; Q_{0})\). Let \(y=u-kD^{2}u+\gamma_{2}D^{4}u\); the above problem is rewritten as Now, we give the definition of the weak solution to problem (2.2) in the space \(W(0, T; V)\). Definition 2.1 A function \(y(x, t)\in W(0, T; V)\) is called a weak solution to problem (2.2) if, for all \(\phi\in V\), a.e. \(t\in[0, T]\), and \(y_{0}\in H\), the following are valid: Theorem 2.1 Problem (2.2) admits a weak solution \(y(x, t)\in W(0, T; V)\) on the interval \([0, T]\) if \(B^{*}\overline{\omega}\in L^{2}(0, T; V^{*})\) and \(y_{0}\in H\). Proof We employ the standard Galerkin method. The differential operator \(A=-\partial^{2}_{x}\) is a linear unbounded self-adjoint operator in H with \(D(A)\) dense in H, where H is a Hilbert space with scalar product \((\cdot, \cdot)\) and norm \(\|\cdot\|\). There exists an orthogonal basis \(\{\psi_{i}\}\) of H. Let \(\{\psi_{i}\}_{i=1}^{\infty}\) be the eigenfunctions of the operator \(A=-\partial^{2}_{x}\) with For \(n\in\mathbb{N}\), we define the discrete ansatz space by Set \(y_{n}(t)=y_{n}(x, t)=\sum_{i=1}^{n}y_{i}^{n}(t)\psi_{i}(x)\) and require that \(y_{n}(0, \cdot)\to y_{0}\) in H. To prove the existence of a unique weak solution to problem (2.2), we analyze the limiting behavior of the sequences of smooth functions \(\{y_{n}\}\) and \(\{u_{n}\}\). Performing the Galerkin procedure for problem (2.2), we have According to ODE theory, there is a unique solution to (2.3) on the interval \([0, t_{n}]\). We now show that the solution is uniformly bounded as \(t_{n}\rightarrow T\). As a first step, multiplying the first equation of (2.3) by and integrating with respect to x, we obtain where and By a simple calculation, we have where \(C_{1}>0\) and \(C_{0}\geq0\). Since \(B^{*}\overline{\omega}\in L^{2}(0, T; V^{*})\) is a control item, we assume Choosing \(\varepsilon_{1}\), \(\varepsilon_{2}\), and ε sufficiently small, from the above inequality and the Poincaré inequality, we have From (2.11), we know By Sobolev's imbedding theorem it follows from (2.14) that As a second step, multiplying (1.1) by \(D^{2}u_{n}\) and integrating with respect to x, we obtain From a simple calculation, we have where where ε is sufficiently small.
By the Gronwall inequality, (2.21) implies As a third step, multiplying (1.1) by \(D^{4} u_{n}\) and integrating with respect to x, we obtain On the other hand, by the Nirenberg inequality, we have Hence, by the Hölder and Young inequalities, we obtain Similarly, \(\varepsilon_{3}\) is chosen sufficiently small. Therefore, by the Gronwall inequality, we have From a simple calculation, we have Then Thus, we have: (i) For every \(t\in[0, T]\), the sequence \(\{y_{n}\}_{n\in\mathbb{N}}\) is bounded in \(L^{2}(0, T; H)\) as well as in \(L^{2}(0, T; V)\), independently of the dimension n of the ansatz space. (ii) For every \(t\in[0, T]\), the sequence \(\{y_{n, t}\}_{n\in\mathbb{N}}\) is bounded in \(L^{2}(0, T; V^{*})\), independently of the dimension n of the ansatz space. Hence, we get \(\{y_{n}\}_{n\in\mathbb{N}}\subset W(0, T; V)\), and \(\{y_{n}\}_{n\in\mathbb{N}}\) converges weakly in \(W(0, T; V)\), weakly-star in \(L^{\infty}(0, T; H)\), and strongly in \(L^{2}(0, T; H)\) to a function \(y(x, t)\in W(0, T; V)\). The uniqueness of the solution is straightforward to obtain [13]; we omit it here. □ To ensure that the norm of the weak solution in the space \(W(0, T; V)\) can be controlled by the initial value and the control item, we need the following theorem. Theorem 2.2 If \(B^{*}\overline{\omega}\in L^{2}(0, T; V^{*})\) and \(y_{0}\in H\), then there exist constants \(C_{3}>0\) and \(C_{4}>0\) such that Proof Similar to the proof of Theorem 2.1, we obtain Multiplying the equation by y and integrating with respect to x, we obtain From the Hölder and Young inequalities, we have From (2.30), we have and Note that Integrating the above inequality with respect to t yields The proof is completed. □ Optimal problem In this section, we study the distributed optimal control, and the existence of the optimal solution is obtained based on Lions' theory. We study the following problem when \(\overline{\omega}\in L^{2}(0, T;Q_{0})\), where \(y=u-kD^{2}u+\gamma_{2}D^{4}u\). As we know that there exists a weak solution y to (2.2), and since \(u=(1-k\partial_{x}^{2}+\gamma_{2}\partial_{x}^{4})^{-1}y\), we know that there exists a weak solution u to (2.1). Let there be given an observation operator \(C\in L(W(0, T; V), S)\), in which S is a real Hilbert space and C is continuous. We choose a performance index of tracking type where \(z\in S\) is a desired state and \(\delta>0\) is fixed. The optimal control problem for the generalized sixth order Cahn-Hilliard equation is where \((y, \overline{\omega})\) satisfies problem (2.2). Let \(X=W(0, T; V)\times L^{2}(0, T; Q_{0})\) and \(Y=L^{2}(0, T; V)\times H\). We define an operator \(e=e(e_{1}, e_{2}): X\rightarrow Y\) by where and \(D^{2}\) is an operator from \(H^{1}(0, 1)\) to \(H^{-1}(0, 1)\). Then (3.2) is rewritten as Now, we have the following theorem. Theorem 3.1 There exists an optimal control solution to the problem. Proof Let \((y, \overline{\omega})\in X\) satisfy the equation \(e(y, \overline{\omega})=0\). In view of (3.1), we have From Theorem 2.2, we have Hence As the norm is weakly lower semi-continuous [14], we find that \(\mathcal{J}\) is weakly lower semi-continuous.
Since \(\mathcal{J}(y, \overline{\omega})\geq0\) for all \((y, \overline{\omega})\in X\), there exists which means that there exists a minimizing sequence \(\{(y_{n}, \overline{\omega^{n}})\}_{n\in\mathbb{N}}\) in X such that From (3.3), there exists an element \((y^{*}, \overline{\omega}^{*})\in X\) such that when \(n\rightarrow\infty\). From (3.4), we have Since \(W(0, T; V)\) is compactly embedded into \(L^{2}(0, T; L^{\infty})\) and continuously embedded into \(C(0, T; H)\), we derive that \(y_{n}\rightarrow y^{*}\) strongly in \(L^{2}(0, T; L^{\infty})\) and \(y_{n}\rightarrow y^{*}\) strongly in \(C(0, T; H)\) as \(n\rightarrow\infty\). Then we also derive that \(u_{n}\rightarrow u^{*}\), \(Du_{n}\rightarrow Du^{*}\), \(D^{2}u_{n}\rightarrow D^{2}u^{*}\), \(D^{3}u_{n}\rightarrow D^{3}u^{*}\), \(D^{4}u_{n}\rightarrow D^{4}u^{*}\) strongly in \(C(0, T; H)\) as \(n\rightarrow\infty\). As the sequence \(\{y_{n}\}_{n\in\mathbb{N}}\) converges weakly, \(\|y_{n}\|_{W(0, T; V)}\) is bounded. Also, we see that \(\|y_{n}\|_{L^{2}(0, T; L^{\infty})}\) is bounded, by the embedding theorem. Since \(y_{n}\rightarrow y^{*}\) strongly in \(L^{2}(0, T; L^{\infty})\), we derive that \(\|y^{*}\|_{L^{2}(0, T; L^{\infty})}\), \(\|u^{*}\|_{L^{2}(0, T; L^{\infty})}\), \(\|D^{2}u^{*}\|_{L^{2}(0, T; L^{\infty})}\), and \(\|D^{4}u^{*}\|_{L^{2}(0, T; L^{\infty})}\) are bounded. Notice that As we know Note that For \(I^{1}_{1}\), we have Also we have Further, similar to (3.6), we have From (3.5), we have In view of the above discussion, we can conclude that Since \(y^{*}\in W(0, T; V)\), we have \(y^{*}(0)\in H\). From \(y_{n}\rightharpoonup y^{*}\) in \(W(0, T; V)\), we can infer that \(y_{n}(0)\rightharpoonup y^{*}(0)\). Thus we obtain which means that \(e_{2}(y^{*}, \overline{\omega}^{*})=0\). Hence, we can derive that \(e(y^{*}, \overline{\omega}^{*})=0\). In conclusion, there exists an optimal solution \((y^{*}, \overline{\omega}^{*})\) to the problem. We can infer that there exists an optimal solution \((y^{*}, \overline{\omega}^{*})\) to the viscous generalized Cahn-Hilliard equation, since \(u=(1-k\partial_{x}^{2}+\gamma_{2}\partial_{x}^{4})^{-1}y\). □ References 1. Gompper, G., Kraus, M.: Ginzburg-Landau theory of ternary amphiphilic systems. I. Gaussian interface fluctuations. Phys. Rev. E 47, 4289-4300 (1993) 2. Gompper, G., Kraus, M.: Ginzburg-Landau theory of ternary amphiphilic systems. II. Monte Carlo simulations. Phys. Rev. E 47, 4301-4312 (1993) 3. Gompper, G., Goos, J.: Fluctuating interfaces in microemulsion and sponge phases. Phys. Rev. E 50, 1325-1335 (1994) 4. Gompper, G., Zschocke, S.: Ginzburg-Landau theory of oil-water-surfactant mixtures. Phys. Rev. A 46, 4836-4851 (1992) 5. Schimperna, G., Pawłow, I.: On a class of Cahn-Hilliard models with nonlinear diffusion. SIAM J. Math. Anal. 45, 31-63 (2013) 6. Pawłow, I., Zaja̧czkowski, W.: A sixth order Cahn-Hilliard type equation arising in oil-water-surfactant mixtures. Commun. Pure Appl. Anal. 10, 1823-1847 (2011) 7. Kunisch, K., Volkwein, S.: Control of the Burgers equation by a reduced-order approach using proper orthogonal decomposition. J. Optim. Theory Appl. 102, 345-371 (1999) 8. Armaou, A., Christofides, P.D.: Feedback control of the Kuramoto-Sivashinsky equation. Physica D 137, 49-61 (2000) 9. Shen, C., Tian, L., Gao, A.: Optimal control of the viscous Dullin-Gottwald-Holm equation. Nonlinear Anal., Real World Appl. 11, 480-491 (2010) 10.
Shen, C, Gao, A, Tian, L: Optimal control of the viscous generalized Camassa-Holm equation. Nonlinear Anal., Real World Appl. 11, 1835-1846 (2010) 11. Tian, L, Shen, C, Ding, D: Optimal control of the viscous Camassa-Holm equation. Nonlinear Anal., Real World Appl. 10, 519-530 (2009) 12. Zhao, X, Liu, C: Optimal control problem for viscous Cahn-Hilliard equation. Nonlinear Anal. TMA 74, 6348-6357 (2011) 13. Hinze, M, Volkwein, S: Analysis of instantaneous control for the Burgers equation. Nonlinear Anal. TMA 50, 1-26 (2002) 14. Wouk, A: A Course of Applied Functional Analysis. Wiley-Interscience, New York (1979) Acknowledgements The authors would like to express their sincere thanks to the referee’s valuable suggestions for the revision and improvement of the manuscript. This work is supported by the National Science Foundation of China (No. J1310022). Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions All authors contributed equally to the manuscript and read and approved the final manuscript.
This has been asked several times already, I am just too bored to look for duplicates. In general, for $x,y \in \Bbb C$, one defines $x^y$ as $\Bbb e ^{y \ln x}$. Of course, one has to tell what the natural logarithm means for complex numbers. The usual procedure is to eliminate a half-line from the complex plane, in order to be able to unambiguously define a "principal argument". Fine, but what half-line to choose? Most people choose to eliminate the half-line $\{ z \in \Bbb C \mid \Im z = 0, \Re z \le 0 \}$, which means that with this convention one may not speak about $\ln y$ for real $y \le 0$. In particular, $(-1)^{-x}$ would no longer make sense under this convention. Of course, other half-lines may be eliminated, in which case one might be able to define $(-1)^{-x}$ (which would be a complex number even for $x \in \Bbb R$!), but this approach is less common. Assuming that you choose to go this way, you could remove the half-line $\{ z \in \Bbb C \mid \Im z = 0, \Re z \ge 0 \}$ (this being just one possibility among many others), define the principal argument $\arg z$ to be the angle formed by the segment $0z$ with the $x$-axis (measured from the $x$-axis counter-clockwise), and then define $\ln z = \ln |z| + \Bbb i \arg z$. It becomes clear that, with this convention, $\ln (-1) = \ln 1 + \Bbb i \pi = \Bbb i \pi$, so that $$(-1)^{-x} = \Bbb e^{-x \ln (-1)} = \Bbb e^{\pi \Bbb i (-x)} = \cos (-\pi x) + \Bbb i \sin (-\pi x) = \cos (\pi x) - \Bbb i \sin (\pi x)$$ so $\Re (-1)^{-x} = \cos (\pi x), \quad \Im (-1)^{-x} = -\sin (\pi x)$, which explains the shape of the curves in your plot. (I have used Euler's formula $\Bbb e ^{\Bbb i x} = \cos x + \Bbb i \sin x$ and the facts that $\cos (-x) = \cos x$ and $\sin (-x) = -\sin x$.)
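A quick numerical cross-check (my own sketch; note Python's cmath puts the cut along the negative reals but assigns $\arg(-1)=\pi$, so its $\ln(-1)=\Bbb i\pi$ happens to agree with the branch chosen above):

```python
# Check that (-1)**(-x) = exp(-x*ln(-1)) has real part cos(pi x)
# and imaginary part -sin(pi x), with ln(-1) = i*pi.
import cmath, math

for x in [0.25, 0.5, 1.3]:
    z = cmath.exp(-x * cmath.log(-1 + 0j))     # cmath.log(-1+0j) == i*pi
    print(z.real - math.cos(math.pi * x),      # ~0
          z.imag + math.sin(math.pi * x))      # ~0
```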
The question is related to the question: detecting weak equivalences in a simplicial model category. Suppose that we have a simplicial model category $M$ and denote by $M^{f}$ the full simplicial subcategory of fibrant objects. Suppose that $R$ is a subcategory of $M^{f}$ such that for any object $m\in M^{f}$ there exists an object $r\in R$ such that $r$ is (zigzag) equivalent to $m$, i.e. $r$ and $m$ are isomorphic in $Ho(M)$, the homotopy category of $M$. Let $w: a\rightarrow b$ be a morphism in $M$ such that for any object $r\in R$ the induced map of simplicial sets $w^{\ast}:Map_{M}(b,r)\rightarrow Map_{M}(a,r)$ is a weak homotopy equivalence of simplicial sets. Can we conclude that $w$ is a weak equivalence in the model category $M$? If $R=M^{f}$ this is true and is proved in Hirschhorn's book. EDIT: The question is very general, and it should have a formal answer in case it is true. After trying all the suggestions (in the comments and a deleted answer) and reading Hirschhorn's book, I have the impression that the answer to my question may be no, and that there should be a counterexample. In Hirschhorn's book the fact that we test against all fibrant objects seems to be essential.
The object you're talking about is called, in mathematics, a Clifford algebra. The case when the algebra is over the complex field in general has a significantly different structure from the case when the algebra is over the real field, which is important in Physics. In Physics, in the specific case of 4 dimensions, using the Minkowski metric as you have in your Question, and over the complex field, the algebra is called the Dirac algebra. Once you have the name Clifford algebra, you can look them up in Google, where the first entry is, unsurprisingly, Wikipedia, http://en.wikipedia.org/wiki/Clifford_algebra, which gives you a reasonable flavor of the abstract construction methods that mathematicians prefer. The John Baez page that is linked to from the Wikipedia page is well worth reading (if you spent a year learning everything that John Baez has posted over the years, almost always with unusual clarity and engagingly, you would know most of the mathematics that might be useful for Physics). It's not so much that the Clifford algebras are funny. Their quadratic construction is interrelated, often closely, with many other constructions in mathematics. There are people who are enthusiastic about Clifford algebras, sometimes very or too much so, and a lot of ink has been spilled (Joel Rice's and Luboš Motl's Answers are rather inadequate to the literature, except that I think they chose to interpret your Question narrowly where I've addressed what your construction has led to in Mathematics more widely), but there are many other fish in the sea to admire. EDIT: Particularly in light of Marek's comments below, it should be said that I interpreted Isaac's Question generously. There is a somewhat glaring mistake in the OP that is pointed out by Luboš (which I hope you see, Isaac). Nonetheless there is a type of construction that is closely related to what I chose to take to be the idea of the OP, Clifford algebras. Isaac, this is how I think your derivation ought to go, if we just use quaternions, taking $q=t+ix+jy+kz$, $$q^2=(t+ix+jy+kz)(t+ix+jy+kz)=t^2-x^2-y^2-z^2+2t(ix+jy+kz).$$The $xy,yz,zx$ terms cancel nicely, but the $tx,ty,tz$ terms don't, unless we do as Luboš did and introduce the conjugate $\overline{q}=t-ix-jy-kz$. This, however, doesn't do what I take you to be trying to do. So, instead, we introduce a fourth object, $\gamma^0$, for which $(\gamma^0)^2=+1$, and which anti-commutes with $i$,$j$, and $k$. Then the square of $\gamma^0t+ix+jy+kz$ is $t^2-x^2-y^2-z^2$. The algebra this generates, however, is more than just the quaternions, it's the Clifford algebra $C(1,3)$. EDIT(2): Hi, Isaac. I've thought about this way too much overnight. I think now that I was mistaken, you didn't make a mistake. I think you intended your expression $(a,b,c,d)^2$ to mean the positive-definite inner product $a^2+b^2+c^2+d^2$. With this reading, however, we see three distinct structures, the positive-definite inner product, the quaternions, and the Minkowski space inner product that emerges from using the first two together. Part of what made me want to introduce a different construction is that in yours the use of the quaternions is redundant, because you'd get the same result that you found remarkable if you just used $(a,ib,ic,id)^2$ (as Luboš also mentioned). Even the positive-definite inner product is redundant, insofar as what we're really interested in is just the Minkowski space inner product. 
Also, of course, I know something that looks similar and that has been mathematically productive for over a century, and that can be constructed using just the idea of a non-commutative algebra and the Minkowski space inner product. To continue the above, we can write $\gamma^1=i$, $\gamma^2=j$, $\gamma^3=k$ for the quaternionic basis elements, together with the basis element $\gamma^0$, then we can define the algebra by the products of basis elements of the algebra, $\gamma^\mu\gamma^\nu+\gamma^\nu\gamma^\mu=2g^{\mu\nu}$. Alternatively, for any vector $u=(t,x,y,z)$ we can write $\gamma(u)=\gamma^0u_0+\gamma^1u_1+\gamma^2u_2+\gamma^3u_3$, then we can define the algebra by the product for arbitrary 4-vectors, $\gamma(u)\gamma(v)+\gamma(v)\gamma(u)=2(u,v)$, where $(u,v)$ is the Minkowski space inner product. Hence, we have $[\gamma(u)]^2=(u,u)$. Now everything is getting, to my eye, and hopefully to yours, rather neat and tidy, and nicely in line with the conventional formalism.
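For readers who want to see these relations hold concretely, here is a small numerical check. The explicit $4\times 4$ matrices below are the standard Dirac representation, which is an assumption on my part since the text above fixes only the algebra, not a matrix representation of it.

```python
# Verify gamma^mu gamma^nu + gamma^nu gamma^mu = 2 g^{mu nu} I numerically,
# using the Dirac representation built from the Pauli matrices.
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

g = [block(I2, 0*I2, 0*I2, -I2),      # gamma^0
     block(0*I2, sx, -sx, 0*I2),      # gamma^1
     block(0*I2, sy, -sy, 0*I2),      # gamma^2
     block(0*I2, sz, -sz, 0*I2)]      # gamma^3
eta = np.diag([1, -1, -1, -1])        # Minkowski metric, signature (+,-,-,-)

for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford relations hold")
```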
Astrid the astronaut is floating in a grid. Each time she pushes off she keeps gliding until she collides with a solid wall, marked by a thicker line. From such a wall she can propel herself either parallel or perpendicular to the wall, but always travelling directly \(\leftarrow, \rightarrow, \uparrow, \downarrow\). Floating out of the grid means death. In this grid, Astrid can reach square Y from square ✔. But if she starts from square ✘ there is no wall to stop her and she will float past Y and out of the grid. In this grid, from square X Astrid can float to three different squares with one push (each is marked with an *). Push \(\leftarrow\) is not possible from X due to the solid wall to the left. From X it takes three pushes to stop safely at square Y, namely \(\downarrow, \rightarrow, \uparrow\). The sequence \(\uparrow, \rightarrow\) would have Astrid float past Y and out of the grid. Question: In the following grid, what is the least number of pushes that Astrid can make to safely travel from X to Y?
Let's count the number of ways not to get $7$ heads in a row. We will put together atoms that consist of $0$ to $6$ heads followed by a tail. Any arrangement of heads and tails without $7$ heads in a row, appended with a tail, can be uniquely made up of a number of such atoms. All arrangements of such atoms appear once somewhere in the sum$$\sum_{k=0}^\infty(x+x^2+x^3+x^4+x^5+x^6+x^7)^k$$where

$x$ represents $T$
$x^2$ represents $HT$
$x^3$ represents $HHT$
$\vdots$
$x^7$ represents $HHHHHHT$

For example, if we are looking for $HTTHHTTH$, append a $T$ and we get the term for $k=5$ where in the first factor, the $x^2$ ($HT$) was chosen, in the second factor, the $x$ ($T$) was chosen, then $x^3$ ($HHT$), then $x$ ($T$), then $x^2$ ($HT$), to get $HTTHHTTHT$. Note that the exponent of $x$ matches the number of tosses. To count the number of sequences of $40$ flips that do not contain $7$ consecutive heads, we look at the coefficient of $x^{41}$ in$$\begin{align}\sum_{k=0}^\infty(x+x^2+x^3+x^4+x^5+x^6+x^7)^k&=\frac1{1-x\frac{x^7-1}{x-1}}\\&=\frac{1-x}{1-2x+x^8}\end{align}$$The coefficient of $x^{41}$ is $955427104501$. There is a degree $8$ recursion to compute this without dividing polynomials: $c_n=2c_{n-1}-c_{n-8}$, where $c_n$ starts$$1,1,2,4,8,16,32,64,\dots$$ The number of sequences of $40$ flips is $2^{40}$. Therefore, the probability of getting a sequence of $7$ heads in a row in $40$ flips is$$1-\frac{955427104501}{2^{40}}=0.131044110526$$
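The recursion runs in a few lines; a quick sketch (the helper name is mine). Because of the appended tail, $c_{n+1}$ counts the length-$n$ sequences with no run of $7$ heads.

```python
def count_no_run(n_flips=40, run=7):
    # c[k] = coefficient of x^k in (1 - x) / (1 - 2x + x^(run+1)),
    # computed by c_k = 2*c_{k-1} - c_{k-run-1} with seed c_0 = c_1 = 1.
    c = [1, 1]
    for k in range(2, n_flips + 2):
        prev = c[k - run - 1] if k - run - 1 >= 0 else 0
        c.append(2 * c[k - 1] - prev)
    return c[n_flips + 1]

no_run = count_no_run(40, 7)
print(no_run)                  # 955427104501, per the computation above
print(1 - no_run / 2**40)      # 0.131044...
```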
Let $\Phi: \mathbb{R}^n \to \mathbb{R}^n$ satisfy $\Phi(x)=u+Ax+Q(x)$, with $x=(x_1, x_2,\ldots, x_n) \in \mathbb{R}^n$. Here $u$ is a given positive vector, $A$ a nonnegative matrix, and $Q(x)$ a quadratic mapping with $Q(x)_i=x_i(k_{i1}x_1+k_{i2}x_2+\ldots+k_{in}x_n)$, where all the $k_{ij}$ are nonnegative and at least one $k_{ij}$, $1 \leq i, j \leq n$, is positive. Suppose $\Phi(\mathbf{1})=\mathbf{1}$, where $\mathbf{1}$ is the vector with every entry equal to 1. How can I prove that there cannot be two distinct vectors $v, w$, each different from $\mathbf{1}$, with every entry positive and no greater than 1, such that $\Phi(v)=v$ and $\Phi(w)=w$?
I'm afraid that a thorough answer requires the contents of chapters 2 and 3 of Leishman, but here is an outline. The relationship between power and true airspeed, from this answer: Power equals torque times angular velocity of the rotor: $P = Q \cdot \Omega$. Most helicopters have constant-speed rotors, so the relationship between torque and speed is similar to the power curves. Rotor blade length and pitch appear as follows. The power P and torque Q equations for momentum theory in the hover: $$P = C_P \cdot ½ \rho A (\Omega R)^3$$ $$Q = C_Q \cdot ½ \rho A (\Omega R)^2 R$$ At forward speed the momentum-based power becomes lower due to reduced induced power requirements. An inflow equation relates forward velocity of the helicopter with blade tip speed - too much scope for an answer here. Blade length appears in the rotor disk area A. Torque and power are functions of the rotor solidity $\sigma$, defined as (total blade area)/(disk area). Note that $C_P$ always has the same value as $C_Q$. From Leishman: ..if the amount of lift a helicopter generates changes dependent on pitch, how much does the pitch change.. Thrust T of the rotor Again from Leishman:
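As a rough worked example of the hover formulas (all the numbers below are illustrative assumptions of mine, not values from Leishman):

```python
# Hover power from P = C_P * 0.5 * rho * A * (Omega*R)**3,
# and torque from P = Q * Omega. Numbers are order-of-magnitude guesses.
import math

rho   = 1.225      # air density, kg/m^3 (sea level)
R     = 5.0        # rotor radius, m (assumed)
Omega = 40.0       # rotor angular speed, rad/s (assumed; tip speed 200 m/s)
C_P   = 0.004      # power coefficient (assumed, typical order of magnitude)
A     = math.pi * R**2

P = C_P * 0.5 * rho * A * (Omega * R)**3
Q = P / Omega
print(f"P = {P/1000:.0f} kW, Q = {Q/1000:.1f} kN*m")   # ~1539 kW, ~38.5 kN*m
```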
Power quality is related to the quality of the power being supplied or consumed. It is maintained when: EB can assure power quality from their side only when its consumers meet certain power quality standards (such as IEEE 519-1992 and/or IEEE 519-2014). Recently, several EBs across the globe have started imposing strict regulations on the quality of current a customer can draw from the supply lines, in order to maintain a healthy power distribution system. Ideally, the input supply voltage and load current should have the following characteristics The above requirements can be illustrated as shown in the graph: A balanced three-phase three-wire system is one having equal current magnitudes in all the phases, with a 120° phase angle difference between each of them. Unbalance is characterized by currents (the same applies to voltages) having unequal magnitudes in one or more phase/line currents; the currents may also have different phase angle differences among one or more phases. In the case of a three-phase four-wire system (with a neutral conductor), unbalance results in neutral current flow: the higher the unbalance, the higher the neutral current. The main reason for unbalanced current flow in a system is the connection of different single-phase loads on one or more phases. The figure below illustrates unbalanced and neutral current profiles. Power electronic devices (such as variable frequency drives (VFDs), phase-controlled rectifiers, choppers, and battery chargers) and other major electrical loads in industry draw currents that have three main components: Among these three parts, the active current is responsible for the actual work done in the factory/plant, while the remaining two parts merely circulate between the EB and the load and do not contribute any useful work. These two non-active currents, however, have a significant impact on the other connected loads and on the distribution system. Power factor can be defined as a quantitative measure of how effectively the active power is being utilized by the loads. Mathematically, it is expressed as: $$\text{Power Factor} = PF = \frac{\text{Active Power}}{\text{Apparent Power}} = \frac{P}{S}$$ where P is the active power and S is the apparent power, the product of the RMS voltage and the net RMS current. When the load is purely resistive, voltage and current are in phase (similar to Fig. 1). In such a case there is no reactive current; P and S are equal, and the power factor is 1 (the best possible scenario). When the voltage and current are sinusoidal, the power factor is defined as the cosine of the angle between voltage and current. Due to inductive loads (such as motors), the current drawn from the system lags the voltage by an angle φ. The active power and power factor in this case are given as: $$P = V_\mathrm{rms} I_\mathrm{rms} \cos(\varphi)$$ $$\text{Power Factor} = \frac{P}{S} = \frac{V_\mathrm{rms} I_\mathrm{rms} \cos(\varphi)}{V_\mathrm{rms} I_\mathrm{rms}} = \cos(\varphi)$$ Note that when there are excessive capacitor banks, the current may lead the voltage by an angle φ; this case also results in poor power factor.
When the voltage is purely sinusoidal and the current is non-sinusoidal (distorted), the power factor is defined by the expression: $$\text{Power Factor} = \frac{I_{1,\mathrm{rms}}}{I_\mathrm{rms}} \cos(\theta_1-\varphi_1) = \text{distortion factor} \times \text{displacement power factor}$$ And, in the worst-case scenario, when both voltage and current are non-sinusoidal (distorted), the power factor is called the true power factor and is defined as: $$\text{True Power Factor} = \frac{\sum P}{V_\mathrm{rms} I_\mathrm{rms}}$$ where the total P is the sum of the powers contributed by the individual harmonic currents. Harmonics in the current or voltage are components at multiples of the fundamental frequency of the system. They are generated mainly by the turn-on and turn-off (switching) operation of power-electronics-based systems, for example variable frequency drives (VFDs), phase-controlled rectifiers, choppers, battery chargers, televisions, printers, laptops, CFLs, and so on. Fig. 5 shows the profile of a distorted current that contains harmonics. Comparing Fig. 1 and Fig. 5, one can see from the waveshapes that any waveform (current or voltage) that is not purely sinusoidal generally contains some harmonics. Note in Fig. 5 that the peak current has a larger magnitude compared to the RMS current for the given waveform.
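A hedged sketch of how these definitions play out on sampled waveforms; the signal below is synthetic (a fundamental plus a 20% fifth harmonic, with the current lagging by 30°), purely for illustration:

```python
# Estimate true power factor, distortion factor, and displacement factor
# from sampled v(t), i(t) using the FFT to isolate the fundamental.
import numpy as np

f0, fs, T = 50.0, 10_000.0, 0.2                      # Hz, Hz, seconds
t = np.arange(0, T, 1/fs)
v = 325 * np.sin(2*np.pi*f0*t)                       # clean supply voltage
i = 10*np.sin(2*np.pi*f0*t - np.pi/6) + 2*np.sin(2*np.pi*5*f0*t)

P = np.mean(v * i)                                   # active power
S = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))  # apparent power

spec = np.fft.rfft(i) / len(i)
i1_rms = np.sqrt(2) * abs(spec[int(round(f0 * T))])  # fundamental RMS current
distortion = i1_rms / np.sqrt(np.mean(i**2))

print(f"true PF = {P/S:.3f}")                        # ~0.849
print(f"distortion = {distortion:.3f}, "
      f"displacement = {(P/S)/distortion:.3f}")      # ~0.981, ~cos(30)=0.866
```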
With the approaching hurricane, I am curious about what would happen if I go outside, in particular whether the wind gusts might be fast enough to blow me away. How fast would the wind have to be to blow away a person? Let's do math before we look for information. First, what is the force that keeps you anchored to the ground? This is the force of static friction, which is $F_s = \mu m g$. What is this force opposing? The force of drag from the wind pushing on you. For the velocities involved (a high Reynolds number regime), the drag is quadratic in velocity, $F_d = \frac{1}{2} \rho v^2 C_d A$, where $\rho$ is the density of atmosphere, $v$ is the velocity, $C_d$ is a dimensionless drag coefficient, and $A$ is your body's cross-sectional area. So let's set the forces equal and solve for the velocity: $$v^2 = \frac{2\mu m g}{\rho C_d A}$$ We'll be very ballpark about this. The density of air is $\rho \approx 1.2 \text{ kg/m}^3$. I'll say your mass is $50 \text{ kg}$. Per this paper, we'll say $C_d A \approx 0.84 \text{ m}^2$. Per this thread, we'll say $\mu = 0.4$. Putting all these numbers in gives us $v \approx 20 \text{ m/s}$, or about 45 mph. But, this is just enough to make your body move (compared to standing still on the ground). It would take at least a 70 mph wind to overcome the force of gravity, and even then, that's assuming the wind keeps pushing on you with your body turned to face it (or away from it), not sideways. Hard thing to guarantee given how the body is likely to tumble or spin. It's hard to be exact about this sort of thing, but let's just say this: going out in this kind of storm is a bad idea. The numbers aren't clear-cut enough to say you're safe, so better safe than sorry.
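For convenience, both thresholds in one small script (same ballpark inputs as above):

```python
# Wind speed needed (a) to overcome static friction and start pushing you
# along, and (b) for drag to equal your weight outright.
import math

mu, m, g = 0.4, 50.0, 9.81     # friction coefficient, mass (kg), gravity
rho, CdA = 1.2, 0.84           # air density (kg/m^3), drag area C_d*A (m^2)

v_slide = math.sqrt(2 * mu * m * g / (rho * CdA))   # friction threshold
v_lift  = math.sqrt(2 * m * g / (rho * CdA))        # drag equals weight
print(f"slide: {v_slide:.0f} m/s (~{v_slide*2.237:.0f} mph)")   # ~20, ~44
print(f"lift:  {v_lift:.0f} m/s (~{v_lift*2.237:.0f} mph)")     # ~31, ~70
```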
The compression of the air outside the cylinder by the $\ce{N2}$ escaping from the cylinder takes place reversibly (as far as the air is concerned), but the process of $\ce{N2}$ leaking out of the cylinder through a valve or small orifice is definitely irreversible because of the Joule–Thomson throttling that takes place as the $\ce{N2}$ flows from a high pressure inside the cylinder to a much lower pressure outside the cylinder. So to calculate the work done by the $\ce{N2}$, one cannot treat the expansion of the nitrogen as reversible. Instead, one must focus on the reversible work required to compress the air. Assuming that the $\ce{N2}$ does not diffuse into the air (i.e., the boundary between the air and the $\ce{N2}$ remains well-defined), the final volume occupied by the $\ce{N2}$ is $11 × 20/1.19 = 184.9$ liters. That means that the volume of $\ce{N2}$ increased by $184.9 - 20 = 164.9$ liters. This is the volume of $\ce{N2}$ outside the cylinder (in the jar) in the final state of the system. If $V_0$ represents the volume in the jar outside the cylinder (and also the initial volume of air), the final volume of air is $V_0 - 164.9$. So, the ratio of the initial volume of air to the final volume of air is equal to the pressure ratio of the air, and is given by: $$\frac{V_0}{V_0 - 164.9} = 1.19$$ Solving for $V_0$ then yields $V_0 = \pu{1033 L}$. The work done by the air on the $\ce{N2}$ is given by $$\begin{align}W_\mathrm{a} &= n_\mathrm{a}RT\ln{\left(\frac{1033 - 164.9}{1033}\right)} \\ &= P_0V_0\ln{\left(\frac{1033-164.9}{1033}\right)} \\ &= 1\cdot 1033\ln{\left(\frac{1033-164.9}{1033}\right)} \\ &= \pu{-180 L atm}\end{align}$$ This is equal in magnitude and opposite in sign to the work done by the $\ce{N2}$ on the air. So the work done by the $\ce{N2}$ on the air is 180 liter-atm.
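The arithmetic above, reproduced as a short script (units of atm and liters; isothermal ideal gas assumed throughout, as in the answer):

```python
# Reversible isothermal compression work done by the air as the N2 expands.
import math

P_n2_0, V_cyl = 11.0, 20.0   # initial N2 pressure (atm), cylinder volume (L)
P_f = 1.19                   # final common pressure of N2 and air (atm)
P_air_0 = 1.0                # initial air pressure in the jar (atm)

V_n2_f = P_n2_0 * V_cyl / P_f        # Boyle's law: 184.9 L of N2 in total
dV = V_n2_f - V_cyl                  # 164.9 L of N2 now outside the cylinder

# Air compressed isothermally from V0 to V0 - dV: V0/(V0 - dV) = P_f/P_air_0.
V0 = dV / (1.0 - P_air_0 / P_f)
W_air = P_air_0 * V0 * math.log((V0 - dV) / V0)   # work done by the air

print(f"V0 = {V0:.0f} L, W_air = {W_air:.0f} L*atm")   # ~1033 L, ~-180 L*atm
```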
Given matrix $A = \begin{bmatrix} -3 & 2 \\ 0 & -3 \end{bmatrix}$ The eigenvalues of $A$ are $\lambda_{1,2} = -3$ with algebraic multiplicity 2. Now to find the eigenvector, we use Gaussian elimination as follows $$ (A-\lambda I \mid 0) \Rightarrow \left( \begin{array}{cc|c} 0 & 2 & 0 \\ 0 & 0 & 0 \end{array} \right) \xrightarrow{R_1/2} \left(\begin{array}{cc|c} 0 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right) $$ Then we know $x_2=0$, and the eigenvector corresponding to $\lambda_{1,2}$ is $v_1=[ 1 \quad 0]^{T}$. My questions are: Is $x_1$ what we call a free variable? Since it has no specific value. I'm confused because there isn't a leading $1$ in the first row. What is the geometric multiplicity? Is it one? Can I diagonalize $A$? I have two eigenvalues and one eigenvector, but can I say $\lambda_{1} = -3$ with $v_1=[ 1 \quad 0]^{T}$ and $\lambda_{2} = -3$ with $v_2=[ 1 \quad 0]^{T}$? Or do I have to have two different eigenvectors for the repeated eigenvalue?
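A quick check with sympy (a sketch) answers the last two questions numerically: the geometric multiplicity is 1, so $A$ is not diagonalizable and only reduces to a Jordan block.

```python
# Geometric multiplicity and diagonalizability of A.
from sympy import Matrix

A = Matrix([[-3, 2], [0, -3]])
lam = -3
print((A - lam * Matrix.eye(2)).nullspace())  # one vector: geometric mult. 1
print(A.is_diagonalizable())                  # False
P, J = A.jordan_form()
print(J)                                      # a single 2x2 Jordan block
```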
Let $u: \Omega\subset \mathbb{R}^N \to \mathbb{R}^M$ be a $BV$ function. Is the Hausdorff dimension of the graph of $u$ equal to $N$? How can we prove it? Update. In an answer to this post, it has been shown that there exists a representative $\tilde u$ of $u$ such that its graph has Hausdorff dimension equal to $N$. In a subsequent post, If the Hausdorff dimension of the graph of a function $u$ is $N$ and $\tilde u = u$ a.e. then $\dim_H \mathrm{graph} \, \tilde u = N$ too, it has been shown that a function can be zero a.e. and still its graph may have dimension strictly greater than $1$. So this question is probably better formulated in terms of the essential graph of $u$, which is possibly equivalent to asking for the property to hold for one representative of $u$ (see Question 2 in Hausdorff dimension of the graph of a BV function (in 1 dimensional setting)). In the post Hausdorff dimension of the graph of a BV function (in 1 dimensional setting), I've asked about a simpler proof of the result in the one-dimensional setting.
Research Open Access Published: Stability and Hopf bifurcation of a predator-prey model Boundary Value Problems volume 2019, Article number: 129 (2019) Article metrics 366 Accesses Abstract In this paper, we study a class of predator-prey models with Holling type II functional response. First, using linearization, we prove the stability of the nonnegative equilibrium points. Second, we obtain the existence, direction, and stability of the Hopf bifurcation using the Poincaré–Andronov Hopf bifurcation theorem. Finally, we demonstrate the validity of our results by numerical simulation. Introduction Population ecology is a discipline that studies the dynamics of species and populations and how these groups interact with the environment. It primarily studies how population sizes change over time and space. Since Lotka and Volterra's groundbreaking work in the 1920s, the predator-prey model has been one of the most important research topics in mathematical ecology for nearly a century. At the same time, mathematicians have used dynamical systems theory to analyze the differential equations underlying predator-prey models. Hsu and Huang in [4] obtained results on the global stability of a predator-prey system. Xiao and Ruan investigated the global analysis of a predator-prey system with nonmonotonic functional response (see [10]). In addition, some scholars have applied bifurcation theory to such models. In [7], Li and Li considered the Hopf bifurcation of a predator-prey model with time delay and stage structure for the prey. Song studied the stability and Hopf bifurcation of a predator-prey model with stage structure and time delay for the prey (see [8]). There are many other related studies; see [5, 11], etc. In this paper, we consider the Gause-type model proposed by Caughley and Lawton in [1]. Namely, we are concerned with the predator-prey model with Holling type II functional response under positive initial conditions \(N(0)> 0\), \(P(0)> 0\). The average growth rate of a typical prey species is assumed to follow a logistic model where N is the prey population density, P is the predator population density, K is the environmental carrying capacity, a is the prey capture rate, h is the handling time, m is the predator's intrinsic mortality, and c denotes the conversion efficiency of ingested prey into predators. When the predator density is low and the prey density increases, the individual predation rate approaches its maximum. For more details on the background of system (1.1), see [6]. Making system (1.1) dimensionless yields with Preliminary analysis In this section, we carry out a preliminary analysis of system (1.2), namely the boundedness of the solutions and the stability of each nonnegative equilibrium point of system (1.2). Note that the Jacobian of (1.2) is We cannot find a diagonal matrix L such that \(LJ+J^{T}L=0\), so system (1.2) is not conservative. Due to the boundedness of the functional response, we can find that Assume then these functions In fact, direct calculations indicate that system (1.2) satisfies the Lipschitz condition. Boundedness of solutions Theorem 2.1 All the solutions of system (1.2) are uniformly bounded on \(\mathbb{R}_{+}^{2}\).
Proof We define a function then For each \(\eta < \sigma \), Upon that we can find \(\varphi > 0\) such that From the above equation, we have \(\frac{d\tau }{dt}+\eta \tau \leq \varphi \), which implies that Moreover, we have □ Stability analysis In this section, we analyze the stability of the nonnegative equilibrium points of system (1.2). It is easy to obtain the nonnegative equilibrium points of system (1.2): \(E_{0} (0,0)\), \(E_{1}(1,0)\), and \(E_{*}(x^{*},y ^{*})\) with \(x^{*}=\frac{\sigma \alpha }{r-\sigma }\), \(y^{*}=(1-x^{*})(x ^{*}+\alpha )\); the condition \(r>\sigma (\alpha +1)\) ensures that system (1.2) has a unique positive equilibrium point \(E_{*}(x^{*},y^{*})\). Stability analysis of the equilibrium \(E_{0}(0,0)\) Theorem 2.2 The equilibrium \(E_{0}(0,0)\) is unstable. Proof The Jacobian matrix of (1.2) at \(E_{0}(0,0)\) is Then the characteristic equation of \(J_{0}\) is with Clearly, \(E_{0}(0,0)\) is a saddle point, which is unstable. □ Stability analysis of the equilibrium \(E_{1}(1,0)\) Theorem 2.3 (1) The equilibrium \(E_{1}(1,0)\) is locally asymptotically stable if \(r< (1+\alpha )\sigma \). (2) System (1.2) undergoes a transcritical bifurcation at \(r=(1+\alpha )\sigma \). (3) The equilibrium \(E_{1}(1,0)\) is globally asymptotically stable if \(r<\sigma -1\). Proof (1) The Jacobian of (1.2) at \(E_{1}(1,0)\) is Then the characteristic equation of \(J_{1}\) is with If \(r< (1+\alpha )\sigma \), we have \(P< 0\) and \(Q> 0\). Therefore, the equilibrium point \(E_{1}(1,0)\) is locally asymptotically stable. (2) One of the eigenvalues of \(J_{1}\) will be 0 if \(\operatorname{det} J_{1}=0\), which gives \(r=(1+\alpha )\sigma \). Let Ω and Φ denote the eigenvectors corresponding to the eigenvalue 0 of the matrices \(J_{1}\) and \(J_{1}^{T}\), respectively: where \(\varPsi _{1}=-\frac{1}{1+\alpha }\varPsi _{2}\), and \(\varPsi _{2}\), \(\varpi _{2}\) are two nonzero numbers. Now Again and where and with (3) Let \((x,y)\in \mathbb{R}_{+}^{2}:= \{ (x,y)\in \mathbb{R} ^{2} :x> 0,y> 0 \}\) and consider the function \(V:\mathbb{R}_{+} ^{2}\rightarrow \mathbb{R}\); if \(\sigma >1+r\), then \(\frac{dV}{dt}<0\), and \(E_{1}(1,0)\) is globally asymptotically stable. □ Stability analysis of the positive equilibrium \(E _{*}(x^{*},y^{*})\) The Jacobian matrix of (1.2) at \(E_{*}(x^{*},y^{*})\) is Then the characteristic equation of \(J_{*}\) is with If \(r>\sigma (\alpha +1)\), we have \(Q>0\); by simple calculations, we get the following theorem. Theorem 2.4 Let \(\alpha <1\). If \(r> \frac{\sigma (1+\alpha )}{1-\alpha }\), then Eq. (2.2) has a pair of roots with negative real parts, that is, the positive equilibrium point \(E_{*}(x^{*},y^{*})\) is locally asymptotically stable. If \(\sigma (\alpha +1)< r<\frac{\sigma (1+ \alpha )}{1-\alpha }\), then \(E_{*}(x^{*},y^{*})\) is unstable. The analysis of the Hopf bifurcation In this section, we consider the Hopf bifurcation of system (1.2) at \((x^{*},y^{*})\), taking r as the bifurcation parameter. Define \(r_{0}=\frac{\sigma (1+\alpha )}{1-\alpha }\). Let \(\mu =\delta (r) \pm \omega (r)i\) be the two roots of Eq. (2.2); by calculating, we get According to Haque [2], we know that if \(\operatorname{tr}J_{*}=0\), then both eigenvalues of Eq. (2.2) will be purely imaginary provided \(\operatorname{det}J_{*}>0\). Therefore, the implicit function theorem implies that a Hopf bifurcation occurs, where a periodic orbit is created as the stability of the equilibrium point \(E_{*}\) changes.
Now, setting \(\operatorname{tr}J_{*}=0\) yields \(r=r_{0}\); at this point \(\operatorname{det}J_{*}>0\), as required for a Hopf bifurcation. In order to obtain more details of the Hopf bifurcation at \((x^{*},y^{*})\), we need a further analysis of system (1.2). Let \(\widetilde{x}=x-x^{*}\), \(\widetilde{y}=y-y^{*}\); this translates the equilibrium \((x^{*},y^{*})\) of system (1.2) to the origin of a new system. For simplicity, we denote x̃, ỹ by x, y, respectively. Thus, system (1.2) is transformed to system (3.1). Rewriting system (3.1) in matrix form and applying a further linear change of variables brings system (3.2) into normal form in the variables (X, Y); Taylor expanding the resulting equations at \(r=r_{0}\) isolates the quadratic and cubic terms. In order to investigate the stability of the periodic solution, we need to determine the sign of the coefficient \(a(r_{0})\), which is obtained by evaluating the required partial derivatives at \((X,Y,r)=(0,0,r _{0})\) with \(\omega _{0}=\omega (r_{0})\); the explicit calculation of \(a(r_{0})\) can be found in [3]. According to Poincaré–Andronov–Hopf bifurcation theory and the above calculation of \(a(r_{0})\), we get the following result.

Theorem 3.1 Suppose that \(r>\sigma (\alpha +1)\) and \(\alpha <1\) hold. If \(a(r_{0})<0\), the periodic solution of the Hopf bifurcation from \((x^{*},y^{*})\) is asymptotically stable, and the Hopf bifurcation is subcritical. If \(a(r_{0})>0\), the periodic solution of the bifurcation is unstable, and the Hopf bifurcation is supercritical.

Numerical simulations

In this section, we perform numerical simulations of system (1.2). Figure 1 shows that \(E_{0}(0,0)\) is an unstable saddle point and that \(E_{1}(1,0)\) is also a saddle point when we set \(r=0.4\), \(\alpha =0.2\), \(\sigma =0.3\). We can also observe that \(E_{*}(x^{*},y^{*})\) is locally asymptotically stable when \(\alpha <1\) and \(r> \frac{\sigma (1+ \alpha )}{1-\alpha }\). The equilibrium point \(E_{1}(1,0)\) is globally asymptotically stable; in order to ensure \(\sigma > \frac{r}{1+\alpha }\), we set \(r=0.4\), \(\alpha =0.2\), \(\sigma =0.35\), as shown in Fig. 2. Let \(\alpha =0.2\), \(\sigma =0.3\); then \(r_{0}=0.45\). When \(r=0.45\), system (1.2) undergoes a Hopf bifurcation at \((x^{*},y^{*})\). Further calculation gives \(a(r_{0})\approx -1.833<0\), so the Hopf bifurcation is subcritical and the periodic solution of the Hopf bifurcation at \((x^{*},y^{*})\) is asymptotically stable; see Fig. 3.
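As a numerical illustration of the simulations described above, here is a sketch under the same assumed Rosenzweig–MacArthur form (again an assumption, not the paper's stated system), using the text's parameters α = 0.2, σ = 0.3 and r = r0 = 0.45:

```python
# Numerical sketch of system (1.2) near the Hopf threshold, under the assumed
# Rosenzweig-MacArthur form.  With alpha = 0.2, sigma = 0.3 we get r0 = 0.45,
# the value used in the text.
import numpy as np
from scipy.integrate import solve_ivp

alpha, sigma = 0.2, 0.3
r = 0.45                                   # r = r0 for these parameters

def rhs(t, z):
    x, y = z
    return [x*(1 - x) - x*y/(x + alpha),
            y*(r*x/(x + alpha) - sigma)]

xstar = sigma*alpha/(r - sigma)            # interior equilibrium, = 0.4
ystar = (1 - xstar)*(xstar + alpha)        # = 0.36

sol = solve_ivp(rhs, (0.0, 2000.0), [xstar + 0.05, ystar],
                rtol=1e-9, atol=1e-12)
print("E* =", (xstar, ystar))
print("late-time state:", sol.y[:, -1])    # remains on a bounded orbit near E*
```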
References

1. Caughley, G., Lawton, J.H.: Plant-herbivore systems. In: Theoretical Ecology, pp. 132–166. Sinauer Associates, Sunderland (1989)
2. Haque, M.: Ratio-dependent predator-prey models of interacting populations. Bull. Math. Biol. 71, 430–452 (2009)
3. Hassard, B.D., Kazarinoff, N.D., Wan, Y.H.: Theory and Applications of Hopf Bifurcation. Cambridge University Press, Cambridge (1981)
4. Hsu, S.B., Huang, T.W.: Global stability for a class of predator-prey systems. SIAM J. Appl. Math. 55(3), 763–783 (1995)
5. Kaper, T.J., Vo, T.: Delayed loss of stability due to the slow passage through Hopf bifurcations in reaction-diffusion equations. Chaos 28(9), 091103 (2018)
6. Kot, M.: Elements of Mathematical Ecology. Cambridge University Press, Cambridge (2001)
7. Li, F., Li, H.: Hopf bifurcation of a predator-prey model with time delay and stage structure for the prey. Math. Comput. Model. 55(3–4), 672–679 (2012)
8. Song, Y., Xiao, W., Qi, X.: Stability and Hopf bifurcation of a predator-prey model with stage structure and time delay for the prey. Nonlinear Dyn. 83(3), 1409–1418 (2016)
9. Sotomayor, J.: Generic bifurcations of dynamical systems. In: Dynamical Systems, pp. 561–582. Academic Press, New York (1973)
10. Xiao, D., Ruan, S.: Global analysis in a predator-prey system with nonmonotonic functional response. SIAM J. Appl. Math. 61(4), 1445–1472 (2001)
11. Xiao, Y., Chen, L.: A ratio-dependent predator-prey model with disease in the prey. Appl. Math. Comput. 131(2–3), 397–414 (2002)

Acknowledgements

The work was supported by the Fundamental Research Funds for the Central Universities (31920190057), the Key Subjects of Mathematics in Gansu Province, the First-class Discipline Program of Northwest Minzu University, and the National Natural Science Foundation of China (71861030).
While I know this question has already been answered, I felt obligated to get around to writing a formal answer. I will not go into detail about irreversible/dissipative systems, as Arnold Neumaier's answer already addressed that issue. Rather, my answer will focus on the mathematics behind ergodicity and mixing. Note: most of what follows draws on Penrose [1979]. Background First let us define $\boldsymbol{\Gamma}$ to be the whole of phase space, described by the position and momentum coordinates, $\mathbf{q}$ and $\mathbf{p}$, respectively. The phase space probability density $\rho\left( \mathbf{q}, \mathbf{p} \right)$ is then defined as that which satisfies:$$\int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ \rho\left( \mathbf{q}, \mathbf{p} \right) = 1 \tag{1}$$where $n$ is the number of degrees of freedom. Now if we use a generic variable, $G\left( \mathbf{q}, \mathbf{p} \right)$, to describe any dynamical variable (e.g., energy), then the ensemble average of $G$ is given by:$$\langle G \rangle = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \rho\left( \mathbf{q}, \mathbf{p} \right) \tag{2}$$ Three Principles However, there is an issue to be aware of at this point [i.e., page 1940 in Penrose, 1979]: The fundamental problem of statistical mechanics is what ensemble – that is, what phase-space probability density $\rho$ – corresponds to a given physical situation... It is, however, possible to state three principles that the phase-space density should satisfy; and it turns out, rather surprisingly, that these principles when combined with a study of the dynamics of our mechanical models give enough information to answer the fundamental problem satisfactorily in some important cases. 1st Principle The first of the three principles is just Liouville's theorem in the limit where $d \rho/dt = 0$. It's another way of saying that the Hamiltonian of the system does not explicitly depend upon time, which is how we define the system to be isolated. 2nd Principle The second principle is stated as [i.e., page 1941 in Penrose, 1979]: The second of the three principles is more general, since it does not require the system to be isolated... The principle, which I shall call the principle of causality, is simply that the phase-space density at any time is completely determined by what happened to the system before that time, and is unaffected by what will happen to the system in the future. 3rd Principle Finally, the third principle is stated as [i.e., page 1941 in Penrose, 1979]: The last of the three principles is that the probabilities in the ensemble really can be described by a phase-space density $\rho$ with $\rho$ a well-behaved (say, piecewise continuous) function, rather than some more general measure. The last principle, it should be noted, is very important but often overlooked. It is important because if we require it, we cannot include systems like a gas of hard spheres in a cubical box where all spheres bounce between the same two faces for eternity (i.e., the spheres move only along one dimension). That is to say, a time-average of this imaginary system will not be the same as an ensemble average (see explanation below). Let us define any system like this as an exceptional system, for brevity.
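As a concrete, purely illustrative instance of the ensemble average in Equation 2, the sketch below estimates $\langle G \rangle$ by Monte Carlo for a one-dimensional harmonic oscillator with a canonical phase-space density. The oscillator, the canonical choice of $\rho$, and all parameter values are my assumptions, not part of the discussion above.

```python
# Monte Carlo estimate of the ensemble average <G> in Eq. (2) for a 1D
# harmonic oscillator, H = p^2/(2m) + m w^2 q^2 / 2, with a canonical density
# rho ~ exp(-H/kT).  (Illustrative assumptions, not from the answer above.)
import numpy as np

rng = np.random.default_rng(0)
m, w, kT = 1.0, 2.0, 0.5
n = 200_000

# For this rho, q and p are independent Gaussians, so we can sample directly.
q = rng.normal(0.0, np.sqrt(kT/(m*w**2)), n)
p = rng.normal(0.0, np.sqrt(m*kT), n)

G = q**2                          # the dynamical variable G(q, p) = q^2
print(G.mean())                   # Monte Carlo estimate of <G>
print(kT/(m*w**2))                # exact value, kT / (m w^2) = 0.125
```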
As an aside, the problems with time-averages in classical electricity and magnetism are well known, and it is now known that spatial ensemble averages are the correct operations for converting between the micro- and macroscopic forms of Maxwell's equations [e.g., see pages 248-258 in Jackson, 1999 for a detailed discussion]. Ergodicity and Mixing Ergodicity If $G$ is a dynamical variable, then we can define the ensemble average of $G$ at time $t$ as:$$\langle G \rangle_{t} = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \rho_{t}\left( \mathbf{q}, \mathbf{p} \right) \tag{3}$$where we can obtain $\rho_{t}$ using the assumption that Liouville's theorem holds (i.e., $d \rho/dt = 0$). Note that when $\lim_{t \rightarrow \infty} \langle G \rangle_{t}$ exists, we can define it as an equilibrium value of $G$. However, this limit does not necessarily exist, as in the case of any oscillating system without damping (e.g., a simple harmonic oscillator): $\langle G \rangle_{t}$ will not approach a single value, it will oscillate indefinitely. The time-average, however, always exists, and one can avoid a nonexistent value by redefining the equilibrium value of $G$ as:$$\langle G \rangle_{eq} \equiv \lim_{T \rightarrow \infty} \ \frac{1}{T} \int_{0}^{T} \ dt \ \langle G \rangle_{t} \tag{4}$$which is equal to $\lim_{t \rightarrow \infty} \langle G \rangle_{t}$ whenever that limit exists. If we define the time-average of $\rho$ as $\bar{\rho}$, we can write this as:$$\bar{\rho}\left( \mathbf{q}, \mathbf{p} \right) = \lim_{T \rightarrow \infty} \ \frac{1}{T} \int_{0}^{T} \ dt \ \rho_{t}\left( \mathbf{q}, \mathbf{p} \right) \tag{5}$$which allows us to redefine $\langle G \rangle_{eq}$ as:$$\langle G \rangle_{eq} = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \bar{\rho}\left( \mathbf{q}, \mathbf{p} \right) \tag{6}$$ It is important to note some properties of ergodic theory here [e.g., page 1949 in Penrose, 1979]: It follows from the ergodic theorem of Birkhoff (1931) that $\bar{\rho}$ is well-defined at almost all phase points... consequently the integral in (1.16) is well-defined... Birkhoff's theorem also shows that $\bar{\rho}$ is constant on the trajectories in phase space... where the integral (1.16) in the quote refers to the version of $\langle G \rangle_{eq}$ in Equation 6. The last statement, namely that $\bar{\rho}$ is an invariant, is crucial here. Were it not an invariant, it "...would require us to solve the equations of motion for $10^{23}$-odd particles..." [e.g., page 1945 of Penrose, 1979]. Important Side Note: Recall again that the identification of $\langle G \rangle_{eq}$ with $\lim_{t \rightarrow \infty} \langle G \rangle_{t}$ does not always hold, as in the trivial case of an undamped simple harmonic oscillator, for which $\langle G \rangle_{t}$ oscillates forever. Suppose $\bar{\rho}$ can be written as $\bar{\rho}\left( \mathbf{q}, \mathbf{p} \right) = \phi\left( H\left( \mathbf{q}, \mathbf{p} \right) \right)$, where $\phi$ is a function of a single variable and $H$ is the Hamiltonian; that is, $\bar{\rho}$ depends on the phase point only through the energy. If this holds for all $\bar{\rho}$ arising in a system, then the system is said to be ergodic. Another way of stating this is that if the system were ergodic, the trajectories would cover all parts of an energy manifold if given enough time.
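To make the time-average statement tangible: for an undamped harmonic oscillator, the time average of a dynamical variable along a single trajectory equals its microcanonical average on the corresponding energy shell, consistent with the oscillator being ergodic on each shell (though, as noted below, not mixing). A minimal sketch, with illustrative parameters of my choosing:

```python
# Time average of G = q^2 along one exact trajectory of an undamped harmonic
# oscillator versus the microcanonical average on the energy shell.
import numpy as np

w, A = 2.0, 1.3                    # frequency and amplitude (fixes the shell)
t = np.linspace(0.0, 500.0, 1_000_001)
q = A*np.cos(w*t)                  # exact solution q(t)

time_avg = (q**2).mean()           # long-time average of q^2
shell_avg = A**2/2                 # microcanonical average of q^2 on the shell
print(time_avg, shell_avg)         # both are ~ A^2/2 = 0.845
```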
Mixing Let us define the microcanonical average over energy of $G$ as:$$\langle G \rangle_{E} = \frac{ \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \delta\left( H\left( \mathbf{q}, \mathbf{p} \right) - E \right) }{ \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ \delta\left( H\left( \mathbf{q}, \mathbf{p} \right) - E \right) } \tag{7}$$where $\delta()$ is the Dirac delta function, $H\left( \mathbf{q}, \mathbf{p} \right)$ is the Hamiltonian, and $E$ labels the energy manifolds (i.e., systems that have energy $E$). Thus, we can redefine $\langle G \rangle_{eq}$ as:$$\begin{align} \langle G \rangle_{eq} & = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \bar{\rho}\left( \mathbf{q}, \mathbf{p} \right) \tag{8a} \\ & = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \phi\left( H \right) \tag{8b} \\ & = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \left[ \int_{-\infty}^{\infty} \ dE \ \phi\left( E \right) \ \delta\left( E - H\left( \mathbf{q}, \mathbf{p} \right) \right) \right] \tag{8c} \\ & = \int_{-\infty}^{\infty} \ dE \ P\left( E \right) \ \langle G \rangle_{E} \tag{8d}\end{align}$$where $P\left( E \right)$ is given by:$$P\left( E \right) = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ \bar{\rho}\left( \mathbf{q}, \mathbf{p} \right) \ \delta\left( E - H\left( \mathbf{q}, \mathbf{p} \right) \right) \tag{9}$$Note that $P\left( E \right)$ is just the probability density of $H$ in the time-averaged ensemble. Now to define mixing we consider whether the following holds:$$\lim_{t \rightarrow \infty} \ \langle \rho_{t}\left( \mathbf{q}, \mathbf{p} \right) \ G\left( \mathbf{q}, \mathbf{p} \right) \rangle_{E} = \langle \rho_{0}\left( \mathbf{q}, \mathbf{p} \right) \rangle_{E} \ \langle G\left( \mathbf{q}, \mathbf{p} \right) \rangle_{E} \tag{10}$$where $\rho_{0}$ is the initial value of $\rho_{t}$. If the system, for every $E$ and all functions $\rho_{0}$ and $G$, satisfies the above relationship, the system is said to be mixing [i.e., pages 1948-1949 in Penrose, 1979]: Mixing can easily be shown to imply ergodicity (e.g. Arnold and Avez (1968, p20); the equivalence of our definition of mixing and theirs follows from their theorem 9.8), but is not implied by it; for example, as mentioned earlier, the harmonic oscillator is ergodic but not mixing... The precise definition of mixing is... 'whether an ensemble of isolated systems has any tendency in the course of time toward a state of statistical equilibrium'... Note that mixing is not sufficient to imply a system will approach equilibrium [i.e., page 1949 in Penrose, 1979]: Mixing tells us that the average $\langle G \rangle_{t}$ of a dynamical variable $G$, taken over the appropriate ensemble, approaches an equilibrium value $\langle G \rangle_{eq}$; it does not tell us anything about the time variation of $G$ in any of the individual systems comprised in that ensemble. To make useful predictions about the behaviour of G in any individual system we must show that the individual values of G are likely to be close to $\langle G \rangle$, i.e. that the fluctuations of $G$ are small, and to do this we have to use the large size of the system as well as its mixing property... References
Evans, D.J.: "On the entropy of nonequilibrium states," J. Stat. Phys. 57, pp. 745–758, doi:10.1007/BF01022830, 1989.
Evans, D.J., and Morriss, G.: Statistical Mechanics of Nonequilibrium Liquids, 1st edition, Academic Press, London, 1990.
Evans, D.J., Cohen, E.G.D., and Morriss, G.P.: "Viscosity of a simple fluid from its maximal Lyapunov exponents," Phys. Rev. A 42, pp. 5990–5997, doi:10.1103/PhysRevA.42.5990, 1990.
Evans, D.J., and Searles, D.J.: "Equilibrium microstates which generate second law violating steady states," Phys. Rev. E 50, pp. 1645–1648, doi:10.1103/PhysRevE.50.1645, 1994.
Gressman, P.T., and Strain, R.M.: "Global classical solutions of the Boltzmann equation with long-range interactions," Proc. Natl. Acad. Sci. USA 107, pp. 5744–5749, doi:10.1073/pnas.1001185107, 2010.
Hoover, W. (Ed.): Molecular Dynamics, Lecture Notes in Physics, Vol. 258, Springer, Berlin, 1986.
Jackson, J.D.: Classical Electrodynamics, Third Edition, John Wiley & Sons, New York, NY, 1999.
Penrose, O.: "Foundations of statistical mechanics," Rep. Prog. Phys. 42, pp. 1937–2006, 1979.
Let $\mathbb F_q$ be a finite field, $C$ a curve over $\mathbb F_q$ of genus $g\geq 2$, $\rho: \pi_1(C) \to GL_2(\overline{\mathbb Q}_\ell)$ an irreducible local system. The geometric Langlands correspondence constructs a geometrically irreducible Hecke eigensheaf on $Bun_2(C)$ associated to $\rho$, which is a perverse sheaf. What is this sheaf's generic rank (as a function of $g$, presumably)? What is its singular locus? It seems to me that the singular locus should be more-or-less independent of $\rho$, because in the complex-analytic picture it is supposed to depend smoothly on $\rho$; but the coordinates of $\rho$ are $\ell$-adic and the coordinates of the singular locus are in $\overline{\mathbb F}_q$, so there is no way for the second to depend on the first other than being locally constant. If the singular locus is locally constant, then because the complex-analytic space of local systems is connected, it should be globally constant as well. By similar logic, it seems like the generic rank should be independent of the local system. I could come up with some plausible guess for what the singular locus should be (maybe the unstable locus?), but I have no idea what the generic rank should be (some polynomial in $g$?).
Part 1 of (9)

##\phi: R \to S## is a ring epimorphism. Define ##I:= \phi^{-1}(J)##. It is well known that the inverse image of an ideal under a ring homomorphism is an ideal, thus ##I## is an ideal.

Define ##\psi: R/I \to S/J: [r] \mapsto [\phi(r)]##.

This is well defined: if ##[r] = [r']##, then ##r - r' \in I##, so ##\phi(r) - \phi(r') = \phi(r - r') \in J##, i.e. ##[\phi(r)] = [\phi(r')]##. Clearly, ##\psi## is also a ring morphism.

For injectivity, assume ##[\phi(r)] = 0##. Then ##\phi(r) \in J##, so ##r \in \phi^{-1}(J) = I##, thus ##[r] = 0##. The kernel is trivial and the map is injective. Surjectivity follows immediately from the surjectivity of ##\phi##.

It follows that ##\psi## is an isomorphism, and thus ##R/I \cong S/J##.
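A concrete sanity check of the argument, with R = ℤ, S = ℤ/6ℤ, φ the natural projection, and J = {0, 3} (the ideal 3ℤ/6ℤ of S); then I = φ⁻¹(J) = 3ℤ and ψ should be an isomorphism ℤ/3ℤ → (ℤ/6ℤ)/J. The specific rings are my choice for illustration, not from the thread.

```python
# Check psi : Z/3Z -> (Z/6Z)/J, [r] |-> [phi(r)], for J = {0, 3} in Z/6Z.
S = range(6)
J = {0, 3}

def phi(r):            # the projection Z -> Z/6Z
    return r % 6

def coset(s):          # the coset s + J in S/J, as a frozenset
    return frozenset((s + j) % 6 for j in J)

# psi on representatives 0, 1, 2 of Z/3Z
psi = {r: coset(phi(r)) for r in range(3)}

# well defined: representatives differing by an element of I = 3Z agree
assert all(psi[r] == coset(phi(r + 3*k)) for r in range(3) for k in range(4))
# bijective onto S/J
assert len(set(psi.values())) == 3 == len({coset(s) for s in S})
# additive and multiplicative homomorphism on representatives
assert all(psi[(a + b) % 3] == coset(phi(a) + phi(b)) and
           psi[(a * b) % 3] == coset(phi(a) * phi(b))
           for a in range(3) for b in range(3))
print("Z/3Z = (Z/6Z)/J, as expected")
```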
Model

A model of a theory $T$ is a set $M$ together with relations (e.g., two relations $a$ and $b$) satisfying all axioms of the theory $T$. Symbolically, $\langle M, a, b \rangle \models T$.

By the Gödel completeness theorem (provable in $\mathrm{PA}$, and so also in $\mathrm{ZFC}$), a theory has a model iff it is consistent. By the Löwenheim–Skolem theorem (in $\mathrm{ZFC}$), if a countable first-order theory has an infinite model, then it has infinite models of all cardinalities.

A model of a set theory (e.g. $\mathrm{ZFC}$) is a set $M$ such that the structure $\langle M,\hat\in \rangle$ satisfies all axioms of the set theory. If $M$ is transitive and $\hat\in$ is the restriction of the base theory's $\in$ to $M$, the model is called a transitive model. The Gödel completeness theorem and the Löwenheim–Skolem theorem do not apply to transitive models. (But the Löwenheim–Skolem theorem together with the Mostowski collapsing lemma shows that if there is a transitive model of ZFC, then there is a countable such model.) See Transitive ZFC model.

Contents
Class-sized transitive models
Mantle
$\kappa$-model
References

Class-sized transitive models

One can also talk about class-sized transitive models. An inner model is a transitive class containing all ordinals. Forcing creates outer models, but it can also be used in relation with inner models.[1] Among the canonical inner models are:
the core model
the canonical model $L[\mu]$ of one measurable cardinal
HOD and generic HOD (gHOD)
the mantle $\mathbb{M}$ (= the generic mantle $g\mathbb{M}$)
the outer core
the constructible universe $L$

Mantle

The $α$th inner mantle $\mathbb{M}^α$ is defined by $\mathbb{M}^0=V$, $\mathbb{M}^{α+1} = \mathbb{M}^{\mathbb{M}^α}$ (the mantle of the previous inner mantle) and $\mathbb{M}^α = \bigcap_{β<α} \mathbb{M}^β$ for limit $α$. If there is a uniform presentation of $\mathbb{M}^α$ for all ordinals $α$ as a single class, one can talk about $\mathbb{M}^\mathrm{Ord}$, $\mathbb{M}^{\mathrm{Ord}+1}$, etc. If an inner mantle is a ground, it is called the outer core.[1]

It is conjectured (but unproved) that every model of ZFC is the $\mathbb{M}^α$ of another model of ZFC for any desired $α ≤ \mathrm{Ord}$, in which the sequence of inner mantles does not stabilise before $α$. It is likely that there are models of ZFC for which the inner mantle is undefined at some stage. (Analogously, a 1974 result of Harrington appearing in (Zadrożny, 1983, section 7), with related work in (McAloon, 1974), shows that it is relatively consistent with Gödel–Bernays set theory that $\mathrm{HOD}^n$ exists for each $n < ω$ but the intersection $\mathrm{HOD}^ω = \bigcap_n \mathrm{HOD}^n$ is not a class.)[1]

For a cardinal $κ$, we call a ground $W$ of $V$ a $κ$-ground if there is a poset $\mathbb{P} ∈ W$ of size $< κ$ and a $(W, \mathbb{P})$-generic $G$ such that $V = W[G]$. The $κ$-mantle is the intersection of all $κ$-grounds.[3]

The $κ$-mantle is a definable, transitive, and extensional class. It is consistent that the $κ$-mantle is a model of ZFC (e.g. when there are no grounds), and if $κ$ is a strong limit, then the $κ$-mantle must be a model of ZF. However, it is not known whether the $κ$-mantle is always a model of ZFC.[3]

$\kappa$-model

A weak $κ$-model is a transitive set $M$ of size $\kappa$ with $\kappa \in M$, satisfying the theory $\mathrm{ZFC}^-$ ($\mathrm{ZFC}$ without the axiom of power set, with collection rather than replacement). It is a $κ$-model if additionally $M^{<\kappa} \subseteq M$.[4, 5]

References

1. Fuchs, Gunter, Hamkins, Joel David, and Reitz, Jonas. "Set-theoretic geology." Annals of Pure and Applied Logic 166(4):464–501, 2015.
2. Usuba, Toshimichi. "The downward directed grounds hypothesis and very large cardinals." Journal of Mathematical Logic 17(02):1750009, 2017.
3. Usuba, Toshimichi. "Extendible cardinals and the mantle." Archive for Mathematical Logic 58(1-2):71–75, 2019.
4. Hamkins, Joel David, and Johnstone, Thomas A. "Strongly uplifting cardinals and the boldface resurrection axioms." 2014.
5. Holy, Peter, and Schlicht, Philipp. "A hierarchy of Ramsey-like cardinals." Fundamenta Mathematicae 242:49–74, 2018.
The problem is to determine whether the given differential operator $L[y]$, whose domain consists of all functions that have continuous second derivatives on the interval $[0,\pi]$ and satisfy the given boundary conditions, is selfadjoint. $$L[y]=y''+\lambda y;\;\;\;\;y(0)+y'(\pi)=0,\;\;\;\;y'(0)+y(\pi)=0$$ For an operator to be selfadjoint over an interval, it must satisfy the equation $(u,L[v])=(L[u],v)$ for all $u,v$ in the domain, where the inner product over the interval $[a,b]$ is defined as: $$(f,g)=\int_a^b f(x)g(x)dx$$ Applying integration-by-parts twice to the integral for $(u,L[v])$ goes as follows: $$ (u,L[v])= \int_0^\pi u(v''+\lambda v)dx = \int_0^\pi uv''\;dx + \int_0^\pi \lambda uv\;dx = uv'|_0^\pi - \int_0^\pi u'v'\;dx + \int_0^\pi \lambda uv\;dx $$ $$ = uv'|_0^\pi - u'v|_0^\pi + \int_0^\pi u''v\;dx + \int_0^\pi \lambda uv\;dx = (uv'-u'v)|_0^\pi + (L[u],v) $$ Thus, the operator is selfadjoint iff $(uv'-u'v)|_0^\pi=0$. The boundary conditions can be used to show that $uv'|_0^\pi=-u'v|_0^\pi$, leading to the following condition for selfadjointness: $$uv'|_0^\pi=0$$ This seems promising, but I'm not sure where to go from here. I can't even think of functions that satisfy the boundary conditions, which makes finding a counter-example difficult. Edit: It looks like the following is a counter-example: $$u(x)=2\cos{2x}-\sin{2x},\;\;\;\;v(x)=\cos{x}+\sin{x}$$ $$ uv'|_0^\pi = (2\cos{2x}-\sin{2x})(\cos{x}-\sin{x})|_0^\pi = -4$$ Unfortunately, I don't find this particularly enlightening. Is there any way to show that the operator is not selfadjoint without a counter-example?
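For what it's worth, the counter-example can be verified mechanically; here is a short sympy check (the sympy usage is my addition) confirming that both functions satisfy the boundary conditions while $uv'|_0^\pi=-4$:

```python
# Verify: u and v satisfy both boundary conditions, yet u v'|_0^pi = -4,
# so (u, L[v]) != (L[u], v) and the operator is not selfadjoint.
import sympy as sp

x = sp.symbols('x')
u = 2*sp.cos(2*x) - sp.sin(2*x)
v = sp.cos(x) + sp.sin(x)

def bc(f):
    # returns (f(0) + f'(pi), f'(0) + f(pi)); both should be 0
    return (sp.simplify(f.subs(x, 0) + sp.diff(f, x).subs(x, sp.pi)),
            sp.simplify(sp.diff(f, x).subs(x, 0) + f.subs(x, sp.pi)))

print(bc(u), bc(v))                       # (0, 0) (0, 0)

boundary_term = (u*sp.diff(v, x)).subs(x, sp.pi) - (u*sp.diff(v, x)).subs(x, 0)
print(sp.simplify(boundary_term))         # -4
```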
You call someone's house and ask whether they have two children. The answer happens to be yes. Then you ask whether one of their children is named William. The answer happens to be yes again. (We assume William is a boy's name, and that it's possible that both children are Williams.) What's the probability that the second child is a boy?

The probability is $\dfrac{1-p}{2-p}$, where $p<0.5$ is the probability that a child's name is William.

Represent the sample space of one child's gender and name by $I$, a unit interval where numbers from $0$ to $0.5$ represent girls and numbers from $0.5$ to $1$ represent boys. Assume that like names are contiguous within the gender range, so there is a subinterval of width $p$ within $[0.5,1]$ that represents Williams. The two children correspond to some point $(x,y)$ in $I\times I$. Knowing nothing other than that the family has (exactly) two children puts no restriction on $(x,y)$. However, if you know one child is a William, you must be in the colored cross-shaped region shown in the picture. Within that region, the red area represents one-boy-and-one-girl families, and the blue area represents two-boy families. The probability that you are in the blue region, given that you are in the cross-shaped region, is the quotient of areas blue/cross, or $\dfrac{p-p^2}{2p-p^2}=\dfrac{1-p}{2-p}$.

For the sake of argument, suppose $1$ in $m$ boys (but no girls) are named William, that every family has two children, and that one of the children is soaked in skunk urine to tell it apart from the other.

$\frac 12*\frac 12$ of all families have two girls. None of them has a William.

$\frac 12*\frac 12=\frac 14$ of all families have a girl soaked in skunk urine and a clean boy. In $\frac 1m$ of these, the boy is named William; so $\frac 1{4m}$ of all families have a skunk-urine girl and a boy named William.

$\frac 12*\frac 12=\frac 14$ of all families have a clean girl and a boy soaked in skunk urine. In $\frac 1{m}$ of these, the boy is named William; so $\frac 1{4m}$ of all families have a skunk-urine boy named William and a girl.

$\frac 12*\frac 12 = \frac 14$ of all families have two boys. In $\frac 1{m^2}$ of these (that is, $\frac 1{4m^2}$ of all families), both are named William. In $\frac 1{m}\cdot\frac {m-1}m$ of these (that is, $\frac {m-1}{4m^2}$ of all families), the skunk-urine boy is named William and the other isn't. In $\frac {m-1}m\cdot\frac 1{m}$ of these (that is, $\frac {m-1}{4m^2}$ of all families), the clean boy is named William and the other isn't.

So $\frac 1{4m^2} + \frac {m-1}{4m^2} + \frac {m-1}{4m^2}= \frac {2m-1}{4m^2}$ of all families have a child named William with the other child a boy. And $\frac 1{4m}+\frac 1{4m}=\frac 1{2m} = \frac {2m}{4m^2}$ of all families have a child named William with the other child a girl. In total, $\frac 1{4m^2} + \frac {m-1}{4m^2} + \frac {m-1}{4m^2}+\frac 1{4m}+\frac 1{4m} = \frac {4m -1}{4m^2}$ of all families have a child named William.
So the probability that a two-child family with a William has two boys is $\frac {\frac {2m-1}{4m^2}}{\frac {4m-1}{4m^2}} = \frac {2m-1}{4m-1}\approx \frac 12$ (depending on how rare William is as a name), and the probability of having a boy and a girl is $\frac {2m}{4m-1}\approx \frac 12$.

....

There is a well-known paradox: if a family with two children has at least one boy, what is the probability that "the other" is a boy, in other words that they have two boys? The answer is $\frac 13$. Of the four equally likely outcomes BB, BG, GB, GG, the outcome GG is thrown away, and of the three that remain only one is BB.

But this question is worded differently: we are told specifically that a certain, identified one of the children is a boy, not merely that at least one of them is. (Let's assume they aren't both named William.) Of the four possibilities (William a boy with the other a boy; William a boy with the other a girl; William a girl with the other a boy; William a girl with the other a girl), two are thrown away, and only "William is a boy, the other is a boy" and "William is a boy, the other is a girl" are left. So the probability is now $\frac 12$.

These questions are actually a lot of fun. Thank you for posting this!

A related question is this: You call households that have exactly two children and ask whether one child is a boy. What is the probability that the second child is a boy?

Let A be the event that both children are girls, B the event that the first-born is a boy and the second-born a girl, C the event that the first-born is a girl and the second-born a boy, and D the event that both children are boys. The events A, B, C, D are mutually disjoint and each occurs with probability $\frac{1}{4}$.

Suppose that, instead of asking if one child is a boy, you ask if the first-born is a boy. They will say yes if either Event B or D occurred. Then the probability that the second-born is a boy, given that the first-born is a boy (i.e., that it was Event D that occurred given that one of D or B occurred), is 1/2.

But you are actually calling and asking if one child is a boy, and they will say yes if Event B, C, or D occurred. The only households that will say no are those for which Event A occurred. So the probability that the other child is a girl, given that one child is a boy, is the probability of either Event B or Event C happening given that one of Events B, C, D happened, which is 2/3.

However, the above question is not equivalent to the original question, because only a small fraction of boys are named William (whereas "boy" applies to every boy). Households with two boys will say yes to "do you have a William?" at almost twice the rate of households with one boy, instead of at the same rate as in the at-least-one-boy version. So as the fraction $p$ of Williams goes to 0, the probability that the 2nd child is a boy goes to 1/2. The calculations were already done in the other answers.
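The two phrasings contrasted above are easy to separate numerically; here is a quick Monte Carlo sketch (my own, not from the thread) showing 1/3 for "at least one boy" and 1/2 for "this particular child is a boy":

```python
# P(two boys | at least one boy) ~ 1/3 versus
# P(two boys | the first-born is a boy) ~ 1/2.
import random

random.seed(1)
N = 1_000_000
fams = [(random.random() < 0.5, random.random() < 0.5) for _ in range(N)]  # True = boy

at_least_one = [f for f in fams if f[0] or f[1]]
first_is_boy = [f for f in fams if f[0]]

print(sum(a and b for a, b in at_least_one) / len(at_least_one))  # ~ 0.333
print(sum(a and b for a, b in first_is_boy) / len(first_is_boy))  # ~ 0.500
```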
The crux of this problem is that the probability changes depending on whether we find out that one of the children is a William before or after we draw the family. See this paper for details. Assume that both sexes are equiprobable and that the probability of a child being a William is $p$. Denote Williams by $BW$ and male non-Williams by $B'$.

$$\begin{array}{|c|c|c|} \hline \text{Child 1} & \text{Child 2}&\text{P(Both Children)}\\ \hline BW&BW &p^2\\ \hline BW&B'&p\left(\frac12-p\right)\\ \hline BW&G&\frac12p\\ \hline B'&BW&p\left(\frac12-p\right)\\ \hline B'&B'&\left(\frac12-p\right)^2\\ \hline B'&G&\frac12\left(\frac12-p\right)\\ \hline G&BW&\frac12p\\ \hline G&B'&\frac12\left(\frac12-p\right)\\ \hline G&G&\frac14\\ \hline \end{array}\\ $$

In this space, we can see that the probability of child $2$ being a boy, after we have found out that child $1$ is a William, is $P(B|W)=\frac{P(B\land W)}{P(W)}=\frac{p^2+p\left(\frac12-p\right)}{p}=\frac12$ (note that the nine probabilities in the table sum to $1$, and that $P(W)=p^2+p\left(\frac12-p\right)+\frac12 p=p$). However, if we knew that the family had a William, but not specifically that child $1$ was a William, we would have the following space.

\begin{array}{|c|c|c|} \hline \text{Child 1} & \text{Child 2} &\text{P(Both Children)}\\ \hline BW&BW&p^2\\ \hline BW&B'&\left(\frac12-p\right)p\\ \hline BW&G&\frac12p\\ \hline B'&BW&\left(\frac12-p\right)p\\ \hline G&BW&\frac12p\\ \hline \end{array}

So then the probability of the other child being a boy becomes $\frac{p^2+\left(\frac12-p\right)p+\left(\frac12-p\right)p}{p^2+\left(\frac12-p\right)p+\frac12p+\left(\frac12-p\right)p+\frac12p}=\frac{1-p}{2-p}$, which agrees with @Steve_Kass's answer.

This is a variant of the well-known paradox whose solution depends on the exact phrasing of the problem. In essence, when the knowledge is acquired changes the sample space.
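A Monte Carlo sketch of the tables above (the code and parameter values are mine): each child is independently a William with probability p, Williams are boys, and among families containing a William the fraction with two boys should approach $(1-p)/(2-p)$:

```python
# Simulate two-child families; Williams occupy a sub-interval of the boys,
# so P(a given child is a William) = p, with p < 1/2.
import random

random.seed(2)
p = 0.01
N = 2_000_000

two_boys = williams = 0
for _ in range(N):
    kids = []
    for _ in range(2):
        u = random.random()
        kids.append((u < 0.5, u < p))     # (is a boy, is a William)
    if kids[0][1] or kids[1][1]:          # family contains a William
        williams += 1
        two_boys += kids[0][0] and kids[1][0]

print(two_boys / williams)                # simulated P(both boys | a William)
print((1 - p) / (2 - p))                  # the formula, ~ 0.4975 at p = 0.01
```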
How can the following expression be simplified:

Log[Sqrt[(m/(2*Pi*k*T)^3)*4*Pi*v^2]*Exp[-(m*v^2/(2*k*T))]]

I would like to get the result:

Log[Sqrt[(m/(2*Pi*k*T)^3)*4*Pi*v^2]] - (m*v^2/(2*k*T))

Assuming all variables are positive:

FullSimplify[
 Log[Sqrt[(m/(2*Pi*k*T)^3)*4*Pi*v^2]*Exp[-(m*v^2/(2*k*T))]],
 {T > 0, k > 0, m > 0, v > 0}]

(* 1/2 (-((m v^2)/(k T)) + Log[(m v^2)/(2 k^3 \[Pi]^2 T^3)]) *)

Is this better? Yours to decide.

EDIT: Okay, since you explained it in detail, you just want to use the rule $$\log\left(\exp \left(x\right)\cdot y\right)=x+\log\left(y\right)$$ So we implement this single rule with pattern matching:

Log[Sqrt[(m/(2*Pi*k*T)^3)*4*Pi*v^2]*Exp[-(m*v^2/(2*k*T))]] /.
 Log[Exp[a_]*b_] :> a + Log[b]

(* -((m v^2)/(2 k T)) + Log[Sqrt[(m v^2)/(k^3 T^3)]/(Sqrt[2] \[Pi])] *)

Maybe PowerExpand is what you're looking for. Not simpler, but expanded.
Suppose you are calling someone from another part of the world and the person asks you "how is the weather there?". You answer promptly: "it's really nice today: $75^\circ\textrm{F}$", only to hear the confused reply "wait, what's that in Celsius?". Oops, what should you do now? If you ever learned to convert temperature values between the Fahrenheit and Celsius scales, you probably learned to do it using the equation below: $$ \displaystyle\frac{{}^\circ\textrm{F} - 32}{9} = \frac{{}^\circ\textrm{C}}{5} $$ where ${}^\circ\textrm{F}$ is the temperature in the Fahrenheit scale and ${}^\circ\textrm{C}$ is the temperature in the Celsius scale. More concise ways to express the equation above are: $$ {}^\circ\textrm{F} = {}^\circ\textrm{C} \times 1.8 + 32 \label{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_exact} $$ and its inverse: $$ {}^\circ\textrm{C} = ({}^\circ\textrm{F} - 32)/1.8 \label{post_8a4390f653cdd7dca0e05db14bd8f760_F_to_C_exact} $$ However, both equations are unnecessarily complicated for the temperatures one experiences on a daily basis, because dividing or multiplying by 1.8 and subtracting or adding 32 are not trivially easy to do. Consider, for instance, the following equation: $$ \boxed{ {}^\circ\textrm{F} = {}^\circ\textrm{C}\times 2 + 30 } \label{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_approx} $$ and its inverse (just memorize whichever form you find easier): $$ \boxed{ {}^\circ\textrm{C} = ({}^\circ\textrm{F} - 30)/2 } \label{post_8a4390f653cdd7dca0e05db14bd8f760_F_to_C_approx} $$ Much less daunting, aren't they? Dividing or multiplying by $2$ is much easier than by $1.8$, and subtracting or adding $30$ is much easier than it is with $32$. During the vast majority of the year, and in most regions where the Fahrenheit scale is used, temperatures are in the range $[-10^\circ\textrm{C}, 35^\circ\textrm{C}]$ = $[14^\circ\textrm{F}, 95^\circ\textrm{F}]$. Interestingly, equation \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_approx} (or, equivalently, equation \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_F_to_C_approx}) works very well for converting between ${}^\circ\textrm{C}$ and ${}^\circ\textrm{F}$ over this temperature range (see figure 1). In fact, when converting from ${}^\circ\textrm{C}$ to ${}^\circ\textrm{F}$, the largest difference (in magnitude) between the values computed using equations \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_exact} and \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_approx} is only $5^\circ\textrm{F}$ at $35^\circ\textrm{C}$: the exact value is $95^\circ\textrm{F}$ but equation \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_approx} yields $100^\circ\textrm{F}$. A difference of only $4^\circ\textrm{F}$ occurs at the other end of the temperature range: equation \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_approx} yields $10^\circ\textrm{F}$ at $-10^\circ\textrm{C}$ but the exact value is $14^\circ\textrm{F}$. Before you start thinking "well, $5^\circ\textrm{F}$ is not negligible", consider that as you walk over your house, you might already experience a difference of a few ${}^\circ\textrm{F}$ (or ${}^\circ\textrm{C}$ if you prefer). Some rooms will be warmer than others and you might not even notice the difference. Additionally, temperatures you see reported on the Internet, TV, radio etc. are just the values measured at some location near you and often differ from what you would experience in your garden by a few ${}^\circ\textrm{F}$ (${}^\circ\textrm{C}$).
Finally, notice that the errors discussed above happen in the extremes of the given temperature range: "very hot" ($35^\circ\textrm{C}$ or $95^\circ\textrm{F}$) and "very cold" ($-10^\circ\textrm{C}$ or $14^\circ\textrm{F}$). For temperatures in between, the errors are smaller. For instance, $20^\circ\textrm{C}$ is exactly $68^\circ\textrm{F}$ but equation \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_approx} yields $70^\circ\textrm{F}$. Not so bad, right? Not surprisingly, equation \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_F_to_C_approx} converts from ${}^\circ\textrm{F}$ to ${}^\circ\textrm{C}$ within an error of $2-3{}^\circ\textrm{C}$ on the temperature range we chose, with the largest errors happening at the extremes (very hot and very cold). In other words, you can use equations \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_approx} and \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_F_to_C_approx} to convert between ${}^\circ\textrm{F}$ and ${}^\circ\textrm{C}$ and obtain good approximate answers with little effort. Fig. 1: Exact and approximate conversions from ${}^\circ\textrm{C}$ to ${}^\circ\textrm{F}$. These curves are described by equations \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_exact} and \eqref{post_8a4390f653cdd7dca0e05db14bd8f760_C_to_F_approx} respectively.
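For readers who want to check the numbers, a tiny Python sketch (mine, not part of the original post) computes the shortcut's worst-case error over the range discussed above:

```python
# Exact rule, mental shortcut, and the shortcut's worst-case error
# over -10 C to 35 C.
def c_to_f_exact(c):
    return c * 1.8 + 32

def c_to_f_approx(c):
    return c * 2 + 30

errors = [(c, c_to_f_approx(c) - c_to_f_exact(c)) for c in range(-10, 36)]
worst = max(errors, key=lambda e: abs(e[1]))
print(worst)          # (35, 5.0): the largest error is 5 F, at 35 C
```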
Geometry and Topology Seminar

Contents
1 Fall 2013
2 Fall Abstracts
3 Spring 2014
4 Spring Abstracts
5 Archive of past Geometry seminars

Fall 2013

September 13 (10:00 AM in 901!): Alex Zupan (Texas), "Totally geodesic subgraphs of the pants graph"; host: Kent
October 18: Jayadev Athreya (Illinois), "Gap Distributions and Homogeneous Dynamics"; host: Kent
October 25: Joel Robbin (Wisconsin), "GIT and [math]\mu[/math]-GIT"; local
November 1: Anton Lukyanenko (Illinois), "Uniformly quasi-regular mappings on sub-Riemannian manifolds"; host: Dymarz
November 8: Neil Hoffman (Melbourne), "Verified computations for hyperbolic 3-manifolds"; host: Kent
November 15: Khalid Bou-Rabee (Minnesota), "On generalizing a theorem of A. Borel"; host: Kent
November 22: Morris Hirsch (Wisconsin), "Common zeros for Lie algebras of vector fields on real and complex 2-manifolds"; local
(Thanksgiving Recess)
December 6: Sean Paul (Wisconsin), "(Semi)stable Pairs I"; local
December 13: Sean Paul (Wisconsin), "(Semi)stable Pairs II"; local

Fall Abstracts

Alex Zupan (Texas), Totally geodesic subgraphs of the pants graph

Abstract: For a compact surface S, the associated pants graph P(S) consists of vertices corresponding to pants decompositions of S and edges corresponding to elementary moves between pants decompositions. Motivated by the Weil-Petersson geometry of Teichmüller space, Aramayona, Parlier, and Shackleton conjecture that the full subgraph G of P(S) determined by fixing a multicurve is totally geodesic in P(S). We resolve this conjecture in the case that G is a product of Farey graphs. This is joint work with Sam Taylor.

Jayadev Athreya (Illinois), Gap Distributions and Homogeneous Dynamics

Abstract: We discuss the notion of gap distributions of various lists of numbers in [0, 1], in particular focusing on those which are associated to certain low-dimensional dynamical systems. We show how to explicitly compute some examples using techniques of homogeneous dynamics, generalizing earlier work on gaps between Farey Fractions. This work gives some possible notions of `randomness' of special trajectories of billiards in polygons, and is based partly on joint works with J. Chaika, with J. Chaika and S. Lelievre, and with Y. Cheung. This talk may also be of interest to number theorists.

Joel Robbin (Wisconsin), GIT and [math]\mu[/math]-GIT

Many problems in differential geometry can be reduced to solving a PDE of the form [math] \mu(x)=0 [/math] where [math]x[/math] ranges over some function space and [math]\mu[/math] is an infinite dimensional analog of the moment map in symplectic geometry. In Hamiltonian dynamics the moment map was introduced to use a group action to reduce the number of degrees of freedom in the ODE. It was soon discovered that the moment map could be applied to Geometric Invariant Theory: if a compact Lie group [math]G[/math] acts on a projective algebraic variety [math]X[/math], then the complexification [math]G^c[/math] also acts and there is an isomorphism of orbifolds [math] X^s/G^c=X//G:=\mu^{-1}(0)/G [/math] between the space of orbits of Mumford's stable points and the Marsden-Weinstein quotient. In September of 2013 Dietmar Salamon, his student Valentina Georgoulas, and I wrote an exposition of (finite dimensional) GIT from the point of view of symplectic geometry. The theory works for compact Kaehler manifolds, not just projective varieties. I will describe our paper in this talk; the following Monday Dietmar will give more details in the Geometric Analysis Seminar.
Anton Lukyanenko (Illinois), Uniformly quasi-regular mappings on sub-Riemannian manifolds

Abstract: A quasi-regular (QR) mapping between metric manifolds is a branched cover with bounded dilatation, e.g. f(z)=z^2. In a joint work with K. Fassler and K. Peltonen, we define QR mappings of sub-Riemannian manifolds and show that: 1) Every lens space admits a uniformly QR (UQR) mapping f. 2) Every UQR mapping leaves invariant a measurable conformal structure. The first result uses an explicit "conformal trap" construction, while the second builds on similar results by Sullivan-Tukia and a connection to higher-rank symmetric spaces.

Neil Hoffman (Melbourne), Verified computations for hyperbolic 3-manifolds

Abstract: Given a triangulated 3-manifold M, a natural question is: Does M admit a hyperbolic structure? While this question can be answered in the negative if M is known to be reducible or toroidal, it is often difficult to establish a certificate of hyperbolicity, and so computer methods have developed for this purpose. In this talk, I will describe a new method to establish such a certificate via verified computation and compare the method to existing techniques. This is joint work with Kazuhiro Ichihara, Masahide Kashiwagi, Hidetoshi Masai, Shin'ichi Oishi, and Akitoshi Takayasu.

Khalid Bou-Rabee (Minnesota), On generalizing a theorem of A. Borel

The proof of the Hausdorff-Banach-Tarski paradox relies on the existence of a nonabelian free group in the group of rotations of [math]\mathbb{R}^3[/math]. To help generalize this paradox, Borel proved the following result on free groups. Borel's Theorem (1983): Let [math]F[/math] be a free group of rank two. Let [math]G[/math] be an arbitrary connected semisimple linear algebraic group (e.g., [math]G = \mathrm{SL}_n[/math] where [math]n \geq 2[/math]). If [math]\gamma[/math] is any nontrivial element in [math]F[/math] and [math]V[/math] is any proper subvariety of [math]G(\mathbb{C})[/math], then there exists a homomorphism [math]\phi: F \to G(\mathbb{C})[/math] such that [math]\phi(\gamma) \notin V[/math]. What is the class, [math]\mathcal{L}[/math], of groups that may play the role of [math]F[/math] in Borel's Theorem? Since the free group of rank two is in [math]\mathcal{L}[/math], it follows that all residually free groups are in [math]\mathcal{L}[/math]. In this talk, we present some methods for determining whether a finitely generated group is in [math]\mathcal{L}[/math]. Using these methods, we give a concrete example of a finitely generated group in [math]\mathcal{L}[/math] that is *not* residually free. After working out a few other examples, we end with a discussion on how this new theory provides an answer to a question of Breuillard, Green, Guralnick, and Tao concerning double word maps. This talk covers joint work with Michael Larsen.

Morris Hirsch (Wisconsin), Common zeros for Lie algebras of vector fields on real and complex 2-manifolds

The celebrated Poincare-Hopf theorem states that a vector field [math]X[/math] on a manifold [math]M[/math] has nonempty zero set [math]Z(X)[/math], provided [math]M[/math] is compact with empty boundary and [math]M[/math] has nonzero Euler characteristic. Surprisingly little is known about the set of common zeros of two or more vector fields, especially when [math]M[/math] is not compact. One of the few results in this direction is a remarkable theorem of Christian Bonatti (Bol. Soc. Brasil. Mat. 22 (1992), 215–247), stated below.
When [math]Z(X)[/math] is compact, [math]i(X)[/math] denotes the intersection number of [math]X[/math] with the zero section of the tangent bundle.

Assume [math]\dim_{\mathbb{R}} M \leq 4[/math], [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Then every analytic vector field commuting with [math]X[/math] has a zero in [math]Z(X)[/math].

In this talk I will discuss the following analog of Bonatti's theorem. Let [math]\mathfrak{g}[/math] be a Lie algebra of analytic vector fields on a real or complex 2-manifold [math]M[/math], and set [math]Z(\mathfrak{g}) := \cap_{Y \in \mathfrak{g}} Z(Y)[/math].

Assume [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Let [math]\mathfrak{g}[/math] be generated by analytic vector fields [math]Y[/math] on [math]M[/math] such that the vectors [math][X,Y]_p[/math] and [math]X_p[/math] are linearly dependent at all [math]p \in M[/math]. Then [math]Z(\mathfrak{g}) \cap Z(X) \neq \emptyset[/math].

Related results on Lie group actions, and nonanalytic vector fields, will also be treated.

Sean Paul (Wisconsin), (Semi)stable Pairs I

Sean Paul (Wisconsin), (Semi)stable Pairs II

Spring 2014

January 31: Spencer Dowdall (UIUC), "Fibrations and polynomial invariants for free-by-cyclic groups"; host: Kent
February 21: Ioana Suvaina (Vanderbilt), "ALE Ricci flat Kahler surfaces from a Tian-Yau construction approach"; host: Maxim
February 28: Jae Choon Cha (POSTECH, Korea), "Universal bounds for the Cheeger-Gromov rho-invariants"; host: Maxim
March 7: Mustafa Kalafat (Michigan-State and Tunceli), "Conformally Kahler Surfaces and Orthogonal Holomorphic Bisectional Curvature"
March 14: Spring Break
April 4: Matthew Kahle (Ohio), MOVED TO COLLOQUIUM SLOT; host: Dymarz
April 11: Yongqiang Liu (UW-Madison and USTC-China), "Nearby cycles and Alexander modules of hypersurface complements"; host: Maxim
April 18: Pallavi Dani (LSU), "Large-scale geometry of right-angled Coxeter groups"; host: Dymarz
April 25: Jingzhou Sun (Stony Brook), TBA; host: Wang

Spring Abstracts

Spencer Dowdall (UIUC), Fibrations and polynomial invariants for free-by-cyclic groups

The beautiful theory developed by Thurston, Fried and McMullen provides a near complete picture of the various ways a hyperbolic 3-manifold M can fiber over the circle. Namely, there are distinguished convex cones in the first cohomology [math]H^1(M;\mathbb{R})[/math] whose integral points all correspond to fibrations of M, and the dynamical features of these fibrations are all encoded by McMullen's "Teichmuller polynomial." This talk will describe recent work developing aspects of this picture in the setting of a free-by-cyclic group G. Specifically, I will introduce a polynomial invariant that determines a convex polygonal cone C in the first cohomology of G whose integral points all correspond to algebraically and dynamically interesting splittings of G. The polynomial invariant additionally provides a wealth of dynamical information about these splittings. This is joint work with Ilya Kapovich and Christopher J. Leininger.

Ioana Suvaina (Vanderbilt), ALE Ricci flat Kahler surfaces from a Tian-Yau construction approach

The talk presents an explicit classification of the ALE Ricci flat Kahler surfaces (M,J,g), generalizing previous classification results of Kronheimer. The manifolds are related to Q-Gorenstein deformations of quotient singularities of type C^2/G, with G a finite subgroup of U(2).
Using this classification, we show how these metrics can also be obtained by a construction of Tian-Yau. In particular, we find good compactifications of the underlying complex manifold M.

Jae Choon Cha (POSTECH), Universal bounds for the Cheeger-Gromov rho-invariants

Cheeger and Gromov showed that there is a universal bound of their L2 rho-invariants of a fixed smooth closed (4k-1)-manifold, using a deep analytic method. We give a new topological proof of the existence of a universal bound. For 3-manifolds, we give explicit estimates in terms of triangulations, Heegaard splittings, and surgery descriptions. The proof employs interesting ideas including controlled chain homotopy and a geometric reinterpretation of the Atiyah-Hirzebruch bordism spectral sequence. Applications include new results on the complexity of 3-manifolds.

Mustafa Kalafat (Michigan-State and Tunceli), Conformally Kahler Surfaces and Orthogonal Holomorphic Bisectional Curvature

We show that a compact complex surface which admits a conformally Kahler metric g of positive orthogonal holomorphic bisectional curvature is biholomorphic to the complex projective plane. In addition, if g is a Hermitian metric which is Einstein, then the biholomorphism can be chosen to be an isometry via which g becomes a multiple of the Fubini-Study metric. This is joint work with C. Koca.

Matthew Kahle (Ohio), TBA

Yongqiang Liu (UW-Madison and USTC-China), Nearby cycles and Alexander modules of hypersurface complements

For a polynomial transversal at infinity, we show that the Alexander modules of the hypersurface complement can be realized by the nearby cycle complex, and we obtain a divisibility result for the associated Alexander polynomial. As an application, we use nearby cycles to recover the mixed Hodge structure on the torsion Alexander modules, as defined by Dimca and Libgober.

Pallavi Dani (LSU), Large-scale geometry of right-angled Coxeter groups

A finitely generated group can be endowed with a natural metric which is unique up to coarse isometries, or quasi-isometries. A fundamental question is to classify finitely generated groups up to quasi-isometry. I will report on the progress on this question in the case of right-angled Coxeter groups. In particular I will describe how topological features of the visual boundary can be used to classify a family of hyperbolic right-angled Coxeter groups. I will also discuss the connection with commensurability, an algebraic property which implies quasi-isometry, but is stronger in general. This is joint work with Anne Thomas.

Jingzhou Sun (Stony Brook), On the Demailly-Semple jet bundles of hypersurfaces in the 3-dimensional complex projective space

Let $X$ be a smooth hypersurface of degree $d$ in the 3-dimensional complex projective space. By totally algebraic calculations, we prove that on the third Demailly-Semple jet bundle $X_3$ of $X$, the Demailly-Semple line bundle is big for $d\geq 11$, and that on the fourth Demailly-Semple jet bundle $X_4$ of $X$, the Demailly-Semple line bundle is big for $d\geq 10$, improving a recent result of Diverio.
Maybe I misunderstood the question, but what you are describing sounds like a test-retest reliability study on your Q scores. You have a series of experts, each assessing a number of items or questions on two occasions (presumably fixed in time). So, basically, you can assess the temporal stability of the judgments by computing an intraclass correlation coefficient (ICC), which will give you an idea of the variance attributable to subjects in the variability of observed scores (or, in other words, of the closeness of the observations on the same subject relative to the closeness of observations on different subjects). The ICC may easily be obtained from a mixed-effects model describing the measurement $y_{ij}$ of subject $i$ on occasion $j$ as $$y_{ij}=\mu+u_i+\varepsilon_{ij},\quad \varepsilon_{ij}\sim\mathcal{N}(0,\sigma^2)$$ where $u_i$ is the difference between the overall mean and subject $i$'s mean measurement, and $\varepsilon_{ij}$ is the measurement error for subject $i$ on occasion $j$. Here, this is a random-effects model. Unlike a standard ANOVA with subjects as a factor, we consider the $u_i$ as random (i.i.d.) effects, $u_i\sim\mathcal{N}(0,\tau^2)$, independent of the error terms. Each measurement differs from the overall mean $\mu$ by the sum of the two error terms, of which $u_i$ is shared between occasions on the same subject. The total variance is then $\tau^2+\sigma^2$, and the proportion of the total variance that is accounted for by the subjects is $$\rho=\frac{\tau^2}{\tau^2+\sigma^2}$$ which is the ICC, or the reliability index from a psychometric point of view. Note that this reliability is sample-dependent (as it depends on the between-subject variance). Instead of the mixed-effects model, we could derive the same results from a two-way ANOVA (subjects + time as factors) and the corresponding Mean Squares. You will find additional references in those related questions: Repeatability and measurement error from and between observers, and Inter-rater reliability for ordinal or interval data. In R, you can use the icc() function from the psy package; the random intercept model described above corresponds to the "agreement" ICC, while incorporating the time effect as a fixed factor would yield the "consistency" ICC. You can also use the lmer() function from the lme4 package, or the lme() function from the nlme package. The latter has the advantage that you can easily obtain 95% CIs for the variance components (using the intervals() function). Dave Garson provided a nice overview (with SPSS illustrations) in Reliability Analysis, and Estimating Multilevel Models using SPSS, Stata, SAS, and R constitutes a useful tutorial, with applications in educational assessment. But the definitive reference is Shrout and Fleiss (1979), Intraclass Correlations: Uses in Assessing Rater Reliability, Psychological Bulletin, 86(2), 420-428. I have also added an example R script on GitHub, which includes the ANOVA and mixed-effects approaches. Also, should you add a constant value to all of the values taken at the second occasion, the Pearson correlation would remain identical (because it is based on deviations of the 1st and 2nd measurements from their respective means), whereas the reliability as computed through the random intercept model (or the agreement ICC) would decrease.
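For completeness, here is a minimal NumPy sketch (my own helper, not the psy package's icc()) of the one-way random-effects ICC corresponding to the random intercept model above, checked on simulated test-retest data with a known population ICC:

```python
# One-way random-effects ICC from the ANOVA mean squares:
# rows = subjects, columns = occasions, balanced design assumed.
import numpy as np

def icc_oneway(y):
    y = np.asarray(y, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ms_between = k * np.sum((y.mean(axis=1) - grand)**2) / (n - 1)
    ms_within = np.sum((y - y.mean(axis=1, keepdims=True))**2) / (n*(k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1)*ms_within)

# Simulated data: subject variance tau^2 = 4, error variance sigma^2 = 1,
# so the population ICC is 4 / (4 + 1) = 0.8.
rng = np.random.default_rng(42)
subj = rng.normal(0, 2, size=(200, 1))           # u_i, with tau = 2
scores = 50 + subj + rng.normal(0, 1, (200, 2))  # two occasions per subject
print(icc_oneway(scores))                        # close to 0.8
```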
BTW, Cronbach's alpha is not very helpful in this case because it is merely a measure of the internal consistency (yet another form of "reliability") of a unidimensional scale; it would have no meaning should it be computed on items underlying different constructs. Even if your questions survey a single domain, it's hard to imagine mixing the two series of measurements, and Cronbach's alpha should be computed on each set separately. Its associated 95% confidence interval (computed by bootstrap) should give an indication of the stability of the internal structure between the two test occasions. As an example of applied work with ICC, I would suggest Johnson, SR, Tomlinson, GA, Hawker, GA, Granton, JT, Grosbein, HA, and Feldman, BM (2010). A valid and reliable belief elicitation method for Bayesian priors. Journal of Clinical Epidemiology, 63(4), 370-383.
$\exists j:L(V_{\lambda+1})\to L(V_{\lambda+1})$

See first: rank into rank axioms

The axiom I0, the large cardinal axiom of the title, asserts that some nontrivial elementary embedding $j:V_{\lambda+1}\to V_{\lambda+1}$ extends to a nontrivial elementary embedding $j:L(V_{\lambda+1})\to L(V_{\lambda+1})$, where $L(V_{\lambda+1})$ is the transitive proper class obtained by starting with $V_{\lambda+1}$ and forming the constructible hierarchy over $V_{\lambda+1}$ in the usual fashion (see constructible universe). An alternate, but equivalent, formulation asserts the existence of some nontrivial elementary embedding $j:L(V_{\lambda+1})\to L(V_{\lambda+1})$ with $\mathrm{crit}(j) < \lambda$. The critical point assumption is essential for the large cardinal strength, as otherwise the axiom would follow from the existence of some measurable cardinal above $\lambda$. The axiom is of rank into rank type, despite its formulation as an embedding between proper classes, and embeddings witnessing the axiom are known as $\text{I0}$ embeddings. Originally formulated by Woodin in order to establish the relative consistency of a strong determinacy hypothesis, it is now known to be obsolete for this purpose (it is far stronger than necessary). Nevertheless, research on the axiom and its variants is still widely pursued, and there are numerous intriguing open questions regarding the axiom and its variants; see [1]. The axiom subsumes a hierarchy of the strongest large cardinals not known to be inconsistent with $\text{ZFC}$ and so is seen as "straining the limits of consistency" [1]. An immediate observation, due to the Kunen inconsistency, is that under the $\text{I0}$ axiom, $L(V_{\lambda+1})$ cannot satisfy the axiom of choice.

Contents
The $L(V_{\lambda+1})$ Hierarchy
Relation to the I1 Axiom
Ultrapower Reformulation

Ultrapower Reformulation

Despite the class-language formulation of $I_0$, there is a first-order formulation in terms of normal ultrafilters: define, for $j:L(V_{\lambda + 1})\prec L(V_{\lambda+1})$, an ultrafilter $U_j$ on the collection of sets $X\in L(V_{\lambda+1})$ with $X\subseteq\mathcal{E}(V_{\lambda+1})=\{k : k:L(V_{\lambda+1})\prec L(V_{\lambda+1})\}$ by $$X\in U_j \Leftrightarrow j\restriction V_\lambda \in jX.$$ Note that $U_j$ is a normal non-principal $L(V_{\lambda+1})$-ultrafilter, hence the ultrapower $\mathrm{Ult}(L(V_{\lambda+1}), U_j)=\big(L(V_{\lambda+1})^{\mathcal{E}(V_{\lambda+1})}\cap L(V_{\lambda+1})\big)/U_j$ is well-defined and well-founded. It is important to note that the members of $U_j$ contain only elementary embeddings from $L(V_{\lambda+1})$ to itself which are constructible from $V_{\lambda+1}$ and parameters from this set. $I0$ is therefore equivalent to the existence of a normal non-principal $L(V_{\lambda+1})$-ultrafilter of this kind; consequently, the assertion "$\kappa$ is $I0$" is $\Sigma_2$, and every critical point of some $k: V_{\lambda+2}\prec V_{\lambda+2}$ is $I0$. This argument, however, requires $DC_{\lambda}$ to form the ultrapower.

An equivalent second-order formulation is: there is some $j:V_\lambda\prec V_\lambda$ and a proper class of ordinals $C$ such that, for all $\alpha_0<\alpha_1<\dots< \alpha_n$ in $C$ and all $A\in V_{\lambda+1}$, $$L_{\alpha_n}(V_{\lambda+1}, \in, \alpha_0, \alpha_1, \dots, \alpha_{n-1})\models \Phi(A)\leftrightarrow \Phi(jA).$$

Similarities with $L(\mathbb{R})$ under Determinacy

The axiom $I0$ was originally formulated by Woodin to establish the consistency of the Axiom of Determinacy. What Woodin established was that $AD^{L(\mathbb R)}$ follows from the existence of an $I0$ cardinal [1].
It is now known that this is massive overkill; namely, $AD$, $AD^{L(\mathbb R)}$, and the existence of infinitely many Woodin cardinals are equiconsistent, and furthermore, $AD^{L(\mathbb R)}$ follows from infinitely many Woodin cardinals with a measurable above them all [1]. This seems like it should be the end of it; $I0$ was simply an axiom too strong for the purpose for which it was created. But there are deeper connections between $AD^{L(\mathbb R)}$ and $I0$. First off, under $V=L(\mathbb R)$, if $AD$ holds then so does $DC\leftrightarrow DC_\omega$. Similarly, under $I0$, $DC_\lambda$ holds in $L(V_{\lambda+1})$. Furthermore, if $AD$ holds then $\omega_1$ is measurable in $L(\mathbb R)$. Similarly, if $X\subseteq V_{\lambda+1}$ and there is some $j: L(X,V_{\lambda+1})\prec L(X,V_{\lambda+1})$, then $\lambda^+$ is measurable. The connections between $I0$ and determinacy are still not fully understood. [2]

Strengthenings of $\text{I0}$ and Woodin's $E_\alpha(V_{\lambda+1})$ Sequence

We call a set $X \subseteq V_{\lambda+1}$ an Icarus set if there is an elementary embedding $j : L(X, V_{\lambda+1}) \prec L(X, V_{\lambda+1})$ with $\mathrm{crit}(j) < \lambda$. In particular, "$(V_{\lambda+1})^{(n+1)\sharp}$ is Icarus" strongly implies "$(V_{\lambda+1})^{n\sharp}$ is Icarus", but above the first $\omega$ sharps it becomes more difficult. [2, 3]

References

1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)
2. Dimonte, Vincenzo. I0 and rank-into-rank axioms. 2017. arXiv.
3. Woodin, W. Hugh. Suitable extender models II: beyond $\omega$-huge. Journal of Mathematical Logic 11(2):115-436, 2011.
Geometry and Topology Seminar

Fall 2013

September 13 (10:00 AM in 901): Alex Zupan (Texas), Totally geodesic subgraphs of the pants graph. Host: Kent.
October 18: Jayadev Athreya (Illinois), Gap Distributions and Homogeneous Dynamics. Host: Kent.
October 25: Joel Robbin (Wisconsin), GIT and [math]\mu[/math]-GIT. Local.
November 1: Anton Lukyanenko (Illinois), Uniformly quasi-regular mappings on sub-Riemannian manifolds. Host: Dymarz.
November 8: Neil Hoffman (Melbourne), Verified computations for hyperbolic 3-manifolds. Host: Kent.
November 15: Khalid Bou-Rabee (Minnesota), On generalizing a theorem of A. Borel. Host: Kent.
November 22: Morris Hirsch (Wisconsin), Common zeros for Lie algebras of vector fields on real and complex 2-manifolds. Local.
Thanksgiving Recess.
December 6: Sean Paul (Wisconsin), (Semi)stable Pairs I. Local.
December 13: Sean Paul (Wisconsin), (Semi)stable Pairs II. Local.

Fall Abstracts

Alex Zupan (Texas), Totally geodesic subgraphs of the pants graph

Abstract: For a compact surface S, the associated pants graph P(S) consists of vertices corresponding to pants decompositions of S and edges corresponding to elementary moves between pants decompositions. Motivated by the Weil-Petersson geometry of Teichmüller space, Aramayona, Parlier, and Shackleton conjecture that the full subgraph G of P(S) determined by fixing a multicurve is totally geodesic in P(S). We resolve this conjecture in the case that G is a product of Farey graphs. This is joint work with Sam Taylor.

Jayadev Athreya (Illinois), Gap Distributions and Homogeneous Dynamics

Abstract: We discuss the notion of gap distributions of various lists of numbers in [0, 1], in particular focusing on those which are associated to certain low-dimensional dynamical systems. We show how to explicitly compute some examples using techniques of homogeneous dynamics, generalizing earlier work on gaps between Farey fractions. This work gives some possible notions of `randomness' of special trajectories of billiards in polygons, and is based partly on joint works with J. Chaika, with J. Chaika and S. Lelievre, and with Y. Cheung. This talk may also be of interest to number theorists.

Joel Robbin (Wisconsin), GIT and [math]\mu[/math]-GIT

Many problems in differential geometry can be reduced to solving a PDE of the form [math]\mu(x)=0[/math], where [math]x[/math] ranges over some function space and [math]\mu[/math] is an infinite dimensional analog of the moment map in symplectic geometry. In Hamiltonian dynamics the moment map was introduced to use a group action to reduce the number of degrees of freedom in the ODE. It was soon discovered that the moment map could be applied to Geometric Invariant Theory: if a compact Lie group [math]G[/math] acts on a projective algebraic variety [math]X[/math], then the complexification [math]G^c[/math] also acts and there is an isomorphism of orbifolds [math]X^s/G^c=X//G:=\mu^{-1}(0)/G[/math] between the space of orbits of Mumford's stable points and the Marsden-Weinstein quotient. In September of 2013 Dietmar Salamon, his student Valentina Georgoulas, and I wrote an exposition of (finite dimensional) GIT from the point of view of symplectic geometry. The theory works for compact Kaehler manifolds, not just projective varieties. I will describe our paper in this talk; the following Monday Dietmar will give more details in the Geometric Analysis Seminar.
Anton Lukyanenko (Illinois), Uniformly quasi-regular mappings on sub-Riemannian manifolds

Abstract: A quasi-regular (QR) mapping between metric manifolds is a branched cover with bounded dilatation, e.g. f(z)=z^2. In joint work with K. Fassler and K. Peltonen, we define QR mappings of sub-Riemannian manifolds and show that: 1) Every lens space admits a uniformly QR (UQR) mapping f. 2) Every UQR mapping leaves invariant a measurable conformal structure. The first result uses an explicit "conformal trap" construction, while the second builds on similar results by Sullivan-Tukia and a connection to higher-rank symmetric spaces.

Neil Hoffman (Melbourne), Verified computations for hyperbolic 3-manifolds

Abstract: Given a triangulated 3-manifold M, a natural question is: does M admit a hyperbolic structure? While this question can be answered in the negative if M is known to be reducible or toroidal, it is often difficult to establish a certificate of hyperbolicity, and so computer methods have been developed for this purpose. In this talk, I will describe a new method to establish such a certificate via verified computation and compare the method to existing techniques. This is joint work with Kazuhiro Ichihara, Masahide Kashiwagi, Hidetoshi Masai, Shin'ichi Oishi, and Akitoshi Takayasu.

Khalid Bou-Rabee (Minnesota), On generalizing a theorem of A. Borel

The proof of the Hausdorff-Banach-Tarski paradox relies on the existence of a nonabelian free group in the group of rotations of [math]\mathbb{R}^3[/math]. To help generalize this paradox, Borel proved the following result on free groups.

Borel's Theorem (1983): Let [math]F[/math] be a free group of rank two. Let [math]G[/math] be an arbitrary connected semisimple linear algebraic group (e.g., [math]G = \mathrm{SL}_n[/math] where [math]n \geq 2[/math]). If [math]\gamma[/math] is any nontrivial element in [math]F[/math] and [math]V[/math] is any proper subvariety of [math]G(\mathbb{C})[/math], then there exists a homomorphism [math]\phi: F \to G(\mathbb{C})[/math] such that [math]\phi(\gamma) \notin V[/math].

What is the class, [math]\mathcal{L}[/math], of groups that may play the role of [math]F[/math] in Borel's Theorem? Since the free group of rank two is in [math]\mathcal{L}[/math], it follows that all residually free groups are in [math]\mathcal{L}[/math]. In this talk, we present some methods for determining whether a finitely generated group is in [math]\mathcal{L}[/math]. Using these methods, we give a concrete example of a finitely generated group in [math]\mathcal{L}[/math] that is *not* residually free. After working out a few other examples, we end with a discussion of how this new theory provides an answer to a question of Breuillard, Green, Guralnick, and Tao concerning double word maps. This talk covers joint work with Michael Larsen.

Morris Hirsch (Wisconsin), Common zeros for Lie algebras of vector fields on real and complex 2-manifolds

The celebrated Poincare-Hopf theorem states that a vector field [math]X[/math] on a manifold [math]M[/math] has nonempty zero set [math]Z(X)[/math], provided [math]M[/math] is compact with empty boundary and [math]M[/math] has nonzero Euler characteristic. Surprisingly little is known about the set of common zeros of two or more vector fields, especially when [math]M[/math] is not compact. One of the few results in this direction is a remarkable theorem of Christian Bonatti (Bol. Soc. Brasil. Mat. 22 (1992), 215–247), stated below.
When [math]Z(X)[/math] is compact, [math]i(X)[/math] denotes the intersection number of [math]X[/math] with the zero section of the tangent bundle.

Bonatti's Theorem: Assume [math]\dim_{\mathbb{R}} M \leq 4[/math], [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Then every analytic vector field commuting with [math]X[/math] has a zero in [math]Z(X)[/math].

In this talk I will discuss the following analog of Bonatti's theorem. Let [math]\mathfrak{g}[/math] be a Lie algebra of analytic vector fields on a real or complex 2-manifold [math]M[/math], and set [math]Z(\mathfrak{g}) := \cap_{Y \in \mathfrak{g}} Z(Y)[/math].

• Assume [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Let [math]\mathfrak{g}[/math] be generated by analytic vector fields [math]Y[/math] on [math]M[/math] such that the vectors [math][X,Y]_p[/math] and [math]X_p[/math] are linearly dependent at all [math]p \in M[/math]. Then [math]Z(\mathfrak{g}) \cap Z(X) \neq \emptyset[/math].

Related results on Lie group actions, and nonanalytic vector fields, will also be treated.

Sean Paul (Wisconsin), (Semi)stable Pairs I
Sean Paul (Wisconsin), (Semi)stable Pairs II

Spring 2014

January 31: Spencer Dowdall (UIUC), Fibrations and polynomial invariants for free-by-cyclic groups. Host: Kent.
February 21: Ioana Suvaina (Vanderbilt), ALE Ricci flat Kahler surfaces from a Tian-Yau construction approach. Host: Maxim.
February 28: Jae Choon Cha (POSTECH, Korea), Universal bounds for the Cheeger-Gromov rho-invariants. Host: Maxim.
March 7: Mustafa Kalafat (Michigan-State and Tunceli), Conformally Kahler Surfaces and Orthogonal Holomorphic Bisectional Curvature.
March 14: Spring Break.
April 4: Matthew Kahle (Ohio), MOVED TO COLLOQUIUM SLOT. Host: Dymarz.
April 11: Yongqiang Liu (UW-Madison and USTC-China), Nearby cycles and Alexander modules of hypersurface complements. Host: Maxim.
April 18: Pallavi Dani (LSU), Large-scale geometry of right-angled Coxeter groups. Host: Dymarz.
April 25: Jingzhou Sun (Stony Brook), TBA. Host: Wang.

Spring Abstracts

Spencer Dowdall (UIUC), Fibrations and polynomial invariants for free-by-cyclic groups

The beautiful theory developed by Thurston, Fried and McMullen provides a near complete picture of the various ways a hyperbolic 3-manifold M can fiber over the circle. Namely, there are distinguished convex cones in the first cohomology H^1(M;R) whose integral points all correspond to fibrations of M, and the dynamical features of these fibrations are all encoded by McMullen's "Teichmuller polynomial." This talk will describe recent work developing aspects of this picture in the setting of a free-by-cyclic group G. Specifically, I will introduce a polynomial invariant that determines a convex polygonal cone C in the first cohomology of G whose integral points all correspond to algebraically and dynamically interesting splittings of G. The polynomial invariant additionally provides a wealth of dynamical information about these splittings. This is joint work with Ilya Kapovich and Christopher J. Leininger.

Ioana Suvaina (Vanderbilt), ALE Ricci flat Kahler surfaces from a Tian-Yau construction approach

The talk presents an explicit classification of the ALE Ricci flat Kahler surfaces (M,J,g), generalizing previous classification results of Kronheimer. The manifolds are related to Q-Gorenstein deformations of quotient singularities of type C^2/G, with G a finite subgroup of U(2).
Using this classification, we show how these metrics can also be obtained by a construction of Tian-Yau. In particular, we find good compactifications of the underlying complex manifold M.

Jae Choon Cha (POSTECH), Universal bounds for the Cheeger-Gromov rho-invariants

Cheeger and Gromov showed that there is a universal bound for their L2 rho-invariants of a fixed smooth closed (4k-1)-manifold, using a deep analytic method. We give a new topological proof of the existence of a universal bound. For 3-manifolds, we give explicit estimates in terms of triangulations, Heegaard splittings, and surgery descriptions. The proof employs interesting ideas including controlled chain homotopy and a geometric reinterpretation of the Atiyah-Hirzebruch bordism spectral sequence. Applications include new results on the complexity of 3-manifolds.

Mustafa Kalafat (Michigan-State and Tunceli), Conformally Kahler Surfaces and Orthogonal Holomorphic Bisectional Curvature

We show that a compact complex surface which admits a conformally Kahler metric g of positive orthogonal holomorphic bisectional curvature is biholomorphic to the complex projective plane. In addition, if g is a Hermitian metric which is Einstein, then the biholomorphism can be chosen to be an isometry via which g becomes a multiple of the Fubini-Study metric. This is joint work with C. Koca.

Matthew Kahle (Ohio), TBA

Yongqiang Liu, Nearby cycles and Alexander modules of hypersurface complements

For a polynomial transversal at infinity, we show that the Alexander modules of the hypersurface complement can be realized by the nearby cycle complex, and we obtain a divisibility result for the associated Alexander polynomial. As an application, we use nearby cycles to recover the mixed Hodge structure on the torsion Alexander modules, as defined by Dimca and Libgober.

Pallavi Dani (LSU), Large-scale geometry of right-angled Coxeter groups

A finitely generated group can be endowed with a natural metric which is unique up to coarse isometries, or quasi-isometries. A fundamental question is to classify finitely generated groups up to quasi-isometry. I will report on the progress on this question in the case of right-angled Coxeter groups. In particular I will describe how topological features of the visual boundary can be used to classify a family of hyperbolic right-angled Coxeter groups. I will also discuss the connection with commensurability, an algebraic property which implies quasi-isometry, but is stronger in general. This is joint work with Anne Thomas.

Jingzhou Sun (Stony Brook), On the Demailly-Semple jet bundles of hypersurfaces in the 3-dimensional complex projective space

Let X be a smooth hypersurface of degree d in the 3-dimensional complex projective space. By entirely algebraic calculations, we prove that on the third Demailly-Semple jet bundle X_3 of X, the Demailly-Semple line bundle is big for d no less than 11, and that on the fourth Demailly-Semple jet bundle X_4 of X, the Demailly-Semple line bundle is big for d no less than 10, improving a recent result of Diverio.
The Limit Comparison Test

Example 1 (from previous page): We were trying to determine whether $\displaystyle\sum_{n=1}^\infty\frac{1}{5n+10}$ converges or diverges, and the basic comparison test was not helpful. DO: Try the limit comparison test on this series, comparing it to the harmonic series, before reading further.

Solution 1: It does not matter which series you choose to have terms $a_n$ and $b_n$. $\displaystyle\frac{a_n}{b_n}=\frac{\frac{1}{5n+10}}{\frac1n}=\frac{1}{5n+10}\cdot\frac{n}{1}=\frac{n}{5n+10}$. Then $\displaystyle\lim_{n\to\infty}\frac{a_n}{b_n}=\lim_{n\to\infty}\frac{n}{5n+10}=\frac15$ and $0<\frac15<\infty$, so the two series behave the same way. Since the harmonic series diverges, so does our series. DO: What if we had chosen $a_n$ and $b_n$ the other way around? Can you see why it doesn't matter?

-------------------------------------------------------------------

Example 2: To determine whether the series $\displaystyle\sum_{n=1}^\infty \frac{4^n}{2^n+3^n}$ converges or diverges, we'll look for a series that "behaves like" it when $n$ is large.

Solution 2: Since we think $$\frac{4^n}{2^n+3^n}\approx \frac{4^n}{3^n}$$ when $n$ is large, we'll use $\displaystyle\sum_{n=1}^\infty\left(\frac43\right)^n$ for comparison. (DO: Why can we not use the basic comparison test with this series?) Since $$\lim_{n\to\infty}\frac{\frac{4^n}{2^n+3^n}}{\frac{4^n}{3^n}}=\lim_{n\to\infty}\frac{3^n}{2^n+3^n}=\lim_{n\to\infty}\frac{3^n}{2^n+3^n}\cdot\frac{\frac1{3^n}}{\frac1{3^n}}=\lim_{n\to\infty}\frac{1}{\left(\frac{2}{3}\right)^n+1}=1,$$ and $0<1<\infty$, our series are comparable. Since the geometric series $\displaystyle\sum_{n=1}^\infty\left(\frac43\right)^n$ diverges ($r=\frac43>1$), we can conclude that our original series diverges as well.

The following test, which was discussed in the video, is not explicitly used by most instructors, and is not in most calculus texts -- discuss it with your instructor before using it.
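As a quick sanity check (not part of the original page), both limits can be confirmed with SymPy:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# Example 1: a_n = 1/(5n+10) against the harmonic series b_n = 1/n
print(sp.limit((1/(5*n + 10)) / (1/n), n, sp.oo))   # 1/5

# Example 2: a_n = 4^n/(2^n+3^n) against b_n = (4/3)^n
a = 4**n / (2**n + 3**n)
b = sp.Rational(4, 3)**n
print(sp.limit(a / b, n, sp.oo))                    # 1
```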
Kakeya problem

Define a Kakeya set to be a subset [math]A\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]a\in{\mathbb F}_3^n[/math] such that [math]a,a+d,a+2d[/math] all lie in [math]A[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.

General lower bounds

Trivially, [math]k_n\le k_{n+1}\le 3k_n[/math]. Since the Cartesian product of two Kakeya sets is another Kakeya set, we have [math]k_{n+m} \leq k_m k_n[/math]; this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity. From a paper of Dvir, Kopparty, Saraf, and Sudan it follows that [math]k_n \geq 3^n / 2^n[/math], but this is superseded by the estimates given below. To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence [math]k_n\gtrsim 3^{(n+1)/2}.[/math] One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math]. A better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus, [math]k_n \ge 3^{6(n-1)/11}.[/math]

General upper bounds

We have [math]k_n\le 2^{n+1}-1[/math] since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. This estimate can be improved using an idea due to Ruzsa.
Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]n/3+O(\sqrt n)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2n/3+O(\sqrt n)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{n/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: indeed, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]). Putting all this together, we seem to have [math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math] or [math](1.8207\ldots+o(1))^n \le k_n \le (1.88988+o(1))^n.[/math]
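The small values quoted above are easy to confirm by brute force. Here is a short exhaustive search (my own illustration, feasible only for [math]n\le 2[/math]) that checks every subset of [math]{\mathbb F}_3^n[/math] in order of size:

```python
import itertools

def is_kakeya(S, n):
    """Check that S ⊆ F_3^n contains a line in every direction."""
    S = set(S)
    points = list(itertools.product(range(3), repeat=n))
    for d in points:
        if d == (0,) * n:
            continue
        # Some translate a must give a full line {a, a+d, a+2d} inside S.
        if not any(all(tuple((a[i] + t*d[i]) % 3 for i in range(n)) in S
                       for t in range(3))
                   for a in points):
            return False
    return True

def min_kakeya_size(n):
    points = list(itertools.product(range(3), repeat=n))
    for k in range(1, len(points) + 1):
        for S in itertools.combinations(points, k):
            if is_kakeya(S, n):
                return k

print(min_kakeya_size(1))  # 3, matching k_1 = 3
print(min_kakeya_size(2))  # 7, matching k_2 = 7
```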
I was wondering if there is a way to write the derivative as an exponential? This might sound crazy at first, but I recently came across this formula for the Taylor expansion in three dimensions: $$\Phi(\vec{x}+\vec{a})=\sum_{n=0}^{\infty}\frac{(\vec{a}\cdot\vec{\nabla}_{x})^{n}}{n!}\Phi(\vec{x})=e^{\vec{a}\cdot\vec{\nabla}}\Phi(\vec{x})$$ where the arrows denote vectors. Found in: http://www2.ph.ed.ac.uk/~rhorsley/SI09-10_t+f/lec09.pdf I'd never seen a Taylor expansion written as an exponential. From a physics perspective, I think of it as propagating $\Phi$ from $x$ to $x+a$. Anyway, I was wondering if you can write the derivative (or directional derivative) in a manner such as (for a one dimensional case): $$\frac{\partial\Phi}{\partial x}=\lim_{\Delta x\rightarrow0}\frac{\Phi(x+\Delta x)-\Phi(x)}{\Delta x}=\lim_{\Delta x\rightarrow0}\frac{e^{\Delta x\frac{\partial}{\partial x}}-1}{\Delta x}\Phi(x)=\frac{e^{\partial x\frac{\partial}{\partial x}}-1}{\partial x}\Phi(x)???$$ If correct, that implies: $$\Longrightarrow\partial\Phi=\left(e^{\partial x\frac{\partial}{\partial x}}-1\right)\Phi(x)$$ And that made me wonder: $$\Phi\left(\frac{\partial}{\partial x}\right)=\int\partial\Phi=\int\left(e^{\partial x\frac{\partial}{\partial x}}-1\right)\Phi(x)$$ Of course this isn't correct as there's no differential under the latter integral, but it got me thinking. Since there are a ton (ok, infinite) number of ways to write the derivative as a limit, I was playing with some and writing them as integrals as above, and all I can think is how much they remind me of Laplace and Fourier transforms (especially the latter if we move to general Minkowski space). Like in a Fourier transform we have $\Phi(x)\longrightarrow\tilde{\Phi}(k)$ where k is a wavenumber, whereas in this treatment we have (when actually done correctly) $\Phi(x)\rightarrow\tilde{\Phi}(\frac{\partial}{\partial x})$, but $\frac{\partial}{\partial x}$ can be related to a wavenumber. Anyway, can anyone point me in the right direction (a book perhaps) to learn more about this type of thing??? It would be immensely useful in solving a problem I'm working on. Thank you
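One way to see the shift-operator identity concretely (my own illustration, not from the original post): applied to a polynomial, the series $e^{a\,d/dx}=\sum_n \frac{a^n}{n!}\frac{d^n}{dx^n}$ terminates, so $e^{a\,d/dx}\Phi(x)=\Phi(x+a)$ can be checked exactly in a few lines:

```python
from math import factorial
import numpy as np

# A degree-5 polynomial: the operator series exp(a d/dx) terminates after
# the 5th derivative, so the shift identity holds exactly (up to rounding).
p = np.polynomial.Polynomial([2.0, -1.0, 0.5, 3.0, -0.25, 1.0])
a, x = 0.7, 1.3

series = p(x) + sum(p.deriv(n)(x) * a**n / factorial(n) for n in range(1, 6))
print(series)    # sum of the operator series applied to p at x
print(p(x + a))  # direct evaluation of the shifted argument; they agree
```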
I do not think so. Observation: Without loss of generality, $p(x)$ can be taken to be monic (constant multiples won't affect either $p(A) = 0$ or boundedness). Case 1: $p$ is degree $2$. By the above reduction, $p(x) = (x - \lambda)(x - \mu)$ for some $\lambda$ and $\mu$ in $\mathbb{C}$ (since $A$ clearly commutes with itself and with $I$, and since $A$ maps $D(A)$ to itself, this decomposition is reasonable). Yet if $p(A) = 0$, take the operator $\displaystyle B = A - \frac{\lambda + \mu}{2} I$ (with the same domain), and we see that for $\displaystyle \nu = \frac{\lambda - \mu}{2}$, $B$ satisfies $(B + \nu)(B - \nu) = 0$, or $B^2 - \nu^2 = 0$. We will show that an unbounded choice of $B$ exists, satisfying $B: D(B) \to D(B)$, hence an unbounded choice of $A$ exists, with $A: D(A) \to D(A)$. Subcase 1: $\nu = 0$. Then take $H = \ell^2(\mathbb{N})$, let $H_0 = D(B)$ be the sequences with only finitely many nonzero elements, and let $B$ be the operator represented by the infinite matrix $$ \begin{pmatrix} 0 & 1 & & & & & \cdots \\ 0 & 0 & & & & & \cdots \\ & & 0 & 2 & & & \cdots \\ & & 0 & 0 & & & \cdots \\ & & & & 0 & 3 & \cdots \\ & & & & 0 & 0 & \ddots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots \end{pmatrix},$$ which is clearly well-defined on $H_0$ and clearly maps $H_0$ to itself. Then $B^2 = 0$, but letting $e_j$ be the $j$th basis vector, $B e_{2j} = j e_{2j - 1}$, so clearly $B$ is unbounded. Subcase 2: $\nu \neq 0$. Then again take $H = \ell^2(\mathbb{N})$, and $H_0$ the almost-everywhere-$0$ sequences. We now define $B$ by the matrix $$ \begin{pmatrix} 0 & \nu & & & & & \cdots \\ \nu & 0 & & & & & \cdots \\ & & 0 & 2\nu & & & \cdots \\ & & \frac{1}{2}\nu & 0 & & & \cdots \\ & & & & 0 & 3\nu & \cdots \\ & & & & \frac{1}{3}\nu & 0 & \ddots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots \end{pmatrix}.$$ Again, $B$ maps $H_0$ to itself, and $B^2 e_j = \nu^2 e_j$ for all $j$, so $B^2 = \nu^2$ on $H_0$. Yet $Be_{2j} = j\nu\, e_{2j - 1}$, so $B$ is unbounded. Case 2: $\deg p > 2$. Well, then $p(x) = q_1(x) q_2(x)$, with $\deg q_1 = 2$, $\deg q_2 \geq 1$. Again take $H$ and $H_0$ as above, and take $A$ to be an unbounded solution to $q_1(A) = 0$, satisfying $A: D(A) \to D(A)$. Then $p(A) = q_1(A) q_2(A) = 0 \cdot q_2(A) = 0$, and $A$ is unbounded. [Again, my naive factoring really depends on $D(A) \subseteq D(A^n)$, hence I am strongly using the $A: D(A) \to D(A)$ fact here.] QED. Of course, Case 2 is sort of a cheat. I think there should be a "natural" example in general, since you can construct unbounded examples to $A^n = 0$ by just increasing the order of the nilpotency, and $A^n = I$ by taking positive real numbers $a_1, \dotsc, a_n$ with $a_1 a_2 \dotsc a_n = 1$ and letting the building-block matrix be $$ \begin{pmatrix} 0 & a_1 & & & \cdots & \\ & 0 & a_2 & & \cdots & \\ & & 0 & a_3 & & \\ & & & \ddots & \ddots & \\ & & & & 0 & a_{n-1} \\ a_n & & & \cdots & & 0 \end{pmatrix} $$ Then again make a block-diagonal infinite matrix such that as we repeat the blocks, $a_3, \dotsc, a_n$ are constant, and $a_1 \to \infty$ and $a_2 \to 0$ (or somesuch).
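To make the construction tangible, here is a small numerical check (my own sketch, using SciPy's `block_diag`) of the subcase-2 blocks, truncated to finitely many blocks: the relation $B^2=\nu^2 I$ holds exactly on the truncation, while the block norms grow without bound as the block index increases.

```python
import numpy as np
from scipy.linalg import block_diag

nu, N = 1.5, 8
# j-th building block: [[0, j*nu], [nu/j, 0]]; it squares to nu^2 * I_2
blocks = [np.array([[0.0, j * nu], [nu / j, 0.0]]) for j in range(1, N + 1)]
B = block_diag(*blocks)

print(np.allclose(B @ B, nu**2 * np.eye(2 * N)))          # True: B^2 = nu^2 I
print([round(np.linalg.norm(b, 2), 2) for b in blocks])   # norms grow like j*nu
```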
Let $x_{k+1} = \frac{x_k^{d_k}+1}{x_{k-1}}$, $k \in \mathbb{Z}$, where $d_{k+2} = d_k \in \mathbb{Z_{>0}}$. Let $b=d_1$ and $c=d_2$. Define the cluster algebra $A = A(\left( \begin{matrix} 0 & b \\ -c & 0 \end{matrix} \right))$ to be the algebra generated by all $x_k$. Is the following set $B$ a totally positive basis of $A$? $$ B = \{ x_0^{m_0} x_1^{m_1} x_2^{m_2} x_3^{m_3}: \min(m_0, m_2)=0, \min(m_1, m_3)=0 \}. $$ Are there some references for the canonical basis of the algebra $A$? Thank you very much.

You should look at Positivity and canonical bases in rank 2 cluster algebras of finite and affine types by Sherman and Zelevinsky, where they are able to construct a canonical basis explicitly for finite type and affine type (i.e. $bc < 4$ and $bc = 4$ respectively). Your $B$ is a basis for $A$ called the standard monomial basis. However it is not canonical, as you can choose any four consecutive $x_i, x_{i+1}, x_{i+2}, x_{i + 3}$. Also it does not have the following notion of positivity. Given $y \in A$ we call $y$ positive if the expansion of $y$ as a Laurent polynomial in every cluster $x_i, x_{i+1}$ has only positive coefficients. We want a basis such that the positive elements of $A$ are positive combinations of the basis elements. For $(b,c) = (2,2)$ consider $z = x_0x_3 - x_1x_2$; then $$z = \frac{x_1^2 + x_2^2 + 1}{x_1 x_2},$$ and by symmetry it is also a Laurent polynomial with positive coefficients in any other cluster. For finite type the canonical basis is all cluster monomials, and for affine type the basis is all cluster monomials along with some additional elements.
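A quick symbolic experiment (my own illustration) confirms both the Laurent phenomenon for $(b,c)=(2,2)$ and the closed form for $z$ quoted above:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)

# (b, c) = (2, 2): every exchange uses x_{k+1} = (x_k^2 + 1) / x_{k-1}
xs = {0: sp.cancel((x1**2 + 1) / x2),  # x0 from running the recurrence backwards
      1: x1, 2: x2}
for k in range(2, 6):
    xs[k + 1] = sp.cancel((xs[k]**2 + 1) / xs[k - 1])

for k in range(3, 7):
    print(k, xs[k])   # each denominator is a monomial in x1, x2 (Laurent phenomenon)

z = sp.cancel(xs[0] * xs[3] - x1 * x2)
print(sp.simplify(z - (x1**2 + x2**2 + 1) / (x1 * x2)))   # 0, matching the answer
```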
$3\Rightarrow 2$: Suppose $X$ is a collection of nonempty sets; then there exists $f:P(\bigcup X)\setminus\{\varnothing\}\to\bigcup X$ which chooses from every nonempty subset of $\bigcup X$. If we restrict $f$ to the set $X$ then we have a choice function. $2\Rightarrow 3$: Trivial: if every non-empty collection of non-empty sets has a choice function, then given a non-empty $X$ we have that $P(X)\setminus\{\varnothing\}$ is a non-empty collection of non-empty sets, thus has a choice function. $2\Rightarrow 1$: Trivial: if every collection has a choice function then every pairwise disjoint collection has a choice function $f$, and $\{f(x)\mid x\in X\}$ is a choice set. $1\Rightarrow 2$: Given $X$ a non-empty collection of non-empty sets, let $\{\{x\}\times x\mid x\in X\}$ be a collection of now pairwise disjoint sets. The choice set is a choice function. (You may want to show why they are pairwise disjoint; this is because if $x\neq y$ then $(\{x\}\times x)\cap(\{y\}\times y)=\varnothing$. Also you might want to show why the choice set is a function, but it meets $\{x\}\times x$ at only one point... so functionality holds and it is indeed a choice function.)
The power law function for a gamma-ray burst is defined in terms of amplitude ($A$), energy ($E$), pivot energy ($E_{piv}$, fixed) and index $\lambda$ as $$f(E; A, \lambda) = N(E) = A\left(\dfrac{E}{E_{piv}}\right)^\lambda$$ where we use a range of energy for model fitting of the GRB. The fitting gives us a single value for every variable except energy (which is the input), and error propagation for this formula can be calculated by partially differentiating it with respect to its variables. What I'm confused about is what value to give $E$ when I evaluate the error expression $$err_{pl} = \sqrt{\left(\frac{E}{E_{piv}}\right)^{2\lambda}s_A^2 + \frac{A^2\lambda^2\left(E/E_{piv}\right)^{2\lambda-2}}{E_{piv}^2}s_E^2 + A^2\left(\frac{E}{E_{piv}}\right)^{2\lambda}\left[\log\left(\frac{E}{E_{piv}}\right)\right]^2 s_{\lambda}^2}$$
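For what it's worth, one common reading is that $E$ is simply the abscissa: you evaluate the error at every $E$ across the fitted energy range, producing an error band around the model curve rather than a single number. A sketch with invented values (and with the $s_E$ term dropped, treating $E$ as the exact independent variable):

```python
import numpy as np

# Hypothetical fitted values and 1-sigma uncertainties, purely illustrative
A, lam, E_piv = 1.0e-2, -1.2, 100.0   # say, E in keV
s_A, s_lam = 1.0e-3, 0.05

E = np.logspace(1, 3, 200)            # evaluate across the fitted energy range
r = E / E_piv
N = A * r**lam                        # the model curve

# Propagated 1-sigma uncertainty at each energy E:
err = np.sqrt(r**(2 * lam) * s_A**2
              + (A * r**lam * np.log(r))**2 * s_lam**2)
# N - err and N + err then bracket the model at every E in the range.
```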
A recent video by 3Blue1Brown (previously seen generating Pi from bouncing blocks) introduced me to an interesting problem from the 2011 International Mathematical Olympiad. The problem (described below) describes a windmill process where a line rotates through a cloud of points, switching pivots whenever it hits a new point. I found the video really informative and the process mesmerizing to watch, so I just had to code up a simulation of the process to play with. The end result can be seen below. In the rest of this post, I will talk about the problem and how I implemented it. In this demo, you can generate a point cloud and watch the windmill rotate. You can switch the pivot by clicking on a point. You can also ask the demo to "solve" the problem.

The problem

For our problem, we examine a windmill process. We start with a set of points on a plane \(\mathcal{S}\) and a pivot point \( \mathcal{P} \in \mathcal{S} \). No three points in \(\mathcal{S}\) are collinear, meaning that no three points are on the same line. We then draw a line \( \ell \) through \(\mathcal{P}\) and start rotating the line clockwise. Whenever the line hits another point in \(\mathcal{S}\), that point becomes our new pivot, and we continue rotating the line. The non-collinearity property ensures that we will never hit two new points at the same time. Our question then is: can you pick a starting pivot \( \mathcal{P} \) so that every point will be hit infinitely many times?

Before we look at the actual solution, I'd like to mention a few insights. Given that we want to hit every point infinitely many times, we want to find a cycle that hits every point at least once. We can also look at the opposite problem. Can we pick a starting pivot so that we don't hit every point infinitely many times, i.e. can we find a cycle that never touches some points? The answer to that last question is yes, at least sometimes. If not all of the points in \(\mathcal{S}\) lie on their convex hull, we can start with our line perfectly vertical at the leftmost point of the convex hull of \(\mathcal{S}\) and the line will simply walk the convex hull, skipping any interior points.

The solution

If you play with the demo, you may notice that the line will circle around the center of the point cloud. If the line starts in the center of the cloud, it will stay in the center, and if it starts at the edge, it will stay at the edge. Obviously, if we stay in the center, we will hit every point in a full rotation. Now let's prove this. After selecting an initial pivot, the number of points on either side of the line stays the same. When we hit a new pivot, we have two options:

- The new pivot is above the current pivot. This means that it was originally on the right of the line, since the line had to rotate into it. When the line rotates on, this means that our original pivot is now on the right of the line, maintaining the status quo.
- When the new pivot is below our current pivot, the situation is the opposite. The new pivot must have been on the left, and our old pivot will end up on the left.

We can use this invariant to choose our pivot so that all points are hit at least once. To do so, we just need to have as many points on the left as on the right, and the line pointing straight up. When our line has made half a rotation, we still have the same number of points on the left as on the right, although left and right have swapped.
If we had an odd number of points, this must mean that our current pivot is the same as the one we started with, since there is only one point with as many points to its left as to its right. Since the only way to switch sides of the line is to get hit by the line, this must mean that we hit every point at least once. If we started with an even number, things are more complicated since we lose our symmetry, but luckily it doesn't get that much more complicated. If the number of points is \(|\mathcal{S}| = N \), we pick the \(n = \frac{N}{2}\)th point from the left, so that we have \(n\) points on the left and \(n - 1\) on the right of the pivot. Now, if we rotate half a circle, every point has flipped sides, except for two: our initial \(\mathcal{P}\), which is now on the right side of the line, and the point directly next to it, which has become the current pivot. If we once again turn half a circle, this flips again, and we are back in our starting position.

This explanation is based on the explanation by 3Blue1Brown in his video, starting from about 9 minutes in. Please do watch his video; he explains it better and with nicer graphics than I do or can.

The demo

Now to actually make this work. There are roughly two parts to this: showing it visually and actually simulating the problem. For my visualisation, I chose to use SVG because it allows me to easily scale the resulting images, and it saved me from wrapping my head around coordinates. I can simply add circle elements and specify coordinates, optionally adding a pivot-class when relevant. Drawing a line is not much harder, except that SVG doesn't do infinite lines. Instead, I just use a line that over-scans the view box by a lot. So with that out of the way, I will go over the algorithmic details of implementing the demo.

Initialization

In order to generate our point cloud we simply generate \(2 \times n \) random numbers between zero and one and are done. This does not guarantee that no 3 of those points are on one line. We can technically just check new candidates and discard them when they are invalid, but:

- checking the \(i\)th point requires evaluating all \((i - 1)(i - 2)/2\) existing pairs, which makes the point generation process \(O\left(n^3\right)\) and expensive at higher point counts,
- checking is not even necessarily feasible due to floating point rounding errors in JavaScript,
- but all of that doesn't matter because the probability of completely random points being collinear is very near zero, since \(\mathbb{R}\) is just really large.

I really wanted to find a source that proves the last one, it just intuitively feels true, but I didn't. If you have one, please get in touch.

Finding collisions

If we want to rotate our line around like the described windmill, we need to know when we are going to hit the next point. To do this, we can simply compute the angle between each point \(\mathcal{Q}_i\) and the \(x\)-axis relative to our current pivot \(\mathcal{P}\). We can then compare this to the current angle of our line, and determine which will be hit first. Since we are only interested in hitting things with a clockwise rotation, we add half a circle if our result would be a counter-clockwise rotation; a sketch of this computation is given below. All that remains is doing this for every point, finding the one with the smallest distance to hit, rotating by a bit, and continuing on. There is one slight caveat: when you are currently hitting two points, you will start flip-flopping between them when trying to compute the next point to hit.
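In Python-style pseudocode (the demo itself is JavaScript; this is my own transcription of the idea), the naive next-collision search looks roughly like this:

```python
import math

def next_pivot(points, pivot, angle):
    """Naive next-collision search: the point needing the smallest rotation.

    points: list of (x, y); pivot: index of the current pivot;
    angle: current angle of the line, folded into [0, pi).
    """
    px, py = points[pivot]
    best, best_rot = None, math.inf
    for i, (x, y) in enumerate(points):
        if i == pivot:
            continue
        theta = math.atan2(y - py, x - px) % math.pi  # angle of line pivot->point
        rot = (theta - angle) % math.pi               # rotation needed to hit it
        if rot < best_rot:          # flip-flop danger: rot can be ~0 for the
            best, best_rot = i, rot  # point we just left (see below)
    return best, best_rot
```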
There are various ways you can work around this, but I decided that I would ignore all points with a rotation less than some \(\epsilon\), in this case \(10^{-6}\). As far as I know, this doesn't cause issues.

Final thoughts

I find the windmill problem discussed in this post rather beautiful. Its statement is very easy to understand and visualize, and its solution and proof can be understood by everyone. Crafting the visualisation for it really made me appreciate it even more. On the other hand, creating that visualisation with SVG was a nice experience and got me a bit deeper into that. Sure, you can draw all this on a <canvas> element, but why would you? SVG gives you all sorts of viewbox control without having to work for it, and you can even depend on CSS to do your styling. All around a nice exercise.
Based on the Lorentz factor $\gamma = \frac{1}{\sqrt {1-\frac{v^2}{c^2}}}$ it is easy to see that $v < c$, since otherwise $\gamma$ would be either undefined or a complex number, which is non-physical. Also, as far as I understand, this equation was known before Einstein's postulates were published. My question is: why didn't Lorentz himself conclude that no object can go faster than the speed of light? Or maybe he did, I do not know. I feel I am missing some context here.

If I had to sum up my findings in a sound bite it would be this: Einstein was the first to derive the Lorentz transformation laws based on physical principles--namely that the speed of light is constant and the principle of relativity. The fact that Lorentz and Poincaré were not able to do this naturally leads to why they were not able to justify making any fundamental statements about the nature of space and time--namely that nothing can go faster than light. This is seen by a careful reading of the Einstein (1905) – Special relativity section of the History of Lorentz Transformations Wikipedia article:

On June 30, 1905 (published September 1905) Einstein published what is now called special relativity and gave a new derivation of the transformation, which was based only on the principle of relativity and the principle of the constancy of the speed of light. [Emphasis mine]

Furthermore, it is stated that (idem):

While Lorentz considered "local time" to be a mathematical stipulation device for explaining the Michelson-Morley experiment, Einstein showed that the coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference.

My reading of this seems to indicate that, at the time of publishing, Lorentz considered the notion of "local time" (via his transformations) to be just a convenient theoretical device, but he didn't seem to have a justifiable reason for why it should be physically true. It looks obvious in hindsight, I know, but model building is tough. So the reason, in short, seems (to me) to be this: As far as Lorentz saw it, he was able to "explain" the Michelson-Morley experiment in a way not unlike the way that Ptolemy could explain the orbits with epicycles. Did it work? Yes, but its mechanism lacked physical motivation. That is, he didn't have a physical reason for such a transformation to arise. Rather, it was Einstein who showed that these transformation laws could be derived from a single, physical assumption--the constancy of the speed of light. This insight was the genius of Einstein. Picking up at the end of the last blockquote, we further have that (idem):

For quantities of first order in v/c, this was also done by Poincaré in 1900; while Einstein derived the complete transformation by this method. Unlike Lorentz and Poincaré who still distinguished between real time in the aether and apparent time for moving observers, Einstein showed that the transformations concern the nature of space and time.

This implies that Lorentz and Poincaré were in fact able to derive the Lorentz transformations, but since they believed that the Aether existed they failed to make the fundamental connection to space, time and the constancy of the speed of light. The failure to make this connection means that there would have been no justifiable reason to take it physically seriously.
So, to Lorentz and Poincaré the Lorentz transformation laws would remain ad-hoc mathematical devices to explain the Michelson-Morley experiment within the context of the Aether, not saying anything fundamental about space and time. This failure to conclude any fundamental laws about the nature of spacetime subsumes, by implication, making any statements such as "no moving object can surpass the speed of light."

Edit: @VladimirKalitvianski has pointed me to this source, which provides the opinions of historians on the matter:

Poincaré's work in the development of special relativity is well recognised, though most historians stress that despite many similarities with Einstein's work, the two had very different research agendas and interpretations of the work. Poincaré developed a similar physical interpretation of local time and noticed the connection to signal velocity, but contrary to Einstein he continued to use the Aether in his papers and argued that clocks at rest in the Aether show the "true" time, and moving clocks show the local time. So Poincaré tried to keep the relativity principle in accordance with classical concepts, while Einstein developed a mathematically equivalent kinematics based on the new physical concepts of the relativity of space and time.

Indeed this resource is useful, as it adds an additional dimension as to why Lorentz didn't publish any claims about a maximum signal velocity. It reads rather clearly, so I won't bother summarizing it.

Because typically, if you find an expression that seems to break down at some value of $v$, you would conclude that the expression simply loses its validity for that value of $v$, not that the value isn't attainable. Presumably this was the conclusion of Lorentz and others. The reason Einstein concluded otherwise is that special relativity gives a physical argument for "superluminal speeds are equivalent to time running backwards" -- the argument is "does a superluminal ship hit the iceberg before or after its headlight does?" This depends on the observer, and because the headlight would melt the iceberg, the consequences of each observation are noticeably different. The only possible conclusions are "superluminal ships don't exist", "time runs backwards for superluminal observers", or "iceberg-melting headlights don't exist".
Convergence of global and bounded solutions of a two-species chemotaxis model with a logistic source

Ke Lin, Chunlai Mu (College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China)

$$\left\{\begin{array}{ll}u_t=\Delta u-\chi_1\nabla\cdot( u\nabla w)+\mu_1u(1-u-a_1v), &x\in \Omega,\quad t>0,\\v_t=\Delta v-\chi_2\nabla\cdot( v\nabla w)+\mu_2v(1-a_2u-v), &x\in\Omega,\quad t>0,\\w_t=\Delta w- w+u+v, &x\in\Omega,\quad t>0.\end{array}\right.$$

Mathematics Subject Classification: Primary: 35B40, 92C17; Secondary: 35B65, 35K4.

Citation: Ke Lin, Chunlai Mu. Convergence of global and bounded solutions of a two-species chemotaxis model with a logistic source. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6): 2233-2260. doi: 10.3934/dcdsb.2017094
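To get a rough feel for the dynamics, here is a crude 1-D explicit finite-difference sketch of the system above (my own illustration with invented parameters, in no way the paper's method). In the weak-competition regime $a_1, a_2 < 1$ the solution should drift toward the constant coexistence state $u^* = v^* = (1-a_1)/(1-a_1 a_2)$:

```python
import numpy as np

# Illustrative parameters only; the paper works with general chi_i, mu_i, a_i
chi1, chi2, mu1, mu2, a1, a2 = 1.0, 1.0, 1.0, 1.0, 0.5, 0.5
nx, dx, dt, steps = 200, 0.1, 0.001, 20000   # dt < dx^2/2 for stability

def lap(f):   # periodic Laplacian
    return (np.roll(f, 1) - 2*f + np.roll(f, -1)) / dx**2

def d1(f):    # periodic centered first derivative
    return (np.roll(f, -1) - np.roll(f, 1)) / (2*dx)

rng = np.random.default_rng(0)
u = 0.5 + 0.01 * rng.standard_normal(nx)
v = 0.5 + 0.01 * rng.standard_normal(nx)
w = np.ones(nx)

for _ in range(steps):
    un = u + dt * (lap(u) - chi1*d1(u*d1(w)) + mu1*u*(1 - u - a1*v))
    vn = v + dt * (lap(v) - chi2*d1(v*d1(w)) + mu2*v*(1 - a2*u - v))
    wn = w + dt * (lap(w) - w + u + v)
    u, v, w = un, vn, wn

# Expect means near u* = v* = 2/3 with small standard deviations
print(u.mean(), u.std(), v.mean(), v.std())
```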
Morley's Miracle In 1899, more than a hundred years ago, Frank Morley, then professor of Mathematics at Haverford College, came across a result so surprising that it entered mathematical folklore under the name of Morley's Miracle. Morley's marvelous theorem states that The three points of intersection of the adjacent trisectors of the angles of any triangle form an equilateral triangle. The applet below serves to demonstrate that, indeed, whatever the shape of the given triangle, the Morley triangle is always equilateral. Morley's original proof stemmed from his results on algebraic curves tangent to a given number of lines. As usual in mathematics, numerous attempts have been made to find a simple, elementary proof that could match the level of knowledge and proficiency required to grasp the statement of the theorem. The simplest proofs proceed backwards starting with an equilateral triangle. They differ in subsequent steps. Most such proofs highlight some additional features of the configuration but complicate matters unnecessarily, as a few most trivial proofs convincingly demonstrate. Before giving several backward proofs, here is a direct one which, while logically absolutely transparent, requires some high school trigonometry. Proof #1 In all likelihood, this proof first appeared in A. Letac, Solution (Morley's triangle), Problem No. 490, Sphinx, 9(1939) 46. I came across it in a Russian book D. O. Shklyarsky, N. N. Chentsov, Y. M. Yaglom, Selected Problems and Theorems of Elementary Mathematics, v. 2, problem 97, Moscow, 1952 and also in The Art of Mathematics by B. Bollobás (Cambridge University Press, 2006, pp. 126-127.) The idea of the proof is fairly straightforward. In triangles $ARB,$ $BPC,$ $CQA,$ we know the bases - $AB,$ $BC,$ and $AC$ - and the adjacent angles. The Law of Sines then yields the segments $AR,$ $BR,$ $BP,$ $CP,$ $CQ,$ and $AQ.$ Next we apply the Law of Cosines to triangles $AQR,$ $BPR,$ and $CPQ$ to determine (and compare) the segments $QR,$ $PR,$ and $PQ.$ The fact that they come out equal proves the theorem. For simplicity, let (angles) $A = 3\alpha,$ $B = 3\beta,$ and $C = 3\gamma.$ This implies that $\alpha + \beta + \gamma = 60^{\circ}.$ Also, assuming that the radius of the circle circumscribed around $\Delta ABC$ equals $1,$ we get $AB = 2\sin (3\gamma),$ $BC = 2\sin (3\alpha),$ $AC = 2\sin (3\beta).$ Consider now $\Delta BPC.$ By the Law of Sines, $\begin{align}\displaystyle \frac{BP}{\sin (\gamma)} &= \frac{BC}{\sin (180^{\circ} - \beta - \gamma)}\\ &= 2\frac{\sin (3\alpha)}{\sin (\beta + \gamma)}\\ &= 2\frac{\sin (3\alpha)}{\sin (60^{\circ} - \alpha)}. \end{align}$ Therefore, $\displaystyle BP = 2\frac{\sin (3\alpha)\sin (\gamma)}{\sin (60^{\circ} - \alpha)}.$ To simplify the expression note that $\begin{align} \sin (3\alpha) &= 3\sin (\alpha) - 4\sin^{3}(\alpha)\\ &= 4\sin (\alpha)[(\sqrt{3}/2)^{2} - \sin^{2}(\alpha)]\\ &= 4\sin (\alpha)[\sin^{2}(60^{\circ}) - \sin^{2}(\alpha)]\\ &= 4\sin (\alpha)(\sin (60^{\circ}) + \sin (\alpha))(\sin (60^{\circ}) - \sin (\alpha))\\ &= 4\sin (\alpha) 2\sin[(60^{\circ} + \alpha)/2]\cos[(60^{\circ} - \alpha)/2] 2\sin[(60^{\circ} - \alpha)/2]\cos[(60^{\circ} + \alpha)/2]\\ & = 4\sin (\alpha)\sin (60^{\circ} + \alpha)\sin (60^{\circ} - \alpha). \end{align}$ Reaping the fruits of this effort, $BP = 8\sin (\alpha)\sin (\gamma)\sin (60^{\circ} + \alpha).$ Similarly, $BR = 8\sin (\gamma)\sin (\alpha)\sin (60^{\circ} + \gamma).$ There are two ways to proceed to the second step.
The traditional one that was employed in the above references invokes the Law of Cosines. A more recent one, due to Leo Giugiuc, makes use of the Law of Sines. I continue with the traditional proof, while Leo's proof deserves to be treated separately. We invoke the Law of Cosines in $\Delta BPR:$ $PR^{2} = BP^{2} + BR^{2} - 2 BP\cdot BR \cos (\beta),$ from where $PR^{2} = 64\sin^{2}(\alpha)\sin^{2}(\gamma)[\sin^{2}(60^{\circ} + \alpha) + \sin^{2}(60^{\circ} + \gamma) - 2\sin (60^{\circ} + \alpha)\sin (60^{\circ} + \gamma)\cos (\beta)].$ Note, however, that $(60^{\circ} + \alpha) + (60^{\circ} + \gamma) + \beta = 180^{\circ}.$ Thus, there exists a triangle with angles $(60^{\circ}+\alpha),$ $(60^{\circ}+\gamma),$ and $\beta.$ Indeed, there is a whole family of similar triangles with those angles. Out of this family, choose the one with the circumscribed radius equal to $1$ (then, as above, by the Law of Sines, its sides have a very simple form.) In that triangle, apply the Law of Cosines: $\sin^{2}(\beta) = \sin^{2}(60^{\circ} + \alpha) + \sin^{2}(60^{\circ} + \gamma) - 2\sin (60^{\circ} + \alpha)\sin (60^{\circ} + \gamma)\cos (\beta).$ This gives $PR = 8\sin (\alpha)\sin (\beta)\sin (\gamma),$ an expression which is symmetric in $\alpha,$ $\beta,$ and $\gamma.$ $QR$ and $PQ$ are similarly found to be equal to the same expression. Therefore, $PR=PQ=QR.$
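For skeptics of the trigonometry, here is a short numerical check (a sketch of my own, not part of the original article): pick random trisected angles with $\alpha+\beta+\gamma=60^{\circ}$, form $BP$ and $BR$ from the Law-of-Sines step above, apply the Law of Cosines with the angle $\beta$ at $B$, and compare against $8\sin\alpha\sin\beta\sin\gamma$.

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.sin
for _ in range(5):
    # random angles with alpha + beta + gamma = 60 degrees
    u, v = sorted(rng.uniform(1.0, 59.0, 2))
    alpha, beta, gamma = np.radians([u, v - u, 60.0 - v])
    # trisector segments from B (circumradius 1), as derived above
    BP = 8 * s(alpha) * s(gamma) * s(np.pi / 3 + alpha)
    BR = 8 * s(gamma) * s(alpha) * s(np.pi / 3 + gamma)
    # Law of Cosines in triangle BPR, with angle beta between BP and BR
    PR = np.sqrt(BP**2 + BR**2 - 2 * BP * BR * np.cos(beta))
    print(PR, 8 * s(alpha) * s(beta) * s(gamma))   # the two columns agree
```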
I just want to add something to the correct @annav answer, with a practical example in basic Quantum Field Theory. Imagine a particle process with $2$ initial particles and $2$ final particles: you have some initial state (say at t= $-\infty$), which is $|i\rangle =|1\rangle |2\rangle$, where $|1\rangle$ and $|2\rangle$ are the states (at t= $-\infty$) of the initial particles. This initial state $|i\rangle$ has a unitary evolution. Practically, the non-trivial part of this evolution is due to the exchange of "virtual particles" (for instance you may imagine two initial electrons exchanging a "virtual photon", or an initial left-handed electron and an initial right-handed electron exchanging a "virtual Higgs"). Now, the initial state $|i\rangle$ is evolving, so at $t = +\infty$, the final state could be written $|f\rangle = \sum\limits_{k,l} A_{1,2;k,l}|k\rangle |l\rangle$, where $|k\rangle$ and $|l\rangle$ represent some possible state for the final particles. Until now, you see that there is a (unitary) evolution due to the interaction, but there is no "collapse". $A_{1,2;k,l}$, in the above expression, is simply the probability amplitude to find the final particles in a state $|k\rangle |l\rangle$, supposing the initial particles were in a state $|1\rangle |2\rangle$. However, if you make a measurement (at t=$+\infty$), you will have a "collapse", and you will find a final state $|k\rangle |l\rangle$ with the probability $|A_{1,2;k,l}|^2$. Another interesting point is that, considering here simple Quantum Mechanics, interactions between a particle and a measurement apparatus may appear by entanglement. We may consider the example of the 2-slit experiment with photons. Without any measurement apparatus, the total state is $|\psi\rangle = |\psi_L\rangle + |\psi_R \rangle$, where $L$ and $R$ represent the two slits. If you bring in a measurement apparatus potentially able to detect which slit has been used by the photon, but without explicitly performing the measurement, the new state is $|\psi'\rangle = |\psi_L\rangle |M_L \rangle + |\psi_R \rangle |M_R \rangle$, where $|M_R\rangle$ and $|M_L\rangle$ are states of the measurement apparatus which are quasi-orthogonal ($\langle M_R|M_L\rangle \approx 0$). This is a pre-measurement state; we see that there is an entanglement between the states of the particle and the states of the measurement apparatus. Because the states of the apparatus are orthogonal, this destroys the interference pattern. Now, you may really perform a measurement; in this case, you explicitly detect which slit has been used by the photon. After this, the final state would be $|\psi''\rangle = |\psi_L\rangle |M_L \rangle$, if the $L$ slit path is detected. More correct models would involve in fact entangled (pre-measurement) states between the particle, the measurement apparatus and the environment: $ \sum\limits_i |\psi_i\rangle |M_i \rangle |E_i \rangle$.
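To make the pre-measurement point concrete, here is a minimal numerical sketch (my own toy model, not from the answer above): two waves from slits at $x=\pm d$ reach a screen, and the which-path cross term is either kept ($\langle M_L|M_R\rangle = 1$, no apparatus) or dropped ($\langle M_L|M_R\rangle = 0$, orthogonal apparatus states). The wavenumber, slit offset and screen distance are arbitrary assumed values.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)      # screen coordinate
k, d, L = 2.0, 1.5, 20.0            # assumed wavenumber, slit offset, screen distance

# amplitude reaching the screen from each slit (toy outgoing waves)
r_L, r_R = np.hypot(x - d, L), np.hypot(x + d, L)
psi_L = np.exp(1j * k * r_L) / np.sqrt(r_L)
psi_R = np.exp(1j * k * r_R) / np.sqrt(r_R)

# no which-path record: amplitudes add, the cross term produces fringes
I_no_apparatus = np.abs(psi_L + psi_R) ** 2

# orthogonal apparatus states <M_L|M_R> = 0: the cross term cancels
I_premeasured = np.abs(psi_L) ** 2 + np.abs(psi_R) ** 2

# the fringe contrast (spread of the pattern) collapses in the second case
print(I_no_apparatus.std(), I_premeasured.std())
```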
What Is Area? Area, as many other mathematical concepts, has experiential origins. Intuitively, this is the measure of expanse associated with plane figures. Area is a \(2\)-dimensional analogue of the \(1\)-dimensional length and the \(3\)-dimensional volume. Paint containers carry a mention of the area the paint should suffice to cover. The area of a forest region is an indicator of the number of trees that grow there, and vice versa. As many other mathematical concepts, area can be defined in a variety of ways. The context and potential applications may dictate which of the possible definitions is preferable under given circumstances. In general, the three commonplace terms length, area, and volume relate to three instances of a general concept of measure. A measure \(M\) is a non-negative, countably additive set function: \(M(A) \ge 0,\) and \(M(\cup_{n=1}^{\infty}A_{n}) = \sum_{n=1}^{\infty}M(A_{n})\) for all families of pairwise disjoint sets \(A_{n}.\) Measures are constructed starting with an assignment of values to some basic sets. For example, in the linear (\(1\)-dimensional) case, all segments \((a, b), [a, b), (a, b], [a, b], a < b,\) may be assigned the value \(b - a\). Next, the disjoint unions of open segments \((a_{n}, b_{n}), n = 1, 2, ...\) are assigned the value of \(\sum_{n=1}^{\infty}(b_{n}-a_{n})\), and similarly for the unions of semi-open intervals. Every linear set \(A\) admits a cover by countably many semi-open intervals \([a_{n}, b_{n}), n = 1, 2, ...\). The values \(\sum_{n=1}^{\infty}(b_{n}-a_{n})\) of all such covers, being non-negative, are bounded from below, which allows one to assign the value of \(\inf(\sum_{n=1}^{\infty}(b_{n}-a_{n}))\) to the set \(A\). This, however, does not define a measure yet, but what is known as the outer measure \(M_{e}\). In this manner, all linear sets \(A\) are assigned an outer measure. So defined, the outer measure is non-negative, monotone and countably subadditive: \(M_{e}(A) \ge 0;\) \(A\subset B\) implies \(M_{e}(A) \le M_{e}(B);\) \(M_{e}(\cup_{n=1}^{\infty}A_{n}) \le \sum_{n=1}^{\infty}M_{e}(A_{n}).\) Not all sets are created equal. A set \(A\) is called (Carathéodory) measurable if \(M_{e}(A\cap E) + M_{e}(E-A) = M_{e}(E)\) for every set \(E\). Measurable sets form a \(\sigma\)-algebra on which the outer measure has the two required properties of a measure. In a similar manner, one may define the Hausdorff (outer) measure \(H^{s}, s > 0,\) by setting \(H^{s}([a, b)) = (b-a)^{s}.\) (In passing, the Hausdorff dimension of a set \(A\) is defined as a real number \(s_{0}\) such that \(H^{s}(A) = \infty\) for \(s < s_{0}\) and \(H^{s}(A) = 0\) for \(s > s_{0}\).) The Dirac (counting) measure assigns the value \(1\) to each point of a discrete set, say the set of natural numbers, and to each interval the number of points of that discrete set it contains. In the \(2\)-dimensional case, an \(a\times b\) rectangle is usually assigned the value \(a\cdot b\), and the construction analogous to the above leads to the measure that generalizes the concept of area. As you can see (or just sense, because I omitted all the proofs), the mathematical notion of area as a full-fledged measure is not at all simple.
However, for polygonal shapes, the process of assigning areas can be simplified based on the idea of equidecomposition: Two shapes are equidecomposable, provided one can be cut into a finite number of pieces that can be rearranged to form the second shape without overlap. Formally, we shall define area as a non-negative, finitely additive function \(S\) taking the same values on all congruent shapes: \(S(A) \ge 0;\) \(S(\cup_{n=1}^{N}A_{n}) = \sum_{n=1}^{N}S(A_{n})\) for all families of pairwise disjoint sets \(A_{n}, n = 1, 2, ..., N;\) \(A \cong B\) implies \(S(A) = S(B).\) (Here, disjoint sets may share parts of their boundaries but not interior points.) There are a number of ways to construct such a function. Each proceeds in several steps, starting with a chosen basic figure. [Euclid I.35-I.38, Kiselev, Ch. 5, Hadamard, Book IV, and Jacobs, Ch. 9] choose the rectangle as the basic figure, defining its area (Euclid only implicitly) as the product of its sides. [Hilbert, Ch. IV] starts with the triangle and defines its area as half the product of a base and the corresponding altitude. The fact that this definition is independent of which side is perceived as a base follows from the similarity of right triangles \(ADC\) and \(BEC\) in the following diagram: The similarity of the two triangles gives a proportion \(AC/BC = AD/BE\), which is the same as \(BC\cdot AD = AC\cdot BE.\) Clearly, bringing in the third side would give the same product. The area of a triangle is uniquely defined as half that (or either) product. As a consequence, the area of a rectangle is the product of two sides; the area of a parallelogram is the product of either base times the corresponding altitude. Were we to start with a rectangle, the finite additivity property would make it necessary to define the area of a triangle as half that of one of the associated rectangles and then prove independence of the result from the choice of the rectangle. By Euclid I.37-38, the area of \(\Delta ABC\) is half that of rectangle \(ABLK\) and also half that of \(BCMN\). But this exactly means that the products of the sides of the two rectangles are the same. (As an exercise, find a decomposition of one of the rectangles into triangles that, when rearranged, form the other rectangle.) To define the area of a polygon \(P\), triangulate it into the union of non-overlapping triangles \(\{T_{n}\}\) and declare the sum of their areas to be that of \(P\). This definition is independent of the triangulation, as the sketch below illustrates numerically. For another triangulation \(\{U_{m}\}\) of \(P\), form all possible intersections \(\{T_{k}\cap U_{m}\}\). Each of these (if not empty) is a polygon which, if not in itself a triangle, could be triangulated. Thus we find that both families of triangles \(\{T_{n}\}\) and \(\{U_{m}\}\) are composed of the same triangular pieces and define, therefore, the same area. Finite additivity implies monotonicity of area: \(A\subset B\) implies \(S(A) \le S(B),\) for polygonal \(A\) and \(B\). The adopted formula for the area of rectangles and triangles leads to the well-known formulas for other shapes, trapezoids in particular. It also implies continuity: if two polygons are close (e.g., in the sense of the Hausdorff distance) then their areas are also close. Continuity, in turn, makes it possible to define area for curvilinear shapes. For example, the area of a circle is defined as the common limit of the areas of inscribed and circumscribed regular polygons.
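As a small illustration of triangulation-independence, here is a hypothetical Python sketch (function names and the sample polygon are my own): it computes the area of a convex polygon by fanning triangles out of two different vertices and gets the same value both times.

```python
import numpy as np

def tri_area(p, q, r):
    # half the absolute cross product = (1/2) * base * altitude
    return 0.5 * abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1]))

def polygon_area(verts, apex=0):
    # triangulate by a fan of triangles from vertex `apex` (convex polygon)
    v = verts[apex:] + verts[:apex]
    return sum(tri_area(v[0], v[i], v[i+1]) for i in range(1, len(v)-1))

pentagon = [(0, 0), (4, 0), (5, 3), (2, 5), (0, 3)]
print(polygon_area(pentagon, 0), polygon_area(pentagon, 2))  # both 18.5
```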
References
J. Hadamard, Leçons de géométrie élémentaire, tome I, 13e édition, 1947, Editions Jacques Gabay, 1988, ISBN 2-87647-038-1.
D. Hilbert, Foundations of Geometry, Open Court, 1999.
H. R. Jacobs, Geometry, 3rd edition, W. H. Freeman and Company, 2003.
Kiselev's Geometry. Book I. Planimetry, adapted from Russian by Alexander Givental, Sumizdat, 2006.
I have to admit, I am not familiar with the use of the $\mathscr{O}\left[\lambda^{n}\right]$ notation. Apparently it doesn't mean what I thought. In John L. Friedman's lecture notes on Lie derivatives, forms, densities, and integration, he writes: A vector field $\mathbf{w}$ is Lie-derived by $\mathbf{v}$ if, for small $\lambda$, $\lambda\mathbf{w}$ is dragged along by the fluid flow. To make this precise, we are requiring that the equation [eq. (3)] $$\mathbf{r}\left(t\right)+\lambda\mathbf{w}\left(\mathbf{r}\left(t\right)\right)=\overline{\mathbf{r}}\left(t\right),$$ be satisfied to $\mathscr{O}\left(\lambda\right)$. At first I thought I knew exactly what that meant. Then I tried to put it into more rigorous terms and realized that I don't really know what it means in this context. Since equation 3 is linear in $\lambda$, I expect it to be accurate to $\mathscr{O}\left(\lambda^{2}\right)$. What does it mean to say equation 3 is "satisfied to $\mathscr{O}\left(\lambda\right)$", or to use $\mathscr{O}\left(\lambda^{2}\right)$ in equation 4 $$\mathbf{v}\left(\mathbf{r}\right)+\lambda\mathbf{v}\cdot\nabla\mathbf{w}\left(\mathbf{r}\right)=\mathbf{v}\left(\overline{\mathbf{r}}\right)=\mathbf{v}\left[\mathbf{r}+\lambda\mathbf{w}\left(\mathbf{r}\right)\right]$$ $$=\mathbf{v}\left(\mathbf{r}\right)+\lambda\mathbf{w}\cdot\nabla\mathbf{v}\left(\mathbf{r}\right)+\mathscr{O}\left(\lambda^{2}\right)?$$ I know it means "don't worry about error terms. They will vanish when the limit is taken." But that's not very satisfying. How is $\mathscr{O}\left(\lambda^{2}\right)$ stated in terms of a Taylor polynomial with a remainder? Edit to add: If $$\mathbf{r}\left(t\right)+\lambda\mathbf{w}\left(\mathbf{r}\left(t\right)\right)=\overline{\mathbf{r}}\left(t\right),$$ satisfied to $\mathscr{O}\left(\lambda\right),$ means $$\mathbf{r}\left(t\right)+\lambda\mathbf{w}\left(\mathbf{r}\left(t\right)\right)=\overline{\mathbf{r}}\left(t\right)+\mathscr{O}\left(\lambda\right),$$ then $\mathscr{O}\left(\lambda\right)=\mathbf{k}\lambda,$ where $\mathbf{k}\ne\vec{0}$ is a constant. So, $$\mathbf{w}\left(\mathbf{r}\left(t\right)\right)=\lim_{\lambda\to0}\frac{\overline{\mathbf{r}}-\mathbf{r}+\mathscr{O}\left(\lambda\right)}{\lambda}=\frac{d\mathbf{r}}{d\lambda}+\mathbf{k}.$$ Which is pretty clearly not what is intended. If it means $$\overline{\mathbf{r}}\left(t\right)-\mathbf{r}\left(t\right)=\mathscr{O}\left(\lambda\right),$$ then $\mathscr{O}\left(\lambda\right)=\mathbf{w}\left(\mathbf{r}\left(t\right)\right).$ Which makes sense. But that implies $$\mathbf{r}\left(t\right)+\lambda\mathbf{w}\left(\mathbf{r}\left(t\right)\right)+\mathscr{O}\left(\lambda^{2}\right)=\overline{\mathbf{r}}\left(t\right).$$ That is what has me confused. Are "equation 3 satisfied to $\mathscr{O}\left(\lambda\right)$" and the last statement above equivalent?
In the Frobenius Coin problem (or Chicken McNugget theorem) for $n>2$, do the numbers have to be pairwise relatively prime, or just relatively prime in totality? They need not be pairwise relatively prime. The reason why is essentially that Bézout's theorem works fine when we are only given that $\gcd(a_1,\ldots,a_n)=1$. Of course, it does not immediately follow that we can take the Bézout coefficients positive for sufficiently large numbers. Here's a quick way to see why: Suppose we want to write $m\in\mathbb N$ as $x_1a_1+\cdots+x_na_n$ with $x_1,\ldots,x_n\geq0$. Bézout already gives us such $x_1,\ldots,x_n\in\mathbb Z$, but they can be negative. Now let $(q_k,r_k)$ be the quotient and remainder when $x_k$ is divided by $\frac{a_1\cdots a_n}{a_k}$. Then $$\begin{align*} m &= \sum_{k=1}^n\left(q_k(a_1\cdots a_n)+r_ka_k\right)\\ &= (q_1+\cdots+q_n)\cdot a_1\cdots a_n+\sum_{k=1}^nr_ka_k. \end{align*}$$ Because $r_k\leq\frac{a_1\cdots a_n}{a_k}-1$, we have $q_1+\cdots+q_n>0$ as soon as $m>\sum_{k=1}^na_k\left(\frac{a_1\cdots a_n}{a_k}-1\right)$. Then $$m=a_1(a_2\cdots a_n(q_1+\cdots+q_n)+r_1)+\sum_{k=2}^nr_ka_k$$ where all coefficients are non-negative. Note: this technique allows one to prove that $$g(a_1,\ldots,a_n)\leq(n-1)a_1\cdots a_n-(a_1+\cdots+a_n).$$ This upper bound is sharp for $n=2$, but weak for $n>2$.
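For a concrete check, here is a small brute-force sketch (helper names are my own): the denominations $6, 10, 15$ are not pairwise coprime, yet $\gcd(6,10,15)=1$, and the largest non-representable number turns out to be $29$, comfortably below the bound $(n-1)a_1\cdots a_n-(a_1+\cdots+a_n)=1769$.

```python
from math import gcd, prod
from functools import reduce

def largest_gap(coins, limit):
    # brute-force the largest integer up to `limit` that is not a
    # non-negative integer combination of the coin values
    reachable = [True] + [False] * limit
    for m in range(1, limit + 1):
        reachable[m] = any(c <= m and reachable[m - c] for c in coins)
    return max(m for m in range(limit + 1) if not reachable[m])

coins = (6, 10, 15)                      # pairwise gcds are 2, 3, 5
assert reduce(gcd, coins) == 1           # but the overall gcd is 1
bound = 2 * prod(coins) - sum(coins)     # (n-1) a1...an - sum, with n = 3
print(largest_gap(coins, bound))         # 29
```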
I'm curious if there is a known asymptotic scaling for the return-to-origin (i.e. recurrence) probability of a random walk on $\mathbb{Z}^d$ as a function of $d$? MathWorld gives the recurrence probability: http://mathworld.wolfram.com/PolyasRandomWalkConstants.html $p(d) = 1-\frac{1}{u(d)} = 1- \left(\int_{t=0}^{\infty}I_0(\frac{t}{d})^de^{-t} dt\right)^{-1}$ where $I_0$ is a modified Bessel function of the first kind: http://mathworld.wolfram.com/ModifiedBesselFunctionoftheFirstKind.html Taking a wild guess, I found that a quartic fit seems to work well for values of $p(d)$ for $d = 3$ to $8$: http://www.wolframalpha.com/input/?i=quartic+fit+%7B3%2C+0.340537%7D%2C%7B4%2C+0.193206%7D%2C%7B5%2C+0.135178%7D%2C%7B6%2C+0.104715%7D%2C%7B7%2C+0.0858449%7D%2C%7B8%2C+0.0729126%7D Also, what's the best way to approximate the integral expression necessary to calculate $p(d)$ if we only need $k$ digits of precision? Sorry folks: Pólya's Random Walk Constants at infinity. I should have done a more thorough job checking for previous questions about this topic. =(
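One workable numeric route (a sketch of my own, not from the MathWorld page): rewrite the integrand with the exponentially scaled Bessel function $\tilde I_0(x)=e^{-x}I_0(x)$, so that $I_0(t/d)^d e^{-t} = \tilde I_0(t/d)^d$, which avoids overflow, and hand the result to adaptive quadrature. This reproduces the tabulated values above; for the asymptotic part, the known expansion of $u(d)$ gives $p(d)\sim\frac{1}{2d}$ to leading order, if I recall correctly.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e   # i0e(x) = exp(-x) * I0(x)

def p_return(d):
    # u(d) = ∫_0^∞ I0(t/d)^d e^{-t} dt = ∫_0^∞ i0e(t/d)^d dt,
    # since (e^{t/d})^d cancels the e^{-t} factor exactly
    u, _ = quad(lambda t: i0e(t / d) ** d, 0, np.inf, limit=500)
    return 1.0 - 1.0 / u

for d in range(3, 9):
    print(d, round(p_return(d), 6))   # 0.340537, 0.193206, ...
```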
[Originally posted on math stackexchange but have not received feedback in over a month; I'm hoping someone here could point me in the right direction]. Consider the eigenvalue equation for the Laplace-Beltrami operator on a manifold with metric $ds^2=|K|^{-1}[d\chi^2+\sin_K^2\chi(d\theta^2+\sin^2\theta\,d\phi^2)]$, where: $$\sin_K\chi=\left. \begin{cases} \sin(\chi),\, K>0\\ \sinh(\chi),\, K<0 \end{cases} \right. $$ i.e. the Helmholtz equation: $$ (\Delta + k^2)\,Q_k(x) = 0$$ In the case $K\rightarrow 0$, we obtain the Euclidean metric, for which the (generalized) eigenfunctions are just $Q_k(x)=e^{i \langle k, x\rangle}$ (where $\langle u,w\rangle$ denotes the inner product), and have the property that $Q_k(x+y) = Q_k(x)Q_k(y)$. The question: in the case $K\neq 0$, do the (generalized) eigenfunctions have the above property? I am aware that in the case $K\rightarrow 0$, we can expand the eigenfunctions in spherical coordinates as spherical harmonics so that $e^{i\langle k, x \rangle} = 4\pi\sum\limits_{\ell,m} i^\ell Y^m_\ell(\theta,\phi)\, j_\ell(k\, r)$, and that in the case $K\neq 0$ only the radial equation is changed, for which the solutions are hyperspherical Bessel functions; however, I can't seem to prove the additive property as there's no closed form for the eigenfunctions. Is there something elementary that I am missing here?
Brazilian Journal of Probability and Statistics Braz. J. Probab. Stat. Volume 29, Number 4 (2015), 767-777. A note on space–time Hölder regularity of mild solutions to stochastic Cauchy problems in $L^{p}$-spaces Abstract This paper revisits the Hölder regularity of mild solutions of parabolic stochastic Cauchy problems in Lebesgue spaces $L^{p}(\mathcal{O})$, with $p\geq2$ and $\mathcal{O}\subset\mathbb{R}^{d}$ a bounded domain. We find conditions on $p,\beta$ and $\gamma$ under which the mild solution has almost surely trajectories in $\mathcal{C}^{\beta}([0,T];\mathcal{C}^{\gamma}(\bar{\mathcal{O}}))$. These conditions do not depend on the Cameron–Martin Hilbert space associated with the driving cylindrical noise. The main tool of this study is a regularity result for stochastic convolutions in M-type 2 Banach spaces by Brzeźniak ( Stochastics Stochastics Rep. 61 (1997) 245–295). Article information Source Braz. J. Probab. Stat., Volume 29, Number 4 (2015), 767-777. Dates Received: July 2013 Accepted: April 2014 First available in Project Euclid: 17 September 2015 Permanent link to this document https://projecteuclid.org/euclid.bjps/1442513445 Digital Object Identifier doi:10.1214/14-BJPS245 Mathematical Reviews number (MathSciNet) MR3397392 Zentralblatt MATH identifier 1334.60114 Citation Serrano, Rafael. A note on space–time Hölder regularity of mild solutions to stochastic Cauchy problems in $L^{p}$-spaces. Braz. J. Probab. Stat. 29 (2015), no. 4, 767--777. doi:10.1214/14-BJPS245. https://projecteuclid.org/euclid.bjps/1442513445
Riemann-Hilbert problem, integrability and reductions 1. Department of Applied Mathematics, National Research Nuclear University MEPHI, 31 Kashirskoe Shosse, Moscow 115409, Russian Federation 2. Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, 8 Acad. G. Bonchev Street, Sofia 1113, Bulgaria 3. School of Mathematical Sciences, Technological University Dublin - City Campus, Kevin Street, Dublin D08 NF82, Ireland 4. Faculty of Mathematics and Informatics, Sofia University St. Kliment Ohridsky, 5 James Bourchier Blvd., Sofia 1164, Bulgaria 5. Institute for Advanced Physical Studies, New Bulgarian University, 21 Montevideo Street, Sofia 1618, Bulgaria The present paper is dedicated to integrable models with Mikhailov reduction groups $G_R \simeq \mathbb{D}_h.$ Their Lax representation allows us to prove that solving them is equivalent to solving Riemann-Hilbert problems, whose contours depend on the realization of the $G_R$-action on the spectral parameter. Two new examples of Nonlinear Evolution Equations (NLEE) with $\mathbb{D}_h$ symmetries are presented. Mathematics Subject Classification: Primary: 35K10, 35Q15; Secondary: 37K15, 35Q55. Citation: Vladimir S. Gerdjikov, Rossen I. Ivanov, Aleksander A. Stefanov. Riemann-Hilbert problem, integrability and reductions. Journal of Geometric Mechanics, 2019, 11 (2) : 167-185. doi: 10.3934/jgm.2019009
If $f(x)=\left(\dfrac{9}{\log_{2}(3-2x)}-1\right)^{\frac{1}{3}}$, then the value of $a$ which satisfies $f^{-1}(2a-4)=0.5$ is given by
For many different machine learning problems, finding a solution involves similar steps. Firstly, one begins by gathering input data \(x\). Secondly, one works out a hypothesis (model) that maps from input to possible output, \(h_{\theta}(x) = y\), where \(\theta\) denotes the trainable (adjustable) parameters of the model. Next comes a cost (loss) function \(J(\theta)\), which determines how far off our hypothesis is from the expected output. Last but not least, a learning algorithm (e.g. gradient descent), which is basically a strategy for updating \(\theta\) at each training step to reduce the cost \(J(\theta)\). At each step, there are design decisions to make. In this post, I'm noting two different views on designing the loss function. Both ways of explaining work and help one better understand why ML works. Minimum error perspective The cost function from this point of view is a function that measures the difference between the expected output and the model's output. The cost accumulates as the hypothesis goes astray from the expected, and decreases as the hypothesis gets closer. Some loss functions designed from this perspective are mean squared error (MSE) in linear regression and hinge loss in support vector machine (SVM) classification. MSE takes the difference between the hypothesized value and the expected value and squares it (or, for vectors, it is the squared L2 norm of the difference between the hypothesized and expected output vectors). It is pretty straightforward from here. To increase the model's accuracy is to minimize the difference between the hypothesized and the expected. Solve for the minimum of \(J(\theta)\): \[J(\theta) = \frac{1}{2M} \sum_{i=1}^{M} (h_{\theta}(x_{i}) - y_{i})^2\] For classification, it is less intuitive to visualize loss in the same way. A somewhat simplified reason is that it is hard to quantify the difference between the hypothesized class and the expected output class. Say we have a machine learning model to classify animals: how much the output "cat" differs from the output "dog" is not quantitative. The simplest possible approach is to use discriminant functions to map from input to a scalar value, e.g. \(f_{\theta}(x) = y\), and then use a threshold value to assign the output class, e.g. positive output if \(y > 0\). MSE can then be used here as the loss function, though it suffers some drawbacks, like sensitivity to outliers, vanishing gradients, and an implicit restriction to a Gaussian output distribution [1]. Still, it is possible to devise a cost function this way. Another example is SVM classification and the hinge loss function. Hinge loss accumulates when the difference between the score of the correct class and that of an incorrect one is below a certain threshold \(\delta\): \[J(\theta) = \frac{1}{M} \sum_{i=1}^{M} \sum_{k \neq y_{i}} \max(0, s_{k} - s_{y_{i}} + \delta)\] Maximum likelihood perspective This approach seems to be more intuitive for classification problems. Given an input \(x\), K possible classes and expected output \(C_{j}\), 1 <= j <= K, the model should maximize the conditional probability \(P_{\theta}(C_{j}|x)\). To determine this conditional probability, one can use either the generative or the discriminative modelling approach. The former involves finding the class-conditional density \(P(x|C_{j})\), the prior \(P(C_{j})\), and the input distribution \(P(x)\), and then inferring the posterior via Bayes' rule: \[ P_{\theta}(C_{j}|x) = \frac{P_{\theta}(x|C_{j}) P(C_{j})}{P(x)}\] However, determining the input distribution \(P(x)\) can be costly for some datasets.
So we often directly model the posterior \(P_{\theta}(C_{j}|x)\) and tweak \(\theta\) to maximize it (discriminative modelling). With \(m\) as the number of samples: \[ P_{\theta}(Y|X) = \prod_{i=1}^{m} P_{\theta}(y_{i} |x_{i}) \] Maximizing the above likelihood is the same as minimizing the negative of the likelihood, arriving at our cost function. And since the probabilities are small, the log is often taken. Hence, the cost function becomes: \[ J_{\theta} = - \frac{1}{m} \sum_{i=1}^{m} \log P_{\theta}(y_{i} |x_{i}) \] For example, if we model \(P_{\theta}(y_{i}|x_{i})\) with the softmax function, we have the cost function: \[ J_{\theta} = - \frac{1}{m} \sum_{i=1}^{m} \log \mathrm{softmax}(y_{i}, \theta, x_{i}) \]
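A minimal numpy sketch of both perspectives (function names and the toy data are my own): MSE as a squared distance, and the softmax negative log-likelihood (cross-entropy) as the discriminative maximum-likelihood loss.

```python
import numpy as np

def mse(y_hat, y):
    # minimum-error view: (half the) mean squared distance to the target
    return 0.5 * np.mean((y_hat - y) ** 2)

def nll_softmax(scores, labels):
    # maximum-likelihood view: model P(C_j|x) with a softmax over scores,
    # then minimize the negative log-likelihood of the correct labels
    z = scores - scores.max(axis=1, keepdims=True)      # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(log_p[np.arange(len(labels)), labels])

print(mse(np.array([1.1, 1.9]), np.array([1.0, 2.0])))
print(nll_softmax(np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]]),
                  np.array([0, 2])))
```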
Indeed, in non-relativistic quantum mechanics, the equation of evolution of the quantum state is given by Schrödinger's equation, and measurement of the state of a particle is itself a physical process and thus should be, and indeed is, governed by the Schrödinger equation. Indeed, people like to predict probabilities using Born's rule, and sometimes they do this correctly, and sometimes incorrectly. Do we use Born's rule just because it becomes mathematically cumbersome to account for all the degrees of freedom using the Schrödinger equation? Yes and no. Indeed, sometimes you can just use the Born rule to get the same answer as the correct answer you get from using the Schrödinger equation. And when you can do that, it is often much easier both computationally and for subjective reasons. However, that is not the reason people use the Born rule; they use it because they have trouble knowing how to relate experimental results to wavefunctions. And the Born rule does exactly that. You give it a wavefunction and from it you compute something that you know how to compare to the lab. And that is why people use it. Not the computational convenience. Is it possible to derive Born's rule using Schrödinger's equation? Yes, but to do so you need to overcome the exact reason people use the Born rule. All the Schrödinger equation does is tell us how wavefunctions evolve. It doesn't tell you how to relate that to experimental results. When a person learns how to do that, then they can see that the job done by Born's rule is already done by the unitary Schrödinger evolution. How are probabilistic observations implied by causal evolution of the wave function? The answer is so simple it will seem obvious. Just think about how you verify it in the lab, and then write down the appropriate system that models the actual laboratory setup, then set up the Schrödinger equation for that system. For the Born rule you use one wavefunction for one copy of a system, then you pick an operator, and then you get a number between zero and one (that you interpret as a relative frequency if you did many experiments on many copies of that one system). And you get a number for each eigenvalue in a way that depends on the one wavefunction for one copy of a system, even though you verify this result by taking a whole collection of identically prepared particles. So that's what the Born rule does for you. It tells you about the relative frequency of different eigenvalues for a whole bunch of identically prepared systems, and so you verify it by making a whole bunch of identically prepared systems and measuring the relative frequency of different eigenvalues. So how do you do this with the Schrödinger equation? Given the state and operator in question, you find the Hamiltonian that describes the evolution corresponding to a measurement of the operator (my other answer to this question cites an example where they explicitly tell you the Hamiltonian to measure the spin of a particle). Then you also write down the Hamiltonian for the device that counts how many times a particle was produced, the Hamiltonian for the device that counts how many times a particle was detected with a particular outcome, and the Hamiltonian for the device that takes the ratio.
Then you write down the Schrödinger equation for a factored wavefunction system that has a huge number of factors that are identical wavefunctions, and also where there are sufficient numbers of devices to split different eigenfunctions of the operator in question, plus the device that counts the number of results. You then evolve the wavefunction of the entire system according to the Schrödinger equation. When 1) the number of identical factors is large and 2) the devices that send different eigenfunctions to different paths make the evolved eigenfunctions mutually orthogonal, then something happens. The part of the wavefunction describing the state of the device that took the ratio of how many got a particular eigenvalue evolves to have almost all of its $L^2$ norm concentrated over a state corresponding to the ratio that the Born rule predicts, and is almost orthogonal to the parts corresponding to states the Born rule did not predict. Some people will then apply the Born rule to this state of the aggregator, but then you have failed. We are almost there. Except all we have is a wavefunction with most of its $L^2$ norm concentrated over a region with an easily described state. The Born rule tells us that we can subjectively expect to personally experience this aggregate outcome; the Born rule says this happens with near certainty since almost all the $L^2$ norm corresponds to this state of the aggregator. The Schrödinger equation by itself does not tell us this. But we had to interpret the Born rule as saying that those numbers between 0 and 1 correspond to observed frequencies. How can we interpret "the wavefunction being highly concentrated over a state with an aggregator reading that same number" as corresponding to an observation? This is literally the issue of the question: interpreting a mathematical result about a mathematical wavefunction as being about observations. The answer is that we and everything else are described by the dynamics of a wavefunction, and that a part of a wave with small $L^2$ norm that is almost entirely orthogonal doesn't really affect the dynamics of the rest of the wave. We are the dynamics. People are processes, dynamical processes of subsystems. We are like the aggregator in that we are only sensitive to some aspects of some parts of the rest of the wavefunction. And we are robust in that we are systems that can act and time evolve in ways that are insensitive to small deviations in our inputs. So the part of the wavefunction that corresponds to the aggregator having most of the $L^2$ norm concentrated on the value predicted by the Born rule (and that state, with that concentration on that value, is what the Schrödinger equation predicts) interacts with us, the robust information-processing systems that also evolve according to the Schrödinger equation, in the exact same way as a state where all the $L^2$ norm, not just most of it, was on that state. This dynamical correlation between the state of the system (the aggregator) and us, the interaction of the two, is exactly what observation is. You have to use the Schrödinger equation to describe what an observation is in order to use the Schrödinger equation to predict the outcome of an observation. But you only need to do that on states very very very close to these to get the Born rule, since the Born rule only predicts the outcomes of an aggregator's response to large numbers of identical systems.
And those states are exactly the ones we can give a purely operational definition in terms of the Schrödinger equation. We simply say that the Schrödinger equation describes the dynamics, including the dynamics of us, the things being "measured" and the whole universe. The way a measurement works is that you have a Hamiltonian that acts on your subsystem $|\Psi_i\rangle$ and your entire universe $|\Psi_i\rangle\otimes |U\rangle$ and evolves it like: $$|\Psi_i\rangle\otimes |U\rangle\rightarrow|\Psi_i'\rangle\otimes |U_i\rangle.$$ The essential aspect of it being a measurement is that when $|\Psi_i\rangle$ and $|\Psi_j\rangle$ are in different eigenspaces they are originally orthogonal, but that orthogonality transfers over to $|U_i\rangle$ and $|U_j\rangle$ in such a way as to ensure the Schrödinger time evolutions of $|\Psi_i'\rangle\otimes |U_i\rangle$ remain orthogonal. (And also we need that $|\Psi_i'\rangle$ is still in the eigenspace.) That's our restriction on the Hamiltonians that are used in an actual Schrödinger-evolution measurement. What is the problem? The problem is that we had to say how to relate a mathematical object to us and where probability words entered. And there isn't any probability. We just have ratios that look like the ratios that probability would predict for us if there were probabilities. And we have to bring up how our observations and experiences relate to the mathematics. Historically there were strong objections to this, that talking about how human beings dynamically evolve should not be relevant to physics. "Seems like philosophy," the old-fashioned objections would go. But if you think of people as dynamical information processors, then we can characterize them as a certain kind of computer that interacts with the wavefunction of the rest of the world in a particular way. And other kinds of computer are possible, things we call quantum computers. And now we can make this excuse no longer. We need to talk about the difference between a classical computer that is designed to be robust against small quantum effects, and one that can be sensitive to these effects so that it can continue to interact before it has gotten to the point in the evolution where the Born rule could be used. We must now own up to the fact that the Schrödinger evolution is the only one we've seen, and that it is what corresponds to what we actually observe in the laboratory experiments where the Born rule is used. And we must own it so that we can correctly describe what happens in experiments where the Born rule doesn't apply, where, as always, we must use the Schrödinger equation.
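A toy calculation (my own sketch, not from the answer above) of the concentration claim: for $N$ unentangled copies of $a|0\rangle+b|1\rangle$ measured and aggregated unitarily, the $L^2$ weight of the branches whose recorded frequency of outcome $1$ lies within $\varepsilon$ of $|b|^2$ is a binomial sum, and it tends to $1$ as $N$ grows.

```python
import numpy as np
from scipy.stats import binom

a2, b2 = 0.3, 0.7      # |a|^2 and |b|^2 of the single-copy state (assumed)
eps = 0.05
for N in (10, 100, 1000, 10000):
    k = np.arange(N + 1)
    # squared norm carried by the branches in which k of the N records read "1"
    weight = binom.pmf(k, N, b2)
    near_born = np.abs(k / N - b2) <= eps
    print(N, weight[near_born].sum())   # -> 1: the Born frequency dominates
```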
I couldn't resist putting the sledgehammers to work that Robin Ekman alluded to, and I will try to present their general argumentation in a way one can understand without knowing all the details, but I don't intend to presume that this is in any sense a better answer; it is simply one that shows which things one must know to rigorously understand the weird statement that "sometimes there isn't, but sometimes there is a function so that $\nabla f = A$". We consider vector fields $A^i$, by the Euclidean metric, to be equivalent to their dual 1-forms as per $A_i = A^i$. On 1-forms on 3D manifolds $\mathcal{M}$, we have a cochain complex of p-forms (denoted $\Omega^p(\mathcal{M})$) [with $d^p$ being the exterior derivative on p-forms] $$ \Omega^0(\mathcal{M}) \overset{\mathrm{d^0}}{\rightarrow} \Omega^1(\mathcal{M}) \overset{\mathrm{d^1}}{\rightarrow} \Omega^2(\mathcal{M}) \overset{\mathrm{d^2}}{\rightarrow} \Omega^3(\mathcal{M})$$ As is usual for such things, we can take the cohomology of such a cochain complex, defined as $H^p(\mathcal{M}) := \frac{\mathrm{ker}(d^{p})}{\mathrm{im}(d^{p-1})}$. It can be shown that this is, by the Eilenberg-Steenrod axioms, an ordinary cohomology theory, thus the same as every other ordinary cohomology theory of $\mathcal{M}$. (This deRham cohomology naturally has coefficients in $\mathbb{R}$ instead of $\mathbb{Z}$, but that is nothing to worry too much about.) The marvelous thing about (co)homology theories is that they are the same for all homotopy equivalent topological spaces, i.e. they are the same for spaces which can be continuously deformed into each other. Now, $\mathbb{R} \times (\mathbb{R}^2 - \{0\})$ is, since $\mathbb{R}$ can be deformation retracted to a point, homotopy equivalent to $\text{Pt} \times (\mathbb{R}^2 - \{0\}) = \mathbb{R}^2 -\{0\}$, which is in turn homotopy equivalent to the circle $S^1$. On the contrary, $\mathbb{R}^3 - \{0\}$ is homotopy equivalent to the sphere $S^2$. Of both these homotopy equivalences you may convince yourself by your intuition, but, as I was writing this, I had to admit that I cannot put this into words that transmit well between me and my readers without a blackboard and some time. So, accepting Eilenberg-Steenrod, our question of whether we can or cannot lift a 1-form in the kernel of the derivative to be the image of a 0-form has reduced to the question whether the first cohomology of the circle or of the sphere vanishes. Now, certainly all n-spheres are compact spaces, and certainly they have no boundary. Since we can describe them with polar coordinates locally, they are also manifolds. Now, for such compact orientable manifolds, there is Poincaré duality, which states that the $p$-th cohomology $H^p(\mathcal{M})$ is isomorphic to the $(n-p)$-th homology $H_{n-p}(\mathcal{M})$. But the ordinary homology of spheres is well understood! In fact, just by thinking with the Eilenberg-Steenrod axioms, you can show that, since every sphere $S^n$ is essentially made out of two disks $D^n$ glued together (the hemispheres), $H_n(S^n) = \mathbb{Z}$ and $H_0(S^n) = \mathbb{Z}$, but $H_p(S^n) = 0$ if $p \neq n \wedge p \neq 0$. The duality then yields directly that $$ H^1(S^1) = \mathbb{Z} \wedge H^1(S^2) = 0 $$ Thus, for the actual 2-sphere $S^2$, the kernel of the derivative is the image of the derivative, and every vector field on the sphere (and anything that is homotopy equivalent to it) whose curl vanishes is the exterior derivative of some scalar function.
But for the circle $S^1$, there are fields which are in the kernel of the derivative but not its image, else the cohomology would be trivial. Since the Aharonov-Bohm situation has a line singularity and is homotopic to the circle, the usual argument for "conservative forces" fails. But, for point singularities in 3D, we are homotopic to the sphere, where the argument goes through.
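To tie this back to Aharonov-Bohm concretely, here is a small numeric sketch (my own) of the standard generator of $H^1(\mathbb{R}^2-\{0\})$: the field $A = \frac{(-y,\,x)}{x^2+y^2}$ (the solenoid's vector potential, up to a constant) has vanishing curl away from the axis, yet its integral around the unit circle is $2\pi$, so it cannot be a gradient on the punctured plane.

```python
import numpy as np

def A(x, y):
    # closed 1-form (curl-free away from the origin) but not exact
    r2 = x**2 + y**2
    return np.array([-y / r2, x / r2])

# integrate A along the unit circle
t = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
points = np.stack([np.cos(t), np.sin(t)])
tangents = np.stack([-np.sin(t), np.cos(t)])
dt = t[1] - t[0]
loop = np.sum(np.einsum('ij,ij->j', A(*points), tangents)) * dt
print(loop, 2 * np.pi)   # nonzero loop integral -> A is not a gradient
```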
I searched for the difference between state vector and basis vector in quantum mechanics but couldn't find any clear explanation. Can someone please give a simple and clear explanation of this? Basis vectors are a special set of vectors that have two properties: The vectors in the set are linearly independent (meaning you cannot write one vector as a linear combination of the other vectors in the set). Every vector in the vector space can be written as a linear combination of these basis vectors. Basis vectors are widely used in linear algebra and are not unique to quantum mechanics. When we start talking about state vectors in QM, like $|\psi\rangle$, we can choose to express this state vector in terms of any basis we want. In other words, for a discrete basis: $$|\psi\rangle=\sum_i c_i|a_i\rangle$$ where $|a_i\rangle$ represents basis vector $i$, and $c_i$ is a coefficient saying "how much of $|a_i\rangle$ is in $|\psi\rangle$". Now it could be that $|\psi\rangle$ is equal to one of our basis vectors, say $|a_j\rangle$, so that $c_i=\delta_{i,j}$ and $$|\psi\rangle=\sum_i \delta_{i,j}|a_i\rangle=|a_j\rangle$$ But this is a special case. We could even choose to express this example in some other basis: $$|\psi\rangle=|a_j\rangle=\sum_id_i|b_i\rangle$$ So to answer the question: basis vectors are just a special set of vectors with the two properties listed above. Each basis vector could be a state vector, if the system is purely in that state, but it does not have to be that way. You can get the entire picture by being more general: state vectors can be expressed as linear combinations of the basis vectors of whatever basis we choose to work in. This then covers the case where our state vector is one of our basis vectors, since this is still a linear combination. The choice of basis is completely subjective though (although some bases are better to work in than others for certain problems).
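A short numeric illustration (a sketch of my own, with assumed example vectors): the same state expanded in the computational basis and in the $|+\rangle, |-\rangle$ basis, recovering the coefficients as inner products.

```python
import numpy as np

psi = np.array([0.6, 0.8])                 # a state in the {|0>, |1>} basis

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # an alternative orthonormal basis
minus = np.array([1.0, -1.0]) / np.sqrt(2)

c = np.array([plus @ psi, minus @ psi])    # c_i = <b_i|psi>
print(c)                                   # "how much of each basis vector"
print(c[0] * plus + c[1] * minus)          # reconstructs psi exactly
```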
I am new to MCMC and reading an intro paper on Gibbs sampling. However, there are two parts of the paper I cannot understand and I am stuck. The first part is equation 2.3 on page 168. It says that if $X^{'}_{j}$ is drawn from the process of Gibbs sampling, $X^{'}_{j}\sim f(x|Y^{'}_{j}=y^{'}_{j})$. It further claims that under "general conditions," the distribution of $X^{'}_{j}$ converges to $f(x)$ as $n\rightarrow\infty$. Can anyone help explain what these general conditions are and why/how this statement is true? The second part is at the beginning of page 169: expression 2.9 says $\hat{f}(x)=\frac{1}{m}\sum^{m}_{i=1}f(x|y_{i})$, and the theory behind this equation is $E[f(x|Y)]=\int f(x|y)f(y)dy$. This confuses me because my understanding of Monte Carlo integration is that, for example, $E[X]\approx\frac{1}{m}\sum^{m}_{i=1}X_{i}$ holds by the law of large numbers: the sample average converges to the expectation $E[X]=\sum_{x}x\,p(x)$ as $m\rightarrow\infty$. However, I don't see how equation 2.9 can be explained by the concept of Monte Carlo integration. Can anyone explain the intuition or math behind this equation? Thanks! EDIT 1 (My Answer/Thought for Part 2) I have thought about this question, and since no one has responded yet, I will try my answer first and see whether someone has a better explanation. For the second part, instead of using $f(x|y_{i})$, let's focus on the density at a specific value $x^{a}$, which is $f(x^{a}|y_i)$. For simplicity, now assume $y_i$ is randomly drawn from a density $f(y)$ instead of $f(y|x_i)$, but the result could be generalized to $f(y|x_i)$. Both $f(x|y_i)$ and $f(y)$ are known distributions. Now we first randomly draw a $y_i$ from $f(y)$, and once we have $y_i$, we further randomly draw an $x_i$ from $f(x|y_i)$. Suppose we do this process $n$ times, with $n\rightarrow\infty$. Given a specific $y^{a}$ and $x^{a}$, we do know the value of $f(x^{a}|y^{a})$, which is just the density of $x^{a}$ conditional on $y^{a}$. In other words, by Monte Carlo integration, for a given set of $y_i$'s and $x^{a}$, we can derive that $\frac{1}{n}\sum^{n}_{i=1}f(x^{a}|y_i)\approx\int f(x^{a}|y)f(y)dy=\int \dfrac{f(x^{a},y)}{f(y)}f(y)dy=\int f(x^{a},y)dy=f(x^{a})$ We can do the same process to derive $f(x^{b})$, $f(x^{c})$, and so on. In the end, we will have $f(x^{a})$, $f(x^{b})$, $f(x^{c})$, ..., which all together construct $f(x)$. Therefore, $\hat{f}(x)=\frac{1}{m}\sum^{m}_{i=1}f(x|y_{i})$.
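Here is a runnable sketch of exactly this construction (my own toy example, not from the paper): a Gibbs sampler for a bivariate normal with correlation $\rho$, where both conditionals are known normals and the marginal of $X$ is $N(0,1)$, so the mixture-of-conditionals estimate $\hat f(x)=\frac{1}{m}\sum_i f(x|y_i)$ can be checked against the exact density.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, m, burn = 0.6, 5000, 500
s = np.sqrt(1.0 - rho**2)

x, ys = 0.0, []
for i in range(m + burn):
    y = rng.normal(rho * x, s)   # draw Y | X = x
    x = rng.normal(rho * y, s)   # draw X | Y = y
    if i >= burn:
        ys.append(y)

grid = np.linspace(-3, 3, 7)
# expression (2.9): average the known conditional density f(x|y_i) over draws
f_hat = np.mean([np.exp(-(grid - rho * y) ** 2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
                 for y in ys], axis=0)
f_true = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)   # exact marginal N(0, 1)
print(np.round(f_hat, 3))
print(np.round(f_true, 3))
```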
Consider electromagnetic cylindrical waves. Cylindrical waves can be derived from plane waves using an energy-conservation consideration: since the power through successive cylindrical surfaces must be constant, the amplitude of a cylindrical wave must fall off as $1/\sqrt{r}$. Therefore a cylindrical wave expression must be $$\mathbf{E}(r,t)=\frac{\mathbf{E}_0}{\sqrt{r}} \sin(kr-\omega t)$$ The function $\sqrt{r}\, \mathbf{E}(r,t)$ satisfies the one-dimensional wave equation $$\frac{\partial^2\xi}{\partial r^2}-\frac{1}{c^2}\frac{\partial^2\xi}{\partial t^2}=0$$ In complex notation the cylindrical wave becomes $$\mathbf{E}(r,t)=\frac{\mathbf{E}_0}{\sqrt{r}} e^{j(kr-\omega t)}\tag{1}$$ If we call $\xi$ a generic component of $\mathbf{E}$, the three-dimensional wave equation is $$\nabla^2\xi-\frac{1}{c^2}\frac{\partial^2\xi}{\partial t^2}=\square \xi=0$$ The solution in cylindrical coordinates is $$\xi (r,\phi , z,t) =\sum_{\omega,n,h} R^{0}_{\omega, n, h } H_n\Bigg(r \sqrt{\frac{\omega^2}{c^2}-h^2}\Bigg) e^{j(n\phi +hz-\omega t)} \tag{2}$$ where $R^{0}_{\omega, n, h }$ is a (complex) constant and $H_n$ is the Hankel function of order $n$. Under the assumption of cylindrical symmetry of the wave, that is $$\frac{\partial \xi}{\partial \phi}=0 \,\,\,\,\,\, \mathrm{and} \,\,\,\,\,\, \frac{\partial \xi}{\partial z}=0$$ the asymptotic approximation of $(2)$ (for $r >> \frac{c}{\omega}$) leads to a field that is the same as $(1)$. My question is: why (under cylindrical symmetry) is $(2)$ equal to $(1)$ only at large distances? I always thought that $(1)$ gives the expression of a cylindrical wave in all circumstances. So is $(1)$ "wrong" for small $r$? Or are $(1)$ and $(2)$ describing two different things? If so, what are the differences? (I have an identical doubt for spherical waves.)
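One way to see how fast the two expressions converge is to compare $|H_0^{(1)}(kr)|$ with its large-argument envelope $\sqrt{2/(\pi kr)}$ numerically; the following sketch (my own, with arbitrary sample values of $kr$) shows the $1/\sqrt{r}$ form is off by roughly 10% at $kr \approx 0.5$ and becomes excellent for $kr \gtrsim 20$.

```python
import numpy as np
from scipy.special import hankel1

kr = np.array([0.5, 1.0, 2.0, 5.0, 20.0, 100.0])
exact = np.abs(hankel1(0, kr))            # |H_0^(1)(kr)| from (2), with n = 0
envelope = np.sqrt(2.0 / (np.pi * kr))    # the 1/sqrt(r) amplitude of (1)
print(np.round(exact, 4))
print(np.round(envelope, 4))
# the ratio tends to 1 as kr -> infinity, i.e. for r >> c/omega
```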
Let $X_1,X_2,...,X_n$ be a random sample from a uniform distribution on $(\mu-\sqrt 3\sigma,\mu+\sqrt3\sigma)$. Here there are two unknown parameters, namely $\mu$ and $\sigma$, which are the population mean and standard deviation. Find the point estimators of $\mu$ and $\sigma$. I have tried to do that by the Method of Moments (MOM). The procedure is $M'_j=\mu'_j(\theta_1,\theta_2,...,\theta_k); j=1,2,...,k$ where $M'_j$ is the $j^{th}$ sample moment about zero, $M'_j=\frac{1}{n}\sum_{i=1}^n X_i^j$, and $\mu'_j$ is the $j^{th}$ moment about zero, i.e., the $j^{th}$ raw moment. Now, $M'_1=\mu'_1=\mu'_1(\mu,\sigma)=\mu$ and $M'_1=\frac{1}{n}\sum_{i=1}^n X_i=\bar X$. Again, $M'_2=\mu'_2=\mu'_2(\mu,\sigma)=\sigma^2+\mu^2$ $\Rightarrow \sigma^2=M'_2-\mu^2$ $\Rightarrow \sigma=\sqrt{\frac{1}{n}\sum_{i=1}^n(X_i-\bar X)^2}$ Hence the Method of Moments estimators are $\bar X$ for $\mu$ and $\sqrt{\frac{1}{n}\sum_{i=1}^n(X_i-\bar X)^2}$ for $\sigma$. But the procedure seems illogical to me for the following reason: $\bullet$ I haven't considered the pdf of the uniform density, so this procedure is also applicable to the normal density. Then where is the difference? What is the correct process of finding the point estimators for the above situation?
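The distribution does enter, through the map $\theta \mapsto \mu'_j(\theta)$: it just happens that this uniform family is parametrized so that its first two moments, $\mu$ and $\sigma^2+\mu^2$, coincide with those of a $N(\mu,\sigma^2)$, which is why the resulting estimators look distribution-free. A quick simulation sketch (my own, with assumed true values) confirms they recover the uniform's parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 5.0, 2.0, 100_000
half_width = np.sqrt(3.0) * sigma   # Var of U(mu-h, mu+h) is h^2/3 = sigma^2
x = rng.uniform(mu - half_width, mu + half_width, n)

mu_hat = x.mean()                                   # matches the first moment
sigma_hat = np.sqrt(np.mean((x - x.mean()) ** 2))   # second central moment
print(mu_hat, sigma_hat)                            # close to 5.0 and 2.0
```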
To calculate an equilibrium concentration from an equilibrium constant, an understanding of the concept of equilibrium and how to write an equilibrium constant is required. Equilibrium is a state of dynamic balance where the ratio of the product and reactant concentrations is constant.

Introduction

An equilibrium constant, K c, is the ratio of the concentrations of the products to the concentrations of the reactants at equilibrium. The concentration of each species is raised to the power of that species' coefficient in the balanced chemical equation. For example, for the following chemical equation, \(\ aA + bB {\rightleftharpoons} \ cC + dD\) the equilibrium constant is written as follows: \(K_{c} = \dfrac{[C]^{c}[D]^{d}}{[A]^{a}[B]^{b}}\)

The ICE Table

The easiest approach for calculating equilibrium concentrations is to use an ICE table, which is an organized method to track which quantities are known and which need to be calculated. ICE stands for:

"I" is for the "initial" concentration or the initial amount
"C" is for the "change" in concentration or in the amount from the initial state to equilibrium
"E" is for the "equilibrium" concentration or amount and represents the expressions for the amounts at equilibrium

Example 1: Hydrogenation of Ethylene (\(C_2H_4\))

For the gaseous hydrogenation reaction below, what is the concentration of each substance at equilibrium? \[C_2H_{4(g)} + H_{2(g)} \rightleftharpoons C_2H_{6(g)} \] with \(K_{c} = 0.98 \) characterized from previous experiments and with the following initial concentrations: \([C_2H_4]_0 = 0.33\;M\) and \([H_2]_0 = 0.53\;M\).

First the equilibrium expression is written for this reaction: \[K_{c} = \dfrac{[C_{2}H_{6}]}{[C_{2}H_{4}][H_{2}]} = 0.98\] The concentrations of the reactants are added to the "Initial" row of the table. The initial amount of \(C_2H_6\) is not mentioned, so it is given a value of 0. This amount will change over the course of the reaction. The changes in the concentrations are added to the table. Because ethane, \(C_2H_6\), is produced, its concentration changes by \(+x\), while the concentrations of \(C_2H_4\) and \(H_2\) each change by \(-x\). The "Equilibrium" row is determined by adding the "Initial" and "Change" rows together:

\(C_2H_4\): Initial 0.33, Change \(-x\), Equilibrium \(0.33-x\)
\(H_2\): Initial 0.53, Change \(-x\), Equilibrium \(0.53-x\)
\(C_2H_6\): Initial 0, Change \(+x\), Equilibrium \(x\)

The expressions in the "Equilibrium" row are substituted into the equilibrium constant expression to calculate the value of x. The equilibrium expression is simplified into a quadratic expression as shown: \[0.98= \dfrac{x}{(0.33-x)(0.53-x)}\] \[0.98= \dfrac{x}{x^{2} - 0.86x + 0.1749}\] \[0.98 {(x^{2} - 0.86x + 0.1749)}= {x}\] \[0.98x^{2} - 0.8428x + 0.171402= x\] \[0.98x^{2} - 1.8428x + 0.171402= 0\] The quadratic formula can be used as follows to solve for x: \[ x= \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}\] \[ x= \dfrac{1.8428 \pm \sqrt{(-1.8428)^2 - 4(0.98)(0.171402)}}{2(0.98)}\] \[x = 1.78\ or \; 0.098\] Because there are two possible solutions, each must be checked to determine which is the real solution. They are plugged into the expression in the "Equilibrium" row for \([C_{2}H_{4}]_{Eq}\): \[ [C_{2}H_{4}]_{Eq} = (0.33-1.78) = -1.45\] \[ [C_{2}H_{4}]_{Eq} = (0.33-0.098) = 0.23\] If \(x = 1.78\) then \([C_{2}H_{4}]_{Eq}\) is negative, which is impossible; therefore \(x\) must equal 0.098. So: \[ [C_{2}H_{4}]_{Eq} = 0.23\;M\] \[ [H_{2}]_{Eq} = (0.53-0.098) = 0.43\;M\] \[ [C_{2}H_{6}]_{Eq} = 0.098\;M\]

Problems

1. Find the concentration of iodine in the following reaction if the equilibrium constant is \(3.76 \times 10^{3}\), and 2 mol of iodine are initially placed in a 2 L flask at 100 K. \[ I_{2(g)} {\rightleftharpoons} 2I^-_{(aq)}\] 2. What is the concentration of silver ions in 1.00 L of solution with 0.020 mol of AgCl and 0.020 mol of \(Cl^-\) in the following reaction?
The equilibrium constant is \(1.8 \times 10^{-10}\). \[AgCl_{(s)} {\rightleftharpoons} Ag^{+}_{(aq)} + Cl^{-}_{(aq)}\] 3. What are the equilibrium concentrations of the products and reactants for the following equilibrium reaction? Initial concentrations: \([HSO_{4}^{-}]_0 = 0.4\;M\), \([H_{3}O^{+}]_0 = 0.01\;M\), \([SO_{4}^{2-}]_0 = 0.07\;M\), \(K = 0.012\). \[HSO_{4}^{-}(aq) + H_{2}O(l) {\rightleftharpoons} H_{3}O^{+}(aq) + SO_{4}^{2-}(aq) \] 4. The initial concentration of \(H_2CO_3\) is 0.16 M in the following reaction. What is the \(H^+\) concentration at equilibrium? \(K_c = 0.20\). \[H_{2}CO_{3} {\rightleftharpoons} H^{+}(aq) + CO_{3}^{2-}(aq)\] 5. The initial concentration of \(PCl_5\) is 0.200 moles per liter and there are no products in the system when the reaction starts. If the equilibrium constant is 0.030, calculate all the concentrations at equilibrium.

Solutions

1.
\(I_{2}\): Initial 2 mol/2 L = 1 M, Change \(-x\), Equilibrium \(1-x\)
\(I^-\): Initial 0, Change \(+2x\), Equilibrium \(2x\)

At equilibrium, \(K_c=\dfrac{[I^-]^2}{[I_2]}\): \(3.76 \times 10^3=\dfrac{(2x)^2}{1-x} = \dfrac{4x^2}{1-x}\). Cross multiply: \(4x^2+3.76 \times 10^3\, x-3.76 \times 10^3=0\). Apply the quadratic formula \( \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a} \) with \(a=4\), \(b=3.76 \times 10^3\), \(c=-3.76 \times 10^3\). The formula gives solutions of \(x=0.999\) and \(x=-940\). The latter solution is unphysical (a negative concentration). Therefore, \(x=0.999\) at equilibrium. \[[I^-]=2x=2.0 \, M\] \[[I_2]=1-x=1-0.999=0.001\,M\]

2.
\(Ag^{+}\): Initial 0, Change \(+x\), Equilibrium \(x\)
\(Cl^{-}\): Initial 0.02 mol/1.00 L = 0.02 M, Change \(+x\), Equilibrium \(0.02 + x\)

\[ K_c = [Ag^+][Cl^-]\] \[ 1.8 \times 10^{-10}= (x)(0.02 + x)\] \[x^2 + 0.02x - 1.8\times 10^{-10}=0\] \[ x = 9 \times 10^{-9}\] \[[Ag^+]=x=9 \times 10^{-9}\,M\] \[[Cl^-]=0.02+x=0.020\,M\]

3.
\(HSO_{4}^{-}\): Initial 0.4, Change \(-x\), Equilibrium \(0.4-x\)
\(H_{3}O^{+}\): Initial 0.01, Change \(+x\), Equilibrium \(0.01+x\)
\(SO_{4}^{2-}\): Initial 0.07, Change \(+x\), Equilibrium \(0.07+x\)

\[K_c = \dfrac{[H_3O^+][SO_4^{2-}]}{[HSO_4^-]}\] \[0.012 = \dfrac{(0.01 + x)(0.07 + x)}{0.4 -x}\] Cross multiply and get: \[x^2 + 0.092x - 0.0041 = 0\] Apply the quadratic formula: \(x = 0.0328\).

\[[HSO_4^-]=0.4-x=0.4-0.0328=0.3672\,M\] \[[H_3O^+]=0.01+x=0.01+0.0328=0.0428\,M\] \[[SO_4^{2-}]=0.07+x=0.07+0.0328=0.1028\,M\]

4.
\(H_2CO_3\): Initial 0.16, Change \(-x\), Equilibrium \(0.16-x\)
\(H^{+}\): Initial 0, Change \(+x\), Equilibrium \(x\)
\(CO_{3}^{2-}\): Initial 0, Change \(+x\), Equilibrium \(x\)

\[0.20 = \dfrac{x^2}{0.16-x}\] \[x^2 + 0.20x - 0.032 = 0\] Apply the quadratic formula: \(x=0.1049\). \[[H^+]=x=0.105\,M\]

5. First write out the balanced equation: \(\ PCl_{5}(g) {\rightleftharpoons} PCl_{3}(g) + Cl_{2}(g)\)

\(PCl_{5}\): Initial 0.2, Change \(-x\), Equilibrium \(0.2-x\)
\(PCl_{3}\): Initial 0, Change \(+x\), Equilibrium \(x\)
\(Cl_{2}\): Initial 0, Change \(+x\), Equilibrium \(x\)

\[ K_c = \dfrac{[PCl_3][Cl_2]}{[PCl_5]}\] \[0.030 = \dfrac{x^2}{0.2-x}\] Cross multiply: \[x^2 + 0.030 x - 0.006 = 0\] Apply the quadratic formula: \(x=0.064\). \[[PCl_5]=0.2-x=0.136\,M\] \[[PCl_3]=0.064\,M\] \[[Cl_2]=0.064\,M\]
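All of these problems end in the same step: solve the quadratic from the ICE table and keep the physically meaningful root. A small helper sketch (not part of the original text; the function name and interface are my own) automates that step:

```python
# Solve a*x^2 + b*x + c = 0 from an ICE table and keep the root for which
# every equilibrium expression (a function of x) is non-negative.
import numpy as np

def solve_ice(a, b, c, equilibrium_exprs):
    for x in np.roots([a, b, c]):
        if abs(x.imag) < 1e-12:          # ignore any complex artifacts
            x = x.real
            if all(f(x) >= 0 for f in equilibrium_exprs):
                return x
    raise ValueError("no physical root")

# Worked example above: 0.98 x^2 - 1.8428 x + 0.171402 = 0
x = solve_ice(0.98, -1.8428, 0.171402,
              [lambda x: 0.33 - x, lambda x: 0.53 - x, lambda x: x])
print(round(x, 3))   # 0.098, rejecting the unphysical root 1.78
```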
Inside an equilateral \(\triangle ABC\) lies a point \(O\). It is known that \(\angle AOB=113^{\circ}\) and \(\angle BOC=123^{\circ}\). Find the angles of the triangle whose sides are equal to the segments \(OA, OB, OC\).

Note by Vilakshan Gupta, 2 years, 6 months ago

The angles, in the correct order, are \(64^\circ\), \(53^\circ\), \(63^\circ\).

Please explain the solution.

Draw \(AO' = OA\) such that \(\angle OAO' = 60^{\circ}\). Then \(\Delta OAB \cong \Delta O'AC\), so \(OB = O'C\). Now join \(OO'\). Then \(\Delta O'OA\) is clearly equilateral (all its angles equal \(60^{\circ}\)), therefore \(OO' = OA\). So \(\Delta OO'C\) is the required triangle, with side lengths \(OA\), \(OB\) and \(OC\). Now, calculating angles: \(\angle O'OC = \angle AOC - \angle AOO' = 124^{\circ} - 60^{\circ} = 64^{\circ}\). Calculate the rest of the angles yourself. @The almighty Knows It All.

– Thank you Rohit. You are really good at geometry. Are you really 14? It seems that you are an IMO aspirant. @Vilakshan Gupta

– Yep, I'm an IMO aspirant, hoping for it, but it is never going to be easy at all. If you too are an IMO aspirant, would you like to join our RMO/INMO preparation team at Slack? @The almighty Knows It All.

– Of course, I would surely like to join your team. Please invite me; my email id is vilakshangupta2002@gmail.com @Vilakshan Gupta

– You have been invited to the team. Check your email and join us. @Vilakshan Gupta

– Hi guys @Vilakshan Gupta @Satwik Murarka, I would also like to join your team. Here is the email: imjabitg@gmail.com. I had asked @Rohit Camfar on a different forum but he told me that he is no longer in the team and asked me to ask on this notice board.

@Rohit Camfar Can I join the RMO preparation team? I am also an RMO aspirant.

– OK, you can join us by giving your email. @Rohit Camfar satwikmurarka@yahoo.com

– OK, you are invited; you can join us now by going into your id.
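Filling in the angle computation that the solution above leaves to the reader (using the congruence \(\Delta OAB \cong \Delta O'AC\) and the equilateral \(\Delta O'OA\)):

\[
\begin{aligned}
\angle AOC &= 360^{\circ} - \angle AOB - \angle BOC = 360^{\circ} - 113^{\circ} - 123^{\circ} = 124^{\circ},\\
\angle O'OC &= \angle AOC - \angle AOO' = 124^{\circ} - 60^{\circ} = 64^{\circ},\\
\angle OO'C &= \angle AO'C - \angle AO'O = \angle AOB - 60^{\circ} = 113^{\circ} - 60^{\circ} = 53^{\circ},\\
\angle OCO' &= 180^{\circ} - 64^{\circ} - 53^{\circ} = 63^{\circ} = \angle BOC - 60^{\circ}.
\end{aligned}
\]

Each angle of the required triangle is one of the angles at \(O\) minus \(60^{\circ}\), and the three results sum to \(180^{\circ}\) as a check.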
I’m going to make some assumptions in this answer:

1. Two moons have masses $m_1$ and $m_2$ and orbit a planet of mass $m_p$, where $m_1, m_2\ll m_p$.
2. The planet is far enough away from the star that any gravitational/tidal effects from that star are negligible.
3. There are no other planets capable of destabilizing any moons in orbits reasonably close to the planet (i.e. well within its Hill sphere). This, along with the second assumption, means that we can effectively treat the moon system as a miniature planetary system.

There are two cases to look at: where $m_1 \gg m_2$, and where $m_1\sim m_2$.

1. $m_1 \gg m_2$

In this scenario, we see the possibility of $m_1$ capturing $m_2$ just as Neptune is thought to have captured Triton, involving a three-body collisionless encounter. $m_2$ would originally have been part of a binary moon system of some sort (see my second section) which then interacted with $m_1$; the other binary partner was ejected and $m_2$ became a satellite of $m_1$ (see Agnor & Hamilton (2006)). Assuming that the ejected binary partner had a mass $m_3$, the three bodies would have had to have interacted at a distance$$r=a\left(\frac{3m_1}{m_2+m_3}\right)^{1/3}\tag{1}$$where $a$ was the semi-major axis of the orbit of $m_2$ and $m_3$. There shouldn’t be any issues with applying this model to the moon system.

2. $m_1\sim m_2$

This is a standalone scenario, but I realized that it is also needed to explain the formation of the original binary system in the first setup. It has the advantage that no third body is needed (and thus no other binary system is needed), but it has the disadvantage that only a relatively narrow class of initial orbits will permit a successful finish. Ochiai et al. (2014) apply the phenomenon of tidal energy dissipation to the formation of binary planets (here, we apply it to binary moons, because if $m_1\sim m_2$, it may be more accurate to refer to the system as a binary system, rather than as a satellite and sub-satellite). Given radii of $R_1$ and $R_2$ and a pericenter distance of $q_{12}$, the energy dissipated after each encounter is$$E=\frac{Gm_1^2}{R_2}\left[\left(\frac{R_2}{q_{12}}\right)^6T_2(\eta_2)+\left(\frac{R_2}{q_{12}}\right)^8T_3(\eta_2)\right]+\frac{Gm_2^2}{R_1}\left[\left(\frac{R_1}{q_{12}}\right)^6T_2(\eta_1)+\left(\frac{R_1}{q_{12}}\right)^8T_3(\eta_1)\right]\tag{2}$$where $\eta_i\equiv[m_i/(m_1+m_2)]^{1/2}$ and $T_2$ and $T_3$ are fifth-degree polynomial functions with coefficients given in Portegies Zwart & Meinen (1993). The authors carried out simulations of three gas giant planets orbiting a star (some simulations involved two, while in others the third played a minor role), and found different results for different values of the inner planet’s semi-major axis (in their figure, “HJs” stands for “Hot Jupiters”). It is certainly true that long-term (or even short-term!) stability might be problematic, especially because of tidal interactions with the parent planet. However, many setups will lead to the successful formation of a binary planet. All of this, of course, assumes that the less massive moon lies within the Hill sphere of the more massive one.
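As an order-of-magnitude illustration of equation $(1)$, here is a tiny sketch; the numerical values are my own assumed, Triton-like inputs, not figures from the cited papers:

```python
# Three-body exchange-capture distance, eq. (1): r = a * (3*m1/(m2+m3))**(1/3)
m1 = 1.0e26   # kg, the massive moon m_1 (assumed value)
m2 = 2.1e22   # kg, the captured body m_2, Triton-like (assumed value)
m3 = 2.1e22   # kg, the ejected binary partner m_3 (assumed value)
a = 2.0e7     # m, semi-major axis of the m2-m3 binary (assumed value)

r = a * (3 * m1 / (m2 + m3)) ** (1 / 3)
print(f"required interaction distance: {r:.3e} m")
```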
Let $F_0 \subset F_1 \subset F_2 \subset \cdots$ and $K_0 \subset K_1 \subset K_2 \subset \cdots$ be two towers of fields. Also, let $F = \cup_{i=0}^\infty F_i$ and $K = \cup_{i=0}^\infty K_i$. Now suppose for each $i$ we have injective homomorphisms from $F_i$ to $K_{\sigma(i)}$ and from $K_i$ to $F_{\mu(i)}$, where $i \leq \sigma(i)$ and $i \leq \mu(i)$. In other words, each field $F_i$ is isomorphic to a subfield of some $K_j$ with $j \geq i$, and each field $K_i$ is isomorphic to a subfield of some $F_j$ with $j \geq i$. [Think of the two towers sitting next to each other with arrows pointing diagonally upward.] My question: can we conclude that $F \cong K$? A colleague asked me this question some time ago. I came up with a sketch of a proof for the case when $F_{i+1}$ is an algebraic extension of $F_i$ and $K_{i+1}$ is an algebraic extension of $K_i$ for each $i$. I suspect it's false in general [something to do with the fact that injectivity and surjectivity aren't equivalent for maps between infinite-dimensional spaces]. Does anybody know a counterexample for the general case? I would also appreciate a reference for the algebraic case (where I'm 99% sure it's true). Thanks!
Geometry and Topology Seminar

Fall 2013

- September 13 (10:00 AM in 901!): Alex Zupan (Texas), "Totally geodesic subgraphs of the pants graph". Host: Kent.
- October 18: Jayadev Athreya (Illinois), "Gap Distributions and Homogeneous Dynamics". Host: Kent.
- October 25: Joel Robbin (Wisconsin), "GIT and [math]\mu[/math]-GIT". Local.
- November 1: Anton Lukyanenko (Illinois), "Uniformly quasi-regular mappings on sub-Riemannian manifolds". Host: Dymarz.
- November 8: Neil Hoffman (Melbourne), "Verified computations for hyperbolic 3-manifolds". Host: Kent.
- November 15: Khalid Bou-Rabee (Minnesota), "On generalizing a theorem of A. Borel". Host: Kent.
- November 22: Morris Hirsch (Wisconsin), "Common zeros for Lie algebras of vector fields on real and complex 2-manifolds". Local.
- Thanksgiving Recess.
- December 6: Sean Paul (Wisconsin), "(Semi)stable Pairs I". Local.
- December 13: Sean Paul (Wisconsin), "(Semi)stable Pairs II". Local.

Fall Abstracts

Alex Zupan (Texas), "Totally geodesic subgraphs of the pants graph"

Abstract: For a compact surface S, the associated pants graph P(S) consists of vertices corresponding to pants decompositions of S and edges corresponding to elementary moves between pants decompositions. Motivated by the Weil-Petersson geometry of Teichmüller space, Aramayona, Parlier, and Shackleton conjecture that the full subgraph G of P(S) determined by fixing a multicurve is totally geodesic in P(S). We resolve this conjecture in the case that G is a product of Farey graphs. This is joint work with Sam Taylor.

Jayadev Athreya (Illinois), "Gap Distributions and Homogeneous Dynamics"

Abstract: We discuss the notion of gap distributions of various lists of numbers in [0, 1], in particular focusing on those which are associated to certain low-dimensional dynamical systems. We show how to explicitly compute some examples using techniques of homogeneous dynamics, generalizing earlier work on gaps between Farey fractions. This work gives some possible notions of 'randomness' of special trajectories of billiards in polygons, and is based partly on joint works with J. Chaika, with J. Chaika and S. Lelievre, and with Y. Cheung. This talk may also be of interest to number theorists.

Joel Robbin (Wisconsin), "GIT and [math]\mu[/math]-GIT"

Many problems in differential geometry can be reduced to solving a PDE of the form [math] \mu(x)=0 [/math] where [math]x[/math] ranges over some function space and [math]\mu[/math] is an infinite-dimensional analog of the moment map in symplectic geometry. In Hamiltonian dynamics the moment map was introduced to use a group action to reduce the number of degrees of freedom in the ODE. It was soon discovered that the moment map could be applied to Geometric Invariant Theory: if a compact Lie group [math]G[/math] acts on a projective algebraic variety [math]X[/math], then the complexification [math]G^c[/math] also acts and there is an isomorphism of orbifolds [math] X^s/G^c=X//G:=\mu^{-1}(0)/G [/math] between the space of orbits of Mumford's stable points and the Marsden-Weinstein quotient. In September of 2013 Dietmar Salamon, his student Valentina Georgoulas, and I wrote an exposition of (finite-dimensional) GIT from the point of view of symplectic geometry. The theory works for compact Kaehler manifolds, not just projective varieties. I will describe our paper in this talk; the following Monday Dietmar will give more details in the Geometric Analysis Seminar.
Anton Lukyanenko (Illinois), "Uniformly quasi-regular mappings on sub-Riemannian manifolds"

Abstract: A quasi-regular (QR) mapping between metric manifolds is a branched cover with bounded dilatation, e.g. f(z)=z^2. In joint work with K. Fassler and K. Peltonen, we define QR mappings of sub-Riemannian manifolds and show that: 1) every lens space admits a uniformly QR (UQR) mapping f; 2) every UQR mapping leaves invariant a measurable conformal structure. The first result uses an explicit "conformal trap" construction, while the second builds on similar results by Sullivan-Tukia and a connection to higher-rank symmetric spaces.

Neil Hoffman (Melbourne), "Verified computations for hyperbolic 3-manifolds"

Abstract: Given a triangulated 3-manifold M, a natural question is: does M admit a hyperbolic structure? While this question can be answered in the negative if M is known to be reducible or toroidal, it is often difficult to establish a certificate of hyperbolicity, and so computer methods have been developed for this purpose. In this talk, I will describe a new method to establish such a certificate via verified computation and compare the method to existing techniques. This is joint work with Kazuhiro Ichihara, Masahide Kashiwagi, Hidetoshi Masai, Shin'ichi Oishi, and Akitoshi Takayasu.

Khalid Bou-Rabee (Minnesota), "On generalizing a theorem of A. Borel"

The proof of the Hausdorff-Banach-Tarski paradox relies on the existence of a nonabelian free group in the group of rotations of [math]\mathbb{R}^3[/math]. To help generalize this paradox, Borel proved the following result on free groups.

Borel's Theorem (1983): Let [math]F[/math] be a free group of rank two. Let [math]G[/math] be an arbitrary connected semisimple linear algebraic group (e.g., [math]G = \mathrm{SL}_n[/math] where [math]n \geq 2[/math]). If [math]\gamma[/math] is any nontrivial element in [math]F[/math] and [math]V[/math] is any proper subvariety of [math]G(\mathbb{C})[/math], then there exists a homomorphism [math]\phi: F \to G(\mathbb{C})[/math] such that [math]\phi(\gamma) \notin V[/math].

What is the class, [math]\mathcal{L}[/math], of groups that may play the role of [math]F[/math] in Borel's Theorem? Since the free group of rank two is in [math]\mathcal{L}[/math], it follows that all residually free groups are in [math]\mathcal{L}[/math]. In this talk, we present some methods for determining whether a finitely generated group is in [math]\mathcal{L}[/math]. Using these methods, we give a concrete example of a finitely generated group in [math]\mathcal{L}[/math] that is *not* residually free. After working out a few other examples, we end with a discussion on how this new theory provides an answer to a question of Breuillard, Green, Guralnick, and Tao concerning double word maps. This talk covers joint work with Michael Larsen.

Morris Hirsch (Wisconsin), "Common zeros for Lie algebras of vector fields on real and complex 2-manifolds"

The celebrated Poincare-Hopf theorem states that a vector field [math]X[/math] on a manifold [math]M[/math] has nonempty zero set [math]Z(X)[/math], provided [math]M[/math] is compact with empty boundary and [math]M[/math] has nonzero Euler characteristic. Surprisingly little is known about the set of common zeros of two or more vector fields, especially when [math]M[/math] is not compact. One of the few results in this direction is a remarkable theorem of Christian Bonatti (Bol. Soc. Brasil. Mat. 22 (1992), 215–247), stated below.
When [math]Z(X)[/math] is compact, [math]i(X)[/math] denotes the intersection number of [math]X[/math] with the zero section of the tangent bundle.

[math]\cdot[/math] Assume [math]\dim_{\mathbb{R}} M \leq 4[/math], [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Then every analytic vector field commuting with [math]X[/math] has a zero in [math]Z(X)[/math].

In this talk I will discuss the following analog of Bonatti's theorem. Let [math]\mathfrak{g}[/math] be a Lie algebra of analytic vector fields on a real or complex 2-manifold [math]M[/math], and set [math]Z(\mathfrak{g}) := \cap_{Y \in \mathfrak{g}} Z(Y)[/math].

• Assume [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Let [math]\mathfrak{g}[/math] be generated by analytic vector fields [math]Y[/math] on [math]M[/math] such that the vectors [math][X,Y]_p[/math] and [math]X_p[/math] are linearly dependent at all [math]p \in M[/math]. Then [math]Z(\mathfrak{g}) \cap Z(X) \neq \emptyset [/math].

Related results on Lie group actions, and on nonanalytic vector fields, will also be treated.

Sean Paul (Wisconsin), "(Semi)stable Pairs I"

Sean Paul (Wisconsin), "(Semi)stable Pairs II"

Spring 2014

- January 31: Spencer Dowdall (UIUC), "Fibrations and polynomial invariants for free-by-cyclic groups". Host: Kent.
- February 21: Ioana Suvaina (Vanderbilt), "ALE Ricci-flat Kahler surfaces from a Tian-Yau construction approach". Host: Maxim.
- February 28: Jae Choon Cha (POSTECH, Korea), "Universal bounds for the Cheeger-Gromov rho-invariants". Host: Maxim.
- March 7: Mustafa Kalafat (Michigan State and Tunceli), "Conformally Kahler Surfaces and Orthogonal Holomorphic Bisectional Curvature".
- March 14: Spring Break.
- April 4: Matthew Kahle (Ohio), moved to colloquium slot. Host: Dymarz.
- April 11: Yongqiang Liu (UW-Madison and USTC, China), "Nearby cycles and Alexander modules of hypersurface complements". Host: Maxim.
- April 18: Pallavi Dani (LSU), "Large-scale geometry of right-angled Coxeter groups". Host: Dymarz.
- April 25: Jingzhou Sun (Stony Brook), TBA. Host: Wang.

Spring Abstracts

Spencer Dowdall (UIUC), "Fibrations and polynomial invariants for free-by-cyclic groups"

The beautiful theory developed by Thurston, Fried and McMullen provides a nearly complete picture of the various ways a hyperbolic 3-manifold M can fiber over the circle. Namely, there are distinguished convex cones in the first cohomology H^1(M;R) whose integral points all correspond to fibrations of M, and the dynamical features of these fibrations are all encoded by McMullen's "Teichmuller polynomial." This talk will describe recent work developing aspects of this picture in the setting of a free-by-cyclic group G. Specifically, I will introduce a polynomial invariant that determines a convex polygonal cone C in the first cohomology of G whose integral points all correspond to algebraically and dynamically interesting splittings of G. The polynomial invariant additionally provides a wealth of dynamical information about these splittings. This is joint work with Ilya Kapovich and Christopher J. Leininger.

Ioana Suvaina (Vanderbilt), "ALE Ricci-flat Kahler surfaces from a Tian-Yau construction approach"

The talk presents an explicit classification of the ALE Ricci-flat Kahler surfaces (M,J,g), generalizing previous classification results of Kronheimer. The manifolds are related to Q-Gorenstein deformations of quotient singularities of type C^2/G, with G a finite subgroup of U(2).
Using this classification, we show how these metrics can also be obtained by a construction of Tian-Yau. In particular, we find good compactifications of the underlying complex manifold M.

Jae Choon Cha (POSTECH), "Universal bounds for the Cheeger-Gromov rho-invariants"

Cheeger and Gromov showed that there is a universal bound on their L2 rho-invariants of a fixed smooth closed (4k-1)-manifold, using a deep analytic method. We give a new topological proof of the existence of a universal bound. For 3-manifolds, we give explicit estimates in terms of triangulations, Heegaard splittings, and surgery descriptions. The proof employs interesting ideas including controlled chain homotopy and a geometric reinterpretation of the Atiyah-Hirzebruch bordism spectral sequence. Applications include new results on the complexity of 3-manifolds.

Mustafa Kalafat (Michigan State and Tunceli), "Conformally Kahler Surfaces and Orthogonal Holomorphic Bisectional Curvature"

We show that a compact complex surface which admits a conformally Kahler metric g of positive orthogonal holomorphic bisectional curvature is biholomorphic to the complex projective plane. In addition, if g is a Hermitian metric which is Einstein, then the biholomorphism can be chosen to be an isometry via which g becomes a multiple of the Fubini-Study metric. This is joint work with C. Koca.

Matthew Kahle (Ohio): TBA

Yongqiang Liu, "Nearby cycles and Alexander modules of hypersurface complements"

For a polynomial transversal at infinity, we show that the Alexander modules of the hypersurface complement can be realized by the nearby cycle complex, and we obtain a divisibility result for the associated Alexander polynomial. As an application, we use nearby cycles to recover the mixed Hodge structure on the torsion Alexander modules, as defined by Dimca and Libgober.

Pallavi Dani (LSU), "Large-scale geometry of right-angled Coxeter groups"

A finitely generated group can be endowed with a natural metric which is unique up to coarse isometries, or quasi-isometries. A fundamental question is to classify finitely generated groups up to quasi-isometry. I will report on the progress on this question in the case of right-angled Coxeter groups. In particular I will describe how topological features of the visual boundary can be used to classify a family of hyperbolic right-angled Coxeter groups. I will also discuss the connection with commensurability, an algebraic property which implies quasi-isometry, but is stronger in general. This is joint work with Anne Thomas.

Jingzhou Sun (Stony Brook), "On the Demailly-Semple jet bundles of hypersurfaces in the 3-dimensional complex projective space"

Let X be a smooth hypersurface of degree d in the 3-dimensional complex projective space. By purely algebraic calculations, we prove that on the third Demailly-Semple jet bundle X_3 of X the Demailly-Semple line bundle is big for d no less than 11, and that on the fourth Demailly-Semple jet bundle X_4 of X the Demailly-Semple line bundle is big for d no less than 10, improving a recent result of Diverio.
Epsilon naught, $\epsilon_0$ The ordinal $\epsilon_0$, commonly given the British pronunciation "epsilon naught," is the smallest ordinal $\alpha$ for which $\alpha=\omega^\alpha$ and can be equivalently characterized as the supremum $$\epsilon_0=\sup\{\omega,\omega^\omega,\omega^{\omega^\omega},\ldots\}$$ The ordinals below $\epsilon_0$ exhibit an attractive finitistic normal form of representation, arising from an iterated Cantor normal form involving only finite numbers and the expression $\omega$ in finitely iterated exponentials, products and sums. The ordinal $\epsilon_0$ arises in diverse proof-theoretic contexts. For example, it is the proof-theoretic ordinal of the first-order Peano axioms. Epsilon numbers The ordinal $\epsilon_0$ is the first ordinal in the hierarchy of $\epsilon$-numbers, where $\epsilon_\alpha$ is the $\alpha^{\rm th}$ fixed point of the exponential function $\beta\mapsto\omega^\beta$. These can also be defined inductively, as $\epsilon_{\alpha+1}=\sup\{\epsilon_\alpha+1,\omega^{\epsilon_\alpha+1},\omega^{\omega^{\epsilon_\alpha+1}},\ldots\}$, and $\epsilon_\lambda=\sup_{\alpha\lt\lambda}\epsilon_\alpha$ for limit ordinals $\lambda$. The epsilon numbers therefore form an increasing continuous sequence of ordinals. Every uncountable infinite cardinal $\kappa$ is an epsilon number fixed point $\kappa=\epsilon_\kappa$.
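To make the finitistic normal form concrete, here is a small illustrative sketch (my own, not from the source): an ordinal below $\epsilon_0$ is represented as a finite tuple of (exponent, coefficient) pairs in strictly decreasing exponent order, where the exponents are themselves such tuples. Comparison then terminates in finitely many steps precisely because $\epsilon_0$ is the first fixed point of $\beta\mapsto\omega^\beta$. The names `cmp` and `omega_pow` are mine.

```python
# Ordinals below epsilon_0 as iterated Cantor normal forms:
# an ordinal is a tuple of (exponent, coefficient) pairs, exponents strictly
# decreasing, coefficients positive integers; () represents 0.
ZERO = ()                  # the ordinal 0
ONE = ((ZERO, 1),)         # omega^0 * 1 = 1
OMEGA = ((ONE, 1),)        # omega^1 * 1 = omega

def cmp(a, b):
    """Return -1, 0, or 1, comparing two ordinals in Cantor normal form."""
    for (ea, ca), (eb, cb) in zip(a, b):
        c = cmp(ea, eb)                    # leading exponents dominate
        if c:
            return c
        if ca != cb:                       # then leading coefficients
            return -1 if ca < cb else 1
    # equal prefix: the ordinal with extra (smaller) trailing terms is larger
    return (len(a) > len(b)) - (len(a) < len(b))

def omega_pow(e):
    """omega**e in this representation."""
    return ((e, 1),)

# omega < omega^omega < omega^(omega^omega) < ... , the sequence whose
# supremum is epsilon_0; every element still has a finite representation.
tower = OMEGA
for _ in range(3):
    nxt = omega_pow(tower)
    assert cmp(tower, nxt) == -1
    tower = nxt
print("strictly increasing exponential tower below epsilon_0: OK")
```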
In particle physics, scalar potentials have to be bounded from below in order for the physics to make sense. Precise expressions for checking that a scalar potential is bounded from below are essential; they amount to analytical criteria for the copositivity and positive definiteness of the tensors given by such scalar potentials. Because the tensors given by a general scalar potential are 4th-order and symmetric, our work mainly focuses on finding precise expressions to test copositivity and positive definiteness of 4th-order tensors in this paper. First of all, an analytical necessary and sufficient condition for positive definiteness is provided for 4th-order 2-dimensional symmetric tensors. For 4th-order 3-dimensional symmetric tensors, we give two analytical sufficient conditions for (strict) copositivity by using a proof technique that reduces the order or the dimension of such a tensor. Furthermore, an analytical necessary and sufficient condition for copositivity is shown for 4th-order 2-dimensional symmetric tensors. We also give several distinct analytical sufficient conditions for (strict) copositivity of 4th-order 2-dimensional symmetric tensors. Finally, we apply these results to check whether scalar potentials are bounded from below, and present analytical vacuum stability conditions for potentials of two real scalar fields and the Higgs boson.

Based on the damped block inverse power method and subspace projection algorithms, this paper designs a generalized conjugate gradient algorithm for solving eigenvalue problems and implements the corresponding software package. The algorithm and the computational process are then optimized in a series of steps to improve stability, computational efficiency, and parallel scalability, making the algorithm suitable for solving eigenvalue problems of large sparse matrices in parallel computing environments. The resulting package is designed along Matrix-Free and Vector-Free lines and can be applied to arbitrary matrix-vector structures. Test results on several typical matrices show that the algorithm and the package not only have good numerical stability but also achieve a 2-6x speedup compared with the LOBPCG and Jacobi-Davidson solvers in the SLEPc package. The package is available at: https://github.com/pase2017/GCGE-1.0.

Based on the viewpoints of neuroethology and cognitive psychology, a general theoretical framework for intelligent systems is presented in this paper by means of the principle of relative entropy minimization. The core of this general framework is to present and prove a basic principle of intelligent systems: in an intelligent system, entropy increases or decreases together with intelligence. The basic principle is of considerable theoretical and practical significance. From it one can not only derive two kinds of learning algorithms (statistical simulated-annealing algorithms and annealing algorithms based on the mean-field approximation) for training a large class of stochastic neural networks, but also dispel the misgivings that the second law of thermodynamics has created in people's minds, since human society, the natural world, and even the universe are all intelligent systems.

This paper studies the scheduling problem of a rail-guided vehicle (RGV) in an intelligent machining system with a common process; the problem is part of Problem B of the 2018 China Undergraduate Mathematical Contest in Modeling. The system consists of one rail-guided vehicle and several computer numerical control (CNC) machine tools; the RGV operates multiple CNCs to machine multiple workpieces, and the RGV scheduling scheme determines the efficiency of the system. Taking the RGV's moving path as the decision variable, the completion times of the RGV's operations on the CNCs as time nodes, and the remaining machining time of each workpiece as state variables, a mathematical model of the problem is given; however, some parameters of the model have decision variables as subscripts. By defining new variables and constraints, the model is transformed into a nonlinear mixed-integer programming model free of variable subscripts and piecewise functions. Finally, a numerical example is given to show the correctness and operability of the model.

This paper proposes a new click-through-rate prediction method, called patching regression (连到回归), as an attempted replacement for the commonly used factorization machine (FM). Patching regression uses hyperplanes to stitch together a closed convex polytope that encloses the positive samples; it has an intuitive geometric interpretation and can converge to the global optimum from an arbitrary initial value. The fitted surface is Lipschitz continuous. On hand-designed datasets (a ring set, a two-cluster set, and a two-moon set), the classification accuracy, interpretability, and smoothness of patching regression comprehensively surpass FM. With the same order of magnitude of parameters and computation, patching regression beats FM on AUC on the Avazu and Criteo datasets.

The three frameworks for theories of consciousness taken most seriously by neuroscientists are that consciousness is a biological state of the brain, the global workspace perspective, and the higher-order perspective. In the present article, consciousness is discussed from the viewpoint of the entropy-partition theory of complex systems.
The human brain, as a system, self-organizingly and adaptively implements partition, aggregation, and integration, and consciousness emerges.

Abstract. In studying a class of random neural networks, several researchers have proposed Markov models of neural networks in which the Markov property of the network is merely assumed. To reveal the mechanism by which the Markov property is generated in a neural network, this paper studies how an infinite-dimensional random neural network (IDRNN) forms an inner Markov representation of environmental information. Because of the equivalence between the Markov property and Gibbs measures, our conclusion is that knowledge is ultimately expressed in an IDRNN by extreme Gibbs probability measures, that is, ergodic Gibbs probability measures. This conclusion is also applicable at the quantum-mechanical level of an IDRNN. Hence one can see that "concepts" and "consciousness" are generated at the particle (ion) level in the brain and are experienced at the level of the neurons. We also discuss the ergodicity of IDRNNs with random neural potentials.

This paper systematically explores the combination rules of the Hodge star operator and the exterior differential operator when they act on arbitrary differential-form fields. First, two combined operators that leave the degree of a differential form unchanged are found, and a new operator is obtained from their linear combination. Second, for combinations built from arbitrary numbers of Hodge star and exterior differential operators, the author derives unified expressions for all formally distinct combined operators; these expressions are composed of a single Hodge star operator, a single exterior differential operator, and nonzero combinations of any two of them. On this basis, the interaction relations among these operators are analyzed, and the operators are classified concretely according to how they change the degree of a differential form. Finally, as an application, the author discusses in detail how Maxwell's equations for the electromagnetic field can be constructed from linear combinations of differential forms of different degrees.

To fundamentally eliminate the various paradoxes present in the foundations of mathematics and to build mathematics on a highly reliable basis, it is observed that formal logic can only be used within the region (called the feasible region) where the law of identity, the law of non-contradiction, and the law of the excluded middle all hold; outside it, various errors, including paradoxes, arise. Within the applicable scope of formal logic, i.e., the feasible region, if the premises are reliable and the derivation is rigorous, paradoxes do not occur. Based on this conclusion, the causes of some historically famous paradoxes, such as the liar paradox and the barber paradox, are analyzed; some logical errors in the use of the Peano axioms in the foundations of mathematics and in the proofs of Cantor's theorem, the nested-interval theorem, and the diagonal argument are pointed out; and unified definitions of the natural, rational, and irrational numbers that avoid these errors are proposed.

The aim of this paper is to study the heterogeneous optimization problem \begin{align*} \mathcal {J}(u)=\int_{\Omega}(G(|\nabla u|)+qF(u^+)+hu+\lambda_{+}\chi_{\{u>0\}} )\text{d}x\rightarrow\text{min}, \end{align*} in the class of functions $ W^{1,G}(\Omega)$ with $ u-\varphi\in W^{1,G}_{0}(\Omega)$, for a given function $\varphi$, where $W^{1,G}(\Omega)$ is the class of weakly differentiable functions with $\int_{\Omega}G(|\nabla u|)\text{d}x<\infty$. The functions $G$ and $F$ satisfy structural conditions of Lieberman's type that allow for a different behavior at $0$ and at $\infty$. Given functions $q,h$ and a constant $\lambda_+\geq 0$, we establish several regularity results for minimizers of $\mathcal {J}(u)$, including local $C^{1,\alpha}$ continuity for minimizers with $\lambda_+=0$ and local log-Lipschitz continuity for minimizers with $\lambda_+>0$. We also establish the growth rate near the free boundary for each non-negative minimizer of $\mathcal {J}(u)$ with $\lambda_+=0$ and with $\lambda_+>0$, respectively. Furthermore, under the additional assumption that $F\in C^1([0,+\infty); [0,+\infty))$, local Lipschitz regularity is established for non-negative minimizers of $\mathcal {J}(u)$ with $\lambda_{+}>0$.
Although we find currency risk particularly interesting, this is often not the case for many investors, for whom it is no more than a necessary inconvenience. As such, they tend to neglect it, accepting undesirable non-remunerated risks and missing potential opportunities. To prevent this, in this post we will try to provide an understanding of the mechanics of currency risk. For the sake of simplicity, we will consider the case of foreign equities investment, although the results can be easily generalized.

A foreign investment example

Foreign investment is a very popular method of reducing a portfolio’s risk. Indeed, there is little sense in having a highly diversified portfolio across asset classes if they all depend on the performance of a single economy. As sensible as it is, foreign investment also entails certain risks, especially when securities are denominated in a currency other than ours, as we will see below.

Let’s imagine we are a European investor seeking exposure to the Japanese equity market. In order to purchase stocks we will need yen, which we can buy in exchange for euros at the \(EURJPY\) exchange rate. With an investment amount of \(M\) euros, we will obtain \(M*EURJPY_0\) yen, which we will use to buy stocks. At the end of our investment, we will do the opposite process, selling our equities for \(M*EURJPY_0*(1+r_e)\) yen (where \(r_e\) is the return experienced by our stocks) and exchanging them for \(\frac{M*EURJPY_0*(1+r_e)}{EURJPY_f}\) euros. As a result, the total return of our investment would be: $$R=\frac{EURJPY_0*(1+r_e)}{EURJPY_f}-1$$

Now, note that the ratio \(\frac{EURJPY_0}{EURJPY_f}\) is intimately linked with the return of the exchange rate. Indeed, for convenience we can consider the spot in the opposite direction, \(JPYEUR=\frac{1}{EURJPY}\), which reflects the price of a yen in euros. This way, the return on the spot is: $$r_s=\frac{JPYEUR_f}{JPYEUR_0}-1=\frac{EURJPY_0}{EURJPY_f}-1$$

With this in mind, we can take our total return expression and substitute the term \(\frac{EURJPY_0}{EURJPY_f}\) with \(1+r_s\), obtaining the following: $$R=(1+r_s)(1+r_e)-1=r_s+r_e+r_s*r_e$$

Leverage out of the blue!

If we work with daily returns, both \(r_e\) and \(r_s\) will be on the order of magnitude of \(10^{-2}\). Therefore, \(r_e*r_s\) will be on the order of magnitude of \(10^{-4}\), which is basically negligible. As a side note, this does not hold for yearly returns, but it is still a good enough approximation. Anyway, the expression for our investment returns is now: $$R=r_e+r_s$$

Note that this means we will experience 100% of the spot returns plus 100% of our stock returns, and therefore our market exposure is 200%. In other words, we have unconsciously entered a leveraged investment!

This idea of leverage provides a great intuition for understanding the risks involved with currency exchange. It is not uncommon to hear that being exposed to the currency spot is another source of diversification, implying that this accordingly brings risk reduction. The reason this is unlikely to hold is that such diversification is built on top of leverage, so the risk mitigation provided by the former is frequently overwhelmed by the latter. We can better understand this by looking at the resulting volatility (measured through variance) as follows: $$Var(r_e+r_s)=Var(r_e)+Var(r_s)+2Corr(r_e,r_s)\sqrt{Var(r_e)Var(r_s)}$$

From this expression, we can obtain the necessary condition under which the spot exposure reduces risk.
$$Var(r_e+r_s)\leq Var(r_e)\Rightarrow Corr(r_e,r_s)\leq-0.5\sqrt{\frac{Var(r_s)}{Var(r_e)}}$$ In summary, for diversification to offset leverage we need a sufficiently negative correlation between the spot and our investment. As we can see from the examples below, this is rather unlikely; only for the SMI is there a relatively consistent risk reduction.

Risk in exchange for nothing

So it is clear that leverage is leading to a risk increase; however, this is not necessarily a problem. Indeed, leverage is often employed by investors who are willing to accept greater risks in exchange for a higher expected return, as long as that expected rate of return surpasses the cost of leveraging (the cost of funding). Generally speaking, and especially among developed countries, this is not the case with currency spots, which do not show consistent movements in one direction in the long run. An intuitive explanation is that the exchange rate is bounded by the economic interdependencies between the involved economies; as a result, it can neither grow nor drop indefinitely. On the other hand, other asset classes, such as equities, generate consistent profits under normal economic conditions. Although they also experience downturns, these are less significant, so we can expect a positive return in the long run. Below we illustrate this contrast between equities and exchange rates with a few examples.

Wrapping up

In conclusion, the concept of leverage helps to gain intuition on the nature of currency risk. Since exposure to this risk doesn’t imply an increase in the portfolio’s rate of return, the foreign investor is accepting risk in a non-efficient way. In future posts, we will discuss how such an investor can get rid of this risk or try to use it to their advantage. As you can imagine, this hardly ever comes for free. See you then!

This post was written in collaboration with Jorge Sánchez.
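To make the leverage picture concrete, here is a small simulation sketch (with illustrative volatility figures of our own choosing, not market data):

```python
# Simulate daily equity and spot returns, check that the cross term r_e*r_s
# is negligible, and that unhedged volatility exceeds equity-only volatility
# when the correlation is not sufficiently negative.
import numpy as np

rng = np.random.default_rng(7)
n = 250_000
vol_e, vol_s, corr = 0.010, 0.006, 0.0     # assumed daily vols, zero correlation
cov = [[vol_e**2, corr * vol_e * vol_s],
       [corr * vol_e * vol_s, vol_s**2]]
re, rs = rng.multivariate_normal([0, 0], cov, n).T

R_exact = (1 + re) * (1 + rs) - 1
print(np.abs(R_exact - (re + rs)).max())   # cross term on the order of 1e-4
print(R_exact.std(), vol_e)                # unhedged risk > equity-only risk
```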
Why doesn't amplitude increase when the frequency of external periodic force increases above the natural frequency of the vibrating object? In one sense this is a consequence of the meaning of resonance. Resonance is identified by a maximum in amplitude. If amplitude increased above (or below) the resonant frequency then we have not found a maximum in amplitude. Perhaps what you are really asking is, why is there a maximum in amplitude when driving frequency $f$ equals the natural frequency $f_0$? This was answered by What is the qualitative cause for a driven oscillator to have a max. amplitude during resonance? At the simplest level this is because at the natural frequency $(f=f_0)$ the driving force is always in phase with the natural motion of the system. The driving force is then always adding energy to the system, which will increase indefinitely unless there is some form of damping (eg friction) which removes energy from the system at a faster rate as amplitude increases. At all other frequencies the driving force is sometimes in phase and sometimes out of phase with the natural motion of the system. Sometimes it adds energy, sometimes it takes energy away. However, this assumes that both the natural motion and the driving force are both sinusoidal. If the driving force is applied as an intermittent impulse or "kick" then sub-harmonic frequencies of $f_0/n$, where $n$ is an integer, will also result in resonance. For example, if the system oscillates once every second, giving it a "kick" in the right direction once every 2 or 3 or 4 etc seconds will also increase the amplitude of the system, provided that the energy imparted with each kick is not outweighed by the loss of energy in between. See Non-resonant but efficient frequencies. A simple physical explanation follows directly from Newton's second law, $F = ma$. Suppose you keep the amplitude $F_0$ of the external force $F = F_0 \cos \omega t$ constant, and increase the frequency $\omega$. The amount of time that the force is acting in one direction is inversely proportional to the frequency - it is a quarter of the vibration period, or $(1/4)(2 \pi / \omega) = \pi / ( 2\omega)$. So as the frequency increases, the structure doesn't have enough time to move far, before the force reverses and accelerates it in the opposite direction. If $\omega$ is much higher than the resonant frequency, the stiffness of the structure is not very important, because it never gets time to move far, and $Kx$ is always small compared with $F$. There are situations where the amplitude of $F$ is not constant, but proportional to $\omega^2$. One example is the force acting on a rotating shaft which is not perfectly balanced - the radial force on a small element of material $dm$ of the rotating structure is $r \omega^2\, dm$. In that situation, the amplitude of vibration is approximately constant for frequencies well above the resonant frequency. In fact the shaft is trying to rotate about its mass center, not its geometrical center, and the vibration amplitude is simply the distance between those two points.
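The steady-state amplitude formula makes the last point quantitative; here is a small numerical sketch (my own parameter choices) for the damped driven oscillator $m\ddot{x} + c\dot{x} + kx = F_0\cos\omega t$:

```python
# Steady-state amplitude A(w) = F0 / sqrt((k - m*w^2)^2 + (c*w)^2):
# it peaks near the natural frequency and falls off roughly as 1/w^2 above it.
import numpy as np

m, k, c, F0 = 1.0, 1.0, 0.1, 1.0
w0 = np.sqrt(k / m)                         # natural frequency
w = np.array([0.2, 0.5, 1.0, 2.0, 5.0]) * w0
A = F0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)
print(np.c_[w, A])   # maximum near w = w0; small amplitude well above it
```

Well above resonance the $(k - m\omega^2)^2$ term dominates, which is the "not enough time to move far" statement in formulas: the stiffness barely matters and $A \approx F_0/(m\omega^2)$.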
Assume $$X\sim \frac{e^{-\beta Nf(x)}}{Z_{\beta}}$$ where $$ f(x) = -hx -x^2 + \frac{1}{\beta N}(1-x)\ln(1-x) + (1+x)\ln(1+x) $$ and $Z_{\beta}$ is the appropriate normalization factor. The support of $X$ is the lattice $\Gamma_N = \{-1,-1+2/N,\ldots,1-2/N,1\}$ and $N$ is some large positive integer and $\beta>1$. I would like to know if there's any way to characterize the first three moments of $X$, that is $$ \mathbb{E}[X], \mathbb{E}[X^2],\mathbb{E}[X^3]$$ as functions of $\beta$ and whatever else comes out, but in particular I'm interested in the overall behaviour when $\beta$ grows large. If you are familiar with statistical physics this is the Curie Weiss model (or a mean field Ising model).
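Pending an analytic answer, the moments can at least be evaluated numerically by brute force on the lattice. A sketch follows; note that the grouping in the question's $f(x)$ is ambiguous as typeset, so I assume here that both entropy terms carry a common $1/\beta$ factor (the usual Curie-Weiss free energy), i.e. $f(x) = -hx - x^2 + \frac{1}{\beta}\left[(1-x)\ln(1-x) + (1+x)\ln(1+x)\right]$, with the convention $0\ln 0 = 0$ at the endpoints:

```python
# Brute-force moments E[X^k], k = 1,2,3, of the Curie-Weiss-type measure on
# Gamma_N = {-1, -1+2/N, ..., 1}; normalization plays the role of Z_beta.
import numpy as np

def moments(beta, N, h=0.0):
    x = -1 + 2 * np.arange(N + 1) / N
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = np.nan_to_num((1 - x) * np.log(1 - x)) \
            + np.nan_to_num((1 + x) * np.log(1 + x))   # 0*log 0 := 0
    f = -h * x - x**2 + ent / beta
    w = np.exp(-beta * N * (f - f.min()))   # shift by min for numerical stability
    w /= w.sum()
    return [float(np.sum(w * x**k)) for k in (1, 2, 3)]

for beta in (1.5, 3.0, 10.0):
    print(beta, moments(beta, N=400))      # behaviour as beta grows large
```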
I'm reading "Microelectronics" by Millman, Grabel. Premise: The book developed, for a pn step-graded junction, the contact potential as $$ V_0 = V_T \text{ln} \frac{p_{p0}}{p_{n0}} = - V_T \text{ln} \frac{n_{p0}}{n_{n0}} \qquad (1) $$ where $p_{p0}$ is the hole concentration in the p-side, $p_{n0}$ is the hole concentration in the n-side, $n_{p0}$ the electron concentration in the p-side and $n_{n0}$ the electron in the n-side. To develop the previous equation, it made use of the following $$J_p = q \mu_p p \mathscr{E} - q D_p \frac{\text{d} p}{\text{d} x} \\ D_p = \mu_p V_T \\ \mathscr{E}(x) = - \frac{\text{d}V}{\text{d} x} $$ where $\mu$ is the mobility, $\mathscr{E}$ the electric field, $D$ the diffusion constant and $p$ ($n$) the concentration of holes (electrons). For electrons, the formulas are very similar, substituting $p$ with $n$ and changing the sign of the diffusion term in $J_n$. I was wondering if one could compute the contact potential of a metal-semiconductor junction (terminals of the diode) with the same formula $(1)$. The derivation of the formula $(1)$ does not make any particular assumption regarding the conduction medium (at least in my opinion), provided the Einstein relation is valid for a metal. Is the Einstein relation $\frac{D_n}{\mu_n} = V_T$ valid for a metal conductor? What is a typical value for mobility of electrons in a metal conductor? In the Example 1.1 the book worked out the electron concentration for a conducting line of an IC chip, that is $4.38 \times 10^{21} ~ \text{cm}^{-3}$. Let's say that the electron concentration of the p-type silicon is a typical $4.2 \times 10^{5}$. Using the formula $(1)$ $$ V_0 = − 0,0259× \text{ln} \frac{4,2×10^5}{4,38×10^{21}} = 0,955 \text{V} $$ It seems to be an extremely big number considering that the cut-in voltage for a diode is $0.6 \text{V}$ and that the contact potential for a metal-semiconductor junction should be negligible. In the same example it states that the mobility of electrons in the conducting line is $500 ~ \text{cm}^2 / (V \cdot s)$. How can be that small considering that the electron mobility in silicon is $1500 ~ \text{cm}^2 / (V \cdot s)$? Thanks, Luca
For the full list: https://meta.stackexchange.com/a/216607/360277

Possible uses

1) Elliptic curves: I know $dG$, how can I find $d$

2) Amounts paid: Alice has $5$ dollars

3) Tables

4) Selfish mining formulae: $\sum_{k=0}^\infty\frac{\lambda^k e^{-\lambda}}{k!}\cdot\begin{cases}(q/p)^{z-k} & k \leq z\\ 1 & k > z\end{cases}$

Note: If you're going to enable it, don't enable the single $ delimiter (we need that character for dollar amounts); \$ would be great.

Answers: What can be added to the list above?
I searched for the difference between a state vector and a basis vector in quantum mechanics but couldn't find any clear explanation. Can someone please give a simple and clear explanation of this?

Basis vectors are a special set of vectors that have two properties:

1. The vectors in the set are linearly independent (meaning you cannot write one vector as a linear combination of the other vectors in the set)
2. Every vector in the vector space can be written as a linear combination of these basis vectors

Basis vectors are widely used in linear algebra and are not unique to quantum mechanics. When we start talking about state vectors in QM, like $|\psi\rangle$, we can choose to express this state vector in terms of any basis we want. In other words, for a discrete basis: $$|\psi\rangle=\sum_i c_i|a_i\rangle$$ where $|a_i\rangle$ represents basis vector $i$, and $c_i$ is a coefficient saying "how much of $|a_i\rangle$ is in $|\psi\rangle$".

Now it could be that $|\psi\rangle$ is equal to one of our basis vectors, say $|a_j\rangle$, so that $c_i=\delta_{i,j}$ and $$|\psi\rangle=\sum_i \delta_{i,j}|a_i\rangle=|a_j\rangle$$ But this is a special case. We could even choose to express this example in some other basis: $$|\psi\rangle=|a_j\rangle=\sum_i d_i|b_i\rangle$$

So, to answer the question: basis vectors are just a special set of vectors with the two properties listed above. Each basis vector could be a state vector, if the system is purely in that state, but it does not have to be that way. You get the entire picture by being more general: state vectors can be expressed as linear combinations of the basis vectors of whatever basis we choose to work in. This also covers the case where our state vector is one of our basis vectors, since that is still a linear combination. The choice of basis is completely subjective, though (although some bases are better to work in than others for certain problems).
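A concrete numerical illustration (my addition, not part of the original answer): the same two-dimensional state vector expanded in two different bases.

```python
# One state |psi>, two bases: the computational basis {|a_0>, |a_1>} and the
# Hadamard basis {|b_0> = |+>, |b_1> = |->}. Coefficients are inner products.
import numpy as np

a0, a1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
b0, b1 = (a0 + a1) / np.sqrt(2), (a0 - a1) / np.sqrt(2)

psi = np.array([0.6, 0.8])                 # a normalized state vector

c = [a0 @ psi, a1 @ psi]                   # c_i = <a_i|psi>
d = [b0 @ psi, b1 @ psi]                   # d_i = <b_i|psi>
print(c, d)                                # two different coefficient lists
print(np.allclose(d[0] * b0 + d[1] * b1, psi))   # True: same state |psi>
```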
Definition. Let $K$ be a quadratic number field. Let $\mathcal{O}_K$ be the ring of algebraic integers in $K$. By Lemma 4 in my answer to this question, there exists a unique square-free integer $m$ such that $K = \mathbb{Q}(\sqrt m)$. If $m \equiv 1$ (mod $4$), let $\omega = (1 + \sqrt m)/2$. If $m \equiv 2, 3$ (mod $4$), let $\omega = \sqrt m$. By Lemma 6 in my answer to this question, $1, \omega$ is a basis of $\mathcal{O}_K$ as a $\mathbb{Z}$-module. We call $1, \omega$ the canonical integral basis of $K$.

Lemma 1. Let $K$ be a quadratic number field. Let $1, \omega$ be the canonical integral basis of $K$. Let $f \gt 0$ be a positive integer. Then $R = \mathbb{Z} + \mathbb{Z}f\omega$ is an order of $K$.

Proof: Let $\mathcal{O}_K$ be the ring of integers in $K$. It suffices to prove that $R$ is a subring of $\mathcal{O}_K$. $(f\omega)^2 = f^2 \omega^2 \in f^2\mathcal{O}_K \subset R$. Hence $f\omega R \subset R$. Hence $R^2 \subset R$. Hence $R$ is a subring of $\mathcal{O}_K$.

Lemma 2. Let $K$ be a quadratic number field. Let $1, \omega$ be the canonical integral basis of $K$. Let $R$ be an order of $K$. Then there exists an integer $f \gt 0$ such that $1, f\omega$ is a basis of $R$ as a $\mathbb{Z}$-module.

Proof: Let $\mathcal{O}_K$ be the ring of integers in $K$. Let $n$ be the order of the abelian group $\mathcal{O}_K/R$. Then $n$ is finite and $n\mathcal{O}_K \subset R$. Hence $n\omega \in R$. Let $f$ be the least positive integer such that $f\omega \in R$. Let $\alpha$ be an element of $R$. Since $R \subset \mathcal{O}_K$, there exist rational integers $a, b$ such that $\alpha = a + b\omega$. Let $b = fq + r$, where $q$ and $r$ are rational integers and $0 \le r \lt f$. Then $\alpha - qf\omega = a + r\omega$. Hence $r\omega \in R$. Hence $r = 0$ by the assumption on $f$. Hence $\alpha \in \mathbb{Z} + \mathbb{Z}f\omega$. Hence $R \subset \mathbb{Z} + \mathbb{Z}f\omega$. The other inclusion is clear.

Proposition. Let $K$ be a quadratic number field. Let $\mathcal{O}_K$ be the ring of algebraic integers in $K$. Let $f \gt 0$ be a positive integer. Then there exists a unique order $R$ of $K$ such that $f$ is the order of the group $\mathcal{O}_K/R$. Moreover, the discriminant of $R$ is $f^2 d$, where $d$ is the discriminant of $K$.

Proof: Let $1, \omega$ be the canonical integral basis of $K$. By Lemma 1, $R = \mathbb{Z} + \mathbb{Z}f\omega$ is an order of $K$. Clearly $f$ is the order of the group $\mathcal{O}_K/R$.

Next we will prove the uniqueness of $R$. Let $S$ be an order of $K$ such that $f$ is the order of the group $\mathcal{O}_K/S$. By Lemma 2, there exists an integer $g \gt 0$ such that $1, g\omega$ is a basis of $S$ as a $\mathbb{Z}$-module. Since $g$ is the order of the group $\mathcal{O}_K/S$, $f = g$. Hence $R = S$.

Let $\omega'$ be the conjugate of $\omega$. Then the discriminant of $R$ is $(f\omega - f\omega')^2 = f^2(\omega - \omega')^2 = f^2 d$.
Journal of Symbolic Logic, Volume 55, Issue 3 (1990), 1059-1089.

The Interpretability Logic of Peano Arithmetic

Abstract

PA is Peano arithmetic. The formula $\operatorname{Interp}_\mathrm{PA}(\alpha, \beta)$ is a formalization of the assertion that the theory $\mathrm{PA} + \alpha$ interprets the theory $\mathrm{PA} + \beta$ (the variables $\alpha$ and $\beta$ are intended to range over codes of sentences of PA). We extend Solovay's modal analysis of the formalized provability predicate of PA, $\mathrm{Pr}_\mathrm{PA}(x)$, to the case of the formalized interpretability relation $\operatorname{Interp}_\mathrm{PA}(x, y)$. The relevant modal logic, in addition to the usual provability operator `$\square$', has a binary operator `$\triangleright$' to be interpreted as the formalized interpretability relation. We give an axiomatization and a decision procedure for the class of those modal formulas that express valid interpretability principles (for every assignment of the atomic modal formulas to sentences of PA). Our results continue to hold if we replace the base theory PA with Zermelo-Fraenkel set theory, but not with Gödel-Bernays set theory. This sensitivity to the base theory shows that the language is quite expressive. Our proof uses in an essential way earlier work done by A. Visser, D. de Jongh, and F. Veltman on this problem.

Article information

Source: J. Symbolic Logic, Volume 55, Issue 3 (1990), 1059-1089.
Dates: First available in Project Euclid: 6 July 2007
Permanent link: https://projecteuclid.org/euclid.jsl/1183743406
Mathematical Reviews number (MathSciNet): MR1071315
Zentralblatt MATH identifier: 0725.03037

Citation: Berarducci, Alessandro. The Interpretability Logic of Peano Arithmetic. J. Symbolic Logic 55 (1990), no. 3, 1059-1089. https://projecteuclid.org/euclid.jsl/1183743406
Group cohomology of elementary abelian group of prime-square order

This article gives specific information, namely, group cohomology, about a family of groups, namely: elementary abelian group of prime-square order.

Suppose <math>p</math> is a [[prime number]]. We are interested in the [[elementary abelian group of prime-square order]] <math>E_{p^2} = (\mathbb{Z}/p\mathbb{Z})^2 = \mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z}</math>.

==Homology groups for trivial group action==

FACTS TO CHECK AGAINST (homology group for trivial group action): first homology group for trivial group action equals tensor product with abelianization; Hopf's formula for the second homology group in terms of the Schur multiplier and the abelianization; universal coefficients theorem for group homology; homology group for trivial group action commutes with direct product in second coordinate; Kunneth formula for group homology.

===Over the integers===

The homology groups below can be computed using the homology groups for the group of prime order (see group cohomology of finite cyclic groups) and combining them with the Kunneth formula for group homology.

<math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};\mathbb{Z}) = \left\lbrace \begin{array}{rl} (\mathbb{Z}/p\mathbb{Z})^{(q + 3)/2}, & \qquad q = 1,3,5,\dots \\ (\mathbb{Z}/p\mathbb{Z})^{q/2}, & \qquad q = 2,4,6,\dots \\ \mathbb{Z}, & \qquad q = 0 \\\end{array}\right.</math>

The first few homology groups are given below:

{| border="1"
! <math>q</math> !! 0 !! 1 !! 2 !! 3 !! 4 !! 5
|-
| <math>H_q</math> || <math>\mathbb{Z}</math> || <math>(\mathbb{Z}/p\mathbb{Z})^2</math> || <math>\mathbb{Z}/p\mathbb{Z}</math> || <math>(\mathbb{Z}/p\mathbb{Z})^3</math> || <math>(\mathbb{Z}/p\mathbb{Z})^2</math> || <math>(\mathbb{Z}/p\mathbb{Z})^4</math>
|-
| rank of <math>H_q</math> as an elementary abelian <math>p</math>-group || -- || 2 || 1 || 3 || 2 || 4
|}

===Over an abelian group===

The homology groups with coefficients in an abelian group <math>M</math> are given as follows:

<math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math>

Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>pM = \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>. These homology groups can be computed in terms of the homology groups over the integers using the universal coefficients theorem for group homology.

===Important case types for abelian groups===

{| border="1"
! Case on <math>M</math> !! Odd-indexed homology groups, i.e., <math>H_q</math> for <math>q = 1,3,5,\dots</math> !! Even-indexed homology groups, i.e., <math>H_q</math> for <math>q = 2,4,6,\dots</math>
|-
| <math>M</math> is uniquely <math>p</math>-divisible, i.e., every element of <math>M</math> can be divided by <math>p</math> uniquely. This includes the case that <math>M</math> is a field of characteristic not <math>p</math>. || all zero groups || all zero groups
|-
| <math>M</math> is <math>p</math>-torsion-free, i.e., no nonzero element of <math>M</math> multiplies by <math>p</math> to give zero. || <math>(M/pM)^{(q+3)/2}</math> || <math>(M/pM)^{q/2}</math>
|-
| <math>M</math> is <math>p</math>-divisible, but not necessarily uniquely so, e.g., <math>M = \mathbb{Q}/\mathbb{Z}</math> || <math>(\operatorname{Ann}_M(p))^{(q-1)/2}</math> || <math>(\operatorname{Ann}_M(p))^{(q+2)/2}</math>
|-
| <math>M</math> is a finite abelian group || all isomorphic to <math>(\mathbb{Z}/p\mathbb{Z})^{r(q+1)}</math>, where <math>r</math> is the rank (i.e., minimum number of generators) of the <math>p</math>-Sylow subgroup of <math>M</math> || all isomorphic to <math>(\mathbb{Z}/p\mathbb{Z})^{r(q+1)}</math>, where <math>r</math> is the rank of the <math>p</math>-Sylow subgroup of <math>M</math>
|-
| <math>M</math> is a finitely generated abelian group || all isomorphic to <math>(\mathbb{Z}/p\mathbb{Z})^{r(q + 1) + s(q + 3)/2}</math>, where <math>r</math> is the rank of the <math>p</math>-Sylow subgroup of the torsion part of <math>M</math> and <math>s</math> is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of <math>M</math> || all isomorphic to <math>(\mathbb{Z}/p\mathbb{Z})^{r(q + 1) + sq/2}</math>, with <math>r</math> and <math>s</math> as before
|}

==Cohomology groups for trivial group action==

FACTS TO CHECK AGAINST (cohomology group for trivial group action): first cohomology group for trivial group action is naturally isomorphic to the group of homomorphisms; formula for the second cohomology group for trivial group action in terms of the Schur multiplier and abelianization; dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficients to homology with coefficients in the integers; cohomology group for trivial group action commutes with direct product in second coordinate; Kunneth formula for group cohomology.

===Over the integers===

The cohomology groups with coefficients in the integers are given as below:

<math>H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};\mathbb{Z}) = \left\lbrace \begin{array}{rl} (\mathbb{Z}/p\mathbb{Z})^{(q-1)/2}, & \qquad q = 1,3,5,\dots \\ (\mathbb{Z}/p\mathbb{Z})^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ \mathbb{Z}, & \qquad q = 0 \\\end{array}\right.</math>

The first few cohomology groups are given below:

{| border="1"
! <math>q</math> !! 0 !! 1 !! 2 !! 3 !! 4 !! 5
|-
| <math>H^q</math> || <math>\mathbb{Z}</math> || <math>0</math> || <math>(\mathbb{Z}/p\mathbb{Z})^2</math> || <math>\mathbb{Z}/p\mathbb{Z}</math> || <math>(\mathbb{Z}/p\mathbb{Z})^3</math> || <math>(\mathbb{Z}/p\mathbb{Z})^2</math>
|-
| rank of <math>H^q</math> as an elementary abelian <math>p</math>-group || -- || 0 || 2 || 1 || 3 || 2
|}

===Over an abelian group===

The cohomology groups with coefficients in an abelian group <math>M</math> are given as follows:

<math>H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q-1)/2} \oplus (\operatorname{Ann}_M(p))^{(q+3)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{(q+2)/2} \oplus (\operatorname{Ann}_M(p))^{q/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math>

Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>pM = \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>. These can be deduced from the homology groups with coefficients in the integers using the dual universal coefficients theorem for group cohomology.

===Important case types for abelian groups===

{| border="1"
! Case on <math>M</math> !! Odd-indexed cohomology groups, i.e., <math>H^q</math> for <math>q = 1,3,5,\dots</math> !! Even-indexed cohomology groups, i.e., <math>H^q</math> for <math>q = 2,4,6,\dots</math>
|-
| <math>M</math> is uniquely <math>p</math>-divisible, i.e., every element of <math>M</math> can be divided by <math>p</math> uniquely. This includes the case that <math>M</math> is a field of characteristic not <math>p</math>. || all zero groups || all zero groups
|-
| <math>M</math> is <math>p</math>-torsion-free, i.e., no nonzero element of <math>M</math> multiplies by <math>p</math> to give zero. || <math>(M/pM)^{(q-1)/2}</math> || <math>(M/pM)^{(q+2)/2}</math>
|-
| <math>M</math> is <math>p</math>-divisible, but not necessarily uniquely so, e.g., <math>M = \mathbb{Q}/\mathbb{Z}</math> || <math>(\operatorname{Ann}_M(p))^{(q+3)/2}</math> || <math>(\operatorname{Ann}_M(p))^{q/2}</math>
|-
| <math>M</math> is a finite abelian group || all isomorphic to <math>(\mathbb{Z}/p\mathbb{Z})^{r(q+1)}</math>, where <math>r</math> is the rank of the <math>p</math>-Sylow subgroup of <math>M</math> || all isomorphic to <math>(\mathbb{Z}/p\mathbb{Z})^{r(q+1)}</math>
|-
| <math>M</math> is a finitely generated abelian group || all isomorphic to <math>(\mathbb{Z}/p\mathbb{Z})^{r(q + 1) + s(q - 1)/2}</math>, where <math>r</math> is the rank of the <math>p</math>-Sylow subgroup of the torsion part of <math>M</math> and <math>s</math> is the free rank of <math>M</math> || all isomorphic to <math>(\mathbb{Z}/p\mathbb{Z})^{r(q + 1) + s(q + 2)/2}</math>, with <math>r</math> and <math>s</math> as before
|}
is -divisible, but not necessarily uniquely so, e.g., , any natural number is a finite abelian group isomorphic to where is the rank (i.e., minimum number of generators) for the -Sylow subgroup of isomorphic to where is the rank (i.e., minimum number of generators) for the -Sylow subgroup of is a finitely generated abelian group all isomorphic to where is the rank for the -Sylow subgroup of the torsion part of and is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of all isomorphic to where is the rank for the -Sylow subgroup of and is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of Tate cohomology groups for trivial group action PLACEHOLDER FOR INFORMATION TO BE FILLED IN: [SHOW MORE] Growth of ranks of cohomology groups Over the integers With the exception of the zeroth homology group and cohomology group, the homology groups and cohomology groups over the integers are all elementary abelian -groups. For the homology groups, the rank (i.e., dimension as a vector space over the field of elements) is a function of that is a sum of a linear function (of slope 1/2) and a periodic function (of period 2). The same is true for the cohomology groups, although the precise description of the periodic function differs. For homology groups, choosing the periodic function so as to have mean zero, we get that the linear function is and the periodic function is . For cohomology groups, choosing the periodic function so as to have mean zero, we get that the linear function is and the periodic function is . Note that: The intercept for the cohomology groups is 1/4, as opposed to the intercept of 3/4 for the homology groups. This is explained by the somewhat slower start of cohomology groups on account of being torsion-free. The periodic parts for homology groups and cohomology groups are negatives of each other, indicating an opposing pattern that is explained by looking at the dual universal coefficients theorem for group cohomology. Over the prime field If we take coefficients in the prime field , then the ranks of the homology and cohomology groups both grow as linear functions of . The linear function in both cases is . Note that in this case, the homology groups and cohomology groups are vector spaces over and the cohomology group is the vector space dual of the homology group. Note that there is no periodic part when we are working over the prime field.
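As an illustrative cross-check (not part of the original page; the function names are mine), a few lines of Python compare the closed-form ranks above with the Kunneth computation over the prime field:

# Ranks of H_q((Z/pZ)^2; F_p): over F_p every H_q(Z/pZ; F_p) has rank 1,
# and the Kunneth formula turns a direct product into a convolution of ranks.
def ranks_over_Fp(max_q):
    single = [1] * (max_q + 1)            # ranks for Z/pZ in degrees 0..max_q
    return [sum(single[i] * single[q - i] for i in range(q + 1))
            for q in range(max_q + 1)]

# Ranks of the integral homology groups from the closed form above
# (H_0 = Z is recorded as rank 0, since it is not elementary abelian).
def ranks_over_Z(max_q):
    return [0 if q == 0 else (q + 3) // 2 if q % 2 else q // 2
            for q in range(max_q + 1)]

print(ranks_over_Fp(5))   # [1, 2, 3, 4, 5, 6], the linear function q + 1
print(ranks_over_Z(5))    # [0, 2, 1, 3, 2, 4], matching the table above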
You are quite right, there is no objective reason why we should restrict ourselves to canonical transformations. As a matter of fact, the Hamilton equations can be formulated in a coordinate-independent way (by coordinates I mean the coordinates on phase space, i.e. the configuration variables and conjugate momenta), so that the need to choose coordinates does not even arise. The coordinate-independent formulation is given by the following equation: $$\omega(X,\cdot)=dH$$ Here $H$ is your Hamiltonian and $dH$ is its differential, $X$ is the tangent vector to the dynamical trajectory on phase space and $\omega$ is called a symplectic form. It is a 2-form (i.e. it is an antisymmetric tensor that accepts two vectors as its arguments) which moreover is closed, in the sense that $d\omega=0$, where the (exterior) differential of $\omega$ is defined in arbitrary coordinates $x^{I}$ (both configuration and momentum variables) as $$(d\omega)_{IJK}=\frac{\partial \omega_{JK}}{\partial x^{I}}+\frac{\partial \omega_{KI}}{\partial x^{J}}+\frac{\partial \omega_{IJ}}{\partial x^{K}}$$ The vector $X$ is defined in the same coordinates as $$X^{I}=\frac{dx^{I}}{dt}$$ Now, there is a theorem due to Darboux that says that if the phase space that you are considering admits a symplectic form, then locally one can find (non-unique) coordinates $(q^{i},p_{i})$ (the number of $I$ indices is double the number of $i$ indices) such that $$\omega=dq^{i}\wedge d{p}_{i}$$ where $\wedge$ is the wedge product, i.e. the antisymmetric part of the regular tensor product multiplied by two. Equivalently, $$\omega(X,Y)=X^{i}Y_{i}-X_{i}Y^{i}$$ where $X^{i}$ and $X_{i}$ are the components of the vector $X$ with respect to the given coordinates (the same goes for $Y$). If $X$ is the tangent vector to a curve on phase space, then $$X^{i}=\frac{dq^{i}}{dt}\qquad X_{i}=\frac{dp_{i}}{dt}$$ Canonical transformations are precisely those that keep $\omega$ invariant in the form given above. There is also a theorem (rather, a definition) that endows the phase spaces that you derive from Lagrangian mechanics with a canonical symplectic form, so that the Darboux theorem always holds on such phase spaces (they are called cotangent bundles over configuration spaces). Let's now see what the Hamilton equations given in the coordinate-free form translate into with respect to the Darboux coordinates. We have $$dH=\frac{\partial H}{\partial q^{i}}\,dq^{i}+\frac{\partial H}{\partial p_{i}}\,dp_{i}$$ and $$\omega(X,\cdot)=dq^{i}(X)dp_{i}-dp_{i}(X)dq^{i}=X^{i}dp_{i}-X_{i}dq^{i}=\frac{dq^{i}}{dt}\,dp_{i}-\frac{dp_{i}}{dt}\,dq^{i}$$ Hence $$\frac{dq^{i}}{dt}\,dp_{i}-\frac{dp_{i}}{dt}\,dq^{i}=\frac{\partial H}{\partial q^{i}}\,dq^{i}+\frac{\partial H}{\partial p_{i}}\,dp_{i}$$ or $$\frac{dq^{i}}{dt}=\frac{\partial H}{\partial p_{i}}\qquad \frac{dp_{i}}{dt}=-\frac{\partial H}{\partial q^{i}}$$ which are the usual Hamilton equations. However, since $\omega(X,\cdot)=dH$ does not make reference to any specific coordinates, the Darboux coordinates are as good as any other set of coordinates. Alternatively, one may just start from the usual form of the Hamilton equations and define arbitrary changes of coordinates, as you have figured out. So what is the use of coordinate transformations? There are two answers to this question. The first one is that the Darboux coordinates, which are transformed into one another by canonical transformations, are the ones in which the Hamilton equations take arguably the simplest form.
But you may not care about this simplification. Then there is a second, more relevant answer: it can be shown that any transformation on the configuration space (the $q^{i}$'s) which is a symmetry of the Lagrangian action of your system lifts automatically to a canonical transformation on the phase space of the Hamiltonian formulation, meaning that to any symmetry transformation of the configuration space is associated a canonical transformation of the phase space. Therefore, if you focus on symmetries rather than on the Hamilton equations, you find that the canonical transformations on phase space are the ones that realize a symmetry transformation on the system: solutions of the dynamics which are related to one another through a symmetry transformation are also related by a canonical transformation.
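To make the translation to Darboux coordinates concrete, here is a minimal numerical sketch in Python (my own illustration, assuming a harmonic-oscillator Hamiltonian with one degree of freedom): in coordinates $z=(q,p)$, the equation $\omega(X,\cdot)=dH$ becomes $dz/dt = J\,\nabla H$ with $J=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$.

import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])    # matrix form of dq/dt = dH/dp, dp/dt = -dH/dq

def grad_H(z, m=1.0, k=1.0):
    q, p = z
    return np.array([k * q, p / m])        # H = p^2/(2m) + k q^2/2

z = np.array([1.0, 0.0])                   # start at q = 1, p = 0
dt = 1e-4
for _ in range(10000):                     # integrate to t = 1 with a crude Euler step
    z = z + dt * (J @ grad_H(z))

print(z)                                   # close to (cos 1, -sin 1) = (0.540, -0.841)

The matrix J here is just $\omega$ written in Darboux coordinates; in any other coordinates the same coordinate-free equation holds, with the transformed components of $\omega$ in place of J.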
Club sets and stationary sets Closed and unbounded subsets of ordinals, more commonly referred to as club sets, play a prominent role in modern set theory. We intuitively think of clubs as the "large" subsets of $\kappa$ and the stationary subsets as the "not small" subsets of $\kappa$, though this is a rather coarse way to look at them. They arise from considering the natural topology on the class of ordinals and often exhibit substantial reflection properties. Given an ordinal $\kappa$, the basic open intervals are pairs of ordinals $(\alpha, \beta)=\{\gamma : \alpha <\gamma < \beta\}$ where $\beta <\kappa$. Closed intervals are defined similarly, so closed intervals are topologically closed. Considering a typical interval of ordinals $[\lambda, \lambda+1, \lambda+2, \dots)$, it appears there are more successor ordinals than limits, but club (and also stationary) sets favor limit ordinals in the sense that they concentrate on them. Hence the opposite viewpoint is more useful when considering club sets, i.e., there are "more" limit ordinals. Club sets Although the definition can be applied to all infinite ordinals, we assume $\kappa >\omega$ is a regular cardinal for this and subsequent sections. A set $C\subseteq \kappa$ is closed unbounded, or club, in $\kappa$ if and only if $C$ is unbounded in $\kappa$: for every $\alpha <\kappa$ there is some $\beta \in C$ with $\beta$ occurring above $\alpha$ in the natural ordering; and $C$ is also closed: if $B\subseteq C$ is bounded in $\kappa$ (i.e., there is some $\gamma\in \kappa$ with $\beta\leq \gamma$ for each $\beta\in B$), then $\sup(B)\in C$. Equivalently, if $\lambda < \kappa$ and $\lambda$ is a limit with $C\cap\lambda$ unbounded in $\lambda$, then $\lambda\in C$. Typical examples of club sets include the collection of limit ordinals below $\kappa$, the collection of limits of limit ordinals below $\kappa$, and also all "tails" in $\kappa$: $\{\lambda : \alpha\leq \lambda <\kappa\}$ for each $\alpha <\kappa$. It is fairly straightforward to construct a club subset of $\kappa$: start with any ordinal $\gamma_0 < \kappa$; given $\gamma_\alpha$, arbitrarily pick $\gamma_{\alpha +1}$ with $\gamma_\alpha < \gamma_{\alpha+1} < \kappa$; at limit stages, take the supremum of the sequence already constructed (this stays below $\kappa$ by regularity). The range of the resulting sequence $\langle\gamma_\alpha\rangle_{\alpha < \kappa}$ is club in $\kappa$. It is clear that club subsets of $\kappa$ all have size $\kappa$ and their enumeration functions $f:\kappa\rightarrow\kappa$ are all continuous and increasing. The intersection of two club subsets of $\kappa$ is also club in $\kappa$. In fact, given any sequence of fewer than $\kappa$-many club subsets of $\kappa$, their intersection is also club in $\kappa$. Further, the collection of club subsets is closed under diagonal intersections of $\kappa$-many clubs, a fact used in characterizing the stationary subsets of $\kappa$. In particular, the club subsets of $\kappa$ generate a $\kappa$-complete filter over $\kappa$. Note that the intersection of $\kappa$-many clubs might be empty (intersect all the tails), so $\kappa$-completeness is the best possible; this filter is also not an ultrafilter in general. Stationary sets A set $S\subseteq \kappa$ is stationary in $\kappa$ if $S$ intersects all club subsets of $\kappa$. As mentioned above, one intuitively thinks of the collection of stationary subsets of $\kappa$ as the "not small" subsets of $\kappa$.
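Since the diagonal intersection is mentioned above without a formula, here it is in the usual notation (an addition for reference; the convention is standard but not fixed by this page):

$$\triangle_{\alpha<\kappa} C_\alpha \;=\; \{\beta < \kappa \;:\; \beta \in C_\alpha \text{ for all } \alpha < \beta\}.$$

The club filter being closed under this operation is what is meant by saying it is a normal filter, and normality is the property behind the regressive-function characterization of stationarity given by Fodor's lemma below.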
Several facts about stationary sets are immediate: all club subsets of $\kappa$ are also stationary in $\kappa$; the supremum of a stationary subset of $\kappa$ is $\kappa$; the intersection of a club set with a stationary set is stationary; and if a stationary set $S$ is the union of fewer than $\kappa$-many sets $S_\alpha$, then at least one of the $S_\alpha$ is also stationary. In other words, stationary subsets of $\kappa$ cannot be partitioned into a small number of small sets. For a given regular cardinal $\kappa$, particular stationary sets of interest are the sets of ordinals of fixed cofinality, such as $S^\kappa_\lambda = \{\alpha < \kappa : \operatorname{cf}(\alpha) = \lambda\}$ for regular $\lambda < \kappa$. Fodor's Lemma (improving upon Alexandrov–Urysohn, 1929) is the basic, fundamental result concerning the concept of stationarity. Call a function $f:\kappa\to\kappa$ regressive if $f(\alpha) < \alpha$ for all nonzero $\alpha < \kappa$. Fodor's lemma reads: if $f$ is a regressive function with domain a stationary subset $S$ of $\kappa$, then there is some stationary subset $S'$ of $S$ on which $f$ is constant. Using Fodor's lemma, Solovay proved that each stationary subset of $\kappa$ can be split into two, in fact into $\kappa$-many, disjoint stationary sets. Another application of Fodor's lemma is a result concerning families of sets that are as spread out as possible, i.e., any two distinct sets in the family have the same pairwise intersection (the root of the family). The result is more popularly known as the $\Delta$-system Lemma (originally established by Marczewski): given a family of finite sets (infinite sets usually require CH) of size $\kappa$, there is a subfamily of size $\kappa$ which forms a $\Delta$-system.
7. Product and Process Comparisons
7.2. Comparisons based on data from one process

Testing proportion defective is based on the binomial distribution

The proportion of defective items in a manufacturing process can be monitored using statistics based on the observed number of defectives in a random sample of size \(N\) from a continuous manufacturing process, or from a large population or lot. The proportion defective in a sample follows the binomial distribution, where \(p\) is the probability of an individual item being found defective. Questions of interest for quality control are whether the proportion defective equals, exceeds, or falls below a stated target value \(p_0\).

Hypotheses regarding proportion defective

The corresponding hypotheses that can be tested are:

1. \(H_0: p = p_0\) against \(H_a: p \ne p_0\)
2. \(H_0: p \ge p_0\) against \(H_a: p < p_0\)
3. \(H_0: p \le p_0\) against \(H_a: p > p_0\)

Test statistic based on a normal approximation

Given a random sample of measurements \(Y_1, \, \ldots, \, Y_N\) from a population, the proportion of items that are judged defective from these \(N\) measurements is denoted \(\hat{p}\). The test statistic
$$ \large z = \frac{\hat{p} - p_0}{\sqrt{\frac{p_0(1-p_0)}{N}}} \, , $$
approximately follows the standard normal distribution when the null hypothesis holds.

Restriction on sample size

Because the test is approximate, \(N\) needs to be large for the test to be valid. One criterion is that \(N\) should be chosen so that
$$ \min\{ N p_0, \, N(1-p_0)\} \ge 5 \, . $$
For example, if \(p_0\) = 0.1, then \(N\) should be at least 50, and if \(p_0\) = 0.01, then \(N\) should be at least 500. Criteria for choosing a sample size in order to guarantee detecting a change of size \(\delta\) are discussed on another page.

One- and two-sided tests for proportion defective

Tests at the \(1-\alpha\) confidence level corresponding to hypotheses (1), (2), and (3) are shown below. For hypothesis (1), the test statistic \(z\) is compared with \(z_{1-\alpha/2}\), the critical value from the normal distribution that is exceeded with probability \(\alpha/2\), and similarly for (2) and (3). If \(|z| \ge z_{1-\alpha/2}\), the null hypothesis in (1) is rejected; if \(z \le -z_{1-\alpha}\), the null hypothesis in (2) is rejected; and if \(z \ge z_{1-\alpha}\), the null hypothesis in (3) is rejected.

Example of a one-sided test for proportion defective

After a new method of processing wafers was introduced into a fabrication process, two hundred wafers were tested, and twenty-six showed some type of defect. Thus, for \(N\) = 200, the proportion defective is estimated to be \(\hat{p}\) = 26/200 = 0.13. In the past, the fabrication process was capable of producing wafers with a proportion defective of at most 0.10. The issue is whether the new process has degraded the quality of the wafers. The relevant test is the one-sided test (3), which guards against an increase in the proportion defective from its historical level.

Calculations for a one-sided test of proportion defective

For a test at significance level \(\alpha\) = 0.05, the hypothesis of no degradation is retained if the test statistic \(z\) is less than the critical value \(z_{0.95}\) = 1.645. The test statistic is computed to be
$$ \large z = \frac{\hat{p} - p_0}{\sqrt{\frac{p_0(1-p_0)}{N}}} = \frac{0.13 - 0.10}{\sqrt{\frac{0.10(0.90)}{200}}} = 1.414 \, . $$

Interpretation

Because the test statistic is less than the critical value (1.645), we cannot reject the null hypothesis in (3) and, therefore, we cannot conclude that the new fabrication method is degrading the quality of the wafers. The new process may, indeed, be worse, but more evidence would be needed to reach that conclusion at the 95 % confidence level.
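For reproducibility, here is a small Python sketch of the wafer example (the scipy call is my choice; the handbook itself works from normal tables):

from math import sqrt
from scipy.stats import norm

p_hat, p0, N, alpha = 26 / 200, 0.10, 200, 0.05

z = (p_hat - p0) / sqrt(p0 * (1 - p0) / N)   # test statistic
z_crit = norm.ppf(1 - alpha)                 # z_{0.95} ~ 1.645 for test (3)
p_value = 1 - norm.cdf(z)                    # one-sided p-value

print(z, z_crit, p_value)   # 1.414 < 1.645, p ~ 0.079: do not reject H0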
5. Classical Reasoning

If we take all the rules of propositional logic we have seen so far and exclude reductio ad absurdum, or proof by contradiction, we have what is known as intuitionistic logic. In intuitionistic logic, it is possible to view proofs in computational terms: a proof of \(A \wedge B\) is a proof of \(A\) paired with a proof of \(B\), a proof of \(A \to B\) is a procedure which transforms evidence for \(A\) into evidence for \(B\), and a proof of \(A \vee B\) is a proof of one or the other, tagged so that we know which is the case. The ex falso rule makes sense only because we expect that there is no proof of falsity; it is like the empty data type.

Proof by contradiction does not fit in well with this world view: from a proof of a contradiction from \(\neg A\), we are supposed to magically produce a proof of \(A\). We will see that with proof by contradiction, we can prove the following law, known as the law of the excluded middle: \(\forall A, A \vee \neg A\). From a computational perspective, this says that for every \(A\) we can decide whether or not \(A\) is true.

Classical reasoning does introduce a number of principles into logic, however, that can be used to simplify reasoning. In this chapter, we will consider these principles, and see how they follow from the basic rules.

5.1. Proof by Contradiction

Remember that in natural deduction, proof by contradiction is expressed by a pattern in which the assumption \(\neg A\) is canceled at the final inference. In Lean, the inference is named by_contradiction, and since it is a classical rule, we have to use the command open classical before it is available. Once we do so, the pattern of inference is expressed as follows:

open classical
variable (A : Prop)

example : A :=
by_contradiction
  (assume h : ¬ A,
    show false, from sorry)

One of the most important consequences of this rule is a classical principle that we mentioned above, namely, the law of the excluded middle, which asserts that the following holds for all \(A\): \(A \vee \neg A\). In Lean we denote this law by em. In mathematical arguments, one often splits a proof into two cases, assuming first \(A\) and then \(\neg A\). Using the elimination rule for disjunction, this is equivalent to using \(A \vee \neg A\), which is the excluded middle principle for this particular \(A\).

Here is a proof of em, using proof by contradiction, rendered in Lean:

open classical
variable (A : Prop)

example : A ∨ ¬ A :=
by_contradiction
  (assume h1 : ¬ (A ∨ ¬ A),
    have h2 : ¬ A, from
      assume h3 : A,
      have h4 : A ∨ ¬ A, from or.inl h3,
      show false, from h1 h4,
    have h5 : A ∨ ¬ A, from or.inr h2,
    show false, from h1 h5)

The principle is known as the law of the excluded middle because it says that a proposition A is either true or false; there is no middle ground. As a result, the theorem is named em in the Lean library. For any proposition A, em A denotes a proof of A ∨ ¬ A, and you are free to use it any time classical is open:

open classical

example (A : Prop) : A ∨ ¬ A :=
or.elim (em A)
  (assume : A, or.inl this)
  (assume : ¬ A, or.inr this)

Or even more simply:

open classical

example (A : Prop) : A ∨ ¬ A :=
em A

In fact, we can go in the other direction, and use the law of the excluded middle to justify proof by contradiction. You are asked to do this in the exercises. Proof by contradiction is also equivalent to the principle \(\neg \neg A \leftrightarrow A\).
The implication from right to left holds intuitionistically; the other implication is classical, and is known as double-negation elimination. Here is the corresponding proof in Lean:

open classical

example (A : Prop) : ¬ ¬ A ↔ A :=
iff.intro
  (assume h1 : ¬ ¬ A,
    show A, from by_contradiction
      (assume h2 : ¬ A,
        show false, from h1 h2))
  (assume h1 : A,
    show ¬ ¬ A, from assume h2 : ¬ A, h2 h1)

In the next section, we will derive a number of classical rules and equivalences. These are tricky to prove. In general, to use classical reasoning in natural deduction, we need to extend the general heuristic presented in Section 3.3 as follows:

First, work backward from the conclusion, using the introduction rules.
When you have run out of things to do in the first step, use elimination rules to work forward.
If all else fails, use a proof by contradiction.

Sometimes a proof by contradiction is necessary, but when it isn't, it can be less informative than a direct proof. Suppose, for example, we want to prove \(A \wedge B \wedge C \to D\). In a direct proof, we assume \(A\), \(B\), and \(C\), and work towards \(D\). Along the way, we will derive other consequences of \(A\), \(B\), and \(C\), and these may be useful in other contexts. If we use proof by contradiction, on the other hand, we assume \(A\), \(B\), \(C\), and \(\neg D\), and try to prove \(\bot\). In that case, we are working in an inconsistent context; any auxiliary results we may obtain that way are subsumed by the fact that ultimately \(\bot\) is a consequence of the hypotheses.

5.2. Some Classical Principles

We have already seen that \(A \vee \neg A\) and \(\neg \neg A \leftrightarrow A\) are two important theorems of classical propositional logic. In this section we will provide some more theorems, rules, and equivalences. Some will be proved here, but most will be left to you in the exercises. In ordinary mathematics, these are generally used without comment. It is nice to know, however, that they can all be justified using the basic rules of classical natural deduction.

If \(A \to B\) is any implication, the assertion \(\neg B \to \neg A\) is known as the contrapositive. Every implication implies its contrapositive, and the other direction is true classically. Here is another example: intuitively, asserting "if A then B" is equivalent to saying that it cannot be the case that A is true and B is false; classical reasoning is needed to get us from the second statement to the first. Here are both proofs, rendered in Lean:

open classical
variables (A B : Prop)

example (h : ¬ B → ¬ A) : A → B :=
assume h1 : A,
show B, from
  by_contradiction
    (assume h2 : ¬ B,
      have h3 : ¬ A, from h h2,
      show false, from h3 h1)

example (h : ¬ (A ∧ ¬ B)) : A → B :=
assume : A,
show B, from
  by_contradiction
    (assume : ¬ B,
      have A ∧ ¬ B, from and.intro ‹A› this,
      show false, from h this)

Notice that in the second example, we used an anonymous assume and an anonymous have. We used the brackets, obtained by typing \f< and \f>, to write ‹A›, referring back to the first assumption. The first use of the word this refers back to the assumption ¬ B, while the second one refers back to the have. Knowing that we can prove the law of the excluded middle, it is convenient to use it in classical proofs.
Here is an example, with a proof of \((A \to B) \vee (B \to A)\) in Lean:

open classical
variables (A B : Prop)

example : (A → B) ∨ (B → A) :=
or.elim (em B)
  (assume h : B,
    have A → B, from
      assume : A,
      show B, from h,
    show (A → B) ∨ (B → A), from or.inl this)
  (assume h : ¬ B,
    have B → A, from
      assume : B,
      have false, from h this,
      show A, from false.elim this,
    show (A → B) ∨ (B → A), from or.inr this)

Using classical reasoning, implication can be rewritten in terms of disjunction and negation:

\(A \to B \leftrightarrow \neg A \vee B\)

The forward direction requires classical reasoning.

The following equivalences are known as De Morgan's laws:

\(\neg (A \vee B) \leftrightarrow \neg A \wedge \neg B\)
\(\neg (A \wedge B) \leftrightarrow \neg A \vee \neg B\)

The forward direction of the second of these requires classical reasoning. Using these identities, we can always push negations down to propositional variables. For example, we have

\(\neg (A \wedge (B \vee \neg C)) \leftrightarrow \neg A \vee (\neg B \wedge C)\)

A formula built up from \(\wedge\), \(\vee\), and \(\neg\) in which negations only occur at variables is said to be in negation normal form. In fact, using distributivity laws, one can go on to ensure that all the disjunctions are on the outside, so that the formula is a big "or" of "and"s of propositional variables and negated propositional variables. Such a formula is said to be in disjunctive normal form. Alternatively, all the "and"s can be brought to the outside. Such a formula is said to be in conjunctive normal form. An exercise below, however, shows that putting formulas in disjunctive or conjunctive normal form can make them much longer.

5.3. Exercises

1. Show how to derive the proof-by-contradiction rule from the law of the excluded middle, using the other rules of natural deduction. In other words, assume you have a proof of \(\bot\) from \(\neg A\). Using \(A \vee \neg A\) as a hypothesis, but without using the rule RAA, show how you can go on to derive \(A\).

2. Give a natural deduction proof of \(\neg (A \wedge B)\) from \(\neg A \vee \neg B\). (You do not need to use proof by contradiction.)

3. Construct a natural deduction proof of \(\neg A \vee \neg B\) from \(\neg (A \wedge B)\). You can do it as follows: first, prove \(\neg B\), and hence \(\neg A \vee \neg B\), from \(\neg (A \wedge B)\) and \(A\). Use this to construct a proof of \(\neg A\), and hence \(\neg A \vee \neg B\), from \(\neg (A \wedge B)\) and \(\neg (\neg A \vee \neg B)\). Use this to construct a proof of a contradiction from \(\neg (A \wedge B)\) and \(\neg (\neg A \vee \neg B)\). Using proof by contradiction, this gives you a proof of \(\neg A \vee \neg B\) from \(\neg (A \wedge B)\).

4. Give a natural deduction proof of \(P\) from \(\neg P \to (Q \vee R)\), \(\neg Q\), and \(\neg R\).

5. Give a natural deduction proof of \(\neg A \vee B\) from \(A \to B\). You may use the law of the excluded middle.

6. Give a natural deduction proof of \(A \to ((A \wedge B) \vee (A \wedge \neg B))\). You may use the law of the excluded middle.

7. Put \((A \vee B) \wedge (C \vee D) \wedge (E \vee F)\) in disjunctive normal form, that is, write it as a big "or" of multiple "and" expressions.

8. Prove ¬ (A ∧ B) → ¬ A ∨ ¬ B by replacing the sorry's below by proofs.

open classical
variables {A B C : Prop}

-- Prove ¬ (A ∧ B) → ¬ A ∨ ¬ B by replacing the sorry's below
-- by proofs.
lemma step1 (h₁ : ¬ (A ∧ B)) (h₂ : A) : ¬ A ∨ ¬ B :=
have ¬ B, from sorry,
show ¬ A ∨ ¬ B, from or.inr this

lemma step2 (h₁ : ¬ (A ∧ B)) (h₂ : ¬ (¬ A ∨ ¬ B)) : false :=
have ¬ A, from
  assume : A,
  have ¬ A ∨ ¬ B, from step1 h₁ ‹A›,
  show false, from h₂ this,
show false, from sorry

theorem step3 (h : ¬ (A ∧ B)) : ¬ A ∨ ¬ B :=
by_contradiction
  (assume h' : ¬ (¬ A ∨ ¬ B),
    show false, from step2 h h')

Also do these:

open classical
variables {A B C : Prop}

example (h : ¬ B → ¬ A) : A → B :=
sorry

example (h : A → B) : ¬ A ∨ B :=
sorry
I'm trying to find the expectation value for $p^2$, where $p = i\sqrt{\frac{\hbar m\omega}{2}}(a_{+} - a_{-})$, and I end up with the following result: \begin{align*} \langle \psi_0|p^2|\psi_0\rangle &= -\frac{\hbar m\omega}{2}\langle\psi_0|(a_{+} - a_{-})^2|\psi_0\rangle\\ &= -\frac{\hbar m\omega}{2}\langle\psi_0|a_{+}^2 - a_{+}a_{-} - a_{-}a_{+} + (a_{-})^2|\psi_0\rangle\\ \Rightarrow \langle\psi_0|p^2|\psi_0\rangle &= -\frac{\hbar m\omega}{2}(\langle\psi_0|a_{+}^2|\psi_0\rangle -\langle\psi_0|a_{-}a_{+}|\psi_0\rangle)\\ &= -\frac{\hbar m\omega}{2}(\langle a_{+}\psi_0|a_{+}\psi_0\rangle - \langle a_{-}\psi_0|a_{+}\psi_0\rangle)\\ &= -\frac{\hbar m\omega}{2}(\langle\psi_1|\psi_1\rangle - \langle0|0\rangle)\\ &= -\frac{\hbar m \omega}{2} \end{align*} Here I've used the fact that $a_{+}a_{-}\psi_0 = 0$ and $a_{-}a_{+}\psi_0 = \psi_0$. I can see that the minus sign appears because of the imaginary number, but I must be missing something, because the result is not supposed to be negative. Your mistake is that $$ \langle 0|\hat{a}^{\dagger} \neq \langle 1 | $$ In fact, $$ \langle 0 |\hat{a}^{\dagger} = (\hat{a}|0\rangle)^{\dagger} = 0 $$
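A quick numerical sanity check (my own addition, not from the thread): represent the ladder operators as truncated matrices, in units where hbar = m = omega = 1, and the expectation value comes out +1/2 as expected.

import numpy as np

d = 20                                        # truncation size (arbitrary)
a = np.diag(np.sqrt(np.arange(1, d)), k=1)    # annihilation: a|n> = sqrt(n)|n-1>
adag = a.T                                    # creation operator a+

p = 1j * np.sqrt(0.5) * (adag - a)            # p = i sqrt(hbar m w / 2) (a+ - a-)
psi0 = np.zeros(d)
psi0[0] = 1.0                                 # ground state |0>

print(np.real(psi0 @ (p @ p) @ psi0))         # 0.5, i.e. +hbar m w / 2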
The "universe" is a sphere with a radius of $10^{25}$ m the medium temperature is 3K, how many photons there are in the universe? $$n_\gamma = \int_{0}^{\infty} \frac{8h\pi\nu^3} {{c^3}{}} \frac{1} {{e^\frac{h\nu} {KT} -1}{}}d\nu = 2.4\frac{8\pi} {c^3} (\frac{KT} {h})^3 \simeq 1.64* 10^{17} photons$$ but according to previous answers and other references... the number is much bigger $$10^{89} $$ where is the problem in my tentative? As you can see in my tentative, I would be better if the answer is based on "classical thermodynamic" using plack distribution, and a boltzmann-like point of view. Estimation based on cosmological facts are also welcome. When you solve the integral while you do some substitution you have to solve an integral like this $$ \int_{0}^{\infty} \frac{x^2} {{e^x -1}{}} dx \simeq2.4$$
When Functions Are Equal to Their Taylor Series

So far we have assumed that we could find a power series representation for functions. However, not every function is equal to the sum of its Taylor series. We note that $f(x)$ is equal to its Taylor series if $\displaystyle\lim_{n\to\infty}T_n(x)=f(x)$, i.e., the sequence of Taylor polynomials (the partial sums of the series) converges to $f(x)$. We define the remainder $R_n(x) = f(x) - T_n(x)$, so that $f$ equals its Taylor series at $x$ exactly when $R_n(x) \to 0$.

We have some theorems to help determine if this remainder converges to zero, by finding a bound on its size. Taylor's inequality states that if $|f^{(n+1)}(t)| \le M$ for all $t$ between $a$ and $x$, then $$|R_n(x)| \le \frac{M}{(n+1)!}\,|x-a|^{n+1}.$$ From this inequality, we can determine that the remainders for $e^x$ and $\sin(x)$, for example, go to zero as $n \to \infty$ (see the video below), so these functions are analytic and are equal to their Taylor series.

The Remainder Theorem is similar to Rolle's theorem and the Mean Value Theorem, both of which involve a mystery point between $a$ and $b$. The proof of Taylor's theorem involves repeated application of Rolle's theorem, as is explained in this video.
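As a quick illustration (my own sketch, not part of the original page), one can watch the remainder for $e^x$ and the bound from Taylor's inequality both go to zero:

import math

def taylor_exp(x, n):
    # n-th Taylor polynomial of e^x about a = 0
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 2.0
M = math.exp(abs(x))    # bounds every derivative of e^x between 0 and 2
for n in (2, 5, 10, 15):
    remainder = abs(math.exp(x) - taylor_exp(x, n))
    bound = M * abs(x)**(n + 1) / math.factorial(n + 1)
    print(n, remainder, bound)   # remainder stays below the bound; both -> 0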
Bhattacharyya, T and Mohandas, JP (2005) Two-Parameter Uniformly Elliptic Sturm–Liouville Problems with Eigenparameter-Dependent Boundary Conditions. In: Proceedings of the Edinburgh Mathematical Society, 48 (3). pp. 531-547.

Abstract

We consider the two-parameter Sturm–Liouville system $-y_1'' + q_1y_1 = (\lambda r_{11} + \mu r_{12})y_1\quad\text{on } [0,1], $ with the boundary conditions $\frac{y_1'(0)}{y_1(0)} = \cot\alpha_1\quad\text{and}\quad\frac{y_1'(1)}{y_1(1)} = \frac{a_1\lambda + b_1}{c_1\lambda + d_1}, $ and $-y_2'' + q_2y_2 = (\lambda r_{21} + \mu r_{22})y_2\quad\text{on } [0,1], $ with the boundary conditions $\frac{y_2'(0)}{y_2(0)} = \cot\alpha_2\quad\text{and}\quad\frac{y_2'(1)}{y_2(1)} = \frac{a_2\mu + b_2}{c_2\mu+d_2}, $ subject to the uniform-left-definiteness and uniform-ellipticity conditions, where $q_{i}$ and $r_{ij}$ are continuous real-valued functions on $[0,1]$, the angle $\alpha_{i}$ is in $[0,\pi)$, and $a_{i}$, $b_{i}$, $c_{i}$, $d_{i}$ are real numbers with $\delta_{i} = a_{i}d_{i}-b_{i}c_{i}>0$ and $c_{i}\neq 0$ for $i,j = 1,2$. Results are given on asymptotics, oscillation of eigenfunctions and location of eigenvalues.

Additional Information: Copyright of this article belongs to Cambridge University Press.

Keywords: Sturm–Liouville equations; definiteness conditions; eigencurves; oscillation theorems.
Complex number A complex number is an ordered pair of real numbers. (A real number may take any value from -infinity to +infinity. Real numbers are commonly represented as points on the "real number line", i.e., a straight line of infinite length.) The two components of a complex number (a,b) are the real part (a) and the imaginary part (b). Complex numbers may be represented as points on an infinite two-dimensional plane surface, with the real part as the "X" coordinate and the imaginary part as the "Y" coordinate. The operations of addition and multiplication are defined for complex numbers: (a,b) + (c,d) = (a+c, b+d), and (a,b) x (c,d) = (ac-bd, ad+bc) Complex numbers may also be represented using "i" (or "j" in engineering contexts). The symbol "i" refers to the complex number (0,1). If "i" is interpreted as the square root of -1, we can write complex numbers in the form (a,b) = a + ib The addition and multiplication operators work out in a simple way if we remember to collect real and imaginary terms and remember that i x i = -1. Thus, (a+ib) x (c+id) = ac+aid+ibc+iibd = ac+i(ad+bc)+(-1)bd = ac-bd + i(ad+bc) Complex numbers are often used in scientific and engineering applications to describe systems where the amplitude and phase of a narrow-band signal are important. If V = (re, im) is a complex value (say a voltage), the amplitude and phase of V are <math>Amp = \sqrt{re^2 + im^2}\,</math> and <math>Phase = \arctan \big(\frac{im}{re}\big)</math> A sinusoidal voltage with frequency <math>\omega = 2 \pi F</math> may be considered to be the real part of a complex voltage <math>V(t) = V_0\, \exp(j \omega t+ j\phi) = V_0\, ( \cos(\omega t+\phi) + j \sin(\omega t+\phi)\,)</math> with amplitude <math>V_0</math> and phase <math>\phi</math>.
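Python's built-in complex type implements exactly these rules; a short illustration (the numerical values are arbitrary):

import cmath
from math import pi

V = complex(3.0, 4.0)        # V = (re, im) = 3 + 4i
print(abs(V))                # amplitude: sqrt(3**2 + 4**2) = 5.0
print(cmath.phase(V))        # phase: atan2(4, 3), about 0.927 rad

# a sinusoidal voltage as the real part of a complex exponential
V0, omega, phi, t = 5.0, 2 * pi * 60, 0.3, 0.001
v = V0 * cmath.exp(1j * (omega * t + phi))
print(v.real)                # instantaneous value V0 cos(omega t + phi)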
Suppose you need to generate all possible bitmasks of length $n$ with $m$ bits set, i.e., all bitmasks with $n$ bits of length having $m$ bits equal to $1$ and $(n-m)$ bits equal to $0$. For small values of $n$, the desired bitmasks can be easily hard-coded, but for general values of $n$ and $m$, this ceases to be a practical way to accomplish the task since the total number $N$ of bitmasks that need to be generated can be very large: $$ N = \binom{n}{m} = \frac{n!}{m!\,(n-m)!} $$ One way to solve this problem is by using generator functions. A generator function is a special type of function which can be used in iterative loops: it behaves like an iterator and serves the purpose of dynamically generating sequences of elements in a simplified way. Before explaining all this in more detail, take a look at the code below; it solves the problem stated above (the values of $m$ and $n$ are assumed to satisfy $0 \leq m \leq n$):

from bitarray import bitarray

def bitmasks(n, m):
    if m < n:
        if m > 0:
            for x in bitmasks(n - 1, m - 1):
                yield bitarray([1]) + x
            for x in bitmasks(n - 1, m):
                yield bitarray([0]) + x
        else:
            yield n * bitarray('0')
    else:
        yield n * bitarray('1')

for b in bitmasks(4, 2):
    print(b)

The output of the code above is shown below:

bitarray('1100')
bitarray('1010')
bitarray('1001')
bitarray('0110')
bitarray('0101')
bitarray('0011')

Here we store each bitmask on a special data structure called bitarray, which is appropriate for bitmasks as it contains lots of methods typically used with arrays of bits such as bitwise operations, encoding and decoding etc. On Ubuntu/Debian, support for bitarrays can be added by installing the python-bitarray package (or python3-bitarray for Python 3); for that you just need to open a terminal and run:

sudo apt-get install python-bitarray

If you wish to print the bitmasks in a cleaner way, apply the following change to the code above:

for b in bitmasks(4, 2):
    print(b.to01())

The output now becomes easier to read:

1100
1010
1001
0110
0101
0011

The bitmasks function is a generator function. Instead of returning a value, it returns a generator. Indeed, if you execute:

print(bitmasks(4, 2))

the output you will get will be similar to the one shown below:

<generator object bitmasks at 0x7fa2a0c99f78>

As mentioned above, this generator can be used in loops which iterate over a set of values. Roughly speaking, the generator will yield each value on the loop as needed by executing the contents of bitmasks until a yield statement is found, at which point the value from the yield statement is assigned to the loop variable and the function is frozen until the next iteration of the loop. In the example above, the bitmasks of length $n$ are built one by one at each iteration of the for loop and assigned to the loop variable b. Notice that yield is very different from return: yield produces the next value in the sequence and freezes the execution of the function, while return returns the specified value and terminates the execution of the function immediately. One interesting aspect of generator functions is the fact that since values are generated one by one (lazily), less memory is needed than if all elements were first generated and returned in a list.

Solution without the bitarray data structure

If you do not wish to store the generated bitmasks on bitarrays, you can also store them directly as integers.
The solution below implements this:

def bitmasks(n, m):
    if m < n:
        if m > 0:
            for x in bitmasks(n - 1, m - 1):
                yield (1 << (n - 1)) + x
            for x in bitmasks(n - 1, m):
                yield x
        else:
            yield 0
    else:
        yield (1 << n) - 1

# print each value as a 4 bit binary number
for b in bitmasks(4, 2):
    print('{:04b}'.format(b))

The implementation above does the same as the previous one and actually generates the bitmasks in the same order. Here is the output produced:

1100
1010
1001
0110
0101
0011
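As a side note, the same sequence of integers can also be produced with the standard library alone. This is an alternative sketch of mine (not from the original post) built on itertools.combinations; mapping each chosen position p to bit (n - 1 - p) reproduces the order shown above:

from itertools import combinations

def bitmasks_alt(n, m):
    # choose the m positions of the set bits directly
    for positions in combinations(range(n), m):
        yield sum(1 << (n - 1 - p) for p in positions)

for b in bitmasks_alt(4, 2):
    print('{:04b}'.format(b))

The trade-off is the same laziness as the generator-function versions, but with the recursion replaced by a well-tested library routine.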