In Carr and Madan (2005), the authors give sufficient conditions for a set of call prices to arise as integrals of a risk-neutral probability distribution (see Breeden and Litzenberger (1978)), and therefore be free of static arbitrage (via the Fundamental Theorem of Asset Pricing). These conditions are:

- Call spreads are non-negative
- Butterfly spreads are non-negative

In the case that we have a full range of call prices:

- $C(K)$ is monotonically decreasing
- $C(K)$ is convex

Or, if $C(K)$ is twice differentiable:

$$C'(K) \leq 0 \tag{1}$$
$$C''(K) \geq 0 \tag{2}$$

Carr and Madan do not mention the following constraints, though they may be implied (?):

$$C(K) \geq 0 \tag{3}$$
$$C(0) \text{ is equal to the discounted spot price} \tag{4}$$

Other authors do mention the constraints (1)-(4) together. For example, Fengler and Hin (2012) call these the "standard representation of no-arbitrage constraints". In Reiswich (2010), the author presents the following condition:

$$\frac{\partial P}{\partial K} \geq 0\tag{5a}$$

Or equivalently, via put-call parity:

$$\frac{\partial C}{\partial K} \geq \frac{C(K) - e^{-r\tau}S}{K}\tag{5b}$$

Reiswich claims that (5) is stricter than what is implied by (1)-(4), i.e. there are sets of call prices which satisfy (1)-(4) but not (5). Is this really true? If so, how do we reconcile this with Carr and Madan's claim of sufficiency?

Edit: Alternately, if (5) must hold in a no-arbitrage setting, and if (1)-(4) are sufficient, then how do we derive (5) from (1)-(4)?
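For concreteness, the constraints (1)-(4) can be checked numerically on a model price surface. Below is a sketch in Python using Black-Scholes call prices with hypothetical parameters and finite differences; it only illustrates the constraints and does not settle the question about (5):

```python
import math
import numpy as np

# Standard normal CDF (vectorized via math.erf to avoid extra dependencies)
Phi = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))

# Hypothetical Black-Scholes parameters
S, r, q, sigma, tau = 100.0, 0.02, 0.0, 0.2, 1.0

def call(K):
    d1 = (np.log(S / K) + (r - q + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * math.exp(-q * tau) * Phi(d1) - K * math.exp(-r * tau) * Phi(d2)

K = np.linspace(50.0, 150.0, 201)
C = call(K)
C1 = np.gradient(C, K)    # finite-difference C'(K)
C2 = np.gradient(C1, K)   # finite-difference C''(K)

print(np.all(C1 <= 0))            # (1) True: C(K) monotonically decreasing
print(np.all(C2 >= -1e-10))       # (2) True: C(K) convex
print(np.all(C >= 0))             # (3) True: prices non-negative
print(abs(call(np.array([1e-8]))[0] - S * math.exp(-q * tau)) < 1e-6)  # (4) True
```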
Exponentially Weighted Average for Deep Neural Networks

A fast and efficient way to compute moving averages, implemented in the different optimization algorithms.

Example: temperature over days; calculate the moving average:

$v_t = \beta v_{t-1} + (1-\beta)\theta_t$, with $v_0 = 0$ and, say, $\beta = 0.9$.

The above equation would give the moving average line in red over the data points in blue.

What does $\beta$ mean: $v_t$ is approximately an average over the last $\frac{1}{1-\beta}$ days. For example:

For $\beta = 0.9$, $\frac{1}{1-\beta} \approx 10$; $\beta = 0.9$ averages over ~10 days (smooth curve: red line)
For $\beta = 0.98$, $\frac{1}{1-\beta} \approx 50$; $\beta = 0.98$ averages over ~50 days (smoother curve: green line), but not a very accurate representation
For $\beta = 0.5$, $\frac{1}{1-\beta} \approx 2$; $\beta = 0.5$ averages over ~2 days (fluctuations: yellow line), much more noisy

The right value of $\beta$ is found using hyperparameter tuning.

What is the Exponentially Weighted Average Actually Doing?

$v_t = \beta v_{t-1} + (1-\beta)\theta_t$. Going backwards from $v_{100}$:

$v_{100} = 0.1\theta_{100} + 0.9v_{99}$
$v_{99} = 0.1\theta_{99} + 0.9v_{98}$

Substituting $v_{99}$:

$v_{100} = 0.1\theta_{100} + 0.9 (0.1\theta_{99} + 0.9v_{98})$

or

$v_{100} = 0.1\theta_{100} + 0.9 (0.1\theta_{99} + 0.9(0.1\theta_{98} + 0.9v_{97}))$

Generalizing,

$v_{100} = 0.1\theta_{100} + 0.1(0.9)\theta_{99} + 0.1(0.9)^2\theta_{98} + 0.1(0.9)^3\theta_{97} + 0.1(0.9)^4\theta_{96} + \ldots$

$v_{100}$ is basically an element-wise product of two sequences: an exponential decay function containing diminishing weights ($0.9$, $0.9^2$, $0.9^3$, ...) and the sequence of values $\theta_t$. Also, if $\beta = 0.9$, the weight decays to about a third by the 10th iteration.
Proof: $(1-\epsilon)^{\frac{1}{\epsilon}} \approx \frac{1}{e}$

Let $\epsilon = 1-\beta$. If $\beta = 0.9$, then $\epsilon = 1-\beta = 0.1$ and $(1-\epsilon)^{\frac{1}{\epsilon}} = 0.9^{10} \approx 0.35 \approx \frac{1}{e}$.

Interpretation: it takes about 10 days for the weight to decay to about 1/3rd. If $\beta = 0.98$, $\epsilon = 0.02$, $\frac{1}{\epsilon} = 50$; it takes approximately 50 days for the weight to decay to 1/3rd.

Implementing Exponentially Weighted Average

$v_\theta$: $v$ is computing the exponentially weighted average of parameter $\theta$.

day 0: $v_\theta = 0$
day 1: $v_\theta = \beta v_\theta + (1-\beta)\theta_1$
day 2: $v_\theta = \beta v_\theta + (1-\beta)\theta_2$
...

Algorithm:

Repeat {
  Get next $\theta_t$
  $v_\theta := \beta v_\theta + (1-\beta)\theta_t$
}

A single-line update gives a fast and efficient calculation of the exponentially weighted moving average.

Bias Correction in Exponentially Weighted Moving Average

Making the EWMA more accurate: since the running average starts from 0, there are not many values to average over in the initial days. Thus the curve is lower than the correct value initially and then moves in line with the expected values.

Figure: the ideal curve should be the GREEN one, but it starts as the PURPLE curve since the values are initially close to zero.

Example: starting from $t=0$ and moving forward with $\beta = 0.98$,

$v_0 = 0$
$v_1 = 0.98v_0 + 0.02\theta_1 = 0.02\theta_1$
$v_2 = 0.98v_1 + 0.02\theta_2 = 0.0196\theta_1 + 0.02\theta_2$

The initial values of $v_t$ will be very low and need to be compensated. Make $v_t := \frac{v_t}{1-\beta^t}$. For $t=2$, $1-\beta^t = 1-0.98^2 = 0.0396$ (bias correction factor), so $v_2 = \frac{0.0196\theta_1 + 0.02\theta_2}{0.0396}$.

When $t$ is large, $\frac{1}{1-\beta^t} \approx 1$, hence the bias correction factor has no effect when $t$ is sufficiently large; it only boosts the initial values.

Source material from Andrew Ng's awesome course on Coursera.
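The algorithm and the bias correction above can be sketched in a few lines of Python (the temperature values below are made up for illustration):

```python
# Minimal sketch of the exponentially weighted average with bias correction
# described above (the temperature values below are made up for illustration).
def ewma(theta, beta=0.9, bias_correction=True):
    v, out = 0.0, []
    for t, x in enumerate(theta, start=1):
        v = beta * v + (1 - beta) * x      # v_t = beta * v_{t-1} + (1 - beta) * theta_t
        out.append(v / (1 - beta ** t) if bias_correction else v)
    return out

temps = [40.0, 49.0, 45.0, 44.0, 48.0]
print(ewma(temps, beta=0.98))   # bias-corrected values track the data from day 1
print(0.9 ** 10)                # ~0.349, close to 1/e: weight decays to ~1/3 in 10 steps
```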
The material in the video has been written out in text form so that anyone who wishes to revise a certain topic can go through it without watching the entire video lectures.
I would like to price Asian and Digital options under Merton's jump-diffusion model. To that end, I will have to simulate from a jump diffusion process. In general, the stock price process is given by $$S(t) = S(0)e^{(r-q-\omega)t+X(t)},$$ where $\omega$ is the drift term that makes the discounted stock price process a martingale and where $X(t) = \sigma W(t)+\sum_{i=1}^{N(t)} Y_i$ is a jump-diffusion process. The process $N(t)$ is a Poisson process with intensity $\lambda$ and the jump sizes $Y_i$ are iid normally distributed, $Y_i \sim N(\mu, \delta^2)$. So, the issue here is the simulation of $X(t)$. I would like to simulate $X(t)$ as follows: first, I construct a grid on $[0,T]$ with grid step size $\Delta t = 1/252$ (the number of trading days in one year). I now want to compute $X$ on this grid stepwise. Therefore: $$X_{n\Delta t} = X_{(n-1) \Delta t}+\sigma \sqrt{\Delta t} \epsilon_n + p_n,$$ where $\epsilon_n$ are standard normal random variables (easy to simulate in Matlab) and where $p_n$ are compound Poisson random variables independent of $\epsilon_n$. The question is, how to simulate these $p_n$? I was thinking of writing a compound Poisson generator myself:

function [p] = CPoissonGenerator(mu,delta,lambda,dt,rows,columns)
    N = poissrnd((1/lambda)*dt,rows,columns);
    Y = normrnd(mu,delta,1,N);
    cP = cumsum([0,Y]);
    p = cP(end);
end

However, I do not know how to generate a matrix of compound Poisson random variables without putting a loop around the above code, which can be very time consuming. Furthermore, I'm not sure if the program is correct. Is there a better way to do this?
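One vectorized approach, sketched here in Python/NumPy rather than Matlab (all parameter values below are hypothetical): conditional on the number of jumps $N_n \sim \text{Poisson}(\lambda \Delta t)$ in a step (note the Poisson mean should be $\lambda \Delta t$, not $\Delta t / \lambda$), the compound Poisson increment is normal with mean $N_n \mu$ and variance $N_n \delta^2$, so the whole grid can be filled without any loops:

```python
import numpy as np

# Hypothetical Merton jump-diffusion parameters, purely for illustration
mu, delta, lam = -0.05, 0.10, 1.0   # jump-size mean/std, jump intensity
sigma, dt = 0.20, 1 / 252           # diffusion vol, grid step
n_paths, n_steps = 10_000, 252      # one year of daily steps

rng = np.random.default_rng(42)

# Number of jumps per step: Poisson with mean lambda*dt
N = rng.poisson(lam * dt, size=(n_paths, n_steps))
# Conditional on N jumps, the summed jump sizes are Normal(N*mu, N*delta^2)
jumps = mu * N + delta * np.sqrt(N) * rng.standard_normal((n_paths, n_steps))
# Diffusion increments
diffusion = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
# X(t) on the grid, with X(0) = 0
X = np.cumsum(diffusion + jumps, axis=1)
```

The trick is that the compound Poisson sum of iid normals need not be built jump by jump: conditioning on the jump count collapses it to a single normal draw per grid cell.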
Wikipedia informs us that quartz is the second most abundant mineral on earth. It’s composed of silicon and oxygen (not exactly exotic elements) in a “continuous framework of SiO₄ silicon–oxygen tetrahedra” (not sure what that means...). Based on this information, would you ever have thought that quartz crystals would become pervasive electronic components that function as the central element in countless high-performance oscillator circuits? There is no doubt that you can use quartz crystals without knowing much about what they really are: You connect the crystal to some microcontroller pins, add a couple load capacitors, and you have an oscillator. However, I am generally of the opinion that it is not a great idea to know little or nothing about the components and subcircuits that we continually use in our designs. Knowledge doesn’t exist in isolation. You may never need to know exactly why a piece of quartz can turn a feedback circuit into an oscillator, but this information is part of an extensive network of knowledge that leads us to an authentic understanding of fundamental electronic concepts. In other words, we sometimes need to resist the inclination to ignore the details—they might turn out to be little lines that connect big dots.

The Circuit

This is the equivalent circuit of a quartz crystal: Let’s be clear: A quartz crystal is a quartz crystal. If you hit the crystal with a hammer it won’t break into an inductor, a resistor, and two capacitors. However, quartz crystals have (in my opinion rather mysterious) electromechanical characteristics that cause the crystal to behave, in the context of an electronic circuit, like the collection of passive components shown above. All those components in the equivalent circuit are extremely common. You might be wondering, then, why we bother with a quartz crystal at all. Why not just use caps, inductors, and resistors in the same arrangement?
Well, as we will see later, you could never achieve equivalent performance using passives, especially when you consider how small a quartz crystal is.

Resonance

Inductors and capacitors store energy. If you connect an inductor and capacitor in a way that allows energy to flow back and forth between the two components, you have a resonant circuit. In an idealized circuit this back-and-forth flow continues forever, and you would have an oscillator. In a real-life circuit the oscillations decrease in amplitude (and eventually cease) as energy is dissipated by resistive elements such as wires or PCB traces. More resistance means that the oscillations are more “damped,” i.e., the amplitude decreases more quickly. Q factor is inversely proportional to resistance, meaning that lower Q corresponds to oscillations that die away more quickly. Resonance occurs in both series LC circuits and parallel LC circuits, and if you look back at the equivalent circuit, you can see that a crystal has both series and parallel resonance. The resonant frequency for the series-connected inductor and capacitor follows the standard formula: $$ f_{SR}=\frac{1}{2\pi\sqrt{LC_S}}$$ The parallel resonance is a bit more complicated. This resonant frequency is calculated as follows: $$f_{PR}=\frac{1}{2\pi\sqrt{L\left(\frac{C_SC_P}{C_S+C_P}\right)}}$$ However, it turns out that $C_P$ is much larger than $C_S$ (like maybe 2000 times larger), which means that $C_SC_P/(C_S + C_P)$ is approximately equal to $C_SC_P/C_P$. We then cancel out the $C_P$ terms and are left with $$f_{PR}\approx\frac{1}{2\pi\sqrt{LC_S}}\ \ \ \Rightarrow \ \ \ f_{PR}\approx f_{SR}$$ So the two resonant frequencies are very close to each other. When you buy a crystal, it is specified for a certain frequency. For all practical purposes, we can say that the operating frequency of the crystal is equal to $f_{SR}$.

(Extremely) High Q

As mentioned above, you can’t just replace a crystal with an equivalent collection of passive components. Why?
Well, you would need some serious PCB real estate to match the crystal’s inductance—my textbook says it could be as high as hundreds of henrys, and I found this StackExchange thread where someone says it could be in the thousands. The largest fixed inductor sold by Digi-Key is 150 henrys; it has 3.7 kΩ of resistance, weighs half a pound, and is over two inches wide. Furthermore, the crystal’s large inductance is combined with a relatively small resistance, resulting in a very high Q factor.

Frequency Response

As you know, inductors and capacitors have reactance. A quartz crystal also has reactance, though this reactance is a bit complicated because of the interaction between the three reactive components. At low frequencies, the capacitive reactance dominates. As frequency increases the reactance decreases in magnitude, and eventually it reaches zero at $f_{SR}$ (as expected—a series LC circuit has zero impedance at the resonant frequency). After the reactance passes through zero at $f_{SR}$, it rapidly increases toward infinity, because $f_{PR}$ is slightly higher than $f_{SR}$ and the impedance of an ideal parallel LC circuit is infinite at the resonant frequency.

Making It Oscillate

A quartz crystal is a crystal, not an oscillator. To produce oscillation, the crystal must be incorporated into a circuit that has the typical characteristics exhibited by oscillator circuits, namely, amplification and feedback. If you look at classic oscillator topologies such as the Colpitts or Hartley, you’ll see an amplifier (such as a BJT), an LC tank circuit (to provide resonance), and a feedback connection—in a Colpitts oscillator, for example, the feedback goes from a node between two capacitors in the tank circuit to the emitter of a BJT: The general idea is the same with quartz-based oscillators, because a quartz crystal is essentially a very-high-performance resonant element.
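The resonance formulas and the Q discussion above can be made concrete with a short Python sketch. The equivalent-circuit values below are hypothetical but plausible for a crystal in the 10 MHz range, and Q is computed as the series-resonance quality factor $2\pi f_{SR} L / R$:

```python
import math

# Hypothetical equivalent-circuit values for a ~10 MHz crystal (illustration only)
L   = 10e-3      # motional inductance, henrys
C_S = 0.025e-12  # motional (series) capacitance, farads
C_P = 5e-12      # shunt (parallel) capacitance, farads
R   = 10.0       # equivalent series resistance, ohms

f_SR = 1 / (2 * math.pi * math.sqrt(L * C_S))      # series resonance
C_eff = C_S * C_P / (C_S + C_P)
f_PR = 1 / (2 * math.pi * math.sqrt(L * C_eff))    # parallel resonance
Q = 2 * math.pi * f_SR * L / R                     # series-resonance Q factor

print(f"f_SR = {f_SR / 1e6:.4f} MHz")   # ~10.07 MHz
print(f"f_PR = {f_PR / 1e6:.4f} MHz")   # only ~0.25% above f_SR
print(f"Q ~ {Q:.0f}")                   # tens of thousands
```

Even with these modest values the point about passives stands: a 10 mH inductor with only 10 Ω of series resistance at 10 MHz is not something you can buy off the shelf.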
The Pierce (or Pierce–Gate) oscillator is a common topology used for producing digital oscillation; it looks like this: You could spend quite a long time trying to thoroughly analyze and understand all the details of this seemingly simple circuit. If you want to go deeper into the Pierce topology and into crystal-based oscillation in general, I think that the article "Pierce-Gate Crystal Oscillator, an introduction" (written for Microwave Product Digest) would be a good place to start. Conclusion I hope that you now have a clearer idea of 1) what a quartz crystal is in the context of electronics and 2) how the characteristics of a quartz crystal can lead to oscillation when combined with a properly designed circuit. For information on alternatives to quartz-based oscillation, take a look at my article on oscillator options for microcontrollers and my news piece on a MEMS oscillator that may offer performance far superior to that of a crystal.
Definition of Singular Solution

A function \(\varphi \left( x \right)\) is called a singular solution of the differential equation \(F\left( {x,y,y'} \right) = 0,\) if uniqueness of solution is violated at each point of the domain of the equation. Geometrically this means that more than one integral curve with the common tangent line passes through each point \(\left( {{x_0},{y_0}} \right).\)

Note: Sometimes a weaker definition of the singular solution is used, when the uniqueness of solution of the differential equation may be violated only at some points.

A singular solution of a differential equation is not described by the general integral, that is, it cannot be derived from the general solution for any particular value of the constant \(C.\) We illustrate this by the following example. Suppose that the following equation is required to be solved: \({\left( {y'} \right)^2} - 4y = 0.\) It is easy to see that the general solution of the equation is given by the function \(y = {\left( {x + C} \right)^2}.\) Graphically, it is represented by the family of parabolas (Figure \(1\)). Besides this, the function \(y = 0\) also satisfies the differential equation. However, this function is not contained in the general solution! Since more than one integral curve passes through each point of the straight line \(y = 0,\) the uniqueness of solution is violated on this line, and hence it is a singular solution of the differential equation.

\(p\)-discriminant

One of the ways to find a singular solution is investigation of the so-called \(p\)-discriminant of the differential equation.
If the function \(F\left( {x,y,y'} \right)\) and its partial derivatives \({\large\frac{{\partial F}}{{\partial y}}\normalsize}, {\large\frac{{\partial F}}{{\partial y'}}\normalsize}\) are continuous in the domain of the differential equation, the singular solution can be found from the system of equations: \[\left\{ \begin{array}{l} F\left( {x,y,y'} \right) = 0\\ \frac{{\partial F\left( {x,y,y'} \right)}}{{\partial y'}} = 0 \end{array} \right..\] The equation \(\psi \left( {x,y} \right) = 0\) obtained by solving this system of equations is called the \(p\)-discriminant of the differential equation, and the corresponding curve is called a \(p\)-discriminant curve. Upon finding the \(p\)-discriminant curve, one should check the following:

Is it a solution of the differential equation?
Is it a singular solution, that is, are there other integral curves of the differential equation that touch the \(p\)-discriminant curve at each point?

This can be done as follows: Find the general solution of the differential equation (denote it by \({y_1}\)); Write the conditions for the singular solution (denote it by \({y_2}\)) and the general solution \({y_1}\) to touch at an arbitrary point \({x_0}:\)\[\left\{ \begin{array}{l} {y_1}\left( {{x_0}} \right) = {y_2}\left( {{x_0}} \right)\\ {y'_1}\left( {{x_0}} \right) = {y'_2} \left( {{x_0}} \right) \end{array} \right.;\] If the system has a solution for an arbitrary point \({x_0},\) the function \({y_2}\) is a singular solution. The singular solution usually corresponds to the envelope of the family of integral curves of the general solution of the differential equation.

Envelope of the Family of Integral Curves and \(C\)-discriminant

Another way to find a singular solution, as the envelope of the family of integral curves, is based on the \(C\)-discriminant.
Let \(\Phi \left( {x,y,C} \right)\) be the general solution of a differential equation \(F\left( {x,y,y'} \right) = 0.\) Graphically the equation \(\Phi \left( {x,y,C} \right) = 0\) corresponds to the family of integral curves in the \(xy\)-plane. If the function \(\Phi \left( {x,y,C} \right)\) and its partial derivatives are continuous, the envelope of the family of integral curves of the general solution is defined by the system of equations: \[\left\{ \begin{array}{l} \Phi \left( {x,y,C} \right) = 0\\ \frac{{\partial \Phi \left( {x,y,C} \right)}}{{\partial C}} = 0 \end{array} \right..\] To make sure that a solution of this system of equations is really the envelope, one can use the method mentioned in the previous section.

General Algorithm of Finding Singular Points

A more common way of finding singular points of a differential equation is based on using the \(p\)-discriminant and the \(C\)-discriminant simultaneously. First we find the equations of the two discriminants: \({\psi_p}\left( {x,y} \right) = 0\) is the equation of the \(p\)-discriminant; \({\psi_C}\left( {x,y} \right) = 0\) is the equation of the \(C\)-discriminant. It turns out that these equations have a certain structure. In the general case, the equation of the \(p\)-discriminant can be factored into the product of three functions: \[{{\psi _p}\left( {x,y} \right) }={ E \times {T^2} \times C }={ 0,}\] where \(E\) is the equation of the Envelope, \(T\) is the equation of the Tac locus, and \(C\) is the equation of the Cusp locus. Similarly, the equation of the \(C\)-discriminant can also be factored into the product of three functions: \[{{\psi _C}\left( {x,y} \right) }={ E \times {N^2} \times {C^3} }={ 0,}\] where \(E\) is the equation of the Envelope, \(N\) is the equation of the Node locus, and \(C\) is the equation of the Cusp locus. Here we meet new kinds of singular points: \(C\) – Cusp loci, \(T\) – Tac loci, and \(N\) – Node loci.
Their appearance in the \(xy\)-plane is shown schematically in Figures \(2-4.\) Three of the four types of points, namely the Tac loci, Cusp loci and Node loci, are extraneous points, i.e. they do not satisfy the differential equation and therefore are not singular solutions of the differential equation. Only the envelope among the considered points is the singular solution. Since the envelope appears in the equations of both discriminants as a first-degree factor, this allows one to find the equation of the envelope.
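As an illustration, applying the \(p\)-discriminant system to the earlier example \({\left( {y'} \right)^2} - 4y = 0:\) \[\left\{ \begin{array}{l} {\left( {y'} \right)^2} - 4y = 0\\ 2y' = 0 \end{array} \right. \;\Rightarrow\; y' = 0 \;\Rightarrow\; y = 0.\] The \(p\)-discriminant curve \(y = 0\) satisfies the differential equation, and the touching conditions with the general solution \({y_1} = {\left( {x + C} \right)^2}\) read \({\left( {{x_0} + C} \right)^2} = 0,\) \(2\left( {{x_0} + C} \right) = 0,\) which are satisfied at \({x_0} = -C\) for every \(C.\) Hence \(y = 0\) is the envelope of the family of parabolas and is indeed a singular solution.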
Quote: The length of one of the sides of a triangle is 25 units. If the area of the triangle is 120 square units and the length of another side of the triangle is 10 units, which of the following could be the length of the third side?

A) \(\sqrt{255}\) B) \(\sqrt{585}\) C) \(\sqrt{572}\) D) \(\sqrt{558}\) E) 24

\(? = x\)

Taking the side of length 10 as the base, the area gives a height of \(2 \cdot 120/10 = 24;\) the side of length 25 then forms a 7-24-25 right triangle with this height, so the foot of the altitude lies 7 units from one endpoint of the base (outside or inside the base):

\(\left\{ \begin{gathered} {x^2} = {\left( {10 + 7} \right)^2} + {24^2} = 289 + 576\,\, \Rightarrow \,\,x = \sqrt {865} \hfill \\ {\text{OR}} \hfill \\ {x^2} = {\left( {10 - 7} \right)^2} + {24^2} = 9 + 576\,\, \Rightarrow \,\,x = \sqrt {585} \hfill \\ \end{gathered} \right.\)

This solution follows the notations and rationale taught in the GMATH method.

Regards,
Fabio.
_________________
Fabio Skilnik :: GMATH method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
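A quick numerical check of the two configurations above (not part of the GMATH solution itself), sketched in Python:

```python
import math

base, side, area = 10, 25, 120
h = 2 * area / base                 # height to the 10-unit side: 24
foot = math.sqrt(side**2 - h**2)    # 7-24-25 right triangle: foot is 7 from one end

x_obtuse = math.sqrt((base + foot)**2 + h**2)   # foot outside the base
x_acute = math.sqrt((base - foot)**2 + h**2)    # foot inside the base

print(round(x_obtuse**2), round(x_acute**2))    # 865 585 -> answer (B)
```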
Here is an alternative to Count Dracula's (correct) argument that emphasizes instead the constancy of the Hilbert polynomial for a flat family of projective schemes. As above, assume that $Y$ is the spectrum of a DVR. For one fixed irreducible component $Z_{\eta}$ of $X_\eta$ of minimal dimension $d$, denote by $Z$ the Zariski closure of $Z_{\eta}$ in $X$ together with its closed immersion $u:Z\to X$. Denote by $W_{\eta}$ the union of all other irreducible components of $X_{\eta}$, and denote by $v:W\to X$ the closure of $W_\eta$ in $X$. By construction, both $Z$ and $W$ are flat over $Y$, since every associated point is a generic point of $Z_\eta$, resp. $W_\eta$. The closed immersions $u$ and $v$ determine an associated morphism of $\mathcal{O}_X$-modules, $$(u^\#, v^\#):\mathcal{O}_X \to u_*\mathcal{O}_Z \oplus v_*\mathcal{O}_W.$$ The restriction of $(u^\#,v^\#)$ to $X_\eta$ is injective. Thus the kernel of $(u^\#,v^\#)$ is a subsheaf of $\mathcal{O}_X$ that is torsion for $\mathcal{O}_Y$. Since $\mathcal{O}_X$ is flat over $\mathcal{O}_Y$, the kernel of $(u^\#,v^\#)$ is the zero sheaf. Denote the cokernel of $(u^\#,v^\#)$ by $\mathcal{Q}$. The restriction of $\mathcal{Q}$ to $X_\eta$ has support whose dimension is strictly smaller than the dimension of any irreducible component of $X_\eta$. In particular, the Hilbert polynomial of $\mathcal{O}_{X_\eta}$ agrees with the Hilbert polynomial of $\mathcal{O}_{Z_\eta}\oplus \mathcal{O}_{W_\eta}$ modulo the subspace of numerical polynomials of degree strictly less than $d = \text{dim}(Z_\eta)$. Now consider the restriction $(u_0^\#,v_0^\#)$ of $(u^\#,v^\#)$ to the closed fiber $X_0$. By the flatness hypothesis, the Hilbert polynomials of the domain and target of this homomorphism equal the Hilbert polynomials on the generic fiber. Thus the difference of these Hilbert polynomials on $X_0$ equals the difference of these polynomials on $X_\eta$, and we know that this difference is a polynomial of degree strictly less than $d$.
The cokernel of $(u_0^\#,v_0^\#)$ equals the restriction $\mathcal{Q}_0$. If the induced morphism $\mathcal{O}_{Z_0}\to \mathcal{Q}_0$ is nonzero at some generic point of $Z_0$, then the support of $\mathcal{Q}_0$ has an irreducible component of dimension $\geq d$. Thus the Hilbert polynomial of $\mathcal{Q}_0$ has degree $\geq d$. Since the difference polynomial has degree strictly less than $d$, the kernel of $(u_0^\#,v_0^\#)$ has Hilbert polynomial of degree $\geq d$, counterbalancing the Hilbert polynomial of $\mathcal{Q}_0$. In particular, the kernel of $(u_0^\#,v_0^\#)$ is not zero. By the flatness hypothesis, every associated point of $X$ is contained in $X_\eta$. Thus, every generic point $\xi$ of $X_0$ is the specialization of a generic point that is either in $Z$ or in $W$. Thus the localization $\mathcal{O}_{X_0,\xi}$ either factors through $\mathcal{O}_{Z_0,\xi}$ or factors through $\mathcal{O}_{W_0,\xi}$. Therefore the kernel of $(u_0^\#,v_0^\#)$ is in the kernel of the localization at every generic point of $X_0$. Since the kernel is nonzero, $X_0$ has embedded associated points, contradicting the hypothesis that $X_0$ is geometrically reduced. Therefore, by way of contradiction, the support of $\mathcal{Q}_0$ does not contain $Z_0$. So $Z_0$ is not contained in $W_0$. Now we continue by induction on the number of irreducible components, replacing $X$ by $W$.
Can $n!$ be a perfect square when $n$ is an integer greater than 1? (But is it possible to prove this without Bertrand's postulate? Bertrand's postulate is quite a strong result.)

Assume $n\geq 4$. By Bertrand's postulate there is a prime, let's call it $p$, such that $\frac{n}{2}<p<n$. Suppose $p^2$ divides $n!$. Then there should be another number $m$ with $p<m\leq n$ such that $p$ divides $m$. But then $\frac{m}{p}\geq 2$, so $m\geq 2p > n$. This is a contradiction. So $p$ divides $n!$ but $p^2$ does not, and hence $n!$ is not a perfect square. That leaves two more cases, which we check directly: $2!=2$ and $3!=6$ are not perfect squares. (There is a prime between $n/2$ and $n$, if I am not mistaken.)

Hopefully this is a little more intuitive (although quite a bit longer) than the other answers up here. Let's begin by stating a simple fact: (1) when factored into its prime factorization, any perfect square has an even number of each prime factor. If $n$ is a prime number, then $n$ will not appear in any of the other factors of $n!$, meaning that $n!$ cannot be a perfect square, by (1). Consider if $n$ is composite. $n!$ will contain at least two prime factors ($n=4$ is the smallest composite number satisfying the constraints), so let's call $p$ the largest prime factor of $n!$. The only way that $n!$ can be a perfect square is if $n!$ contains $p$ and a second multiple of $p$, by (1). Obviously, this multiple must be greater than $p$ and at most $n$. Using Bertrand's postulate, we know that there exists an additional prime number, let's say $p'$, such that $p < p' < 2p$. Because $p$ is the largest prime factor of $n!$, we know that $p' > n$ (if instead $p' \leq n$, then $p'$ would be a prime factor of $n!$ larger than $p$, a contradiction). Thus it follows that $2p > p' > n$. Because $2p$ is the smallest multiple of $p$ greater than $p$ itself and $2p > n$, $n!$ contains only one factor of $p$. Therefore it is impossible for $n!$ to be a perfect square.

If $n$ is prime, then for $n!$ to be a perfect square, one of $n-1, n-2, \ldots, 2$ must contain $n$ as a factor. But this means one of $n-1, n-2, \ldots, 2 \geq n$, which is impossible. If $n$ is not prime, then the largest prime less than $n$ is $p = n-k$, with $0<k<n-1$ and $2\leq p<n$. No number less than $p$ contains $p$ as a factor, so for $n!$ to be a perfect square there must exist a multiple of $p$, call it $bp$ with $1<b<n$, such that $p<bp\leq n$. Now, according to Chebyshev's theorem, for any number $p$ there exists a prime number between $p$ and $2p$; so there is a prime $r$ with $p < r < 2p$, and since $p$ is the largest prime below $n$ we must have $n < r < 2p$. Then every multiple $bp \geq 2p > n$ lies beyond $n$, so such an $n!$ can never be a perfect square. Hope this helps.

You can refer to this: your statement has a generalization. There is a work by Erdős and Selfridge stating that the product of at least two consecutive natural numbers is never a power. Here it is: http://ad.bolyai.hu/~p_erdos/1975-46.pdf.
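As a quick sanity check of the claim (not a proof), a brute-force verification in Python for small $n$:

```python
import math

def is_perfect_square(m: int) -> bool:
    r = math.isqrt(m)
    return r * r == m

# n! is never a perfect square for 1 < n < 200 (the proofs above cover all n > 1)
print(all(not is_perfect_square(math.factorial(n)) for n in range(2, 200)))  # True
```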
First consider a scheme $X$ with an open cover $\mathcal{U}=\{U_i\}$. An object with descent data on $\mathcal{U}$ is a collection $(\mathcal{E}_i,\phi_{ij})$ where $\mathcal{E}_i$ is a quasi-coherent sheaf on each $U_i$ and $\phi_{ij}$ is an isomorphism $pr_2^*\mathcal{E}_j\to pr_1^*\mathcal{E}_i$ in $Qcoh(U_{ij})$ which satisfies the cocycle condition $$ pr_{13}^*\phi_{ik}=pr_{12}^*\phi_{ij}\circ pr_{23}^*\phi_{jk}. $$ We can make the objects with descent data into a category where the morphisms are those compatible with the $\phi_{ij}$'s. Moreover we have an equivalence of categories $$ \text{Desc}(X,\mathcal{U})\simeq Qcoh(X). $$ Of course the definition of descent data has various generalizations. For example, instead of the category of quasi-coherent sheaves we can consider an arbitrary fibered category $\mathcal{F}$ over spaces. For any map between spaces $\pi: U\to X$ we have a cosimplicial diagram of categories $$ \mathcal{F}(X)\to \mathcal{F}(U)\rightrightarrows \mathcal{F}(U\times_X U)\ldots $$ An object with descent data on $U$ is a collection $(\mathcal{E},\phi)$ where $\mathcal{E}$ is an object in $\mathcal{F}(U)$ and $\phi$ is an isomorphism $pr_2^*\mathcal{E}\to pr_1^*\mathcal{E}$ which satisfies the cocycle condition $$ pr_{13}^*\phi=pr_{12}^*\phi\circ pr_{23}^*\phi. $$ We notice that the category $\text{Desc}(U\to X)$ is equivalent to the fiber product of categories $\mathcal{F}(U)\times_{\mathcal{F}(U\times_X U)} \mathcal{F}(U)$ (as Zhen Lin points out in the comment, we need to also consider $\mathcal{F}(U\times_X U\times_X U)$). We also notice that if $\pi$ is flat then $\phi$ can also be expressed as the comodule structure $\mathcal{E}\to \pi^*\pi_*\mathcal{E}$ since $\pi^*\pi_*\mathcal{E}\cong (pr_2)_*pr_1^*\mathcal{E}$. Moreover in the flat case (together with some condition on $\mathcal{F}$, I guess) we have $$ \text{Desc}(U\to X)\simeq \mathcal{F}(X). $$ Now we consider the case that $\mathcal{F}$ contains some higher structure.
In this case we have an augmented cosimplicial diagram of (higher) categories $$ \mathcal{F}(X)\to \mathcal{F}(U)\rightrightarrows \mathcal{F}(U\times_X U)\ldots $$ and the definition of descent data should be modified. For example, in the recent version of Yekutieli's Twisted Deformation Quantization of Algebraic Varieties, Section 5, an explicit definition of descent data of a cosimplicial crossed groupoid is given. Before giving the explicit construction, I would like to know what the descent data SHOULD be. Or, more precisely, what are the properties that descent data must satisfy? One attempt is to say that descent data are extra structures (say, a comodule structure) on $\mathcal{F}(U)$ such that we have the (weak) equivalence $\text{Desc}(U\to X)\simeq \mathcal{F}(X)$, but this equivalence only exists in good (say flat) cases, while descent data can be given in general. Is the following alternative description therefore better? The category of descent data should be the homotopy limit (or totalization) of the cosimplicial diagram $$ \mathcal{F}(U)\rightrightarrows \mathcal{F}(U\times_X U)\ldots $$
I think the easiest way to do what you want is to use confidence intervals (statistical inference). In other words, assuming the population has a true variance $\sigma^2$, the sampling distribution of the variance $s^2$ of an $n$-sample verifies:$$ \frac{s^2(n-1)}{\sigma^2}\sim \chi^2_{n-1}$$ You can exploit this result to build a $1-\alpha$ confidence interval for the population variance ($\alpha \in [0,1]$, typically $\alpha=5\%$). Indeed, the following inequality holds with probability $1-\alpha$:$$ z_{\alpha/2} \leq \frac{s^2(n-1)}{\sigma^2} \leq z_{1-\alpha/2} $$ where $z_q$ denotes the quantile $q$ of a chi-squared distribution with $n-1$ degrees of freedom, i.e.$$ X \sim \chi^2_{n-1},\quad \Bbb{P}(X \leq z_q) = q $$ Given a sample variance $\tilde{s}^2$, one can therefore turn the inequality on its head to write, for a confidence level $1-\alpha$:$$ \frac{\tilde{s}^2(n-1)}{z_{1-\alpha/2}} \leq \sigma^2 \leq \frac{ \tilde{s}^2(n-1)}{z_{\alpha/2}} $$Hence the upper and lower bounds of your $1-\alpha$ confidence interval for the (unobserved) population variance:\begin{align}\sigma^2_+ = \frac{ \tilde{s}^2(n-1)}{z_{\alpha/2}},\quad \sigma^2_- = \frac{\tilde{s}^2(n-1)}{z_{1-\alpha/2}} \end{align} This could then help you construct $1-\alpha$ confidence bounds on the BS option price given the measured sample variance $\tilde{s}^2$:$$ V^+ = \text{BSCall}\left(\sqrt{\sigma^2_+}\right),\quad V^- = \text{BSCall}\left(\sqrt{\sigma^2_-}\right)$$ [Edit] Given your desire to obtain a full distribution, why not opt for a Bayesian approach? Assume the true population variance $\sigma^2$ follows a certain prior distribution with hyperparameter $\alpha$, $p(\sigma^2;\alpha)$ over $\Bbb{R}^+$. Suppose that, for a specific sample, you measure a sample variance $s^2$ and wish to compute the posterior of the population variance.
Bayes' rule gives:$$ p(\sigma^2 \vert s^2, \alpha) = \frac{p(s^2 \vert \sigma^2)}{\int_0^\infty p(s^2 \vert \sigma^2) p(\sigma^2;\alpha) d\sigma^2 } p(\sigma^2; \alpha) $$ Now you know: The prior distribution $p(\sigma^2; \alpha)$: you postulated it. The sampling distribution $p(s^2 \vert \sigma^2)$: $\quad s^2 \sim \frac{\sigma^2}{n-1} \chi^2_{n-1}$. Hence you have everything you need to compute the posterior distribution. Obviously, if you stick with the Maximum A Posteriori (MAP) estimator, you will once again have a pointwise estimate, so I suggest performing the full integration. Off the top of my head, the chi-squared distribution does not admit a conjugate prior, so you might have to resort to numerical integration (e.g. adaptive quadrature and the like). Finally, the choice of hyperparameter $\alpha$ will have an impact on the resulting posterior: you might want to set $\alpha$ so that the prior distribution is centered around the sample variance, for instance.
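As an illustrative sketch of the confidence-interval part in Python (the sample size and sample variance below are hypothetical, and the chi-squared quantile is computed with the Wilson-Hilferty approximation so that only the standard library is needed; scipy.stats.chi2.ppf would give exact quantiles):

```python
from statistics import NormalDist

def chi2_ppf(q: float, df: int) -> float:
    """Wilson-Hilferty approximation to the chi-squared quantile function."""
    z = NormalDist().inv_cdf(q)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

# Hypothetical measured sample variance from n observations
n, s2, alpha = 60, 0.04, 0.05

lo = s2 * (n - 1) / chi2_ppf(1 - alpha / 2, n - 1)   # lower bound for sigma^2
hi = s2 * (n - 1) / chi2_ppf(alpha / 2, n - 1)       # upper bound for sigma^2
print(f"95% CI for the population variance: [{lo:.4f}, {hi:.4f}]")
```

The BS price bounds then follow by plugging the square roots of these variance bounds into the pricing formula.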
I am trying to find the action associated with the Lagrangian density $$ \mathcal{L} = \frac{1}{2}\left( \frac{\partial\phi}{\partial x} \right)^2 + \frac{1}{2}m^2\phi^2. \tag{1} $$ I am supposed to use the discrete expansions $$\phi_j = \frac{1}{\sqrt{Na}}\sum_p \tilde{\phi}_pe^{ipja} = \frac{1}{\sqrt{Na}}\sum_{-p} \tilde{\phi}_{-p}e^{-ipja}. \tag{2} $$ So, first I find the Lagrangian, using$$ L = \int dx \mathcal{L} = a \sum_j \mathcal{L} = \frac{a}{2}\sum_j \left[ \left( \frac{\phi_{j+1}-\phi_j}{a} \right)^2 + m^2\phi_j^2 \right] \tag{3} $$where $j$ labels the 1D lattice sites and $a$ is the equilibrium distance between adjacent sites. Now I plug in the expansion for $\phi_j$ into the Lagrangian, and where $\phi_j$ is squared, I use one copy of the middle ($+p$) term in eq (2) and one copy of the right ($-p$) term in eq (2), multiplied together. This is motivated by the form of the action I am supposed to get in the end. When I do the substitution into $L$, I end up with $$ L = \frac{1}{2} \sum_p \tilde{\phi}_p\tilde{\phi}_{-p}\left[ \frac{2}{a^2}\left( 1-\cos{pa} \right) +m^2 \right]. \tag{4} $$ Now to get the action, I know that $$ S = \int L dt, \tag{5}$$ but I have no idea where time is supposed to come into this problem at all. When integrating the Lagrangian density to get the Lagrangian, I know that I had to realize that the integral over one spatial dimension becomes, in the discrete case, a sum over the positions $x_j$ times the lattice constant $a$, or just a sum over $j$, again times $a$. In addition, the spatial derivative in the Lagrangian becomes a discrete difference, as I have shown above. Furthermore, the expression that I obtained for the Lagrangian $L$ is exactly what my textbook says I should obtain for the action $S$! Is this somehow the result of the problem not having any obvious time-dependence? So, in total, I suppose I want to know how the action relates to the Lagrangian in the case of a problem that doesn't involve time.
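For reference, the kinetic-term step from eq (3) to eq (4) can be sketched as follows, using the lattice orthogonality relation $\sum_j e^{i(p+p')ja} = N\,\delta_{p,-p'}$:

```latex
% Substituting the expansion (2) into the discrete gradient term of (3):
\begin{align}
\phi_{j+1}-\phi_j
  &= \frac{1}{\sqrt{Na}}\sum_p \tilde{\phi}_p\, e^{ipja}\left(e^{ipa}-1\right), \\
a\sum_j \left(\frac{\phi_{j+1}-\phi_j}{a}\right)^2
  &= \frac{1}{a^2}\sum_p \tilde{\phi}_p\tilde{\phi}_{-p}
     \left(e^{ipa}-1\right)\left(e^{-ipa}-1\right)
   = \sum_p \tilde{\phi}_p\tilde{\phi}_{-p}\,\frac{2}{a^2}\left(1-\cos pa\right).
\end{align}
```

Adding the mass term, which collapses the same way, gives eq (4) with the $\frac{2}{a^2}(1-\cos pa)+m^2$ kernel.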
Just for clarity, I am going to write the question as phrased in the textbook (QFT for the Gifted Amateur): Exercise 17.5 (a):Consider a one-dimensional system with Lagrangian $$ \mathcal{L} = \frac{1}{2}\left( \frac{\partial \phi(x)}{\partial x} \right)^2 + \frac{m^2}{2} \left[ \phi(x) \right]^2. $$ The choice of sign makes this a Euclidean theory. Discretize this theory (that is, put it on a lattice) by defining $$ \phi_j = \frac{1}{\sqrt{Na}} \sum_p \tilde{\phi}_p e^{ipja}, $$ where $j$ labels the lattice site, $a$ is the lattice spacing, and $N$ is the number of lattice points. Using the method in exercise 17.3 show that the action may be written $$ S = \frac{1}{2} \sum_p \tilde{\phi}_{-p} \left( \frac{2}{a^2}-\frac{2}{a^2}\cos{pa} + m^2 \right) \tilde{\phi}_p, $$ and read off the propagator for this theory. The "method in exercise 17.3" is just what I described between eq (3) and eq (4), where you expand $\phi_j$ in terms of its Fourier transforms $\tilde{\phi}_p$ and $\tilde{\phi}_{-p}$. Problem 17.3 also is the one that shows that the free propagator is $\frac{i}{2}$ times the inverse of the quadratic term in the momentum-space action, which is why this problem is asking us to find the action in the first place.
Math.NET Symbolics is a basic open source computer algebra library for .NET and Mono written in F#. This project does not aim to become a full computer algebra system. If you need such a system, have a look at Axiom or Maxima instead, or for commercial solutions Maple, Mathematica or Wolfram Alpha. The recommended way to get Math.NET Symbolics is to use NuGet. The following packages are provided and maintained in the public NuGet Gallery: Core Package: MathNet.Symbolics (core package for .NET Framework 4.5, .NET Framework 4.6.1 and .NET Standard 2.0). Package dependencies: FParsec (isolated usage, only for infix parsing). With NuGet you can start quickly by installing the MathNet.Symbolics package, which automatically loads its dependencies MathNet.Numerics and MathNet.Numerics.FSharp. In F# interactive you can reference them by loading two scripts. To get started, open the namespaces and the Operators module and declare the variables and constants you intend to use as symbols. Then we're all set to start writing expressions. Math.NET Symbolics expressions are always in a simplified form according to a set of rules. Expressions are tree structures, but F# interactive shows them in a readable infix form thanks to a display printer added in the script loaded above. You can also use these printers manually to format any expression as an infix string, a LaTeX expression, or in strict mode to see the actual internal representation. Strings in infix notation can be parsed back into expressions. Numbers can be forced to become expressions using the Q suffix, e.g. 3Q is an expression representing an integer with value 3. This is usually not needed if at least one of the operands is an expression, e.g. a symbol. But if all operands are standard .NET numbers, they will be treated as such.
For example, (3 + 2)*4/6 is a standard F# integer expression and will result in 3 due to .NET integer arithmetic. However, if we force it to become an expression by writing (3Q + 2)*4/6, it will result in the fraction expression 10/3 as expected. Since Math.NET Symbolics is about algebra, all number literals are arbitrarily big rational numbers, i.e. integers or fractions. If you need floating point numbers, use a symbol for them instead and provide their value at evaluation time. Often you need to evaluate the resulting number value of an expression given the values for all its symbols. To do so, prepare the value set as a map or dictionary and pass it to the evaluate function. Values need to be of type FloatingPoint, which is a discriminated union that can represent not only float and complex but also vectors and matrices of the same. There are various modules to help you combine and manipulate expressions: Operators: standard operators, recommended to open always. Structure: structural analysis, operand access, substitution, map. Algebraic: algebraic expansion, separate factors. Polynomial: properties, degrees, coefficients, terms, divide, gcd, expansion, partial fraction. Rational: numerator/denominator, properties, rationalize, expand, simplify. Exponential: expand, contract, simplify. Trigonometric: expand, separate, contract, substitute, simplify. Calculus: differentiate. For example, let's try to contract the trigonometric expression \((\cos{x})^4\) into \(\frac{3}{8} + \frac{1}{2}\cos{2x} + \frac{1}{8}\cos{4x}\). These modules can also be combined to build more interesting manipulations. For example, let's implement a Taylor expansion routine to approximate the shape of a differentiable function \(x(\zeta)\) at some point \(\zeta = a\) by its \(k-1\) first derivatives at that point (order \(k\)).
We can leverage the existing structural substitute routine to substitute \(\zeta\) with \(a\) to get the resulting expression at \(a\), and the differentiate routine to evaluate the partial derivative \(\frac{\partial{x}}{\partial\zeta}\). Let's use this routine to approximate \(\sin{x}+\cos{x}\) at \(x = 0\) using the first 4 derivatives. Even though Math.NET Symbolics is written entirely in F#, it can be used in C# almost exactly the same way; the equivalent C# code looks very similar to the F# code above. Code for a C++/CLI project is also almost exactly the same, but there are some things worth mentioning separately: 1. The NuGet package manager doesn't support C++/CLI projects yet. So if you try to install the package with NuGet you may see this error: Could not install package 'MathNet.Symbolics 0.19.0'. You are trying to install this package into a project that targets 'native,Version=v0.0', but the package does not contain any assembly references or content files that are compatible with that framework. There are other ways to add the library to the project: Use another dependency manager like Paket. Add references manually: Download the packages. Although NuGet cannot include packages in a C++/CLI project, it downloads them to the common package directory anyway. Right click on the project in the Solution Explorer, Add -> References..., click the Browse button, and navigate to the packages directory. Add references to the following DLLs: FParsec.dll, FParsecCS.dll, FSharp.Core.dll, MathNet.Numerics.FSharp.dll, MathNet.Symbolics.dll. Also, to avoid a version conflict of FSharp.Core you have to set AutoGenerateBindingRedirects to true.
Put the instruction in your .vcxproj file under the "Globals" property section. Instead of using the + and - operators, it's better to choose the Add and Subtract methods, to get away from the warning about matching more than one operator. The F# sample translates into C++/CLI along the same lines. For VB.NET we won't do the same thing here; instead we just show basic usage for now.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
[In view of the discussion let me mention that the context is that of Poincaré-covariant quantum field theories. It is clear that giving up covariance makes many things possible that are not possible otherwise, and allows one to make rigorous sense of renormalization in simpler situations, such as for free covariant fields interacting with classical external fields, or for the Lee model.] The fact that naive perturbation theory produces infinite corrections, no matter which finite counterterms are used, proves that no Fock space supports an interacting quantum field theory. Hence there is no Hilbert space featuring physical particles at every time. Here is my non-rigorous proof: If there were a Fock space defining the interacting theory at every coupling parameter $g$, it would represent the particles by annihilation fields $a_g(x)$ corresponding to some mass $m(g)$. Taking the limit $g\to 0$ (assuming it exists), we see that the Fock spaces at $g$ have the same structure as the limiting Fock space. By continuity, only continuous labels of Poincaré representations can change; the others will be fixed for small enough $g$. The only continuous label is the mass, so the Fock spaces differ only by the mass $m(g)$. All other structure is rigid and hence preserved. In particular, if we assume the existence of a power series in $g$, the fields are given by \[\Phi_g(x)=\frac12(a_g^*(x)+a_g(x))+O(g).\] Now consider the operator field equations at coupling $g$. For simplicity take $\Phi^4$ theory, where they take the form (the limit term guarantees a well-defined product of fields at the same point) \[ \nabla^2 \Phi_g(x)+ m(g)^2 \Phi_g(x) + g \lim_{\epsilon\to 0} \Phi_g(x+\epsilon u)\Phi_g(x)\Phi_g(x-\epsilon u)=0.\] (This is called a Yang-Feldman equation.)
Multiplying by the negative propagator $(\nabla^2 +m(g)^2)^{-1}$, we find a fixed point equation for $\Phi_g(x)$, which can be expanded into powers of $g$, and all coefficients will be finite because the $\Phi_g(x)$, and hence their Taylor coefficients, are (after smearing) well-defined operators on the corresponding Fock space. Going to the Fourier domain and taking vacuum expectation values, one finds a perturbative expansion with finite coefficients, which is essentially the textbook expansion of vacuum expectation values corresponding to perturbation around the solution with the correct mass. This proof is not rigorous, but suggestive, surely at the same level as Ron's arguments. I don't know whether it can be made rigorous by more careful arguments. But it only makes the sort of assumptions physicists always make when solving their problems. If these assumptions are not satisfied but an interacting theory is still Fock, it at least means that there is no mathematically natural way of constructing the operators, even perturbatively. Thus for practical purposes, i.e., to use it for physical computations, it amounts to the same thing as if the Fock representation didn't exist.
I would say that when the capacitor is connected to the battery there will be a transient flow of charge, during which the plate connected to the positive terminal will acquire charge $Q$ and rise to voltage $V$. Consider the other plate, which is connected to the dangling wire. This piece of metal has net charge $0$ on it. However, during the time the other plate is charging up, I claim that charge $-Q$ will flow from the wire onto the plate connected to the dangling wire. This will leave charge $+Q$ distributed across the wire to ensure charge neutrality/conservation. The amount of charge $Q$ can be calculated by $Q=CV$. In the end there will be a new distribution of charge, and because of this there will be electric fields between the various charge distributions. All told, the situation will be physically different before and after the capacitor is touched to the battery. Furthermore, I'll be clear that I disagree with the solution the OP quoted. Once the transient is over, it is true that no charge will flow. If one MUST insist that there must be a closed circuit for ANY current to flow, then I will insist that they must include stray capacitances in their description of the problem. For example, in this problem one could include an effective stray capacitance between the dangling wire and the negative terminal of the battery. They are just two pieces of metal, so we can define a capacitance, can't we? We now see that there is a complete circuit consisting of a battery and two capacitors, so current is allowed to flow. This also gives a nice introduction to the intimate relationship between capacitance and the presence of stray electric fields, which becomes critical to understand when one starts thinking about electromagnetic-interference-type questions. Update I want to add updates to correct some of the comments made above. In fact, let me just present a separate story which will make my explanation more clear.
Consider a different but related (as we will see) situation. Consider a battery with voltage $V$ connected in series with two capacitors: one with value $C$ (the same $C$ of the capacitor in the original problem) and one with value $\tilde{C}$. Capacitor $C$ has one lead connected to the positive terminal while capacitor $\tilde{C}$ has one lead connected to the negative terminal. Because of charge conservation we know that if charge $Q$ accumulates on capacitor $C$ then charge $Q$ must also accumulate on capacitor $\tilde{C}$. We then have $$V = V_C + V_{\tilde{C}} = Q\left(\frac{1}{C} + \frac{1}{\tilde{C}}\right)$$ We see that $$\frac{V_C}{V} = \frac{\frac{1}{C}}{\frac{1}{C} + \frac{1}{\tilde{C}}} = \frac{1}{1+\frac{C}{\tilde{C}}}$$ So we see that in the limit that $C \ll \tilde{C}$ we get that $V_C \rightarrow V$, while in the limit that $C \gg \tilde{C}$ we get that $V_C \rightarrow 0$. Note that $V_C$ represents the voltage drop across capacitor $C$, so if $V_C = V$ then one leg of that capacitor is "at" voltage $V$ relative to the negative terminal of the battery and the other leg is "at" voltage $0$, whereas if $V_C = 0$ then both legs of that capacitor are "at" voltage $V$. Now to relate this to the problem at hand: the idea is that capacitor $\tilde{C}$ represents the "stray" capacitance between the unconnected leg of the capacitor and the negative terminal of the battery. Imagine we do truly have a parallel plate capacitor $\tilde{C}$ attached between capacitor $C$ and the negative terminal of the battery. Now decrease the plate area until it has the same area as the cross section of a piece of wire. The capacitance will be very small because $A$ is small. Now pull the two sides of the capacitor $\tilde{C}$ apart until one is exactly at the location of the negative terminal and the other is at the location of the dangling lead of capacitor $C$. The capacitance $\tilde{C}$ will now be SUPER small because $d$ is very large.
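The two limits of the series-capacitor voltage divider above are easy to check numerically. A minimal Python sketch (the component values are made up for illustration; C_tilde stands in for the stray capacitance):

```python
def v_ratio(C, C_tilde):
    """Fraction of the battery voltage dropped across capacitor C in series with C_tilde.

    From V = Q(1/C + 1/C_tilde): V_C / V = 1 / (1 + C/C_tilde).
    """
    return 1.0 / (1.0 + C / C_tilde)

C = 1e-6  # a 1 uF "real" capacitor
for C_tilde in (1e-3, 1e-6, 1e-12):
    # large stray capacitance -> V_C ~ V; tiny stray capacitance -> V_C ~ 0
    print(C_tilde, v_ratio(C, C_tilde))
```

With a stray capacitance far larger than $C$ the ratio approaches 1 ($V_C \to V$), and with a realistically tiny stray capacitance it approaches 0 ($V_C \to 0$), matching the limits stated above.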
Thus we see that the situation of a capacitor connected to the positive terminal of a battery with one leg and disconnected on the other is equivalent to the above situation with $\tilde{C}$ very, very small, so that $V_C = 0$. This tells us something which is, at first glance, a bit surprising. The voltage of BOTH ends of the capacitor $C$ will be at value $V$. This means that when the capacitor is touched to the positive terminal of the battery some current does flow, in such a way as to set up an electric field in the capacitor as well as between the unconnected leg and the negative terminal of the battery that causes both sides to "float up" to voltage $V$. I think this is a very good example of showing how "stray" capacitances 1) can alter first impressions when implementing naive lumped circuit analysis and 2) are intimately related to the presence of electric field lines connecting lumped circuit components. Both of these concepts become very important when considering higher frequency electronics, PCB layout, and various grounding and electromagnetic interference issues. I'll point out one more salient feature of this model which is relevant for practical electronics. Consider taking the dangling lead of capacitor $C$ and bringing it closer and closer to the negative terminal of the battery. As you do this, $\tilde{C}$ will increase (even though it is still small). This means that the voltage on the dangling lead will change as you alter this distance. This is important because it shows how the geometry of a physical circuit can alter the behavior of a circuit. Again, a critical idea when considering electromagnetic interference. In the same way as $L$ can be thought of as capturing the interplay between physical magnetic fields and a circuit, we see that $C$ can be thought of as capturing the interplay between physical electric fields and a circuit. Edit: I want to add another interesting feature. Note that there is a large voltage drop across $\tilde{C}$.
This is possible because, even with a very small charge $Q$, one can get a large voltage across a very small capacitance. This says that the battery will in fact "push out" a small amount of charge when the one leg of the capacitor is first connected. However, since $\tilde{C}$ is so small, this amount of charge will be accordingly small. Clarification Finally, I want to clarify that my comment that $Q=CV$ in my initial response above is incorrect. With the OP's permission I'll erase any incorrect statements in the answer above.
In this question, I am testing what was previously discussed. I can't seem to get my results to match D'Inverno's electromagnetic tensor for a charged point (page 239 of his book - Introducing Einstein's Relativity). Here are D'Inverno's steps: The line element in spherical coordinates is ($\eta$ and $\lambda$ are functions of $r$ only) $$ \mathrm{d}s^2 = \mathrm{e}^\eta \mathrm{d}t^2 - \mathrm{e}^\lambda \mathrm{d}r^2 - r^2 (\mathrm{d}\theta^2 + \sin^2\!\theta\ \mathrm{d}\phi^2). $$ He defines this covariant electromagnetic field tensor: $$ F_{\mu\nu} = E(r) \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. $$ He then proceeds to find the electric field, and consequently the electromagnetic field tensor by using the source-free Maxwell equations: \begin{align} \nabla_\nu F^{\mu\nu} & = 0 \\ \partial_{[\lambda} F_{\mu\nu]} & = 0. \end{align} Solving the differential equation that appears from the equations above, he finds the electric field: $$ E(r) = \mathrm{e}^{(\eta+\lambda)/2} \varepsilon/r^2. $$ He then notes that this field is that of a point charge at infinity ($\eta$ and $\lambda$ go to zero at infinity) where $\varepsilon$ is the electric charge. I managed to reproduce these steps. Now, here are my steps, using the four-potential procedure (the line element is the same): I define my contravariant four-potential (there is just the first element which is the electric potential of a point charge, just as D'Inverno found): $$A^\mu = (\varepsilon/r, 0, 0, 0). $$ Then I lower the index of this four-potential to find the covariant one: $$A_\mu = (\mathrm{e}^\eta \varepsilon/r, 0, 0, 0). $$ Finally I apply this equation to build the covariant electromagnetic tensor: $$ F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu. $$ The result is $$ F_{\mu\nu} = \frac{\mathrm{e}^\eta \varepsilon}{r^2}\! 
\left(r\frac{\partial\eta}{\partial r}-1\right) \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, $$ where $$\frac{\mathrm{e}^\eta \varepsilon}{r^2}\! \left(r\frac{\partial\eta}{\partial r}-1\right) = E(r).$$ And this is different from D'Inverno's electric field. I don't know what I am doing wrong. The calculations are not difficult for this simple case. The question raised by these calculations is: does my contravariant four-potential need to contain the metric functions in some way? I was assuming it is just the four-potential for an electric charge in flat space: $$A^\mu = (\varepsilon/r, 0, 0, 0). $$ If everything else is right, the wrong assumption must be here.
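For reference, the only nonzero component of $F_{\mu\nu}$ arising from this potential is the radial derivative of $A_t$, writing $\eta' = \partial\eta/\partial r$:

```latex
% With A_t = e^{\eta}\varepsilon/r and A_r = A_\theta = A_\phi = 0:
\begin{align}
F_{rt} = \partial_r A_t - \partial_t A_r
  = \varepsilon\,\partial_r\!\left(\frac{\mathrm{e}^{\eta}}{r}\right)
  = \varepsilon\left(\frac{\mathrm{e}^{\eta}\eta'}{r} - \frac{\mathrm{e}^{\eta}}{r^2}\right)
  = \frac{\mathrm{e}^{\eta}\varepsilon}{r^2}\left(r\frac{\partial\eta}{\partial r} - 1\right),
\end{align}
```

which confirms that the prefactor quoted in the question indeed follows from the assumed flat-space potential.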
Here is a bubble chamber picture of an event. It is interpreted as particles: $${\newcommand{\Subreaction}[2]{{\rlap{\hspace{0.38em} \lower{25px}{{\rlap{\rule{1px}{20px}}} {\lower{0.5ex}{\hspace{-1px} \longrightarrow {#2}}}}}} {#1} }}{K}^{-} ~~ p ~~ {\longrightarrow} ~~ {\Subreaction{{\Omega}^{-}}{ {\Subreaction{{\Lambda}^{0}}{p ~~ {\pi}^{-}}} ~~ {K}^{-}}} ~~ {K}^{+} ~~ {K}^{+} ~~ {\pi}^{-}$$ and shows the generation and decay of an ${\Omega}^{-}$, the particle that filled the predicted slot in the decuplet of hadrons. We call the kaons, protons and pions "particles" because macroscopically their footprint is that of a charged particle with a given momentum traversing an ionizable medium. Hundreds of thousands of similar pictures, measured and analyzed, gave the data for the standard model before the new electronic detectors entered the scene. That is why we talk of particle physics. Look at the double slit experiment for electrons, a single electron at a time. The footprint on the screen is a dot, within the detection errors, a footprint that one expects a particle to give. The wave nature appears in the accumulation of many electrons with the same boundary conditions scattering off the slits, in the interference pattern, due to the probabilistic quantum mechanical nature of the wavefunction, which is the solution of the scattering problem. Existence needs philosophical analysis. The bubble chamber pictures exist. The double slit experiment, a single electron at a time, exists. The mathematical model tying up and fitting the data, if validated, exists. Are we platonists, i.e. does mathematics define reality? Or realists: does data define reality? Having started with Regge pole models for particle physics, ended up with the standard model, and now contemplating string theory models, I tend to be a realist: if it walks like a duck and quacks like a duck, it is a duck. Or maybe a wave in the pond ;).
Chapters Chapter 2: Polynomials Chapter 3: Pair of Linear Equations in Two Variables Chapter 4: Quadratic Equations Chapter 5: Arithmetic Progressions Chapter 6: Triangles Chapter 7: Coordinate Geometry Chapter 8: Introduction to Trigonometry Chapter 9: Some Applications of Trigonometry Chapter 10: Circles Chapter 11: Constructions Chapter 12: Areas Related to Circles Chapter 13: Surface Areas and Volumes Chapter 14: Statistics Chapter 15: Probability NCERT Mathematics Class 10 Chapter 8: Introduction to Trigonometry Chapter 8: Introduction to Trigonometry Exercise 8.1 solutions [Page 181] In ΔABC right angled at B, AB = 24 cm, BC = 7 cm. Determine sin A, cos A. In ΔABC right angled at B, AB = 24 cm, BC = 7 cm. Determine sin C, cos C. In the given figure, find tan P – cot R. If `sin A = 3/4`, calculate cos A and tan A. Given 15 cot A = 8, find sin A and sec A. Given sec θ = `13/12`, calculate all other trigonometric ratios. If ∠A and ∠B are acute angles such that cos A = cos B, then show that ∠A = ∠B. If cot θ = 7/8, evaluate `((1+sin θ)(1-sin θ))/((1+cos θ)(1-cos θ))`. If cot θ = 7/8, evaluate cot^2 θ. If 3 cot A = 4, check whether `((1-tan^2 A)/(1+tan^2 A)) = cos^2 A - sin^2 A` or not. In ΔABC, right angled at B, if tan A = `1/sqrt3`, find the value of (i) sin A cos C + cos A sin C (ii) cos A cos C − sin A sin C. In ΔPQR, right angled at Q, PR + QR = 25 cm and PQ = 5 cm. Determine the values of sin P, cos P and tan P. State whether the following are true or false. Justify your answer. The value of tan A is always less than 1. State whether the following are true or false. Justify your answer. sec A = 12/5 for some value of angle A. State whether the following are true or false. Justify your answer. cos A is the abbreviation used for the cosecant of angle A. State whether the following are true or false. Justify your answer. cot A is the product of cot and A. State whether the following are true or false. Justify your answer.
sin θ = 4/3, for some angle θ. Chapter 8: Introduction to Trigonometry Exercise 8.2 solutions [Page 187] Evaluate the following in the simplest form: sin 60° cos 30° + cos 60° sin 30°. Evaluate the following: 2 tan^2 45° + cos^2 30° − sin^2 60°. Evaluate the following: `(cos 45°)/(sec 30° + cosec 30°)`. Evaluate the following: `(sin 30° + tan 45° – cosec 60°)/(sec 30° + cos 60° + cot 45°)`. Evaluate the following: `(5cos^2 60° + 4sec^2 30° - tan^2 45°)/(sin^2 30°+cos^2 30°)`. Choose the correct option and justify your choice: `(2 tan 30°)/(1+tan^2 30°)` = (a) sin 60° (b) cos 60° (c) tan 60° (d) sin 30°. Choose the correct option and justify your choice: `(1- tan^2 45°)/(1+tan^2 45°)` = (a) tan 90° (b) 1 (c) sin 45° (d) 0. Choose the correct option and justify your choice: sin 2A = 2 sin A is true when A = (a) 0° (b) 30° (c) 45° (d) 60°. Choose the correct option and justify your choice: `(2 tan 30°)/(1-tan^2 30°)` = (a) cos 60° (b) sin 60° (c) tan 60° (d) sin 30°. If tan (A + B) = `sqrt3` and tan (A – B) = `1/sqrt3`; 0° < A + B ≤ 90°; A > B, find A and B. State whether the following is true or false. Justify your answer. sin (A + B) = sin A + sin B. (True/False) State whether the following is true or false. Justify your answer. The value of sin θ increases as θ increases. (True/False) State whether the following is true or false. Justify your answer. The value of cos θ increases as θ increases. (True/False) State whether the following is true or false. Justify your answer. sin θ = cos θ for all values of θ. (True/False) State whether the following are true or false. Justify your answer.
cot A is not defined for A = 0°. (True/False) Chapter 8: Introduction to Trigonometry Exercise 8.3 solutions [Pages 189 - 190] Evaluate `(sin 18^@)/(cos 72^@)`. Evaluate `(tan 26^@)/(cot 64^@)`. Evaluate cos 48° − sin 42°. Evaluate cosec 31° − sec 59°. Show that tan 48° tan 23° tan 42° tan 67° = 1. Show that cos 38° cos 52° − sin 38° sin 52° = 0. If tan 2A = cot (A – 18°), where 2A is an acute angle, find the value of A. If tan A = cot B, prove that A + B = 90°. If sec 4A = cosec (A − 20°), where 4A is an acute angle, find the value of A. If A, B and C are interior angles of a triangle ABC, then show that `\sin( \frac{B+C}{2} )=\cos \frac{A}{2}`. Express sin 67° + cos 75° in terms of trigonometric ratios of angles between 0° and 45°. Chapter 8: Introduction to Trigonometry Exercise 8.4 solutions [Pages 193 - 194] Express the trigonometric ratios sin A, sec A and tan A in terms of cot A. Write all the other trigonometric ratios of ∠A in terms of sec A. Evaluate `(sin^2 63^@ + sin^2 27^@)/(cos^2 17^@+cos^2 73^@)`. Evaluate sin 25° cos 65° + cos 25° sin 65°. Choose the correct option. Justify your choice. 9 sec^2 A − 9 tan^2 A = (a) 1 (b) 9 (c) 8 (d) 0. Choose the correct option. Justify your choice.
(1 + tan θ + sec θ)(1 + cot θ − cosec θ) = (a) 0 (b) 1 (c) 2 (d) −1. Choose the correct option. Justify your choice. (sec A + tan A)(1 − sin A) = (a) sec A (b) sin A (c) cosec A (d) cos A. Choose the correct option. Justify your choice. `(1+tan^2A)/(1+cot^2A)` = (a) sec^2 A (b) −1 (c) cot^2 A (d) tan^2 A. Prove the following identities, where the angles involved are acute angles for which the expressions are defined: `(cosec θ – cot θ)^2 = (1-cos theta)/(1 + cos theta)` `cos A/(1 + sin A) + (1 + sin A)/cos A = 2 sec A` `(tantheta)/(1-cottheta) + (cottheta)/(1-tantheta) = 1+secthetacosectheta` `(1+ secA)/sec A = (sin^2A)/(1-cosA)` [Hint: Simplify LHS and RHS separately] `(cos A-sinA+1)/(cosA+sinA-1)=cosecA+cotA` `sqrt((1+sinA)/(1-sinA)) = secA + tanA` `(sin theta-2sin^3theta)/(2cos^3theta -costheta) = tan theta` (sin A + cosec A)^2 + (cos A + sec A)^2 = 7 + tan^2 A + cot^2 A (cosec A – sin A)(sec A – cos A) `= 1/(tanA+cotA)` [Hint: Simplify LHS and RHS separately] `((1+tan^2A)/(1+cot^2A))=((1-tanA)/(1-cotA))^2=tan^2A` Chapter 8: Introduction to Trigonometry NCERT Mathematics Class 10 Textbook solutions for Class 10 NCERT solutions for Class 10 Mathematics chapter 8 - Introduction to Trigonometry NCERT solutions for Class 10 Maths chapter 8 (Introduction to Trigonometry) include all questions with solutions and detailed explanations.
This will clear students' doubts and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear up any confusion. Shaalaa.com presents the CBSE Mathematics Textbook for Class 10 solutions in a manner that helps students grasp basic concepts better and faster. Further, we at Shaalaa.com are providing such solutions so that students can prepare for written exams. NCERT textbook solutions can be a core help for self-study and act as perfect self-help guidance for students. Concepts covered in Class 10 Mathematics chapter 8 Introduction to Trigonometry are Introduction to Trigonometry, Introduction to Trigonometry Examples and Solutions, Trigonometric Ratios, Trigonometric Ratios of an Acute Angle of a Right-angled Triangle, Trigonometric Ratios of Some Specific Angles, Trigonometric Ratios of Complementary Angles, Trigonometric Identities, Proof of Existence, and Relationships Between the Ratios. Using NCERT Class 10 solutions for the Introduction to Trigonometry exercise is an easy way for students to prepare for the exams, as the solutions are arranged chapter-wise and page-wise. The questions involved in NCERT Solutions are important questions that can be asked in the final exam. Most students of CBSE Class 10 prefer NCERT Textbook Solutions to score more in exams. Get the free view of chapter 8 Introduction to Trigonometry Class 10 extra questions for Maths, and use Shaalaa.com to keep it handy for your exam preparation
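The first multiple-choice identity in the question list can be spot-checked numerically before proving it (a quick sketch, not part of the textbook solutions; the correct option is 2):

```python
import math

# Check that (1 + tan θ + sec θ)(1 + cot θ − cosec θ) equals 2 for acute θ.
for theta in (0.3, 0.7, 1.2):
    lhs = (1 + math.tan(theta) + 1 / math.cos(theta)) * \
          (1 + 1 / math.tan(theta) - 1 / math.sin(theta))
    assert abs(lhs - 2) < 1e-12
```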
Application of Derivatives

Increasing and Decreasing Functions

A function f is said to be
(a) increasing on an interval (a, b) if x₁ < x₂ in (a, b) ⇒ f(x₁) ≤ f(x₂) for all x₁, x₂ ∈ (a, b); alternatively, if f'(x) ≥ 0 for each x in (a, b);
(b) decreasing on (a, b) if x₁ < x₂ in (a, b) ⇒ f(x₁) ≥ f(x₂) for all x₁, x₂ ∈ (a, b); alternatively, if f'(x) ≤ 0 for each x in (a, b).

Monotonic functions - Tips:
A function f(x) is strictly increasing on R if f'(x) > 0 ∀ x ∈ R.
A function f(x) is strictly decreasing on R if f'(x) < 0 ∀ x ∈ R.
A function f(x) is increasing on R if f'(x) ≥ 0.
A function f(x) is decreasing on R if f'(x) ≤ 0.

View the Topic in this video From 10:26 To 20:04

Disclaimer: Compete.etutor.co may from time to time provide links to third-party Internet sites under their respective fair use policies, and it may from time to time provide materials from such third parties on this website. These third-party sites and any third-party materials are provided for viewers' convenience and for non-commercial educational purposes only. Compete does not operate or control in any respect any information, products or services available on these third-party sites. Compete.etutor.co makes no representations whatsoever concerning the content of these sites, and the fact that compete.etutor.co has provided a link to such sites is NOT an endorsement, authorization, sponsorship, or affiliation by compete.etutor.co with respect to such sites, their services, the products displayed, their owners, or their providers.

A function f(x) = \frac{a\cos x + b \sin x}{c\cos x + d\sin x} is increasing if ad − bc < 0.

If f(x) and g(x) are continuous and differentiable and fog and gof exist, then the sign of (fog)'(x) (or (gof)'(x)) is the product of the signs of f'(x) and g'(x):

f'(x)   g'(x)   (fog)'(x) (or (gof)'(x))
  +       +       +
  +       −       −
  −       +       −
  −       −       +
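The monotonicity criterion for f(x) = (a cos x + b sin x)/(c cos x + d sin x) can be sanity-checked numerically; a, b, c, d below are arbitrary sample values chosen so that ad − bc < 0 (an illustrative sketch, not from the notes):

```python
import numpy as np

def f(x, a, b, c, d):
    # f'(x) works out to (bc - ad)/(c*cos x + d*sin x)^2, so the sign of
    # bc - ad controls monotonicity wherever the denominator is nonzero.
    return (a * np.cos(x) + b * np.sin(x)) / (c * np.cos(x) + d * np.sin(x))

a, b, c, d = 1.0, 2.0, 3.0, 1.0      # ad - bc = 1 - 6 = -5 < 0, so increasing
xs = np.linspace(0.1, 1.0, 200)      # an interval where the denominator != 0
ys = f(xs, a, b, c, d)
assert np.all(np.diff(ys) > 0)       # f is strictly increasing here
```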
Electrostatic Potential and Capacitance

Electrostatics of Conductors, Dielectrics and Polarisation

Inside a conductor, the electric field is zero. At the surface of a charged conductor, the electric field must be normal to the surface at every point. The interior of a conductor can have no excess charge in the static situation. The electric potential is constant throughout the volume of the conductor and has the same value on its surface. Electric field at the surface of a charged conductor: \overrightarrow{E} = \frac{\sigma}{\varepsilon_{0}} \hat{n}

Dielectrics: Dielectrics are non-conducting substances. In contrast to conductors, they have no free charge carriers.

Polarisation: The induced dipole moment developed per unit volume in a dielectric slab on placing it in an electric field is called polarisation. It is denoted by \overrightarrow{P}, and for a linear dielectric \overrightarrow{P} = \chi_{e} \overrightarrow{E}, where \chi_{e} is the electric susceptibility.

Conductors, Dielectrics and Polarisation: View the Topic in this video From 18:01 To 54:23
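As a small numerical illustration of E = σ/ε₀ at the surface of a charged conductor (the charge density below is an assumed example value, not from the notes):

```python
EPS0 = 8.854e-12      # vacuum permittivity, F/m
sigma = 1.0e-6        # assumed surface charge density, C/m^2

# Field just outside the conductor's surface, directed along the normal.
E = sigma / EPS0      # about 1.13e5 N/C
assert 1.1e5 < E < 1.2e5
```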
Wave Optics

Polarisation

Polarisation is the restriction of the vibrations of a wave to a single plane. Polarisation establishes the transverse nature of light waves. If the electric vector vibrates in all directions in a plane perpendicular to the direction of propagation, the light is called unpolarised. If the electric field vector is confined to a plane passing through the direction of propagation, the light is called plane polarised. A polariser is the crystal used to polarise unpolarised light; an analyser is the crystal used to analyse polarised light.

Malus's law is given by I = I₀ cos²θ. The amplitude form of Malus's law is a = a₀ cos θ. When three polaroids are in series, with consecutive angles θ₁ and θ₂ between their axes, the transmitted intensity for incident unpolarised light of intensity I is \tt I'=\frac{I}{2}\cos^{2}\theta_{1}\cos^{2}\theta_{2}

The polarising angle, or Brewster's angle, is the angle of incidence at which the reflected light is polarised. Brewster's law states that the tangent of the polarising angle is equal to the refractive index: μ = tan θp, so \tt \tan\ \theta_{p} =\mu;\ \sin\ \theta_{p} =\frac{\mu}{\sqrt{\mu^{2}+1}};\ \cos\ \theta_{p} =\frac{1}{\sqrt{\mu^{2}+1}}

The ordinary ray in double refraction is the ray which obeys the laws of refraction; the extraordinary ray is the ray which does not. For the ordinary ray, \tt \mu_{0}=\frac{\sin i}{\sin r}. The optic axis is the axis along which the ordinary and extraordinary rays have the same speed. Dichroism is the property of unequal absorption of the ordinary and extraordinary rays by some crystals. Man-made polarising materials are called polaroids. Polarisation is used to study the helical structure of nucleic acids, and in polaroid sun goggles.
The optical activity of a substance is measured with the help of a polarimeter. A medium in which the speed of a light wave is the same in all directions is called an isotropic medium; if the speed is not the same, the medium is anisotropic. A diffraction pattern is due to the interference of light from secondary wavelets of the same wavefront. Sound waves cannot be polarised. Intensity of Fraunhofer diffraction at a single slit: \tt I=I_{0}\left(\frac{\sin\alpha}{\alpha}\right)^{2}\ where\ \alpha=\frac{\pi a}{\lambda}\ \sin\theta

Young's double slit experiment: View the Topic in this video From 00:14 To 3:54
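The three-polaroid form of Malus's law quoted above can be sketched in code (the incident intensity and the angles are assumed example values):

```python
import math

def three_polaroid_intensity(I0, theta1, theta2):
    # Unpolarised light of intensity I0 loses half at the first polaroid,
    # then Malus's law applies at each of the next two polaroids.
    return (I0 / 2) * math.cos(theta1) ** 2 * math.cos(theta2) ** 2

# Example: I0 = 8, both consecutive angles 45 degrees -> 8 * 1/2 * 1/2 * 1/2 = 1
I = three_polaroid_intensity(8.0, math.pi / 4, math.pi / 4)
assert abs(I - 1.0) < 1e-9
```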
I am learning Probabilistic Graphical Models with the help of the videos on Coursera. I am in week 4 and I see cliques being mentioned often. But the graphs being discussed are cluster graphs. So are cliques and clusters the same? A clique is a rigorously defined, exact part of a graph $G=(V,E)$; $\qquad\displaystyle C \subseteq V \text{ is a clique} \iff \{ \{u,v\} \mid u,v \in C, u \neq v \} \subseteq E$. A cluster is more general, but also more nebulous. Here's what Wikipedia has to say: Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). So, a cluster also is a set of nodes that is "dense" in some sense, depending on the metric used. However, while it is always clear whether adding a node to a set keeps it a clique, it's not always clear from looking only at one cluster whether adding a node to it is better; the quality of a clustering is defined on the whole graph. As the Wikipedia article shows, there are many notions of clustering. Cliques can be seen as one of them.
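The formal clique definition translates directly into code; a minimal sketch (the helper name is mine, not from the answer):

```python
from itertools import combinations

def is_clique(C, edges):
    # C is a clique iff every pair of distinct vertices in C is an edge of G.
    E = {frozenset(e) for e in edges}
    return all(frozenset((u, v)) in E for u, v in combinations(C, 2))

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
assert is_clique({1, 2, 3}, edges)          # a triangle is a clique
assert not is_clique({1, 2, 3, 4}, edges)   # 4 is not adjacent to 1 or 2
```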
If $f_n(x)=\cos^n x+\cos^n\left(x+\cfrac{2\pi}{3}\right)+\cos^n\left(x+\cfrac{4\pi}{3}\right)$, solve for $x$ if $f_7(x)=0$.

Use any tool of mathematics. Can you guess why I chose $\cfrac{2\pi}{3}$ and $\cfrac{4\pi}{3}$?

Note by Deepanshu Gupta, 4 years, 9 months ago

Sort by:

Let $y = \cos x + i\sin x$. By De Moivre's theorem $y^n = (\cos x + i\sin x)^n = \cos nx + i\sin nx$, so $\displaystyle y^n + \cfrac{1}{y^n} = 2\cos nx$.
$y + \dfrac{1}{y} = 2\cos x$

$(2\cos x)^7 = \left(y + \dfrac{1}{y}\right)^7 = y^7 + 7y^5 + 21y^3 + 35y + \dfrac{35}{y} + \dfrac{21}{y^3} + \dfrac{7}{y^5} + \dfrac{1}{y^7}$

$= \left(y^7 + \dfrac{1}{y^7}\right) + 7\left(y^5 + \dfrac{1}{y^5}\right) + 21\left(y^3 + \dfrac{1}{y^3}\right) + 35\left(y + \dfrac{1}{y}\right)$

$= 2\cos 7x + 7\cdot 2\cos 5x + 21\cdot 2\cos 3x + 35\cdot 2\cos x$

$\cos^7 x = \dfrac{1}{64}(\cos 7x + 7\cos 5x + 21\cos 3x + 35\cos x)$

$\cos^7\left(x + \dfrac{2\pi}{3}\right) = \dfrac{1}{64}\left(-\cos\left(7x - \dfrac{\pi}{3}\right) - 7\cos\left(5x + \dfrac{\pi}{3}\right) + 21\cos 3x - 35\cos\left(x - \dfrac{\pi}{3}\right)\right)$

$\cos^7\left(x + \dfrac{4\pi}{3}\right) = \dfrac{1}{64}\left(-\cos\left(7x + \dfrac{\pi}{3}\right) - 7\cos\left(5x - \dfrac{\pi}{3}\right) + 21\cos 3x - 35\cos\left(x + \dfrac{\pi}{3}\right)\right)$

Using $\cos A + \cos B = 2\cos\left(\dfrac{A+B}{2}\right)\cos\left(\dfrac{A-B}{2}\right)$:

$\cos^7\left(x + \dfrac{2\pi}{3}\right) + \cos^7\left(x + \dfrac{4\pi}{3}\right) = \dfrac{1}{64}(-\cos 7x - 7\cos 5x + 42\cos 3x - 35\cos x)$

Expression $= \dfrac{63}{64}\cos 3x = 0$

$3x = n\pi + \dfrac{\pi}{2}$

$x = \dfrac{n\pi}{3} + \dfrac{\pi}{6}$, $n \in \mathbb{Z}$

Nice Megh, I'm expecting it from you :). But still, there is one more way of a similar type!
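The closed form derived in this thread can be spot-checked numerically (a quick sketch):

```python
import math

def f7(x):
    # cos^7 x + cos^7(x + 2*pi/3) + cos^7(x + 4*pi/3)
    return sum(math.cos(x + k * 2 * math.pi / 3) ** 7 for k in range(3))

# The derivation says f7(x) == (63/64) * cos(3x) for every x.
for x in (0.0, 0.3, 1.1, 2.5):
    assert abs(f7(x) - (63 / 64) * math.cos(3 * x)) < 1e-12
```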
Just focus on the angles given in the question. I believe you will definitely get it, just give it one more try! :)

why Complex Numbers tag? :O

See Megh's solution, you will find it interesting :)

do we have to use binomial expansion in the middle of the solution? please answer my question and reply

Yes, we have to use the binomial theorem, but you don't need to write out all the terms. By sharp observation you see that they cancel out; you only have to check the terms which are independent of $\omega$ and $\omega^2$ (using the general term of the binomial expansion).

$(e^{ix} + e^{-ix})/2 = \cos x$. Setting the given expression to zero implies $\left(\cfrac{e^{ix}+e^{-ix}}{2}\right)^7 (1+\omega+\omega^2) = 0$ for all $x$......

Great! You caught the right path, but you are making a calculation mistake: in fact, you cannot take $(1+\omega+\omega^2)$ common. Check your calculation carefully, whether you are writing $\omega$ or $\omega^2$.

it's a long question, do we have to apply the binomial theorem (to expand)? Actually I got the answer which Megh did. GREAT QUESTION. WHERE DID YOU GET IT FROM? and why were you awake at 2:00 at night........

@Rajat Kharbanda – You don't need to calculate all the terms; they cancel out and the only thing left is $\binom{7}{2}e^{i(3x)} + \binom{7}{5}e^{-i(3x)} = 0$. And what do you mean by "where u wake up night at 2.00 clock"? Sorry, but I didn't understand it.

@Deepanshu Gupta – no, you posted at 2 o'clock at night; were you awake? just didn't mind.. could you try the question I have posted above .... URGENT...
@Rajat Kharbanda – Actually I didn't sleep, because I usually sleep in the day/evening, so I sleep late at night :)

Not getting the other terms, please elaborate

Note that due to the $2\pi$ periodicity of cosine, the given problem is the same as: $\cos^7 x+\cos^7\left(x+\frac{2\pi}{3}\right)+\cos^7\left(x-\frac{2\pi}{3}\right)=0$. If only $\cos^7 x$ were an odd function around $x$, the last two terms would cancel out! Note that at $\frac{\pi}{2}$ this is the case: cosine becomes odd when shifted by $\frac{\pi}{2}$. But it just so happens that substituting $\frac{\pi}{2}$ kills the $\cos^7 x$ term as well! Therefore, $x = \frac{\pi}{2}$ is a solution.

wrong! You calculated only one solution, i.e. $\pi/2$ satisfies it, but it is not the only solution. Recheck your thinking (the terms will not cancel out).

Same thing happens at every increment of $\pi$ after that, so $\frac{\pi}{2}+n\pi$ for all integers $n$. I don't think that's an exhaustive set of solutions, might come back to this tomorrow.

solve this: Click Here

It is not clearly stated, but as far as I can make out the question, I am getting velocity as a function of theta as: $v=\sqrt{\cfrac{2Rg}{4\mu^2+1}\left((2\mu^2+1)\cos\theta-\mu\sin\theta-(2\mu^2+1)e^{-2\mu\theta}\right)}$ and $v_{max}=\sqrt{Rg(\mu\cos\theta-\sin\theta)}$. Now, putting this into the velocity function we should get $\theta$, and then we calculate the distance by using the fact that $S=R\theta$. But it is too nasty! Am I understanding your question correctly? And seriously, it is Level-3??
@Visakh Radhakrishnan maybe he means to say that the acceleration is so slow that almost all of the friction is spent in providing the centripetal force, in which case it surely becomes Level 3. But either way, can you please show me how you solved the differential equation to find velocity as a function of angle? I believe that you have also done it the same way, by equating the resultant of friction to the net centripetal and tangential acceleration, but after that, how did you proceed to solve it?

@Mvs Saketh – I really did not understand the language, but I am still trying this way:

$mg\cos\theta - N = \cfrac{mv^2}{R} \quad (1)$

$ds = R\,d\theta \quad (2)$

$\mu N - mg\sin\theta = mv\cfrac{dv}{ds} \;\Rightarrow\; 2\mu N - 2mg\sin\theta = m\cfrac{d(v^2)}{R\,d\theta} \quad (3)$

From here we get $v = f(\theta)$, and at $v = \max$, the (tangential) acceleration is 0. Am I understanding this question correctly? If not, what does this question really mean?

@Deepanshu Gupta – Bro, I believe you have assumed that he is travelling in a vertical circle; maybe the question means a horizontal circle.

@Mvs Saketh – ohh :O, fish! Lol, I have done unnecessary calculations for a vertical circle! :) But OK, if now we create a new question in which we consider a vertical circle, then is it correct?

@Deepanshu Gupta – I think so, it seems correct :)

@Deepanshu Gupta – sorry, my bad. I forgot to tell you. I have edited it now

@Mvs Saketh – it is horizontal

@Deepanshu Gupta – I don't know how to do this; I found it in a book (Physics Today), but I think it is like what Saketh said
Let $v=(a_1, a_2, \ldots, a_n)$ be a vector of numbers. I want to reach the average vector $\mu = (\frac{\sum a_i}{n}, \frac{\sum a_i}{n}, \ldots, \frac{\sum a_i}{n})$. I proceed iteratively, step by step: in each step we pick three components, $a_i$, $a_j$ and $a_k$, that are not all equal, and we replace them by their mean, $s_1=\frac{a_i+a_j+a_k}{3}$, to obtain $\mu_1 = (a_1, \ldots, a_{i-1}, s_1, a_{i+1}, \ldots, a_{j-1}, s_1, a_{j+1}, \ldots, a_{k-1}, s_1, a_{k+1}, \ldots, a_n)$. In the next step, we select three other components (again not all equal) and compute $\mu_2$, and so on. By iterating, can we have $\mu_m \rightarrow \mu$? If yes, how do we pick the three elements in each step? Does this "partial averaging" have a particular name/theorem in number theory?
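A quick experiment (an illustrative sketch, not a proof) suggests that even random triples converge to μ: the triple-averaging step preserves the overall mean while shrinking the spread.

```python
import random

def average_step(v):
    # Replace three randomly chosen components by their mean. (If the three
    # happen to be equal already, the step is simply a no-op.)
    i, j, k = random.sample(range(len(v)), 3)
    s = (v[i] + v[j] + v[k]) / 3
    v[i] = v[j] = v[k] = s

random.seed(0)
v = [float(t) for t in range(10)]        # mean is 4.5
target = sum(v) / len(v)
for _ in range(2000):
    average_step(v)

assert all(abs(x - target) < 1e-9 for x in v)   # components approach the mean
assert abs(sum(v) / len(v) - target) < 1e-9     # the mean itself is invariant
```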
The link is not freely available. And your question is not entirely clear. I will guess you are trying to estimate an unknown parameter. Suppose $\theta$ is the unknown parameter, and you have an estimator $T$ of $\theta$ based on $n$ observations. We say that $T$ is an unbiased estimator of $\theta$ if $E(T) = \theta.$ If you have two unbiased estimators $T_1$ and $T_2,$ then the estimator with the smaller standard deviation is considered to be better because it is more likely to be near the correct value $\theta.$ Two simple examples: (1) If the data are from a normal distribution with unknown mean $\mu$, then the sample mean $\bar X$ and the sample median $\tilde X$ of the $n$ observations are both unbiased. In this case, the sample mean is considered the better estimator because it has smaller standard deviation (or variance). (2) If the data are from a population distributed $Unif(0, \theta),$ then twice the mean and $(n+1)/n$ times the maximum are both unbiased, and the latter is preferred because it has the smaller standard deviation. However, if estimators are biased, then the variance or SD is no longer an optimal guide. They can be in error because of the bias or because of sampling variability, and it is difficult to compare such estimators using the SD or variance alone. For biased estimators one reasonable criterion for 'goodness' is to have a small mean squared error. One can show that $$MSE_\theta (T) = E[(T-\theta)^2] = V(T) + [b_\theta(T)]^2,$$ where $[b_\theta (T)]^2 = [E(T)-\theta]^2$ is the square of the bias. In example (2) above, the maximum is biased unless it is multiplied by $(n+1)/n.$ However, the maximum has smaller MSE than double the mean, and so the maximum is considered the better estimator according to the MSE criterion.
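Example (2) can be checked by simulation; a Monte Carlo sketch comparing the two unbiased estimators (the sample size, θ, and replication count are arbitrary choices):

```python
import random

random.seed(1)
theta, n, reps = 10.0, 20, 20000
t1, t2 = [], []
for _ in range(reps):
    x = [random.uniform(0, theta) for _ in range(n)]
    t1.append(2 * sum(x) / n)            # twice the sample mean
    t2.append((n + 1) / n * max(x))      # (n+1)/n times the sample maximum

def mse(est):
    return sum((t - theta) ** 2 for t in est) / len(est)

assert abs(sum(t1) / reps - theta) < 0.1   # both are (essentially) unbiased
assert abs(sum(t2) / reps - theta) < 0.1
assert mse(t2) < mse(t1)                   # the scaled maximum wins on MSE
```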
Chapters Chapter 2: Relations Chapter 3: Functions Chapter 4: Measurement of Angles Chapter 5: Trigonometric Functions Chapter 6: Graphs of Trigonometric Functions Chapter 7: Values of Trigonometric function at sum or difference of angles Chapter 8: Transformation formulae Chapter 9: Values of Trigonometric function at multiples and submultiples of an angle Chapter 10: Sine and cosine formulae and their applications Chapter 11: Trigonometric equations Chapter 12: Mathematical Induction Chapter 13: Complex Numbers Chapter 14: Quadratic Equations Chapter 15: Linear Inequations Chapter 16: Permutations Chapter 17: Combinations Chapter 18: Binomial Theorem Chapter 19: Arithmetic Progression Chapter 20: Geometric Progression Chapter 21: Some special series Chapter 22: Brief review of cartesian system of rectangular co-ordinates Chapter 23: The straight lines Chapter 24: The circle Chapter 25: Parabola Chapter 26: Ellipse Chapter 27: Hyperbola Chapter 28: Introduction to three dimensional coordinate geometry Chapter 29: Limits Chapter 30: Derivatives Chapter 31: Mathematical reasoning Chapter 32: Statistics Chapter 33: Probability RD Sharma Mathematics Class 11 Chapter 25: Parabola Chapter 25: Parabola Exercise 25.10 solutions [Pages 24 - 25] Find the equation of the parabola whose: focus is (3, 0) and the directrix is 3 x + 4 y = 1 Find the equation of the parabola whose: focus is (1, 1) and the directrix is x + y + 1 = 0 Find the equation of the parabola whose: focus is (0, 0) and the directrix 2 x − y − 1 = 0 Find the equation of the parabola whose: focus is (2, 3) and the directrix x − 4y + 3 = 0. 
Find the equation of the parabola whose focus is the point (2, 3) and directrix is the line x − 4y. Find the equation of the parabola if the focus is at (−6, −6) and the vertex is at (−2, 2). Find the equation of the parabola if the focus is at (0, −3) and the vertex is at (0, 0). Find the equation of the parabola if the focus is at (0, −3) and the vertex is at (−1, −3). Find the equation of the parabola if the focus is at (a, 0) and the vertex is at (a', 0). Find the equation of the parabola if the focus is at (0, 0) and the vertex is at the intersection of the lines x + y = 1 and x − y = 3.

Find the vertex, focus, axis, directrix and latus-rectum of each of the following parabolas:
y² = 8x
4x² + y = 0
y² − 4y x + 1 = 0
y² − 4y + 4x = 0
y² + 4x + 4y − 3 = 0
y² = 8x + 8y
4(y − 1)² = −7(x − 3)
y² = 5x − 4y − 9
x² + y = 6x − 14

For the parabola y² = 4px, find the extremities of a double ordinate of length 8p. Prove that the lines from the vertex to its extremities are at right angles. Find the area of the triangle formed by the lines joining the vertex of the parabola \[x^2 = 12y\] to the ends of its latus rectum. Find the coordinates of the point of intersection of the axis and the directrix of the parabola whose focus is (3, 3) and directrix is 3x − 4y = 2. Find also the length of the latus-rectum.
At what point of the parabola x² = 9y is the abscissa three times the ordinate? Find the equation of a parabola with vertex at the origin, the axis along the x-axis, and passing through (2, 3). Find the equation of a parabola with vertex at the origin and directrix y = 2. Find the equation of the parabola whose focus is (5, 2) and vertex is at (3, 2). The cable of a uniformly loaded suspension bridge hangs in the form of a parabola. The roadway, which is horizontal and 100 m long, is supported by vertical wires attached to the cable, the longest wire being 30 m and the shortest wire being 6 m. Find the length of a supporting wire attached to the roadway 18 m from the middle. Find the equations of the lines joining the vertex of the parabola y² = 6x to the points on it which have abscissa 24. Find the coordinates of points on the parabola y² = 8x whose focal distance is 4. Find the length of the line segment joining the vertex of the parabola y² = 4ax and a point on the parabola where the line segment makes an angle θ with the x-axis. If the points (0, 4) and (0, 2) are respectively the vertex and focus of a parabola, then find the equation of the parabola. If the line y = mx + 1 is tangent to the parabola y² = 4x, then find the value of m.

Chapter 25: Parabola Exercise 25.20 solutions [Page 28]

Write the axis of symmetry of the parabola y² = x. Write the distance between the vertex and focus of the parabola y² + 6y + 2x + 5 = 0. Write the equation of the directrix of the parabola x² − 4x − 8y + 12 = 0. Write the equation of the parabola with focus (0, 0) and directrix x + y − 4 = 0. Write the length of the chord of the parabola y² = 4ax which passes through the vertex and is inclined to the axis at \[\frac{\pi}{4}\]. If b and c are the lengths of the segments of any focal chord of the parabola y² = 4ax, then write the length of its latus-rectum. PSQ is a focal chord of the parabola y² = 8x. If SP = 6, then write SQ.
Write the coordinates of the vertex of the parabola whose focus is at (−2, 1) and directrix is the line x + y − 3 = 0. If the coordinates of the vertex and focus of a parabola are (−1, 1) and (2, 3) respectively, then write the equation of its directrix. If the parabola y² = 4ax passes through the point (3, 2), then find the length of its latus rectum. Write the equation of the parabola whose vertex is at (−3, 0) and the directrix is x + 5 = 0.

Chapter 25: Parabola solutions [Pages 28 - 30]

The coordinates of the focus of the parabola y² − x − 2y + 2 = 0 are: (5/4, 1), (1/4, 0), (1, 1), none of these. The vertex of the parabola (y + a)² = 8a(x − a) is: (−a, −a), (a, −a), (−a, a), none of these. If the focus of a parabola is (−2, 1) and the directrix has the equation x + y = 3, then its vertex is: (0, 3), (−1, 1/2), (−1, 2), (2, −1). The equation of the parabola whose vertex is (a, 0) and the directrix has the equation x + y = 3a is: x² + y² + 2xy + 6ax + 10ay + 7a² = 0; x² − 2xy + y² + 6ax + 10ay − 7a² = 0; x² − 2xy + y² − 6ax + 10ay − 7a² = 0; none of these. The parametric equations of a parabola are x = t² + 1, y = 2t + 1. The cartesian equation of its directrix is: x = 0, x + 1 = 0, y = 0, none of these. If the coordinates of the vertex and the focus of a parabola are (−1, 1) and (2, 3) respectively, then the equation of its directrix is: 3x + 2y + 14 = 0, 3x + 2y − 25 = 0, 2x − 3y + 10 = 0, none of these.
The locus of the points of trisection of the double ordinates of a parabola is: a pair of lines, a circle, a parabola, a straight line. The equation of the directrix of the parabola whose vertex and focus are (1, 4) and (2, 6) respectively is: x + 2y = 4, x − y = 3, 2x + y = 5, x + 3y = 8. If V and S are respectively the vertex and focus of the parabola y² + 6y + 2x + 5 = 0, then SV =: 2, 1/2, 1, none of these. The directrix of the parabola x² − 4x − 8y + 12 = 0 is: y = 0, x = 1, y = −1, x = −1. The equation of the parabola with focus (0, 0) and directrix x + y = 4 is: x² + y² − 2xy + 8x + 8y − 16 = 0; x² + y² − 2xy + 8x + 8y = 0; x² + y² + 8x + 8y − 16 = 0; x² − y² + 8x + 8y − 16 = 0. The line 2x − y + 4 = 0 cuts the parabola y² = 8x in P and Q. The mid-point of PQ is: (1, 2), (1, −2), (−1, 2), (−1, −2). In the parabola y² = 4ax, the length of the chord passing through the vertex and inclined to the axis at π/4 is: 4√2 a, 2√2 a, √2 a, none of these. The equation 16x² + y² + 8xy − 74x − 78y + 212 = 0 represents: a circle, a parabola, an ellipse, a hyperbola. The length of the latus-rectum of the parabola y² + 8x − 2y + 17 = 0 is: 2, 4, 8, 16. The vertex of the parabola x² + 8x + 12y + 4 = 0 is: (−4, 1), (4, −1), (−4, −1), (4, 1). The vertex of the parabola (y − 2)² = 16(x − 1) is: (1, 2), (−1, 2), (1, −2), (2, 1). The length of the latus-rectum of the parabola 4y² + 2x − 20y + 17 = 0 is: 3, 6, 1/2, 9. The length of the latus-rectum of the parabola x² − 4x − 8y + 12 = 0 is: 4, 6, 8, 10. The focus of the parabola y = 2x² + x is: (0, 0), (1/2, 1/4), (−1/4, 0), (−1/4, 1/8). Which of the following points lie on the parabola x² = 4ay?
x = at², y = 2at; x = 2at, y = at²; x = 2at², y = at; x = 2at, y = at². The equation of the parabola whose focus is (1, −1) and the directrix is x + y + 7 = 0 is: x² + y² − 2xy − 18x − 10y = 0; x² − 18x − 10y − 45 = 0; x² + y² − 18x − 10y − 45 = 0; x² + y² − 2xy − 18x − 10y − 45 = 0.

Chapter 25: Parabola (RD Sharma Mathematics Class 11)

RD Sharma solutions for Class 11 Maths chapter 25 (Parabola) include all questions with solutions and detailed explanations. This will clear students' doubts and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear up any confusion. Shaalaa.com presents the CBSE Mathematics Class 11 solutions in a manner that helps students grasp basic concepts better and faster. Further, we at Shaalaa.com are providing such solutions so that students can prepare for written exams. RD Sharma textbook solutions can be a core help for self-study and act as perfect self-help guidance for students. Concepts covered in Class 11 Mathematics chapter 25 Parabola are Sections of a Cone, Concept of Circle, Introduction of Parabola, Standard Equations of Parabola, Latus Rectum, Introduction of Ellipse, Relationship Between Semi-major Axis, Semi-minor Axis and the Distance of the Focus from the Centre of the Ellipse, Special Cases of an Ellipse, Standard Equations of an Ellipse, Introduction of Hyperbola, Eccentricity, Standard Equation of Hyperbola, and Standard Equation of a Circle. Using RD Sharma Class 11 solutions for the Parabola exercise is an easy way for students to prepare for the exams, as the solutions are arranged chapter-wise and page-wise. The questions involved in RD Sharma Solutions are important questions that can be asked in the final exam.
Most students of CBSE Class 11 prefer RD Sharma Textbook Solutions to score more in exams. Get the free view of chapter 25 Parabola Class 11 extra questions for Maths, and use Shaalaa.com to keep it handy for your exam preparation
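As a worked illustration of one exercise above, the suspension-bridge problem can be solved numerically by placing the origin at the lowest point of the cable (a sketch; modelling the cable as x² = 4ay is the standard setup):

```python
# The cable is x^2 = 4a*y with its lowest point at the centre of the roadway:
# the shortest wire (6 m) hangs there, the longest (30 m) at the ends of the
# 100 m roadway, so the cable rises 24 m over a half-span of 50 m.
half_span = 50.0
rise = 30.0 - 6.0
four_a = half_span ** 2 / rise        # from 50^2 = 4a * 24

x = 18.0                              # wire attached 18 m from the middle
wire = 6.0 + x ** 2 / four_a          # shortest wire + parabolic rise
assert abs(wire - 9.11) < 0.01        # about 9.11 m
```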
Standard deviation and standard error are frequently confused, but they measure different things.

The standard deviation (SD) describes the dispersion of individual observations around their mean, and it is a valid measure of variability regardless of the distribution of the data. The standard error of the mean (SEM) describes the precision of the sample mean as an estimate of the population mean: it is the standard deviation of the sampling distribution of the mean. For a sample of size n with sample standard deviation s,

SEM = s / √n,

so converting between the two is simply a matter of multiplying or dividing by √n. For example, for a sample of n = 16 runners with s = 10.23 years, the standard error of the mean age is 10.23 / √16 ≈ 2.56 years. As the sample size increases, the standard error shrinks, while the sample standard deviation tends toward the population value; this reduction of sampling variation with n is the idea underlying sample size calculations for controlled trials. (For a finite population there is also a finite population correction, under which the standard error becomes zero when the sample size n equals the population size N.)

Confidence intervals for the mean are built from the standard error, which is why they are primarily of use when the sampling distribution is normally, or approximately normally, distributed. A 95% confidence interval spans roughly ±1.96 standard errors, so a standard error can be recovered from a reported 95% interval by dividing its width by 3.92; for moderate sample sizes a value from the t distribution should be used instead of 1.96.

The practical rule: if you are interested in the variability of the data, report the standard deviation; if you are interested in the precision of the mean, or in comparing and testing differences between means, the standard error (or a confidence interval) is the appropriate metric.
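In code, the conversion each way is a one-liner (a minimal sketch; the function names are mine, chosen for illustration):

```python
import math

def se_from_sd(sd, n):
    """Standard error of the mean from the sample standard deviation."""
    return sd / math.sqrt(n)

def sd_from_se(se, n):
    """Invert the relation: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

# The runners example: n = 16, sample SD = 10.23
print(round(se_from_sd(10.23, 16), 2))
```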
Random Graph Models
This tutorial will introduce the following random graph models:
Erdos-Renyi (ER)
Degree-corrected Erdos-Renyi (DCER)
Stochastic block model (SBM)
Degree-corrected stochastic block model (DCSBM)
Random dot product graph (RDPG)
Load some data from GraSPy
For this example we will use the Drosophila melanogaster larva right mushroom body connectome from Eichler et al. 2017. Here we will consider a binarized and directed version of the graph.
[1]:
import numpy as np
from graspy.datasets import load_drosophila_right
from graspy.plot import heatmap
from graspy.utils import binarize, symmetrize
%matplotlib inline

adj, labels = load_drosophila_right(return_labels=True)
adj = binarize(adj)
heatmap(adj, inner_hier_labels=labels, title='Drosophila right MB',
        font_scale=1.5, sort_nodes=True);
Preliminaries
\(n\) - the number of nodes in the graph
\(A\) - \(n \times n\) adjacency matrix
\(P\) - \(n \times n\) matrix of probabilities
For the class of models we will consider here, a graph (adjacency matrix) \(A\) is sampled by drawing each edge independently:
\(A_{ij} \sim \text{Bernoulli}(P_{ij})\)
While each model we will discuss follows this formulation, they differ in how the matrix \(P\) is constructed. So, for each model, we will consider how to model \(P_{ij}\), the probability of connection between any nodes \(i\) and \(j\), with \(i \neq j\) in this case (i.e. no “loops” are allowed for the sake of this tutorial).
For each graph model we will show:
how the model is formulated
how to fit the model using GraSPy
the P matrix that was fit by the model
a single sample from the fit model
Erdos-Renyi (ER)
The Erdos-Renyi (ER) model is the simplest random graph model one could write down. We are interested in modeling the probability of an edge existing between any two nodes, \(i\) and \(j\). We denote this probability \(P_{ij}\). For the ER model,
\(P_{ij} = p\)
for any combination of \(i\) and \(j\). This means that the one parameter \(p\) is the overall probability of connection for any two nodes.
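Before fitting anything, the generic sampling step described above can be sketched in a few lines of numpy. This is an illustration only, not part of GraSPy; `sample_graph` is a made-up helper that draws each edge as an independent Bernoulli trial:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(P, directed=True):
    """Sample an adjacency matrix A with A_ij ~ Bernoulli(P_ij), no loops."""
    n = P.shape[0]
    A = (rng.uniform(size=(n, n)) < P).astype(int)
    np.fill_diagonal(A, 0)      # no self-loops
    if not directed:
        A = np.triu(A, 1)
        A = A + A.T             # mirror the upper triangle
    return A

P = np.full((5, 5), 0.3)        # an ER-style probability matrix
A = sample_graph(P)
```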
[2]:
from graspy.models import EREstimator

er = EREstimator(directed=True, loops=False)
er.fit(adj)
print(f"ER \"p\" parameter: {er.p_}")
heatmap(er.p_mat_, inner_hier_labels=labels, font_scale=1.5,
        title="ER probability matrix", vmin=0, vmax=1, sort_nodes=True)
heatmap(er.sample()[0], inner_hier_labels=labels, font_scale=1.5,
        title="ER sample", sort_nodes=True);
ER "p" parameter: 0.1661046088739007
Degree-corrected Erdos-Renyi (DCER)
A slightly more complicated variant of the ER model is the degree-corrected Erdos-Renyi model (DCER). Here, there is still a global parameter \(p\) to specify the relative connection probability between all edges. However, we add a promiscuity parameter \(\theta_i\) for each node \(i\) which specifies its expected degree relative to other nodes:
\(P_{ij} = \theta_i \theta_j p\)
so the probability of an edge from \(i\) to \(j\) is a function of the two nodes’ degree-correction parameters, and the overall probability of an edge in the graph.
[3]:
from graspy.models import DCEREstimator

dcer = DCEREstimator(directed=True, loops=False)
dcer.fit(adj)
print(f"DCER \"p\" parameter: {dcer.p_}")
heatmap(dcer.p_mat_, inner_hier_labels=labels, vmin=0, vmax=1, font_scale=1.5,
        title="DCER probability matrix", sort_nodes=True);
heatmap(dcer.sample()[0], inner_hier_labels=labels, font_scale=1.5,
        title="DCER sample", sort_nodes=True);
DCER "p" parameter: 7536.0
Stochastic block model (SBM)
Under the stochastic block model (SBM), each node is modeled as belonging to a block (sometimes called a community or group). The probability of node \(i\) connecting to node \(j\) is simply a function of the block membership of the two nodes. Let \(n\) be the number of nodes in the graph; then \(\tau\) is a length \(n\) vector which indicates the block membership of each node in the graph. Let \(K\) be the number of blocks; then \(B\) is a \(K \times K\) matrix of block-block connection probabilities, and
\(P_{ij} = B_{\tau_i \tau_j}\)
[4]:
from graspy.models import SBMEstimator

sbme = SBMEstimator(directed=True, loops=False)
sbme.fit(adj, y=labels)
print("SBM \"B\" matrix:")
print(sbme.block_p_)
heatmap(sbme.p_mat_, inner_hier_labels=labels, vmin=0, vmax=1, font_scale=1.5,
        title="SBM probability matrix", sort_nodes=True)
heatmap(sbme.sample()[0], inner_hier_labels=labels, font_scale=1.5,
        title="SBM sample", sort_nodes=True);
SBM "B" matrix:
[[0.         0.38333333 0.11986864 0.        ]
 [0.44571429 0.3584     0.49448276 0.        ]
 [0.09359606 0.         0.20095125 0.        ]
 [0.         0.07587302 0.         0.        ]]
Degree-corrected stochastic block model (DCSBM)
Just as we could add a degree-correction term to the ER model, so too can we modify the stochastic block model to allow for heterogeneous expected degrees. Again, we let \(\theta\) be a length \(n\) vector of degree correction parameters, and all other parameters remain as they were defined above for the SBM:
\(P_{ij} = \theta_i \theta_j B_{\tau_i \tau_j}\)
Note that the matrix \(B\) may no longer represent true probabilities, because the addition of the \(\theta\) vectors introduces a multiplicative constant that can be absorbed into the elements of \(\theta\).
[5]:
from graspy.models import DCSBMEstimator

dcsbme = DCSBMEstimator(directed=True, loops=False)
dcsbme.fit(adj, y=labels)
print("DCSBM \"B\" matrix:")
print(dcsbme.block_p_)
heatmap(dcsbme.p_mat_, inner_hier_labels=labels, font_scale=1.5,
        title="DCSBM probability matrix", vmin=0, vmax=1, sort_nodes=True)
heatmap(dcsbme.sample()[0], inner_hier_labels=labels,
        title="DCSBM sample", font_scale=1.5, sort_nodes=True);
DCSBM "B" matrix:
[[   0.  805.   73.    0.]
 [ 936. 3584. 1434.    0.]
 [  57.    0.  169.    0.]
 [   0.  478.    0.    0.]]
Random dot product graph (RDPG)
Under the random dot product graph model, each node is assumed to have a “latent position” in some \(d\)-dimensional Euclidean space. This vector dictates that node’s probability of connection to other nodes. For a given pair of nodes \(i\) and \(j\), the probability of connection is the dot product between their latent positions.
If \(x_i\) and \(x_j\) are the latent positions of nodes \(i\) and \(j\), respectively, then
\(P_{ij} = \langle x_i, x_j \rangle = x_i^T x_j\)
[6]:
from graspy.models import RDPGEstimator

rdpge = RDPGEstimator(loops=False)
rdpge.fit(adj, y=labels)
heatmap(rdpge.p_mat_, inner_hier_labels=labels, vmin=0, vmax=1, font_scale=1.5,
        title="RDPG probability matrix", sort_nodes=True)
heatmap(rdpge.sample()[0], inner_hier_labels=labels, font_scale=1.5,
        title="RDPG sample", sort_nodes=True);
Create a figure combining all of the models
[7]:
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import cm

fig, axs = plt.subplots(2, 3, figsize=(20, 16))

# colormapping
cmap = cm.get_cmap("RdBu_r")
center = 0
vmin = 0
vmax = 1
norm = mpl.colors.Normalize(0, 1)
cc = np.linspace(0.5, 1, 256)
cmap = mpl.colors.ListedColormap(cmap(cc))

# heatmapping
heatmap_kws = dict(
    inner_hier_labels=labels,
    vmin=0,
    vmax=1,
    cbar=False,
    cmap=cmap,
    center=None,
    hier_label_fontsize=20,
    title_pad=45,
    font_scale=1.6,
    sort_nodes=True,
)

models = [rdpge, dcsbme, dcer, sbme, er]
model_names = ["RDPG", "DCSBM", "DCER", "SBM", "ER"]

heatmap(adj, ax=axs[0][0], title="IER", **heatmap_kws)
heatmap(models[0].p_mat_, ax=axs[0][1], title=model_names[0], **heatmap_kws)
heatmap(models[1].p_mat_, ax=axs[0][2], title=model_names[1], **heatmap_kws)
heatmap(models[2].p_mat_, ax=axs[1][0], title=model_names[2], **heatmap_kws)
heatmap(models[3].p_mat_, ax=axs[1][1], title=model_names[3], **heatmap_kws)
heatmap(models[4].p_mat_, ax=axs[1][2], title=model_names[4], **heatmap_kws)

# add colorbar
sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
sm.set_array(dcsbme.p_mat_)
fig.colorbar(sm, ax=axs, orientation="horizontal", pad=0.04, shrink=0.8,
             fraction=0.08, drawedges=False)
[7]:
<matplotlib.colorbar.Colorbar at 0x14ad742fcf98>
2.1 The terminal velocity is the maximum (constant) velocity a dropping object reaches. In this problem, we use Equation (2.2.6) for the drag force.
(a) Use dimensional analysis to relate the terminal velocity of a falling object to the various relevant parameters.
(b) Estimate the terminal velocity of a paraglider (Figure 2.3.1c).
(c) Use the concept of terminal velocity to predict whether a mouse (without a parachute) is likely to survive a fall from a high tower.
2.2 When you cook rice, some of the dry grains always stick to the measuring cup. A common way to get them out is to turn the measuring cup upside-down and hit the bottom (now on top) with your hand so that the grains come off [32].
(a) Explain why static friction is irrelevant here.
(b) Explain why gravity is negligible.
(c) Explain why hitting the cup works, and why its success depends on hitting the cup hard enough.
2.3 A ball is thrown at speed v from zero height on level ground. We want to find the angle \(\theta\) at which it should be thrown so that the area under the trajectory is maximized.
(a) Sketch the trajectory of the ball.
(b) Use dimensional analysis to relate the area to the initial speed v and the gravitational acceleration g.
(c) Write down the x and y coordinates of the ball as a function of time.
(d) Find the total time the ball is in the air.
(e) The area under the trajectory is given by \(A = \int y \,\mathrm{d}x\). Make a variable transformation to express this integral as an integration over time.
(f) Evaluate the integral. Your answer should be a function of the initial speed v and angle \(\theta\).
(g) From your answer at (f), find the angle that maximizes the area, and the value of that maximum area.
(h) Check that your answer is consistent with your answer at (b).
2.4 If a mass m is attached to a given spring, its period of oscillation is T. If two such springs are connected end to end, and the same mass m is attached, find the new period \(T^\prime\) in terms of the old period T.
2.5 Two blocks, of mass m and 2m, are connected by a massless string and slide down an inclined plane at angle \(\theta\). The coefficient of kinetic friction between the lighter block and the plane is \(\mu\), and that between the heavier block and the plane is 2\(\mu\). The lighter block leads.
(a) Find the magnitude of the acceleration of the blocks.
(b) Find the tension in the taut string.
2.6 A 1000 kg boat is traveling at \(100\ \mathrm{km/h}\) when its engine is shut off. The magnitude \(F_d\) of the drag force between the boat and the water is proportional to the speed v of the boat, with drag coefficient \(\zeta = 70\ \mathrm{N \cdot s/m}\). Find the time it takes the boat to slow to \(45\ \mathrm{km/h}\).
2.7 Two particles on a line are mutually attracted by a force F = -ar, where a is a constant and r the distance of separation. At time t = 0, particle A of mass m is located at the origin, and particle B of mass \(m \over 4\) is located at r = 5.0 cm.
(a) If the particles are at rest at t = 0, at what value of r do they collide?
(b) What is the relative velocity of the two particles at the moment the collision occurs?
2.8 In drag racing, specially designed cars maximize the friction with the road to achieve maximum acceleration. Consider a drag racer (or ‘dragster’) as shown in Figure 2.E.1, for which the center of mass is close to the rear wheels.
(a) Draw a free-body diagram of the dragster in side view. Draw the wheels as circles, and approximate the shape of the dragster body as a triangle with a horizontal line between the wheels, a vertical line going up from the rear axis, and a diagonal line connecting the top to the front wheels. NB: consider carefully the direction of the friction force!
(b) On which of the wheels is the frictional force the largest?
(c) The frictional force is maximized if the wheels just don’t slip (because, as usual, the coefficient of kinetic friction is smaller than that of static friction). Find the maximal possible frictional force on the rear wheels.
(d) Find the maximal possible acceleration of the dragster.
(e) For a coefficient of (static) friction of 1.0 (a fairly realistic value for rubber and concrete) and a track of 500 m, find the maximal velocity a drag racer can achieve at the end of the track when starting from rest.
2.9 Blocks A, B and C are placed as shown in the figure, and connected by ropes of negligible mass. Both A and B weigh 20.0 N each, and the coefficient of kinetic friction between each block and the surface is 0.3. The slope’s angle \(\theta\) equals \(42.0^\circ\). The disks in the pulleys are of negligible mass. After the blocks are released, block C descends with constant velocity.
(a) Find the tension in the rope connecting blocks A and B.
(b) What is the weight of block C?
(c) If the rope connecting blocks A and B were cut, what would be the acceleration of C?
2.10 The figure below shows a common present-day seesaw design. In addition to a beam with two seats, this seesaw also contains two identical springs that connect the beam to the ground. The distance between the pivot and each of the springs is 30.0 cm, the distance between the pivot and each of the seats is 1.50 m. A 4-year-old weighing 20.0 kg sits on one of the seats, causing it to drop by 20.0 cm.
(a) Draw a free-body diagram of the seesaw with the child, in which you include all relevant forces (to scale).
(b) Use your diagram and the provided data to calculate the spring constant of the two springs present in the seesaw.
2.11 Two marbles of identical mass m and radius r are dropped in a cylindrical container with radius 3r, as shown in the figure. Find the force exerted by the marbles on points A, B and C, and the force the marbles exert on each other.
2.13 Objects with densities less than that of water float, and even objects that have higher densities are ‘lighter’ in the water.
The force that’s responsible for this is known as the buoyancy force, which is equal but opposite to the gravitational force on the displaced water: \(F_{\text{buoyancy}} = \rho_w g V_w\), where \(\rho_w\) is the water’s density and \(V_w\) the displaced volume. In parts (a) and (b), we consider a block of wood with density \(\rho \lt \rho_w\) which is floating in water.
(a) Which fraction of the block of wood is submerged when floating?
(b) You push down the block somewhat more by hand, then let go. The block then oscillates on the surface of the water. Explain why, and calculate the frequency of the oscillation.
(c) You take out the piece of wood, and now float a piece of ice in a bucket of water. On top of the ice, you place a small stone. When everything has stopped moving, you mark the water level. Then you wait till the ice has melted, and the stone has dropped to the bottom of the bucket. What has happened to the water level? Explain your answer (you can do so either qualitatively through an argument or quantitatively through a calculation).
(d) Rubber ducks also float, but, despite the fact that they have a flat bottom, they usually do not stay upright in water. Explain why.
(e) You drop a 5.0 kg ball with a radius of 10 cm and a drag coefficient \(c_d\) of 0.20 in water (viscosity \(1.002\ \mathrm{mPa \cdot s}\)). This ball has a density higher than that of water, so it sinks. After a while, it reaches a constant velocity, known as its terminal velocity. What is the value of this terminal velocity?
(f) When the ball in (e) has reached terminal velocity, what is the value of its Reynolds number (see Problem 1.3)?
2.14 A uniform stick of mass M and length L = 1.00 m has a weight of mass m hanging from one end. The stick and the weight hang in balance on a force scale at a point x = 20.0 cm from the end of the stick. The measured force equals 3.00 N. Find both the mass M of the stick and the mass m of the weight.
2.15 A uniform rod with a length of 4.25 m and a mass of 47.0 kg is attached to a wall with a hinge at one end. The rod is held in a horizontal position by a wire attached to its other end. The wire makes an angle of \(30.0^\circ\) with the horizontal and is bolted to the wall directly above the hinge. If the wire can support a maximum tension of 1250 N before breaking, how far from the wall can a 75.0 kg person sit without breaking the wire?
2.16 A wooden bar of uniform density but varying thickness hangs suspended on two strings of negligible mass. The strings make angles \(\theta_1\) and \(\theta_2\) with the horizontal, as shown. The bar has total mass m and length L. Find the distance x between the center of mass of the bar and its (thickest) end.
2.17 A bicycle wheel of radius R and mass M is at rest against a step of height \(3R \over 4\), as shown in the figure. Find the minimum horizontal force F that must be applied to the axle to make the wheel start to rise over the step.
2.18 A block of mass M is pressed against a vertical wall, with a force F applied at an angle \(\theta\) with respect to the horizontal (\(-{\pi \over 2} \lt \theta \lt {\pi \over 2}\)), as shown in the figure. The friction coefficient of the block and the wall is \(\mu\). We start with the case \(\theta = 0\), i.e., the force is perpendicular to the wall.
(a) Draw a free-body diagram showing all forces.
(b) If the block is to remain stationary, the net force on it should be zero. Write down the equations for force balance (i.e., the sum of all forces is zero, or forces in one direction equal the forces in the opposite direction) for the x and y directions.
(c) From the two equations you found in (b), solve for the force F needed to keep the block in place.
(d) Now repeat the steps you took in (a)-(c) for a force under a given angle \(\theta\), and find the required force F.
(e) For what angle \(\theta\) is this minimum force F the smallest? What is the corresponding minimum value of F?
(f) What is the limiting value of \(\theta\), below which it is not possible to keep the block up (independent of the magnitude of the force)?
2.19 A spherical stone of mass m = 0.250 kg and radius R = 5.0 cm is launched vertically from ground level with an initial speed of \(v_0 = 15.0\ \mathrm{m/s}\). As it moves upwards, it experiences drag from the air as approximated by Stokes drag, \(F = 6 \pi \eta R v\), where the viscosity \(\eta\) of air is \(1.002\ \mathrm{mPa \cdot s}\).
(a) Which forces are acting on the stone while it moves upward?
(b) Using Newton’s second law of motion, write down an equation of motion for the stone (this is a differential equation). Be careful with the signs. Hint: Newton’s second law of motion relates force and acceleration, and the drag force is in terms of the velocity. What is the relation between the two? Simplify the equation by introducing the characteristic time \(\tau = {m \over 6 \pi \eta R}\).
(c) Find a particular solution \(v_p(t)\) of your inhomogeneous differential equation from (19b).
(d) Find the solution \(v_h(t)\) of the homogeneous version of your differential equation.
(e) Use the results from (19c) and (19d) and the initial condition to find the general solution v(t) of your differential equation.
(f) From (19e), find the time at which the stone reaches its maximum height.
(g) From v(t), find h(t) for the stone (height as a function of time).
(h) Using your answers to (19f) and (19g), find the maximum height the stone reaches.
This is a very smart question! Yes, reversibility does not have any intrinsic reference to time, but no, reversible processes are slow in practice. Let's talk about why. We'll start with the classic setup: a gas in a cylinder, with a piston held down by a pile of pebbles. What realistically happens if you pull the pebbles away too fast? Well, it's like making the pebble suddenly vanish: afterwards, the piston jumps up a tiny bit. When this happens, the piston immediately starts vibrating like a damped harmonic oscillator, and eventually the damping provided by the air and the walls of the piston leads to an energy transfer which increases entropy overall. To minimize the vibration, you have to extract all of that would-be vibrational energy as work when you remove the pebble; this requires the forces to be applied slowly, so that the piston is moving arbitrarily slowly when the pebble loses contact and is left at rest at its new equilibrium point. In fact it should be moving arbitrarily slowly throughout the process, to minimize friction losses as well. Now what's the general principle here? The general principle is that we're reducing the pressure pushing down on the piston, $P_0 \mapsto P_1$, and the resulting pressure difference $P_1 - P_0$ between the applied load and the gas causes the piston to move and the gas to change volume with some $dV/dt.$ The gas loses energy at some rate $-P_0~dV/dt$ but we're harvesting energy at some rate $P_1~dV/dt$, so to recoup as much energy as possible we want $P_0 = P_1,$ or $P_1 - P_0 \to 0.$ That's what we want for reversibility. Well, if the pressure difference is really driving the volume change, then we'd expect something like $\frac{dV}{dt} = -\alpha~(P_1 - P_0),$ so reducing this difference means changing the volume over a very long time interval as $P_1 - P_0 \to 0.$ But the great thing about this is that it is such a generic argument. You might know that isothermal processes are reversible. What does this really mean?
It means that you touch two objects at the same temperature together, and let them exchange their thermal energy. "B.S.", you should be calling: "if they have the same temperature they shouldn't be able to trade internal energy." Very true. The above argument tells us that what we actually mean is the limit of a process where the two objects have similar, but slightly different, temperatures, so that they trade energy very, very slowly. It has to be slow because $\frac{dE}{dt} \propto T_1 - T_0.$
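To see the generic argument numerically, here's a toy calculation of my own (an ideal gas with $nRT = 1$, expanding isothermally from volume 1 to 2): if we remove the "pebbles" in $N$ discrete steps, letting the external pressure match the gas only at the end of each step, the harvested work approaches the reversible limit $\ln 2$ only as $N \to \infty$.

```python
import math

def extracted_work(n_steps, v0=1.0, v1=2.0):
    """Work harvested when the external pressure is dropped in n_steps
    equal-volume increments; ideal gas, isothermal, nRT = 1."""
    dv = (v1 - v0) / n_steps
    work = 0.0
    for k in range(1, n_steps + 1):
        v = v0 + k * dv
        work += (1.0 / v) * dv  # external pressure = gas pressure at step's end
    return work

w_rev = math.log(2)             # reversible (quasi-static) limit
print(extracted_work(1), extracted_work(100), w_rev)
```

With one step we get only 0.5; with a hundred steps we are within about half a percent of $\ln 2 \approx 0.693$, and the deficit (the dissipated work) scales like $1/N$.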
For a school project, I'm working on a software-defined radio transmitter intended for the HF amateur radio bands. I'm planning to support SSB transmission with the formula $$f(t) = m(t) \cos(2\pi f_\text{carrier}t) \pm \hat m(t)\sin(2\pi f_\text {carrier}t)$$ where $f_\text{carrier}$ is the IF carrier and $\hat m$ is the Hilbert transform of $m(t)$. Of course, I will use a discrete form of this equation for my implementation, which means I need to calculate the discrete Hilbert transform of $m(t)$. I've looked around online but can't seem to find a decent C/C++ implementation of the discrete Hilbert transform that would be suitable for my purposes. However, there are plenty of libraries to calculate FFTs. From my understanding, a discrete Hilbert transform can be calculated by taking the FFT of the signal and multiplying by j to achieve the 90° shift. It suffers from Gibbs' phenomenon, it seems, and might need a wide bandpass filter. Can anyone tell me if my understanding is correct (or of a good discrete Hilbert transform function)?
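For concreteness, here is a sketch of what I have in mind, in plain numpy (I believe this is the same analytic-signal recipe that scipy.signal.hilbert implements, but I may be wrong about the details):

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform of a real signal via the FFT.

    Keep DC (and Nyquist) as-is, double the positive frequencies,
    zero the negative ones; the inverse FFT then gives the analytic
    signal x(t) + j*H{x}(t), whose imaginary part is the transform.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0                      # DC stays
    if N % 2 == 0:
        h[N // 2] = 1.0             # Nyquist stays
        h[1:N // 2] = 2.0           # positive frequencies doubled
    else:
        h[1:(N + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    return analytic.imag
```

As a sanity check, for x[n] = cos(2π f n/N) with integer f this returns sin(2π f n/N), i.e. the expected 90° phase shift.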
Theorem. $\int_0^\infty \sin x \phantom. dx/x = \pi/2$. Poof. For $x>0$ write $1/x = \int_0^\infty e^{-xt} \phantom. dt$,and deduce that $\int_0^\infty \sin x \phantom. dx/x$ is$$\int_0^\infty \sin x \int_0^\infty e^{-xt} \phantom. dt \phantom. dx= \int_0^\infty \left( \int_0^\infty e^{-tx} \sin x \phantom. dx \right)\phantom. dt= \int_0^\infty \frac{dt}{t^2+1},$$which is the arctangent integral for $\pi/2$, QED. The theorem is correct, and usually obtained as an application ofcontour integration, or of Fourier inversion ($\sin x / x$ is a multiple ofthe Fourier transform of the characteristic function of an interval).The poof, which is the first one I saw(given in a footnote in an introductory textbook on quantum physics),is not correct, because the integral does not converge absolutely.One can rescue it by writing $\int_0^M \sin x \phantom. dx/x$as a double integral in the same way, obtaining$$\int_0^M \sin x \frac{dx}{x} =\int_0^\infty \frac{dt}{t^2+1}- \int_0^\infty e^{-Mt} (\cos M + t \cdot \sin M) \frac{dt}{t^2+1}$$and showing that the second integral approaches $0$ as $M \rightarrow \infty$;but this detour makes for a much less appealing alternative to the usualproof by complex or Fourier analysis. Still the double-integral trick can be used legitimately to evaluate$\int_0^\infty \sin^m x \phantom. dx/x^n$ for integers $m,n$ such thatthe integral converges absolutely (that is, with $2 \leq n \leq m$;NB unlike the contour or Fourier approach this technique appliesalso when $m \not\equiv n \bmod 2$).Write $(n-1)!/x^n = \int_0^\infty t^{n-1} e^{-xt} \phantom. dt$ to obtain$$\int_0^\infty \sin^m x \frac{dx}{x^n} = \frac1{(n-1)!} \int_0^\infty t^{n-1} \left( \int_0^\infty e^{-tx} \sin^m x \phantom. dx \right)\phantom. 
dt,$$in which the inner integral is a rational function of $t$,and then the integral with respect to $t$ is elementary.For example, when $m=n=2$ we find$$\int_0^\infty \sin^2 x \frac{dx}{x^2}= \int_0^\infty t \frac2{t^3+4t} dt= 2 \int_0^\infty \frac{dt}{t^2+4} = \frac\pi2.$$As a bonus, we recover a correct proof of our starting theorem byintegration by parts: $$\frac\pi2 = \int_0^\infty \sin^2 x \frac{dx}{x^2} = \int_0^\infty \sin^2 x \phantom. d(-1/x) = \int_0^\infty \frac1x d(\sin^2 x) = \int_0^\infty 2 \sin x \cos x \frac{dx}{x};$$since $2 \sin x \cos x = \sin 2x$, the desired$\int_0^\infty \sin x \phantom. dx/x = \pi/2$follows by a linear change of variable. Exercise Use this technique to prove that$\int_0^\infty \sin^3 x \phantom. dx/x^2 = \frac34 \log 3$,and more generally$$\int_0^\infty \sin^3 x \frac{dx}{x^\nu} = \frac{3-3^{\nu-1}}{4} \cos \frac{\nu\pi}{2} \Gamma(1-\nu)$$when the integral converges. [Both are in Gradshteyn and Ryzhik,page 449, formula 3.827; the $\nu=2$ case is 3.827#3, credited toD. Bierens de Haan, Nouvelles tables d'intégrales définies,Amsterdam 1867; the general case is 3.827#1, from Gröbner andHofreiter's Integraltafel II, Springer: Vienna and Innsbruck 1958.]
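For the skeptical reader, both closed forms survive a crude numerical check (trapezoidal sums truncated at finite $M$, so only a few digits of agreement are expected):

```python
import numpy as np

def integral(f, a=1e-9, b=5000.0, n=2_000_001):
    """Trapezoidal rule on [a, b]; a tiny positive a avoids 0/0 at the origin."""
    x = np.linspace(a, b, n)
    y = f(x)
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

i2 = integral(lambda x: np.sin(x)**2 / x**2)   # should be close to pi/2
i3 = integral(lambda x: np.sin(x)**3 / x**2)   # should be close to (3/4) log 3
print(i2, np.pi / 2)
print(i3, 0.75 * np.log(3))
```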
In the late 17th century, the British scientist Isaac Newton studied the cooling of bodies. His experiments showed that the cooling rate is approximately proportional to the difference between the temperature of the heated body and that of the environment. This fact can be written as the differential relationship:
\[\frac{{dQ}}{{dt}} = \alpha A\left( {{T_S} - T} \right),\]
where \(Q\) is the heat, \(A\) is the surface area of the body through which the heat is transferred, \(T\) is the temperature of the body, \({{T_S}}\) is the temperature of the surrounding environment, and \(\alpha\) is the heat transfer coefficient, which depends on the geometry of the body, the state of the surface, the heat transfer mode, and other factors. As \(Q = CT,\) where \(C\) is the heat capacity of the body, we can write:
\[{\frac{{dT}}{{dt}} = \frac{{\alpha A}}{C}\left( {{T_S} - T} \right) }={ k\left( {{T_S} - T} \right).}\]
This differential equation has the solution:
\[{T\left( t \right) = {T_S} }+{ \left( {{T_0} - {T_S}} \right){e^{ - kt}},}\]
where \({T_0}\) denotes the initial temperature of the body. Thus, while cooling, the temperature of any body approaches the temperature of the surrounding environment exponentially. The cooling rate depends on the parameter \(k = {\large\frac{{\alpha A}}{C}\normalsize}.\) As the parameter \(k\) increases (for example, due to a larger surface area), the cooling occurs faster (see Figure \(1\)).
Solved Problems
Click a problem to see the solution.
Example 1
The temperature of a body dropped from \(200^\circ\) to \(100^\circ\) in the first hour.
Determine how many degrees the body will cool in one more hour if the environment temperature is \(0^\circ.\)
Example 2
A body at the initial temperature \({T_0}\) is put in a room at the temperature \({T_{S0}}.\) The body cools according to Newton’s law with the constant rate \(k.\) The temperature of the room slowly increases by the linear law:
Example 1.
The temperature of a body dropped from \(200^\circ\) to \(100^\circ\) in the first hour. Determine how many degrees the body will cool in one more hour if the environment temperature is \(0^\circ.\)
Solution.
First, we solve this problem for an arbitrary environment temperature and then determine the final body’s temperature when the surrounding environment temperature is \(0^\circ.\) Let the initial temperature of the heated body be \({T_0} = 200^\circ.\) The subsequent temperature dynamics is described by the formula:
\[ {T\left( t \right) }={ {T_S} + \left( {{T_0} - {T_S}} \right){e^{ - kt}} } = {{T_S} + \left( {200^\circ - {T_S}} \right){e^{ - kt}}.} \]
At the end of the first hour the body has cooled to \(100^\circ.\) Therefore, we can write the following relationship:
\[ {T\left( {t = 1} \right) = 100^\circ }={ {T_S} + \left( {200^\circ - {T_S}} \right){e^{ - k \cdot 1}},\;\;}\Rightarrow {{100^\circ = {T_S} }+{ \left( {200^\circ - {T_S}} \right){e^{ - k}}.}} \]
After the second hour the body’s temperature becomes equal to \(X\) degrees:
\[{X = {T_S} }+{ \left( {200^\circ - {T_S}} \right){e^{ - 2k}}.}\]
Thus, we obtain a system of two equations with three unknowns \({T_S},\) \(k\) and \(X:\)
\[\left\{ \begin{array}{l} 100 = {T_S} + \left( {200 - {T_S}} \right){e^{ - k}}\\ X = {T_S} + \left( {200 - {T_S}} \right){e^{ - 2k}} \end{array} \right..\]
We cannot uniquely determine the body’s temperature \(X\) after the second hour from this system.
However, we can derive the dependence of \(X\) on the environment temperature \({T_S}.\) Express \({e^{-k}}\) from the first equation: \[{e^{-k}} = \frac{100 - {T_S}}{200 - {T_S}}.\] Hence, \[{e^{-2k}} = {\left( {e^{-k}} \right)^2} = {\left( \frac{100 - {T_S}}{200 - {T_S}} \right)^2}.\] Then the dependence \(X\left( {T_S} \right)\) has the form: \[X\left( {T_S} \right) = {T_S} + \left( {200 - {T_S}} \right){\left( \frac{100 - {T_S}}{200 - {T_S}} \right)^2} = {T_S} + \frac{{\left( {100 - {T_S}} \right)}^2}{200 - {T_S}}.\] If, for example, the surrounding environment temperature is zero degrees, the body's temperature \(X\) after \(2\) hours will be \[X\left( {T_S} = 0 \right) = 0 + \frac{{\left( {100 - 0} \right)}^2}{200 - 0} = \frac{10000}{200} = 50^\circ.\] In the given example the dependence of \(X\) on \({T_S}\) is shown in Figure \(2.\)
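The elimination of \(k\) in the worked example above is easy to check numerically. The sketch below is illustrative; the function names are mine, not from the article:

```python
import math

def temperature(t, T0, TS, k):
    """Newton's law of cooling: T(t) = TS + (T0 - TS) * exp(-k * t)."""
    return TS + (T0 - TS) * math.exp(-k * t)

def temp_after_two_hours(T0, T1, TS):
    """Given T(0) = T0 and T(1) = T1, eliminate k and return T(2)."""
    # e^{-k} = (T1 - TS) / (T0 - TS), so T(2) = TS + (T0 - TS) * e^{-2k}
    ek = (T1 - TS) / (T0 - TS)
    return TS + (T0 - TS) * ek ** 2

print(temp_after_two_hours(200.0, 100.0, 0.0))  # 50.0
```

Solving \(e^{-k} = 1/2\) for \(k\) and evaluating `temperature(2, 200, 0, k)` gives the same \(50^\circ\), confirming the elimination.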
Let $x_1 < x_2 < \ldots < x_n$ and $y_1 < y_2 < \ldots < y_n$ be two sequences of $n$ real numbers. It is well known that there are polynomials that "interpolate", in that $f(x_i)=y_i$ for all $i$, and the Lagrange interpolating polynomial even warrants a solution of degree $< n$. Now, what happens if we want the polynomial $f$ to be nondecreasing on the interval $[x_1,x_n]$? Is there always a solution, and is there a bound on the degree also? This problem has appeared before in the literature and is now well understood, I guess. The general version is when you have no restriction on the $y_i$'s and you ask for an interpolating polynomial that is monotone on each sub-interval $[x_i,x_{i+1}]$. The first paper proving the existence of such a polynomial is: W. Wolibner, "Sur un polynôme d'interpolation", Colloq. Math. (2) 1951, 136-137, but it is a non-constructive proof, as it uses the Weierstrass approximation theorem, much like the answer given by Harald Hanche-Olsen above. Another proof for the case $0=y_0\le \cdots \le y_n=1$ is given in "Polynomial Approximations to Finitely Oscillating Functions" by W.J. Kammerer (Theorem 4.1); the non-constructive aspect of his proof is the use of uniform convergence of appropriate Bernstein polynomials. In "Piecewise monotone polynomial interpolation", S.W. Young proves the same theorem and makes the final remark that the existence of such a monotone interpolating polynomial is in fact equivalent to the Weierstrass theorem. On the other hand, Rubinstein has some papers devoted to proving the existence of interpolating polynomials which are increasing on all of $\mathbb R$. The first paper which gives bounds on the degrees is, I think, E. Passow, L. Raymon, "The degree of piecewise monotone interpolation", and an improvement is made in "Exact estimates for monotone interpolation" by G.L. Iliev.
Note that the bounds are in terms of $$A=\max \Delta y_i=\max (y_{i}-y_{i-1}) \qquad B=\min \Delta y_i \qquad C=\min \Delta x_i$$ and no uniform bound exists. To add to Gjergji Zaimi's informative answer: it is easy to see that the degree cannot be bounded in terms of $n$ alone, even when $n=3$. Suppose that we want $f$ of degree $m$ such that $f(0)=0$, $f(1)=\epsilon$, and $f(2)=1$, and $f$ is increasing on $[0,1]$, where $\epsilon>0$ is small. Then $|f(k/m)| \le \epsilon$ for $k=0,\ldots,m$, so the Lagrange interpolation formula shows that for fixed $m$ the coefficients of $f$ are $O(\epsilon)$, so $f(2)$ is $O(\epsilon)$ and cannot be $1$ if $\epsilon$ is small enough. In other words, the degree of any solution $f$ must grow as $\epsilon$ shrinks. I don't know if this has been studied, but at least if you forget about a bound on the degree, a sledgehammer approach gives you a positive answer. For simplicity, assume $x_i\in[0,1]$ and proceed by induction on $n$, with the induction hypothesis being the existence of an increasing polynomial $p_{n-1}$ interpolating the first $n-1$ points. To get from $n-1$ to $n$, let $$p_n(x)=p_{n-1}(x)+(x-x_1)\cdots(x-x_{n-1})q_n(x)$$ where $q_n$ is a polynomial to be determined. Given $p_{n-1}'(x)>\epsilon>0$ we have a little wiggle room: we merely need $|q_n(x)|$ and $|q_n'(x)|$ to be very small for $0<x<x_{n-1}$ (exactly how small is left as an exercise) and $q_n'(x)>0$ for $x_{n-1}<x<1$. To achieve this, write $$q_n(x)=\int_0^x r_n(t)\,dt$$ and use the Weierstrass approximation theorem to let $r_n$ approximate a suitable continuous function. Adjust with a positive multiplicative constant to hit $p_n(x_n)=y_n$ exactly. The astute reader will notice a problem with this: if $p_{n-1}(x_n)\ge y_n$ this prescription fails. So we have to make sure that $r_{n-1}$, after shooting up to a nice big value around $x_{n-1}$, comes quickly back down to a small value, in order that this not happen.
This complicates the proof quite a bit though, and I am not about to work through the details. I'd be interested to hear about pointers to the literature. I just came across the paper by Powers and Reznick "Polynomials that are Positive on an interval" http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.9773&rep=rep1&type=pdf In particular they point to a theorem of Schmudgen who gives a characterization of polynomials positive on a specific compact set.
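The degree-growth phenomenon in the $n=3$ example above is easy to see numerically. The following sketch (plain Python, names mine) builds the unique quadratic through $(0,0)$, $(1,\epsilon)$, $(2,1)$ and shows that its derivative at $0$ is negative for small $\epsilon$, so no degree-$2$ interpolant can be increasing on $[0,1]$:

```python
def quadratic_through(eps):
    """Coefficients (a, b) of f(x) = a*x**2 + b*x with
    f(0) = 0, f(1) = eps, f(2) = 1."""
    # a + b = eps and 4a + 2b = 1 give a = (1 - 2*eps)/2
    a = (1 - 2 * eps) / 2.0
    b = eps - a
    return a, b

eps = 0.01
a, b = quadratic_through(eps)
# f'(0) = b; a negative value means f decreases just to the right of 0,
# so this interpolant is not monotone on [0, 1]
print(b)  # about -0.48, i.e. negative
```

For any $\epsilon < 1/4$ the coefficient $b = (4\epsilon - 1)/2$ is negative, matching the claim that an increasing interpolant must have higher degree.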
Take $\alpha$ to be any ordinal greater than or equal to $\omega$. The set of ordinals $S(\alpha)$ generated from $\alpha$ using ordinal exponentiation consists of the ordinals of the form $\alpha^{\alpha^{e}}$ with $e \in E(\alpha)$, where $E(\alpha)$ is the set of exponential polynomials over the base $\alpha$. By "exponential polynomials over the base $\alpha$" I mean the set of ordinals defined by: $E_0(\alpha)$ = the finite ordinals $E_{n+1}(\alpha) = \lbrace 0 \rbrace \cup \lbrace\alpha^{\beta_1} + \alpha^{\beta_2} + \ldots + \alpha^{\beta_k} \mid k \in \omega,\ \beta_i \in E_n(\alpha) \ \& \ \beta_1 \geq \beta_2 \geq \ldots \geq \beta_k \rbrace$ $E(\alpha) = \bigcup E_n(\alpha)$ First, observe that $\alpha = \alpha^{\alpha^0}$, and for arbitrary $\beta, \gamma$, $(\alpha^{\alpha^\beta})^{\alpha^{\alpha^\gamma}} = \alpha^{\alpha^{\beta + \alpha^\gamma}}$. So every element of $S(\alpha)$ is of the form $\alpha^{\alpha^\beta}$. It remains to show that the set of ordinals $T(\alpha)$ generated from $0$ using the function $\beta, \gamma \mapsto \beta + \alpha^\gamma$ is exactly the set of exponential polynomials. It is easy to see that the exponential polynomials are closed under $\beta, \gamma \mapsto \beta + \alpha^\gamma$; going the other direction, $0 + \alpha^0 + \ldots + \alpha^0 = n$, so $E_0(\alpha) \subset T(\alpha)$. Assume $E_n(\alpha) \subset T(\alpha)$; for $\beta_1, \beta_2, \ldots, \beta_k \in E_n(\alpha) \subset T(\alpha)$, we have $0 + \alpha^{\beta_1} + \alpha^{\beta_2} + \ldots + \alpha^{\beta_k} \in T(\alpha)$. So $E_{n+1}(\alpha) \subset T(\alpha)$, and by induction, $E(\alpha) \subset T(\alpha)$. So $T(\alpha) = E(\alpha)$, and $S(\alpha)$ is as we described.
By an extension of Cantor's Normal Form Theorem, for any ordinal $\alpha \geq 2$ and any ordinal $\beta$, $\beta$ is uniquely expressible in the form $\alpha^{\beta_1} \gamma_1 + \alpha^{\beta_2} \gamma_2 + \ldots + \alpha^{\beta_n} \gamma_n$, where $\beta_1 > \beta_2 > \ldots > \beta_n$ and $\gamma_i < \alpha$ for all $1 \leq i \leq n$; more importantly, different values of the $\beta_i$ and $\gamma_i$ lead to different values of $\beta$. It follows that the set $E(\alpha)$, when iteratively described using the above form, has a unique expression for each ordinal, and different expressions lead to different ordinals. Indeed, $E_0(\alpha)$ has a different expression $\alpha^0 + \ldots + \alpha^0$ for each finite ordinal $n$. Next assume that different expressions lead to different ordinals in $E_n(\alpha)$. Given a pair of expressions $\beta = \alpha^{e(\beta_1)} + \ldots + \alpha^{e(\beta_n)}$ and $\gamma = \alpha^{e(\gamma_1)} + \ldots + \alpha^{e(\gamma_n)}$ with $\beta_i, \gamma_i \in E_n(\alpha)$, the $\beta_i$ and $\gamma_i$ weakly decreasing, and $e(\beta_i)$ and $e(\gamma_i)$ denoting expressions for $\beta_i$ and $\gamma_i$: if the two expressions are different, then for some $i$ we must have $e(\beta_i) \neq e(\gamma_i)$, and by the induction hypothesis $\beta_i$ is different from $\gamma_i$. But then by the extended Cantor Normal Form Theorem $\beta$ and $\gamma$ are different. So different expressions in $E_{n+1}(\alpha)$ represent different ordinals, and by induction different expressions in $E(\alpha)$ lead to different ordinals. So for any ordinal $\alpha \geq \omega$, we can define an order-preserving bijection $E(\alpha) \rightarrow \varepsilon_0$ by simply replacing $\alpha$ with $\omega$ in every appearance in the iterative normal form expression. So $\tau (\alpha) = \varepsilon_0$.
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is the inner or outer measure of $E$ meant by $m^\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If the ordinary measure is meant by $m^\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site, to which I hope my university has a subscription, that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" questions in real-analysis. Here's a challenge for your Google skills...
can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741-747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or the whole) of the sample point is revealed. I think that is a legitimate answer. @QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door.
Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a... @MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really. 
When I look at the comments on Norbert's question it seems that the comments together already give a sufficient answer to his first question - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think, t.b.? @tb About Alexei's question, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
One-Step Subgroup Test Theorem Let $\struct {G, \circ}$ be a group. Let $H$ be a subset of $G$. Then $\struct {H, \circ}$ is a subgroup of $\struct {G, \circ}$ if and only if: $(1): \quad H \ne \O$, that is, $H$ is non-empty $(2): \quad \forall a, b \in H: a \circ b^{-1} \in H$. Proof Sufficient Condition Let $H$ be a subset of $G$ that fulfils the conditions given. It is noted that the fact that $H$ is non-empty is one of the conditions. It is also noted that the group product of $\struct {H, \circ}$ is the same as that for $\struct {G, \circ}$, that is, $\circ$. So it remains to show that $\struct {H, \circ}$ is a group. We check the four group axioms: G1: Associativity As $\circ$ is associative on $G$, it is a fortiori associative on the subset $H$. G2: Identity Let $e$ be the identity of $\struct {G, \circ}$. Since $H$ is non-empty, $\exists x \in H$. If we take $a = x$ and $b = x$, then $a \circ b^{-1} = x \circ x^{-1} = e \in H$, where $e$ is the identity element. G3: Inverses Let $x \in H$. If we take $a = e$ and $b = x$, then $a \circ b^{-1} = e \circ x^{-1} = x^{-1} \in H$. Thus every element of $H$ has an inverse also in $H$. G0: Closure Let $x, y \in H$. Then $y^{-1} \in H$, so we may take $a = x$ and $b = y^{-1}$. So: $a \circ b^{-1} = x \circ \paren {y^{-1} }^{-1} = x \circ y \in H$ Thus, $H$ is closed. Therefore $\struct {H, \circ}$ is a subgroup of $\struct {G, \circ}$. $\Box$ Necessary Condition Now suppose $\struct {H, \circ}$ is a subgroup of $\struct {G, \circ}$. $(1): \quad H \le G \implies H \ne \O$ from the fact that $H$ is a group and therefore cannot be empty. $(2): \quad$ As $\struct {H, \circ}$ is a group, it is closed and every element has an inverse. So it follows that $\forall a, b \in H: a \circ b^{-1} \in H$. $\blacksquare$ Comment This is called the one-step subgroup test although, on the face of it, there are two steps to the test. This is because the fact that $H$ must be non-empty is frequently assumed as one of the "givens", and is then not specifically included as one of the tests to be made. Sources 1965: J.A. Green: Sets and Groups: $\S 5.2$.
Subgroups
1965: Seth Warner: Modern Algebra: $\S 8$: Theorem $8.4: \ 1^\circ - 3^\circ$
1966: Richard A. Dean: Elements of Abstract Algebra: $\S 1.9$: Subgroups: Lemma $8$
1971: Allan Clark: Elements of Abstract Algebra: Chapter $2$: Subgroups and Cosets: $\S 35 \alpha$
1974: Thomas W. Hungerford: Algebra: $\S 1.2$
1978: John S. Rose: A Course on Group Theory: $0$: Some Conventions and some Basic Facts
1978: Thomas A. Whitelaw: An Introduction to Abstract Algebra: $\S 36.4$: Subgroups
1982: P.M. Cohn: Algebra Volume 1 (2nd ed.): $\S 3.2$: Groups; the axioms
1996: John F. Humphreys: A Course in Group Theory: Chapter $4$: Subgroups: Proposition $4.2$
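The test above is easy to run exhaustively in a small concrete group. The sketch below is illustrative code of mine, written for the additive group $(\Z_n, +)$, where $b^{-1}$ is $-b \bmod n$ and so $a \circ b^{-1}$ becomes $a - b \bmod n$:

```python
from itertools import product

def is_subgroup_one_step(H, n):
    """One-step subgroup test in (Z_n, +): H is a subgroup iff
    H is non-empty and (a - b) mod n lies in H for all a, b in H."""
    if not H:
        return False
    return all((a - b) % n in H for a, b in product(H, repeat=2))

print(is_subgroup_one_step({0, 3, 6, 9}, 12))  # True: multiples of 3
print(is_subgroup_one_step({0, 1}, 12))        # False: 0 - 1 = 11 is missing
```

The second example shows how the single condition catches both failed closure and a missing inverse at once.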
Abbreviation: Pos A partially ordered set (also called poset or ordered set for short) is a structure $\mathbf{P}=\langle P,\leq \rangle $ such that $P$ is a set and $\leq $ is a binary relation on $P$ that is
reflexive: $x\leq x$
transitive: $x\leq y$, $y\leq z\Longrightarrow x\leq z$
antisymmetric: $x\leq y$, $y\leq x\Longrightarrow x=y$.
A strictly partially ordered set is a structure $\langle P,<\rangle $ such that $P$ is a set and $<$ is a binary relation on $P$ that is a strict partial order:
irreflexive: $\neg(x<x)$
transitive: $x<y$, $y<z\Longrightarrow x<z$.
Remark: The above definitions are related via: $x\leq y\Longleftrightarrow x<y \text{ or } x=y$, and $x<y\Longleftrightarrow x\leq y$, $x\neq y$. For a partially ordered set $\mathbf{P}$, define the dual $\mathbf{P}^{\partial }=\langle P,\geq \rangle $ by $x\geq y\Longleftrightarrow y\leq x$. Then $\mathbf{P}^{\partial }$ is also a partially ordered set. Let $\mathbf{P}$ and $\mathbf{Q}$ be posets. A morphism from $\mathbf{P}$ to $\mathbf{Q}$ is a function $f:P\to Q$ that is order-preserving: $x\leq y\Longrightarrow f(x)\leq f(y)$. Example 1: $\langle \mathbb{R},\leq \rangle $, the real numbers with the standard order. Example 2: $\langle P(S),\subseteq \rangle $, the collection of subsets of a set $S$, ordered by inclusion. Example 3: Any poset is order-isomorphic to a poset of subsets of some set, ordered by inclusion.
Classtype: universal Horn class
Universal theory: decidable
First-order theory: undecidable
Amalgamation property
Strong amalgamation property
Epimorphisms are surjective
The number $f(n)$ of nonisomorphic posets with $n$ elements: $\begin{array}{lr} f(1)= &1\\ f(2)= &2\\ f(3)= &5\\ f(4)= &16\\ f(5)= &63\\ f(6)= &318\\ f(7)= &2045\\ f(8)= &16999\\ f(9)= &183231\\ f(10)= &2567284\\ f(11)= &46749427\\ f(12)= &1104891746\\ f(13)= &33823827452\\ f(14)= &1338193159771\\ f(15)= &68275077901156\\ f(16)= &4483130665195087 \end{array}$
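The three poset axioms can be checked mechanically on a finite carrier. A small illustrative sketch (function and argument names are mine, not from the source):

```python
def is_partial_order(elements, leq):
    """Check reflexivity, antisymmetry, and transitivity of leq on elements."""
    elements = list(elements)
    for x in elements:
        if not leq(x, x):
            return False  # not reflexive
    for x in elements:
        for y in elements:
            if leq(x, y) and leq(y, x) and x != y:
                return False  # not antisymmetric
            for z in elements:
                if leq(x, y) and leq(y, z) and not leq(x, z):
                    return False  # not transitive
    return True

# Divisibility partially orders {1,...,6}, as does the usual <= on integers.
print(is_partial_order(range(1, 7), lambda x, y: y % x == 0))  # True
print(is_partial_order(range(1, 7), lambda x, y: x <= y))      # True
```

The strict order $<$ fails this check, as expected: it violates reflexivity, which is why it is axiomatized separately as an irreflexive transitive relation.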
Consider the following problem. Given a set of $n$ items having weight $w_i$ and value $v_i$ and a maximum capacity $W$, maximize $\sum\limits_{i=1}^n a_i v_i$ subject to $\sum\limits_{i=1}^n a_i w_i \leq W$ where $a_i \in \{0,1\}$. That is, choose a subset of items giving the maximum total value while still fitting into the capacity. I am wondering what the time complexity of this problem is, and I think I have found that it is not in NP. The reason is that there can be no certificate of correctness for any one particular choice of items. When someone gives me a set, I cannot simply check that it indeed gives the maximum value. It seems that this problem is at least as hard to check as it is to solve, so it cannot be in NP. Another line of reasoning leads me to the following: the decision problem "can a value of at least $V$ be achieved" can be checked in polynomial (actually linear) time given a certificate listing the items. Now a machine can first add up all the values to get $V_{\max}$, and then solve the decision problem for $V = 1, \ldots, V_{\max}$ until the answer is no, having then maximized the sum. This would, I guess, be an NP computation, as we run $V_{\max}$ NP computations and $V_{\max}$ grows linearly with the input. Hmm, help would be appreciated.
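For context on the complexity question raised above: the optimization version has a standard dynamic-programming solution in $O(nW)$ time, which is pseudo-polynomial because $W$ is exponential in the number of bits needed to write it down. A minimal sketch:

```python
def knapsack(weights, values, W):
    """0/1 knapsack by dynamic programming: O(n * W) time.
    best[c] = maximum value achievable with capacity c."""
    best = [0] * (W + 1)
    for w, v in zip(weights, values):
        # iterate capacities downwards so each item is used at most once
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[W]

print(knapsack([2, 3, 4], [3, 4, 6], 6))  # 9 (take the items of weight 2 and 4)
```

This does not settle the NP question in the post, but it illustrates why the problem is called "NP-hard yet pseudo-polynomially solvable": the running time is polynomial in the *magnitude* of $W$, not in the input length.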
On the DNA Computer Binary Code In any finite set we can define a binary operation and a partial order in different ways. But here, a partial order is defined on the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that captures essential properties of both set operations and logic operations. This partial order is defined based on the physico-chemical properties of the DNA bases: the hydrogen bond number and the chemical type, purine {A, G} or pyrimidine {U, C}. This physico-mathematical description permits the study of the genetic information carried by the DNA molecules as a computer binary code of zeros (0) and ones (1). 1. Boolean lattice of the four DNA bases In any four-element Boolean lattice every element is comparable to every other, except two of them that are, nevertheless, complementary. Consequently, to build a four-base Boolean lattice it is necessary for the bases with the same number of hydrogen bonds in the DNA molecule and of different chemical types to be complementary elements in the lattice. In other words, the complementary bases in the DNA molecule (G≡C and A=T, or A=U during the translation of mRNA) should be complementary elements in the Boolean lattice. Thus, there are four possible lattices, each one with a different base as the maximum element. 2. Boolean (logic) operations in the set of DNA bases The Boolean algebra on the set of elements X will be denoted by $(B(X), \vee, \wedge)$. Here the operators $\vee$ and $\wedge$ represent the classical "OR" and "AND", term by term. From the Boolean algebra definition it follows that this structure is (among other things) a lattice in which any two elements $\alpha$ and $\beta$ have upper and lower bounds. In particular, the greatest lower bound of the elements $\alpha$ and $\beta$ is the element $\alpha\wedge\beta$ and the least upper bound is the element $\alpha\vee\beta$.
This partially ordered set is called a Boolean lattice. In every Boolean algebra $(B(X), \vee, \wedge)$, for any two elements $\alpha,\beta \in X$ we have $\alpha \le \beta$ if and only if $\neg\alpha\vee\beta=1$, where the symbol "$\neg$" stands for logical negation. If the last equality holds, then it is said that $\beta$ is deduced from $\alpha$. Furthermore, if $\alpha \le \beta$ or $\alpha \ge \beta$, the elements $\alpha$ and $\beta$ are said to be comparable. Otherwise, they are said to be not comparable. In the set of four DNA bases, we can build twenty-four isomorphic Boolean lattices [1]. Herein, we focus our attention on the one described in reference [2], where the DNA bases G and C are taken as the minimum and maximum elements, respectively, in the Boolean lattice. The logic operations in this DNA computer code are given in the following tables:
OR: $\begin{array}{c|cccc} \vee & G & A & U & C \\ \hline G & G & A & U & C \\ A & A & A & C & C \\ U & U & C & U & C \\ C & C & C & C & C \end{array}$
AND: $\begin{array}{c|cccc} \wedge & G & A & U & C \\ \hline G & G & G & G & G \\ A & G & A & G & A \\ U & G & G & U & U \\ C & G & A & U & C \end{array}$
It is well known that all Boolean algebras with the same number of elements are isomorphic. Therefore, our algebra $(B(X), \vee, \wedge)$ is isomorphic to the Boolean algebra $(\mathbb{Z}_2^2(X), \vee, \wedge)$, where $\mathbb{Z}_2 = \{0,1\}$. Then, we can represent this DNA Boolean algebra by means of the correspondence: $G \leftrightarrow 00$; $A \leftrightarrow 01$; $U \leftrightarrow 10$; $C \leftrightarrow 11$. So, in accordance with the operation tables: $A \vee U = C \leftrightarrow 01 \vee 10 = 11$ $U \wedge G = G \leftrightarrow 10 \wedge 00 = 00$ $G \vee C = C \leftrightarrow 00 \vee 11 = 11$ A Boolean lattice has in correspondence a directed graph called a Hasse diagram, where two nodes (elements) $\alpha$ and $\beta$ are connected with a directed edge from $\alpha$ to $\beta$ if, and only if, $\alpha \le \beta$ and there is no other element between $\alpha$ and $\beta$. 3.
The Genetic Code Boolean Algebras Boolean algebras of codons are, explicitly, derived as the direct product $C(X) = B(X) \times B(X) \times B(X)$. These algebras are isomorphic to the dual Boolean algebras $(\mathbb{Z}_2^6, \vee, \wedge)$ and $(\mathbb{Z}_2^6, \wedge, \vee)$ induced by the isomorphism $B(X) \cong \mathbb{Z}_2^2$, where $X$ runs over the twenty-four possible ordered sets of four DNA bases [1]. For example: CAG $\vee$ AUC = CCC $\leftrightarrow$ 110100 $\vee$ 011011 = 111111 ACG $\wedge$ UGA = GGG $\leftrightarrow$ 011100 $\wedge$ 100001 = 000000 $\neg$ (CAU) = GUA $\leftrightarrow$ $\neg$ (110110) = 001001 The Hasse diagram for the Boolean algebra derived from the direct product of the Boolean algebra of the four DNA bases given in the operation table above is not reproduced here. In the Hasse diagram, chains and anti-chains can be located. A subset of a Boolean lattice is called a chain if any two of its elements are comparable; if, on the contrary, no two of its elements are comparable, the subset is called an anti-chain. In the Hasse diagram of codons, all chains of maximal length have the same minimum element GGG and the same maximum element CCC. Two codons are in the same chain of maximal length if and only if they are comparable, for example the chain: GGG $\leftrightarrow$ GAG $\leftrightarrow$ AAG $\leftrightarrow$ AAA $\leftrightarrow$ AAC $\leftrightarrow$ CAC $\leftrightarrow$ CCC The Hasse diagram symmetry reflects the role of hydrophobicity in the distribution of codons assigned to each amino acid. In general, codons that code for amino acids with extreme hydrophobicity differences are in different chains of maximal length. In particular, codons with U as a second base will appear in chains of maximal length whereas codons with A as a second base will not.
For that reason, it will be impossible to obtain hydrophobic amino acids with codons having U in the second position through deductions from hydrophilic amino acids with codons having A in the second position. There are twenty-four Hasse diagrams of codons, corresponding to the twenty-four genetic-code Boolean algebras. These algebras are integrated into a group isomorphic to the symmetric group of degree four, $S_4$ [1]. In summary, the DNA binary code is not arbitrary, but subject to logic operations with underlying biophysical meaning. References [1] Sanchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun. Math. Comput. Chem., 2018, 79:527-60. [2] Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull. Math. Biol., 2005, 67:1-14.
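The base encoding and the codon operations described in sections 2 and 3 can be reproduced with bitwise arithmetic. The sketch below is illustrative; the helper names are mine, not from the cited papers:

```python
# Base encoding G=00, A=01, U=10, C=11; codon operations act componentwise.
ENC = {'G': 0b00, 'A': 0b01, 'U': 0b10, 'C': 0b11}
DEC = {v: k for k, v in ENC.items()}

def op(seq1, seq2, f):
    """Apply a bitwise binary operation f base-by-base to two sequences."""
    return ''.join(DEC[f(ENC[a], ENC[b])] for a, b in zip(seq1, seq2))

def neg(seq):
    """Logical negation: complement both bits of each base."""
    return ''.join(DEC[ENC[b] ^ 0b11] for b in seq)

print(op('CAG', 'AUC', lambda x, y: x | y))  # CCC
print(op('ACG', 'UGA', lambda x, y: x & y))  # GGG
print(neg('CAU'))                            # GUA
```

The three printed lines reproduce the worked codon examples from the text.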
Let $R$ be a commutative ring and $I_1, \dots, I_r$ pairwise comaximal ideals in $R$, i.e., $I_i + I_j = R$ for $i \neq j$. Why are the ideals $I_1^{n_1}, \dots, I_r^{n_r}$ (for any $n_1,\dots,n_r \in\mathbb N$) also comaximal? It suffices to prove this for the case of two comaximal ideals, say $I,J$: we need to show that $I^m+J^n=R$ for any positive integers $m,n$. Now, $R=R^{m+n-1}=(I+J)^{m+n-1}\subseteq I^m+J^n$, since in the binomial expansion of $(I+J)^{m+n-1}$ every term has either the power of $I$ at least $m$ or the power of $J$ at least $n$, by the pigeonhole principle. If you are familiar with ideal radicals then, as I mentioned on sci.math, the proof is a one-liner: $$\rm rad\ (I^m +\: \cdots\: + J^n) \ \supset\ I +\:\cdots\:+ J\: =\: 1\ \ \Rightarrow\ \ I^m +\:\cdots\: + J^n\: =\: 1 $$ Alternatively, and much more generally, it may be viewed as an immediate consequence of the Freshman's Dream binomial theorem $\rm\ (A + B)^n = A^n + B^n\ $. This theorem is true for both arithmetic of GCDs and (invertible) ideals simply because, in both cases, multiplication is cancellative and addition is idempotent, i.e. $\rm\ A + A = A\ $ for ideals and $\rm\ (A,A) = A\ $ for GCDs. Combining this with the associative, commutative, distributive laws of addition and multiplication we obtain the following very elementary high-school-level proof of the Freshman's Dream: $\rm\qquad\quad (A + B)^4 \ =\ A^4 + A^3 B + A^2 B^2 + AB^3 + B^4 $ $\rm\qquad\quad\phantom{(A + B)^4 }\ =\ A^2\ (A^2 + AB + B^2) + (A^2 + AB + B^2)\ B^2 $ $\rm\qquad\quad\phantom{(A + B)^4 }\ =\ (A^2 + B^2)\ \:(A + B)^2 $ So $\rm\quad\ {(A + B)^2 }\ =\ \ A^2 + B^2\ $ if $\rm\ A+B\ $ is cancellative, e.g.
if $\rm A+B = 1$ The same proof works generally since, as above $\rm\qquad\quad (A + B)^{2n}\ =\ A^n\ (A^n + \:\cdots\: + B^n) + (A^n +\:\cdots\: + B^n)\ B^n $ $\rm\qquad\quad\phantom{(A + B)^{2n}}\ =\ (A^n + B^n)\ (A + B)^n $ In the GCD case $\rm\ A+B\ := (A,B) = \gcd(A,B)\ $ for $\rm\:A,B\:$ in a GCD-domain, i.e. a domain where $\rm\: \gcd(A,B)\:$ exists for all $\rm\:A,B \ne 0,0.\,$ So the Dream is true since $\rm\:A+B = (A,B)\:$ is cancellable, being nonzero in a domain. In a domain, nonzero principal ideals are cancellable, so Dream is true for ideals in a PID (e.g. $\mathbb Z\:$), or f.g. (finitely generated) ideals in a Bezout domain. More generally, Dream also holds true in any Dedekind domain (e.g. any number ring) since nonzero ideals are invertible hence cancellable. In fact this "Freshman's Dream" is true for all f.g. ideals in domain $\rm\:D\:$ iff every nonzero f.g. ideal is invertible. Such domains are known as Prufer domains. They're non-Noetherian generalizations of Dedekind domains. Moreover they form an important class of domains because they may also be equivalently characterized by a large number of other important properties, e.g. they are precisely the domains satisfying CRT (Chinese Remainder Theorem); $\ $ Gauss's Lemma: the content ideal $\rm\ \ c(fg) = c(f)\ c(g)\:$;$\ $ nonzero f.g. ideals are cancellable; $\ $ f.g. ideals satisfy contains $\Rightarrow$ divides; $\: $ etc. It's been estimated that there are close to a hundred such characterizations known. See my post here for about thirty such characterizations. A slight variation on the radical one-liner: Note that two ideals $A$ and $B$ are comaximal (i.e. $A+B=R$) if and only if the ideal $A+B$ is not contained in any maximal ideal of the domain. Now take the ideal $A^{m} + B^{n}$. Claim: $A^{m} + B^{n} = R$. For if not then $A^{m} + B^{n}$ must be contained in a maximal ideal $M$. But as $A^{m}$, $B^{n}$ are contained in $A^{m} + B^{n}$, we have $A^{m}$, $B^{n}$ contained in $M$. 
Since $M$ is prime, we get $A$, $B$ contained in $M$, a contradiction. Similar reasoning shows that $A^{m}$, $B^{n}$ comaximal implies that $A$, $B$ are comaximal. Muhammad: In the comments the question was raised whether such a property holds for commutative rings without unity. The answer is negative, as the following example shows: $R=\mathbb Z$ with zero multiplication, that is, $a*b=0$ for any $a,b\in\mathbb Z$, and $I=2\mathbb Z$, $J=3\mathbb Z$. Clearly $I+J=R$, and since $R^2=0$ we have $I^m+J^n=0$ for any $m,n\ge2$. As it suffices to prove the claim modulo any given maximal ideal, we are reduced to the easy case where our ring is a field. (See this MO answer of Georges Elencwajg and the accompanying comments.) A slight variation on other proofs. Suppose $I+J=R$, so $a+b=1$ for some $a\in I, b \in J$. I want to replace $a$ by $a^n$, so I write $a^n + (a-a^n) + b = 1$. If I knew that $a-a^n$ were in $J$, I could group it together with $b$ and get $a^n+b' = 1$, so $I^n+J = R$. But $a-a^n = a(1-a^{n-1})$ is divisible by $1-a = b \in J$, so it is in fact in $J$. From $I^n+J=R$ the general case $I^n+J^m=R$ follows by another application of the same argument. This follows by induction from the more general fact: if $I + J = I + K = R$, then $I + JK = R$. This is seen by noting that $(I+J)(I+K)$ is contained in $I + JK$. (Just multiply it out.)
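In $\mathbb Z$, where every ideal is principal and $(m) + (n) = \mathbb Z$ exactly when $\gcd(m, n) = 1$, the theorem above reduces to the familiar fact that coprimality is preserved under taking powers. A quick numeric check:

```python
from math import gcd

def comaximal(m, n):
    """(m) + (n) = Z  <=>  gcd(m, n) = 1."""
    return gcd(m, n) == 1

m, n = 6, 35                      # gcd(6, 35) = 1
assert comaximal(m, n)
# the theorem: every pair of powers remains comaximal
for a in range(1, 5):
    for b in range(1, 5):
        assert comaximal(m ** a, n ** b)
print("all powers comaximal")
```

This is of course only a spot check in one ring, not a proof, but it makes the statement concrete.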
I have encountered quite recently the "Compton edge", which made me review the Compton effect again. A photon with wavelength $\lambda$ "bumps" into a charged particle (usually an electron) and passes some of its energy to the electron, while the remaining energy goes to another photon with wavelength $\lambda ' $. One can see by using conservation of energy and momentum that $$ \lambda' - \lambda = \frac{h}{mc}(1-\cos \theta)$$ Where $m$ is the particle's mass and $\theta$ is the angle between the incident photon's direction and the "new photon's" direction. From this we can see that $$ \lambda \leq \lambda' \leq \lambda + \frac{2h}{mc} $$ The first inequality makes sense, because some of the energy is transferred to the electron so the wavelength can't shorten. Equality holds when $\theta=0$, meaning the photon just "passed through" the electron with no interaction. The second is what is called the "Compton edge" and it gives us a limit on how much energy can be given to the electron. At this point I thought, how could this be consistent with the photoelectric effect? From what I know, in the photoelectric effect the whole energy is transferred to the electron and no "secondary photons" are created, which is like taking $\lambda' \to \infty $, which isn't possible. So maybe this assumption that no energy going back to the electric and magnetic fields is like a photon with $\lambda' \to \infty $ isn't valid, so let's attack the subject from the beginning with conservation laws: The photon's momentum is transferred to the electron so $p=\frac{h}{\lambda}$, but the energy is $$ \frac{hc}{\lambda} = \sqrt{(pc)^2+(mc^2)^2}$$ And this equation can't be true because $pc = \frac{hc}{\lambda} $. What is going on? I came to the conclusion that the photoelectric effect as I understand it can't be true. What am I missing? Thank you for your help!
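One way to make the contradiction explicit (a standard kinematic argument, written out here for clarity): suppose a free electron, initially at rest, absorbed the photon completely. Conservation of momentum gives $p = h/\lambda$, and conservation of energy gives

$$\frac{hc}{\lambda} + mc^2 \;=\; \sqrt{(pc)^2 + (mc^2)^2} \;=\; \sqrt{\left(\frac{hc}{\lambda}\right)^2 + (mc^2)^2}.$$

Squaring both sides leaves $2\,\dfrac{hc}{\lambda}\,mc^2 = 0$, which is impossible for $m > 0$. So a *free* electron can never absorb a photon outright; in the photoelectric effect the electron is bound, and the atom or lattice carries away the recoil momentum, which is what reconciles the two effects.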
Definition:Differential/Real Function Definition Let $U \subset \R$ be an open set. Let $f: U \to \R$ be a real function. Let $f$ be differentiable at a point $x \in U$. The differential of $f$ at $x$ is the linear transformation $\rd f \left({x}\right) : \R \to \R$ defined as: $\rd f \left({x}\right) \left({h}\right) = f' \left({x}\right) \cdot h$ where $f' \left({x}\right)$ is the derivative of $f$ at $x$. Let $U \subset \R$ be an open set. Let $f : U \to \R$ be a real function. Let $f$ be differentiable in $U$. The differential $\rd f$ is the mapping $\rd f : U \to \operatorname{Hom} \left({\R, \R}\right)$ defined as: $\left({\mathrm d f}\right) \left({x}\right) = \rd f \left({x}\right)$ where: $\rd f \left({x}\right)$ is the differential of $f$ at $x$ $\operatorname{Hom} \left({\R, \R}\right)$ is the set of all linear transformations from $\R$ to $\R$. Also: $f \left({x + h}\right) - f \left({x}\right) - \mathrm d f \left({x; h}\right) = o \left({h}\right)$ as $h \to 0$. In the above, $o \left({h}\right)$ is interpreted as little-O of $h$. The differential of $f$ at $x$ is also denoted variously as: $\d f \left({x}\right)$, $\d f_x$, $\d_x f$, $D f \left({x}\right)$, $D_x f$. Substituting $\d y$ for $\d f \left({x; h}\right)$ and $\d x$ for $h$, the following notation emerges: $\d y = f' \left({x}\right) \rd x$ hence: $\d y = \dfrac {\d y} {\d x} \rd x$ It is generally considered to be incorrect to consider $\d y$ as: a small change in $y$ caused by a small change $\d x$ in $x$. This is nearly true for small values of $\d x$, but will only ever be exactly true when $f$ has a graph which is a straight line. If it is necessary to talk about small changes then the notations $\delta x$ and $\delta y$ are to be used instead. Thus: $\displaystyle \lim_{\delta x \mathop \to 0} \frac {\delta y} {\delta x} = \frac {\d y} {\d x}$ Received wisdom tells us that an even worse misconception is the idea that $\d y$ and $\d x$ are infinitesimal quantities which are obtained by letting $\delta x$ and $\delta y$ tend to zero.
Then $\dfrac {\d y} {\d x}$ could be regarded as the quotient of these quantities, and the whole concept of a limit could be disposed of. This was the original idea that Isaac Newton based his Theory of Fluxions on. However, useful as this approach is, it is generally considered that it does not have any logical basis. That said, the field of non-standard analysis is an attempt to address these concerns from a modern perspective. Also see Straight Line Defined by Differential, where it is shown that for any fixed $x \in \R$, the equation: $k = \mathrm d f \left({x; h}\right) = f' \left({x}\right) h$
I want to find the following limit using L'Hôpital's rule: $$ \lim_{x \to \infty} \sqrt{x} \sin( \frac{1}{x}) $$ I know that this can be solved using the squeeze theorem from Calc 1: $$ 0 < \sqrt{x}\sin( \frac{1}{x} ) < \frac{1}{\sqrt{x}} $$ since $0 < \sin( \frac{1}{x}) < \frac{1}{x} $ for large $x$. What I have done so far is trying to convert it to fraction form $$ \lim_{x \to \infty} \sqrt{x} \sin( \frac{1}{x}) = \lim_{x \to \infty} \frac{\sin( \frac{1}{x})}{\frac{1}{\sqrt{x}} }$$ But what next? Your idea of rewriting it that way is good when you have a limit like $$ \lim_{x \to \infty} x^2\sin\bigg( \frac{1}{x} \bigg) $$ because you can then make the substitution $u = \frac{1}{x}$. However, for this problem, you should write it first as $$ \lim_{x \to \infty} \sqrt{x}\sin\bigg( \frac{1}{x} \bigg) = \lim_{u \to \infty} u\sin\bigg( \frac{1}{u^2} \bigg) $$ by making the substitution $u = \sqrt{x}$ . Then now you have $$ \lim_{u \to \infty} u\sin\bigg( \frac{1}{u^2} \bigg) = \lim_{u \to \infty} \frac{ \sin\bigg( \frac{1}{u^2} \bigg)}{\frac{1}{u}} \stackrel{\text{L'H}}{=} \lim_{u \to \infty} \frac{\frac{-2\cos(\frac{1}{u^2})}{u^3} }{\frac{-1}{u^2} } = \lim_{u \to \infty} \frac{2\cos\big( \frac{1}{u^2}\big) }{u} = 0, $$ since the numerator tends to $2$ while the denominator grows without bound. Consider the Taylor series expansion for $\sin(x)$: $$\sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!}$$ For $x^{-1}$, this series is $$\frac{1}{x} - \frac{1}{3! \, x^3} + \frac{1}{5! \, x^5} - \ldots$$ This is $O\Big(\frac{1}{x}\Big)$ (meaning $\sin(\frac{1}{x})$ behaves like $\frac{1}{x}$ as $x \to \infty$). So rewrite your expression as $$\lim_{x \to \infty} \sqrt{x} \cdot O\Big(\frac{1}{x}\Big) = O\Big(\frac{1}{\sqrt{x}}\Big)$$ which goes to $0$ as $x$ goes to infinity. L'Hôpital's method is applicable only to limits which are indeterminate; this limit can easily be calculated using elementary algebra.
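A quick numerical sanity check of the limit (my own sketch; the helper name `f` is just for illustration):

```python
import math

def f(x):
    # sqrt(x) * sin(1/x); for large x, sin(1/x) ~ 1/x, so f(x) ~ 1/sqrt(x)
    return math.sqrt(x) * math.sin(1.0 / x)

# the values shrink roughly like 1/sqrt(x), consistent with the limit 0
for x in (1e2, 1e6, 1e12):
    print(x, f(x))
```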
Let $\mathcal{V}$ be the space of $C^r$ vector fields on a non-compact (smooth) manifold $M$. Being a subspace of $C^r(M, T M)$, it inherits the natural $C^r$ topology (i.e. the strong topology) of that space. Furthermore, since $C^r(M, T M)$ is Baire and $\mathcal{V}$ is closed in $C^r(M, T M)$, $\mathcal{V}$ is Baire too [see Hirsch, Differential Topology, Theorem 4.4]. If we further restrict $\mathcal{V}$ to only include the vector fields that induce locally uniformly bounded trajectories, is the restricted space still Baire? Note: Let $f \in \mathcal{V}$ be a particular vector field and $\phi(x, t)$ be the flow obtained from the ODE $\dot{x} = f(x)$. The vector field $f$ induces locally uniformly bounded trajectories if $\displaystyle \sup_{x \in C \; , \; t \geq 0} \| \phi(x, t) \| < \infty$ for all compact $C \subseteq M$. Some comments The cited theorem implies the following: The space $\mathcal{V}$ has a complete metric if we consider the weak topology instead of the strong topology. Furthermore, to prove that the restricted space is Baire in the strong topology, it is sufficient to show that the restricted space is closed in this weak topology. Therefore, if one can show that the restricted space is still complete (with the mentioned metric), it is necessarily (weakly) closed, and hence Baire in the strong topology. An informal argument: If $M \subseteq \mathbb{R}^n$ for some $n > 0$, or (more generally) if $M$ is a uniform space, then the weak topology on $\mathcal{V}$ (i.e. the compact-open topology) coincides with the topology of compact convergence. Since the restriction criterion is formulated in terms of compact subsets of $M$, I suspect that the restricted space is closed under the weak topology. Thus, I suspect it is Baire. However, I wasn't able to turn this into a rigorous argument, and it is possible that this line of reasoning is incorrect.
Journal of Industrial & Management Optimization, October 2013, Volume 9, Issue 4 (ISSN: 1547-5816, eISSN: 1553-166X) Abstract: In this paper, we propose a primal-dual approach for solving the generalized fractional programming problem. The outer iteration of the algorithm is a variant of the interval-type Dinkelbach algorithm, while the augmented Lagrange method is adopted for solving the inner min-max subproblems. This is indeed a unique feature of the paper because almost all Dinkelbach-type algorithms in the literature addressed only the outer iteration, while leaving the issue of how to practically solve a sequence of min-max subproblems untouched. The augmented Lagrange method attaches a set of artificial variables as well as their corresponding Lagrange multipliers to the min-max subproblem. As a result, both the primal and the dual information is available for updating the iterate points and the min-max subproblem is then reduced to a sequence of minimization problems. Numerical experiments show that the primal-dual approach can achieve a better precision in fewer iterations. Abstract: In this paper, we consider an optimal investment-consumption problem subject to a closed convex constraint. In the problem, a constraint is imposed on both the investment and the consumption strategy, rather than just on the investment. The existence of a solution is established by using the Martingale technique and convex duality. In addition to investment, our technique embeds also the consumption into a family of fictitious markets. However, with the addition of consumption, it leads to nonreflexive dual spaces. This difficulty is overcome by employing the so-called "relaxation-projection" technique to establish the existence of a solution to the problem. Furthermore, if the solution to the dual problem is obtained, then the solution to the primal problem can be found by using the characterization of the solution.
An illustrative example is given with a dynamic risk constraint to demonstrate the method. Abstract: Due to globalization and technological advances, increasing competition and falling prices have forced enterprises to reduce cost; this poses new challenges in pricing and replenishment strategy. The study develops a piecewise production-inventory model for a multi-market deteriorating product with time-varying and price-sensitive demand. An optimal product pricing and material replenishment strategy is derived to optimize the manufacturer's total profit. Sensitivity analyses of how the major parameters affect the decision variables were carried out. Finally, the single production cycle is extended to multiple production cycles. We find that the total profit for multiple production cycles increases by 5.77% when compared with the single production cycle. Abstract: The system of absolute value equations $Ax+B|x|=b$, denoted by AVEs, is proved to be NP-hard, where $A, B$ are arbitrary given $n\times n$ real matrices and $b$ is an arbitrary given $n$-dimensional vector. In this paper, we reformulate AVEs as a family of parameterized smooth equations and propose a smoothing-type algorithm to solve AVEs. Under the assumption that the minimal singular value of the matrix $A$ is strictly greater than the maximal singular value of the matrix $B$, we prove that the algorithm is well-defined. In particular, we show that the algorithm is globally convergent and the convergence rate is quadratic without any additional assumption. The preliminary numerical results are reported, which show the effectiveness of the algorithm. Abstract: We examine the problem of optimal capacity reservation policy on an innovative product in a setting of one supplier and one retailer. The parameters of the capacity reservation policy are two-dimensional: the reservation price and the excess capacity that the supplier will have in addition to the reservation amount.
The above problem is analyzed using a two-stage Stackelberg game. In the first stage, the supplier announces the capacity reservation policy. The retailer forecasts the future demand and then determines the reservation amount. After receiving the reservation amount, the supplier expands the capacity. In the second stage, the uncertainty in demand is resolved and the retailer places a firm order. The supplier salvages the excess capacity and the associated payments are made. In the paper, with an exogenous reservation price or exogenous excess capacity level, we study the optimal expansion policy and then investigate the impacts of the reservation price or excess capacity level on the optimal strategies. Finally, we characterize the Nash equilibrium and derive the optimal capacity reservation policy, in which the supplier will adopt an exact capacity expansion policy. Abstract: This paper develops three (re)ordering models of a supply chain consisting of one risk-neutral manufacturer and one loss-averse retailer to study the coordination mechanism and the effects of the reordering policy on the coordination mechanism. The three (re)ordering policies are a twice ordering policy with break-even quantity, a twice ordering policy without break-even quantity and a once ordering policy, respectively. We design a buyback-setup-cost-sharing mechanism to coordinate the supply chain for each policy, and Pareto analysis indicates that both the manufacturer and the retailer will realize a 'win-win' situation. By comparing the models, we find that the twice ordering policy with break-even quantity is absolutely dominant for both the retailer and the supply chain. However, only if the break-even quantity is less than the mean quantity to failure is the twice ordering policy without break-even quantity dominant over the once ordering policy. A higher marginal revenue can induce a larger order quantity from the retailer under both the twice ordering policy with break-even quantity and the once ordering policy.
However, it is interesting that it has no effect on the order plan of centralized decision-maker in twice ordering policy without break-even quantity. Abstract: In today's business environment, there are various reasons, namely, bulk purchase discounts, seasonality of products, re-order costs, etc., which force the buyer to order more than the warehouse capacity (owned warehouse). Such reasons call for additional storage space to store the excess units purchased. This additional storage space is typically a rented warehouse. It is known that the demand of seasonal products increases at the beginning of the season up to a certain moment and then is stabilized to a constant rate for the remaining time of the season (ramp type demand rate). As a result, the buyer prefers to keep a higher inventory at the beginning of the season and so more units than can be stored in owned warehouse may be purchased. The excess quantities need additional storage space, which is facilitated by a rented warehouse. In this study an order level two-warehouse inventory model for deteriorating seasonal products is studied. Shortages at the owned warehouse are allowed subject to partial backlogging. This two-warehouse inventory model is studied under two different policies. The first policy starts with an instant replenishment and ends with shortages and the second policy starts with shortages and ends without shortages. For each of the models, conditions for the existence and uniqueness of the optimal solution are derived and a simple procedure is developed to obtain the overall optimal replenishment policy. The dynamics of the model and the solution procedure have been illustrated with the help of a numerical example and a comprehensive sensitivity analysis, with respect to the most important parameters of the model, is considered. 
Abstract: In the framework of multi-choice games, we propose a specific reduction to construct a dynamic process for the multi-choice Shapley value introduced by Nouweland et al. [8]. Abstract: In [8], Zhang et al. proposed a modified three-term HS (MTTHS) conjugate gradient method and proved that this method converges globally for nonconvex minimization in the sense that $\liminf_{k\to\infty}\|\nabla f(x_k)\|=0$ when the Armijo or Wolfe line search is used. In this paper, we further study the convergence property of the MTTHS method. We show that the MTTHS method has the strong global convergence property (i.e., $\lim_{k\to\infty}\|\nabla f(x_k)\|=0$) for nonconvex optimization by the use of the backtracking type line search in [7]. Some preliminary numerical results are reported. Abstract: This paper analyzes an M/G/1 queue with general setup times from an economical point of view. In such a queue, whenever the system becomes empty, the server is turned off. A new customer's arrival will turn the server on after a setup period. Upon arrival, the customers decide whether to join or balk the queue based on observation of the queue length and the status of the server, along with the reward-cost structure of the system. For the observable and almost observable cases, the equilibrium joining strategies of customers who wish to maximize their expected net benefit are obtained. Two numerical examples are presented to illustrate the equilibrium joining probabilities for these cases under some specific distribution functions of service times and setup times. Abstract: In this paper, a new non-monotone trust-region algorithm is proposed for solving unconstrained nonlinear optimization problems. We modify the retrospective ratio which is introduced by Bastin et al. [Math. Program., Ser. A (2010) 123: 395-418] to form a convex combination ratio for updating the trust-region radius. Then we combine the non-monotone technique with this new framework of trust-region algorithm.
The new algorithm is shown to be globally convergent to a first-order critical point. Numerical experiments on CUTEr problems indicate that it is competitive with both the original retrospective trust-region algorithm and the classical trust-region algorithms. Abstract: The aim of this paper is to develop an improved inventory model which helps enterprises to increase profit and reduce cost in a single-vendor-single-buyer environment with permissible delay in payments depending on the ordering quantity and imperfect production. Through this study, some numerical examples available in the literature are provided herein to apply the strategy of permissible delay in payments depending on the ordering quantity. Furthermore, imperfect products will increase the cost and the number of lots through the whole model. Therefore, to conform more closely to actual inventories and respond to the factors that contribute to inventory costs, our proposed model can serve as a reference for business applications. Finally, the results of this study showed that applying the permissible delay in payments can promote cost reduction, and also showed that a longer trade credit term can decrease costs for the complete supply chain. Abstract: Channel coordination is an optimal state with operation of channel. For achieving channel coordination, we present a quantity discount mechanism based on a fairness preference theory. Game models of the channel discount mechanism are constructed based on entire rationality and self-interest. The study shows that as long as the degree of attention (parameters) of the retailer to the manufacturer's profit and the fairness preference coefficients (parameters) of retailers satisfy certain conditions, channel coordination can be achieved by setting a simple wholesale price and fixed costs.
We also discuss the allocation method of the channel coordination profit; the allocation method ensures that the retailer's profit is equal to the profit of independent decision-making, and the manufacturer's profit is raised. Abstract: Constraint qualification (CQ) is an important concept in nonlinear programming. This paper investigates the motivation for introducing constraint qualifications in developing KKT conditions for solving nonlinear programs and provides a geometric meaning of constraint qualifications. A unified framework of designing constraint qualifications by imposing conditions to equate the so-called "locally constrained directions" to certain subsets of "tangent directions" is proposed. Based on the inclusion relations of the cones of tangent directions, attainable directions, feasible directions and interior constrained directions, constraint qualifications are categorized into four levels by their relative strengths. This paper reviews most, if not all, of the commonly seen constraint qualifications in the literature, identifies the categories they belong to, and summarizes the inter-relationship among them. The proposed framework also helps design new constraint qualifications of readers' specific interests.
First: Yes, when you are dealing with a function $f$ of one real variable $x$, the partial derivative $\frac{\partial f}{\partial x}$ coincides with the total derivative of $f$ with respect to $x$. Beware that those are generally two different things. They only coincide for functions that have purely explicit relations to the variable $x$ in question. That is, the other variables do not depend on $x$. Intuitively the total derivative of $f$ measures how $f$ changes along the direction $x$ as all its variables vary. The partial derivative measures how $f$ changes along $x$ when the other parameters do not vary. You will soon learn about the derivative along a vector field or Lie derivative. This is the really nice one, as it does not depend on your choice of coordinates $x_1,\dots,x_n$. Second: After briefly conferring with Git Gud I hereby revise the second section. The symbol $\frac{\partial f(x)}{\partial x}$ is not meaningless. The following abstract nonsense is well defined. Consider $\frac{\partial f(x)}{\partial x}$ to be the derivative of $f$ with respect to a constant $x$ evaluated at that constant. Note that the derivative of any function with respect to a constant always turns out to be the empty function. In other words, each value is the empty set. Now the symbol $\frac{\partial f(x)}{\partial x}$ stands for the empty set, hence the formula in question is well formed. Your lecturer must have had something similar in mind, or the formula he wrote is not well formed. Even within this context it is false. You have the right to be confused, but do ask him politely about it. Third: It is strange at first, but when you think about it you realize the need for your parameters to depend further on some other quantities. Consider a change of variables. Your new and your old variables are related to each other... Now if you define the differential $df$ by $$df(x_1,x_2,\dots,x_n)=\sum_{i=1}^n\frac{\partial f}{\partial x_i}dx_i$$ then this is what the differential means.
This meaning becomes more sensible when you study differential forms in differential geometry. In layman's terms, the differentials, also called differential $1$-forms, are just the smooth covector fields. EDIT: (Answer to the comments) The symbol $\frac{\partial f}{\partial x_1}\frac{\partial x_1}{\partial x}$ denotes the product of the partial derivative of $f$ with respect to $x_1$ with the partial derivative of $x_1$ with respect to $x$. Here $f$ is a smooth function of $x$ and $x_1$. You seem to be having trouble allowing the arguments $x$ and $x_1$ to depend on each other. This is an abstraction you have to allow. I hope the concept becomes clear through examples. a) Most often, there are no relations between the arguments indeed. Then $\frac{\partial x_i}{\partial x}=0$ and $\frac{\partial x}{\partial x_i}=0$. Then the total derivative coincides with the partial derivative. b) $\quad x'=\frac{dx}{dx}=\frac{\partial x}{\partial x}=1$. Then $\frac{\partial f}{\partial x}=\frac{\partial f}{\partial x}\frac{\partial x}{\partial x}$ c) Imagine $x_1$ as a function of $x$, for instance $x_1(x)=x^2$. Now define $f$ by $f(x,x_1)=x+x_1(x)$. This is a smooth function of $x$ and of $x_1$. Also $$\frac{df}{dx}=\frac{\partial f}{\partial x}\frac{\partial x}{\partial x}+\frac{\partial f}{\partial x_1}\frac{\partial x_1}{\partial x}=1+2x$$ d) Using the above notation consider the restrictions of $x_1$ and of $f$ to the non-negative part of the reals. Then $x=\sqrt{x_1}$ and $$\frac{df}{dx_1}=\frac{\partial f}{\partial x}\frac{\partial x}{\partial x_1}+\frac{\partial f}{\partial x_1}\frac{\partial x_1}{\partial x_1}=\frac{1}{2\sqrt{x_1}}+1$$ e) More generally consider a smooth function $f(x_1,\dots,x_n)$ and a relation $R(x_1,\dots,x_n)=0$ such that the function $R$ is also smooth. Now you may talk about $\frac{\partial x_i}{\partial x_j}$, for you can always solve the relation for $x_i$ as a function of the rest. This is as clear as it can get.
f) Consider also a change of variables. g) Consider implicit functions. h) Foremost I recommend reading about constraints and holonomic systems. This makes the concept more sensible... Wikipedia may not be a good source. I recommend E. T. Whittaker's Analytical Dynamics and Arnold's Mathematical Methods of Classical Mechanics.
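Example (c) can be sanity-checked with a symmetric difference quotient (my own sketch; the helper names and the step size are arbitrary choices):

```python
def f(x):
    # f(x, x1) with x1(x) = x**2 substituted, i.e. f = x + x**2
    return x + x**2

def numeric_derivative(g, x, h=1e-6):
    # symmetric difference quotient approximating dg/dx
    return (g(x + h) - g(x - h)) / (2 * h)

x = 1.7
analytic = 1 + 2 * x  # total derivative df/dx = 1 + 2x from the chain rule
print(analytic, numeric_derivative(f, x))
```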
I am trying to prove that the dihedral group $D_n$ has $2n$ elements by using the theory of group actions. Specifically I want to use the orbit-stabilizer theorem. So I need $D_n$ to act on a specific set $X$ and then compute the order of the stabilizer and the orbit for some $x\in X$. My question is: presuming $X=\{1,2,\cdots,n\}$ and $D_n$ acts in the canonical way on $X$, what will be the orbit and stabilizer of $x\in X$? By the dihedral group $D_n$ I understand the subgroup of $S_n$ generated by the two permutations $\sigma=\left(\begin{smallmatrix}1&2&\cdots&n\end{smallmatrix}\right)$ and $\tau=\left(\begin{smallmatrix}1&2&3&\cdots&n-1&n\\1&n&n-1&\cdots&3&2\end{smallmatrix}\right)$. Update: Here is my attempt: Let $D_n$ act on $X=\{1,2,\ldots,n\}$ in the canonical way, that is by $\rho\cdot x=\rho(x)$. Then $\mathcal{O}_1=X$ as $X=\{\sigma^i(1):1\le i\le n\}\subset\mathcal{O}_1$. So the action is transitive. Furthermore the stabilizer of $1$ is $\{\rho:\rho(1)=1\}$, i.e. all those permutations which fix $1$. Clearly $Id,\tau$ are such permutations. At this point I wish to prove that there are no other permutations which fix $1$. How do I do that? I am in particular bothered by permutations like $\sigma^{n-2}\tau\sigma^{n-2}\tau\sigma^{4}$: they fix $1$, but how do I show them to be the identity/$\tau$? (I do not wish to use the fact that $D_n=\{\sigma^i\circ\tau^j:0\le i<n,j=0,1\}$.)
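This is not a proof for general $n$, but the orbit and stabilizer can be inspected directly for a small $n$ by closing the two generators under composition; a brute-force sketch (the encoding of permutations as tuples is my own choice):

```python
n = 6  # a small test case; any n >= 3 works

# permutations stored as tuples p with p[i-1] = p(i)
sigma = tuple(i % n + 1 for i in range(1, n + 1))                 # rotation: i -> i+1, n -> 1
tau = tuple(1 if i == 1 else n + 2 - i for i in range(1, n + 1))  # reflection fixing 1: i -> n+2-i

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i] - 1] for i in range(n))

# close {sigma, tau} under composition; for a finite set of permutations
# this closure is exactly the subgroup they generate
group = {sigma, tau}
while True:
    new = {compose(p, q) for p in group for q in group} - group
    if not new:
        break
    group |= new

orbit_of_1 = {p[0] for p in group}                 # all images p(1)
stabilizer_of_1 = {p for p in group if p[0] == 1}
print(len(group), len(orbit_of_1), len(stabilizer_of_1))  # expect 2n, n, 2
```

The orbit-stabilizer count $|D_n| = |\mathcal{O}_1| \cdot |\mathrm{Stab}(1)| = n \cdot 2$ is then visible in the output.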
Let $$z(x,y)=\int_{1}^{x^{2}-y^{2}}\left[\int_{0}^{u}\sin(t^{2})dt\right]du.$$ Calculate $$\frac{\partial^{2}z}{\partial x\partial y}$$ I tried to solve this using the Fundamental Theorem of Calculus. I also found a solution like this: using the Fundamental Theorem of Calculus, we get: $$\frac{\partial z}{\partial y}=\left[\int_{0}^{x^{2}-y^{2}}\sin(t^{2})dt\right]\cdot(-2y)$$ I can't understand why the limits of integration have changed and why I have to multiply by the partial derivative of $x^{2}-y^{2}$ with respect to $y$. Thanks.
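One way to organize the computation (a standard application of the Fundamental Theorem of Calculus plus the chain rule, written out here for clarity): set $F(u)=\int_0^u \sin(t^2)\,dt$, so that $z(x,y)=\int_1^{x^2-y^2}F(u)\,du$. Differentiating the outer integral in its upper limit and applying the chain rule,

$$\frac{\partial z}{\partial y} = F(x^2-y^2)\cdot\frac{\partial}{\partial y}(x^2-y^2) = -2y\int_0^{x^2-y^2}\sin(t^2)\,dt,$$

which is why the upper limit becomes $x^2-y^2$ and the factor $-2y$ appears. Differentiating once more, now in $x$ and again by FTC plus the chain rule,

$$\frac{\partial^2 z}{\partial x\,\partial y} = -2y\cdot\sin\!\big((x^2-y^2)^2\big)\cdot 2x = -4xy\,\sin\!\big((x^2-y^2)^2\big).$$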
Forgive me for what is probably a simple question, I am new to this field. I am studying the Hirzebruch surfaces and their higher dimensional analogues $M_{n,k}$, defined to be the projective line bundles \begin{equation} M_{n,k}=\mathbb{P}(\mathcal{O}(-k)\oplus\mathcal{O}(0)) \end{equation} over $\mathbb{CP}^{n-1}$. Here, $\mathcal{O}(-1)$ is the tautological line bundle, and $\mathcal{O}(0)$ the trivial line bundle, both over $\mathbb{CP}^{n-1}$. We can define two divisors on $M_{n,k}$, namely $D_0$ to be the section with zero $\mathcal{O}(-k)$ component and $D_\infty$ to be the section with zero $\mathcal{O}(0)$ component. These two divisors then determine holomorphic line bundles $[D_0]$ and $[D_\infty]$ in the usual way. But to a line bundle $L$ on a complex manifold $X$, we can associate its first Chern class $c_1(L)$, which I understand to be the cohomology class in $H^{1,1}(X;\mathbb{R})$ determined by the curvature form $R_h$ of any Hermitian metric $h$ on $L$ (this is independent of $h$). I am then told that the cohomology classes $c_1([D_0])$ and $c_1([D_\infty])$ span $H^{1,1}(M_{n,k};\mathbb{R})$, and that for any Kahler class $\alpha\in H^{1,1}(M_{n,k};\mathbb{R})$ (i.e. any class for which a Kahler metric/form can be chosen as a representative), one can find constants $0<a<b$ such that\begin{equation}\alpha=\frac{b}{k}[D_\infty]-\frac{a}{k}[D_0].\end{equation} I'm afraid that I have absolutely no idea as to how one can show this, or why it may be obvious. I'm aware that there are other ways of defining the 1st Chern class of a line bundle, and perhaps one of these may be more useful. Any help would be much appreciated!
Let $X$ be a (connected) topological space with a $C^\infty$ atlas. It is a known theorem that if $X$ is second-countable and Hausdorff, then it admits partitions of unity. I'm trying to prove the "reverse" theorem: Let $X$ be a (connected) topological space with a $C^\infty$ atlas. If $X$ admits partitions of unity, then $X$ is second-countable and Hausdorff. I was able to prove the Hausdorff condition by taking a partition of unity $\{\rho_p,\rho_q\}$ subordinate to $\{M-\{p\},M-\{q\}\}$ and taking neighbourhoods $U,V$ of $p,q$ small enough so that the values of $\rho_p,\rho_q$ in $U$ conflict with the ones in $V$ so that $U\cap V=\emptyset$. Now I'm stuck with second-countability. Here is my attempt: For each $p\in M$ take a chart $\varphi_p:U_p\to\mathbb{R}^n$. For a partition of unity $\{\rho_p\}$ subordinate to $\{U_p\}$, let: $$V_p:=\rho_p^{-1}(0,\infty)\subset U_p$$ By definition of partition of unity, $\{V_p\}$ is a locally finite refinement of $\{U_p\}$. Now since $U_p$ is homeomorphic to $\mathbb{R}^n$, $U_p$ is second countable and therefore $V_p$ is second-countable. I think the natural thing to do is to find countably many points $\{p_n\}_{n\in\mathbb{N}}$ so that $\{V_{p_n}\}_{n\in\mathbb{N}}$ is a cover for $X$, but I can't see how to do that.
Suppose that $A$ and $B$ are DFAs. We know that there is some DFA $M$ such that $L(M) = L(A) \bigtriangleup L(B)$, the symmetric difference. Also, we can construct this $M$ by some Turing machine $N$. But can we ensure that $N$ has the following form? $N$ consists of (i) a read-only input tape, (ii) a work tape that is log-space with respect to $|\langle A, B\rangle|$, and (iii) a one-way, write-only, polynomial-time output tape. This really comes down to showing that this kind of TM can construct DFAs for $L(A) \cup L(B)$ and $L(A) \cap L(B)$. But it's not clear to me how this would work. Any help is appreciated.
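For concreteness, the underlying product construction can be sketched in ordinary in-memory Python (the DFA encoding below, tuples of states with a transition dict, is my own choice for illustration, not from the source; the logspace claim is about *emitting* this product with a few counters rather than materializing it):

```python
from itertools import product

# A DFA is a 5-tuple (states, alphabet, delta, start, accepting),
# with delta a dict mapping (state, symbol) -> state.
def product_dfa(A, B, accept_pair):
    """Product automaton; accept_pair combines the two acceptance flags."""
    statesA, alpha, dA, sA, FA = A
    statesB, _, dB, sB, FB = B
    states = list(product(statesA, statesB))
    delta = {((p, q), a): (dA[p, a], dB[q, a]) for (p, q) in states for a in alpha}
    accepting = {(p, q) for (p, q) in states if accept_pair(p in FA, q in FB)}
    return states, alpha, delta, (sA, sB), accepting

def run(dfa, word):
    states, alpha, delta, state, accepting = dfa
    for a in word:
        state = delta[state, a]
    return state in accepting

# A: words over {0,1} with an even number of 0s; B: words ending in 1
A = ([0, 1], ['0', '1'],
     {(0, '0'): 1, (0, '1'): 0, (1, '0'): 0, (1, '1'): 1}, 0, {0})
B = (['e', 'f'], ['0', '1'],
     {('e', '0'): 'e', ('e', '1'): 'f', ('f', '0'): 'e', ('f', '1'): 'f'}, 'e', {'f'})

# symmetric difference = XOR of the two acceptance flags
symdiff = product_dfa(A, B, lambda a, b: a != b)
```

Union and intersection are the same construction with `or`/`and` in place of XOR. Since each product state, transition, and acceptance bit depends only on one pair of states and one symbol, a transducer can enumerate them with counters that are logarithmic in $|\langle A, B\rangle|$, which is what the question's machine $N$ needs.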
On the DNA Computer Binary Code In any finite set we can define a binary operation and a partial order in different ways. But here, a partial order is defined in the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that captures essential properties of both set operations and logic operations. This partial order is defined based on the physico-chemical properties of the DNA bases: hydrogen bond number and chemical type, purine {A, G} or pyrimidine {U, C}. This physico-mathematical description permits the study of the genetic information carried by the DNA molecules as a computer binary code of zeros (0) and ones (1). 1. Boolean lattice of the four DNA bases In any four-element Boolean lattice every element is comparable to every other, except two of them that are, nevertheless, complementary. Consequently, to build a four-base Boolean lattice it is necessary for the bases with the same number of hydrogen bonds in the DNA molecule and in different chemical types to be complementary elements in the lattice. In other words, the complementary bases in the DNA molecule (G≡C and A=T, or A=U during the translation of mRNA) should be complementary elements in the Boolean lattice. Thus, there are four possible lattices, each one with a different base as the maximum element. 2. Boolean (logic) operations in the set of DNA bases The Boolean algebra on the set of elements X will be denoted by $(B(X), \vee, \wedge)$. Here the operators $\vee$ and $\wedge$ represent the classical "OR" and "AND" term-by-term. From the Boolean algebra definition it follows that this structure is (among other things) a partially ordered set in which any two elements $\alpha$ and $\beta$ have upper and lower bounds. In particular, the greatest lower bound of the elements $\alpha$ and $\beta$ is the element $\alpha\wedge\beta$ and the least upper bound is the element $\alpha\vee\beta$.
The resulting partially ordered set is called a Boolean lattice. In every Boolean algebra $(B(X), \vee, \wedge)$, for any two elements $\alpha,\beta \in X$ we have $\alpha \le \beta$ if and only if $\neg\alpha\vee\beta=1$, where the symbol "$\neg$" stands for logic negation. If the last equality holds, then it is said that $\beta$ is deduced from $\alpha$. Furthermore, if $\alpha \le \beta$ or $\alpha \ge \beta$, the elements $\alpha$ and $\beta$ are said to be comparable; otherwise, they are not comparable. In the set of four DNA bases, we can build twenty-four isomorphic Boolean lattices [1]. Herein, we focus our attention on the one described in reference [2], where the DNA bases C and G are taken as the maximum and minimum elements, respectively, in the Boolean lattice. The logic operations in this DNA computer code are given in the following tables:

OR:

| $\vee$ | G | A | U | C |
|---|---|---|---|---|
| G | G | A | U | C |
| A | A | A | C | C |
| U | U | C | U | C |
| C | C | C | C | C |

AND:

| $\wedge$ | G | A | U | C |
|---|---|---|---|---|
| G | G | G | G | G |
| A | G | A | G | A |
| U | G | G | U | U |
| C | G | A | U | C |

It is well known that all Boolean algebras with the same number of elements are isomorphic. Therefore, our algebra $(B(X), \vee, \wedge)$ is isomorphic to the Boolean algebra $(\mathbb{Z}_2^2, \vee, \wedge)$, where $\mathbb{Z}_2 = \{0,1\}$. Then, we can represent this DNA Boolean algebra by means of the correspondence: $G \leftrightarrow 00$; $A \leftrightarrow 01$; $U \leftrightarrow 10$; $C \leftrightarrow 11$. So, in accordance with the operation tables:

$A \vee U = C \leftrightarrow 01 \vee 10 = 11$

$U \wedge G = G \leftrightarrow 10 \wedge 00 = 00$

$G \vee C = C \leftrightarrow 00 \vee 11 = 11$

A Boolean lattice has in correspondence a directed graph called a Hasse diagram, where two nodes (elements) $\alpha$ and $\beta$ are connected with a directed edge from $\alpha$ to $\beta$ (or from $\beta$ to $\alpha$) if, and only if, $\alpha \le \beta$ (respectively $\alpha \ge \beta$) and there is no other element between $\alpha$ and $\beta$.

3.
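These base-level operations can be checked mechanically with the 2-bit encoding; a minimal sketch in Python (the function names are mine):

```python
# 2-bit encoding from the post: G=00, A=01, U=10, C=11;
# the lattice operations are then bitwise OR / AND, negation flips both bits.
CODE = {'G': 0b00, 'A': 0b01, 'U': 0b10, 'C': 0b11}
BASE = {v: k for k, v in CODE.items()}

def b_or(x, y):   # least upper bound (join)
    return BASE[CODE[x] | CODE[y]]

def b_and(x, y):  # greatest lower bound (meet)
    return BASE[CODE[x] & CODE[y]]

def b_not(x):     # complement
    return BASE[CODE[x] ^ 0b11]
```

Running the three worked examples through these helpers reproduces the operation tables, e.g. `b_or('A', 'U')` gives `'C'`.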
The Genetic-Code Boolean Algebras

Boolean algebras of codons are derived explicitly as the direct product $C(X) = B(X) \times B(X) \times B(X)$. These algebras are isomorphic to the dual Boolean algebras $(\mathbb{Z}_2^6, \vee, \wedge)$ and $(\mathbb{Z}_2^6, \wedge, \vee)$ induced by the isomorphism $B(X) \cong \mathbb{Z}_2^2$, where $X$ runs over the twenty-four possible ordered sets of four DNA bases [1]. For example:

CAG $\vee$ AUC = CCC $\leftrightarrow$ 110100 $\vee$ 011011 = 111111

ACG $\wedge$ UGA = GGG $\leftrightarrow$ 011100 $\wedge$ 100001 = 000000

$\neg$(CAU) = GUA $\leftrightarrow$ $\neg$(110110) = 001001

The Hasse diagram for the corresponding Boolean algebra, derived from the direct product of the Boolean algebra of the four DNA bases given in the above operation tables, is shown in the figure. In the Hasse diagram, chains and anti-chains can be identified. A Boolean lattice subset is called a chain if any two of its elements are comparable; if, on the contrary, no two of its elements are comparable, the subset is called an anti-chain. In the Hasse diagram of codons shown in the figure, all chains of maximal length have the same minimum element GGG and maximum element CCC. Two codons are in the same chain of maximal length if and only if they are comparable, for example the chain:

GGG $\leftrightarrow$ GAG $\leftrightarrow$ AAG $\leftrightarrow$ AAA $\leftrightarrow$ AAC $\leftrightarrow$ CAC $\leftrightarrow$ CCC

The symmetry of the Hasse diagram reflects the role of hydrophobicity in the distribution of codons assigned to each amino acid. In general, codons that code for amino acids with extreme hydrophobic differences lie in different chains of maximal length. In particular, codons with U as a second base appear in chains of maximal length whereas codons with A as a second base do not.
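The codon-level examples above can likewise be verified componentwise; a small self-contained sketch, mapping each codon to a 6-bit word under the 2-bit code $G\leftrightarrow 00$, $A\leftrightarrow 01$, $U\leftrightarrow 10$, $C\leftrightarrow 11$ (the helper names are mine):

```python
CODE = {'G': 0, 'A': 1, 'U': 2, 'C': 3}
BASE = 'GAUC'

def encode(codon):
    """Pack a codon into a 6-bit word, e.g. 'CAG' -> 0b110100."""
    w = 0
    for b in codon:
        w = (w << 2) | CODE[b]
    return w

def decode(w):
    """Unpack a 6-bit word back into a codon string."""
    return ''.join(BASE[(w >> s) & 3] for s in (4, 2, 0))

def c_or(x, y):  return decode(encode(x) | encode(y))
def c_and(x, y): return decode(encode(x) & encode(y))
def c_not(x):    return decode(encode(x) ^ 0b111111)
```

With these, `c_or('CAG', 'AUC')`, `c_and('ACG', 'UGA')` and `c_not('CAU')` reproduce exactly the three examples in the text.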
For that reason, it is impossible to obtain hydrophobic amino acids with codons having U in the second position through deductions from hydrophilic amino acids with codons having A in the second position. There are twenty-four Hasse diagrams of codons, corresponding to the twenty-four genetic-code Boolean algebras. These algebras are integrated into a group isomorphic to the symmetric group of degree four, $S_4$ [1]. In summary, the DNA binary code is not arbitrary, but subject to logic operations with an underlying biophysical meaning.

References

1. Sanchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527-60.
2. Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull Math Biol, 2005, 67:1-14.
Answer

The diesel engine is more efficient.

Work Step by Step

We know that the value of $\gamma$ for air is 1.4. We use the efficiency of the gas engine found in problem 55: $e=1-r^{1-\gamma}$ $e=1-8.3^{1-1.4}=0.57$ In problem 57, we found that the efficiency of the diesel engine is given by: $e_{diesel}=1-\frac{r^{1-\gamma}(\alpha^{\gamma}-1)}{(\alpha-1)\gamma}$ Thus, we find: $e_{diesel}=1-\frac{19^{1-1.4}(2.4^{1.4}-1)}{(2.4-1)(1.4)}=0.62$ The diesel engine is more efficient.
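The arithmetic can be reproduced in a few lines (a quick numerical check, not part of the original solution; the function names are mine):

```python
gamma = 1.4  # adiabatic index for air

def gas_efficiency(r):
    """Efficiency formula quoted from problem 55: e = 1 - r^(1-gamma)."""
    return 1 - r ** (1 - gamma)

def diesel_efficiency(r, alpha):
    """Efficiency formula quoted from problem 57."""
    return 1 - (r ** (1 - gamma) * (alpha ** gamma - 1)) / ((alpha - 1) * gamma)

e_gas = gas_efficiency(8.3)        # ≈ 0.57
e_diesel = diesel_efficiency(19, 2.4)  # ≈ 0.62
```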
The $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z}_5, +, \cdot)$

1. Background

This is a formal introduction to the genetic code $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z}_5, +, \cdot)$. This mathematical model is defined based on the physicochemical properties of the DNA bases (see previous post). This introduction can be complemented with a Wolfram Computable Document Format (CDF) file named IntroductionToZ5GeneticCodeVectorSpace.cdf, available on GitHub. This is a graphical user interface with an interactive, didactic introduction to the mathematical biology background explained here. To interact with a CDF, users require the Wolfram CDF Player or Mathematica. The Wolfram CDF Player is freely available (easy installation on Windows OS and on Linux OS).

2. Biological mathematical model

If the Watson-Crick base pairings are symbolically expressed by means of the sum "+" operation, in such a way that G + C = C + G = D and U + A = A + U = D hold, then this requirement leads us to define an additive group $(\mathfrak{B}, +)$ on the set of five DNA bases $\mathfrak{B}$. Explicitly, it was required that the bases with the same number of hydrogen bonds in the DNA molecule and different chemical types be algebraic inverses in the additive group defined on the set of DNA bases $\mathfrak{B}$. In fact, eight sum tables (like the one shown below), which satisfy the last constraints, can be defined on the eight ordered sets: {D, A, C, G, U}, {D, U, C, G, A}, {D, A, G, C, U}, {D, U, G, C, A}, {G, A, U, C}, {G, U, A, C}, {C, A, U, G} and {C, U, A, G} [1,2]. The sets originated by these base orders are called the strong-weak ordered sets of bases [1,2] since, for each one of them, the algebraic-complementary bases are DNA complementary bases as well, pairing with three hydrogen bonds (strong, G:::C) and two hydrogen bonds (weak, A::U). We shall denote this set SW.
The set of extended base triplets is defined as $\mathfrak{B}^3$ = { XYZ | X, Y, Z $\in\mathfrak{B}$ }, where, to keep the usual biological notation for codons, the triplet of letters $XYZ\in\mathfrak{B}^3$ denotes the vector $(X,Y,Z)\in\mathfrak{B}^3$ and $\mathfrak{B} =$ {D, A, C, G, U}. An Abelian group on the set of extended triplets can be defined as the direct third power of the group $(\mathfrak{B}, +)$:

$(\mathfrak{B}^3,+) = (\mathfrak{B},+)\times(\mathfrak{B},+)\times(\mathfrak{B},+)$

where X, Y, Z $\in\mathfrak{B}$, and the operation "+" is as shown in the table below [2]. Next, for all elements $\alpha\in\mathbb{Z}_{(+)}$ (the set of nonnegative integers) and all codons $XYZ\in(\mathfrak{B}^3,+)$, the element

$\alpha \bullet XYZ = \overbrace{XYZ+XYZ+\dots+XYZ}^{\hbox{$\alpha$ times}}\in(\mathfrak{B}^3,+)$

is well defined. In particular, $0 \bullet XYZ =$ DDD for all $XYZ\in(\mathfrak{B}^3,+)$. As a result, $(\mathfrak{B}^3,+)$ is a three-dimensional (3D) $\mathbb{Z}_5$-vector space over the field $(\mathbb{Z}_5, +, \cdot)$ of the integers modulo 5, which is isomorphic to the Galois field GF(5). Notice that the Abelian groups $(\mathbb{Z}_5, +)$ and $(\mathfrak{B},+)$ are isomorphic. For the sake of brevity, the same notation $\mathfrak{B}^3$ will be used to denote the group $(\mathfrak{B}^3,+)$ and the vector space defined on it.

| + | D | A | C | G | U |
|---|---|---|---|---|---|
| D | D | A | C | G | U |
| A | A | C | G | U | D |
| C | C | G | U | D | A |
| G | G | U | D | A | C |
| U | U | D | A | C | G |

This operation is only one of the eight sum operations that can be defined on each one of the ordered sets of bases from SW.

3. The canonical base of the $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$

Next, in the vector space $\mathfrak{B}^3$, the vectors (extended codons) $e_1 =$ ADD, $e_2 =$ DAD and $e_3 =$ DDA are linearly independent, i.e., $\sum_{i=1}^3 c_i e_i =$ DDD implies $c_1=c_2=c_3=0$, for $c_1, c_2, c_3 \in\mathbb{Z}_5$.
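A quick computational sketch of this group, using the isomorphism $(\mathfrak{B},+)\cong(\mathbb{Z}_5,+)$ with the assignment D=0, A=1, C=2, G=3, U=4 read off the sum table (the helper names are mine):

```python
# Base order chosen so that the cyclic Z_5 addition reproduces the sum table.
ORDER = 'DACGU'
IDX = {b: i for i, b in enumerate(ORDER)}

def base_add(x, y):
    """Sum of two bases via the Z_5 isomorphism."""
    return ORDER[(IDX[x] + IDX[y]) % 5]

def triplet_add(t1, t2):
    """Componentwise sum of two extended triplets."""
    return ''.join(base_add(a, b) for a, b in zip(t1, t2))

def scalar_mul(alpha, triplet):
    """Scalar action alpha . XYZ (repeated addition, computed mod 5)."""
    return ''.join(ORDER[(alpha * IDX[b]) % 5] for b in triplet)
```

As a sanity check, the Watson-Crick pairs sum to D (`base_add('G', 'C') == 'D'`), and `scalar_mul(0, t)` gives DDD for every triplet, matching the definitions above.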
Moreover, the representation of every extended triplet $XYZ\in\mathfrak{B}^3$ over the field $\mathbb{Z}_5$ as $XYZ=xe_1+ye_2+ze_3$ is unique, and the generating set $\{e_1, e_2, e_3\}$ is a canonical base for the $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$. The elements $x, y, z \in\mathbb{Z}_5$ are said to be the coordinates of the extended triplet $XYZ\in\mathfrak{B}^3$ in the canonical base $(e_1, e_2, e_3)$ [3].

References

1. José M V, Morgado ER, Sánchez R, Govezensky T. The 24 Possible Algebraic Representations of the Standard Genetic Code in Six or in Three Dimensions. Adv Stud Biol, 2012, 4:119-52.
2. Sanchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527-60.
3. Sánchez R, Grau R. An algebraic hypothesis about the primeval genetic code architecture. Math Biosci, 2009, 221:60-76.
Let $f:\mathbb{R}^n \to \mathbb{R}^n$ be continuous and let there exist $\alpha > 0$ such that $||f(\mathbf{x}) - f(\mathbf{y})|| \geq \alpha || \mathbf{x} - \mathbf{y}||$ for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$. Prove that $f$ is one-one, onto and that $f^{-1}$ is continuous. One-one is trivial. It is onto-ness that I can't show. Write $S = f(\mathbb{R}^n)$. Using sequential continuity, it is possible to show that $S$ is closed. If I could show $S$ is open, I would be done, but I can't. Also, writing $g(\mathbf{x}) := \dfrac{f(\mathbf{x})}{\alpha}$, the condition can be converted to that of proper expansive map, $||g(\mathbf{x}) - g(\mathbf{y})|| \geq || \mathbf{x} - \mathbf{y}||$. But since $\mathbb{R}^n$ is not compact, I cannot use the result here. Any help is appreciated! EDIT: As commented below, the Invariance of Domain theorem seems to work in this case, but that result does not use the expansive-type condition provided here (except for showing the injectivity), and so it appears that an easier proof would be possible.
(A typical chips and crepe packaging cone, for example, has $V = 355\ \mathrm{cm}^3$.) What dimensions (height and radius) will minimize the cost of recycled paper to construct the cone?

Presume the cost of material is proportional to surface area; thus we seek to minimize surface area. Look up the formula for the surface area of a cone (in terms of its height and radius). Write the surface area in terms of one variable ($r$ or $h$). This requires a constraint relationship, which is the equation for the volume of a cone. The volume is a constant, so $r$ can be written in terms of $h$, or vice versa, and substituted into the surface area equation to reduce it to a single variable (whichever is more convenient). Differentiate the surface area with respect to the chosen variable, set the derivative equal to zero, and solve for the minimizing value of the variable. Plug the minimizing value of $r$ (or $h$) into the constraint (volume) equation to find the value of the other variable.

You will need the formula for the surface area, which is given by $$M=\pi r\sqrt{r^2+h^2}+\pi r^2$$ and the formula for the volume $$V=\frac{1}{3}\pi r^2h$$ so we have to optimize the function $$g(r)=\pi r\sqrt{r^2+\left(\frac{3V}{\pi r^2}\right)^2}+\pi r^2$$
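The steps above can be cross-checked numerically (a crude grid search standing in for solving $g'(r)=0$; the search range is my choice):

```python
import math

V = 355.0  # cm^3, the stated volume

def area(r):
    """Total surface area M = pi*r*sqrt(r^2 + h^2) + pi*r^2,
    with h eliminated via the volume constraint h = 3V / (pi r^2)."""
    h = 3 * V / (math.pi * r ** 2)
    return math.pi * r * math.sqrt(r ** 2 + h ** 2) + math.pi * r ** 2

# Crude grid search over r in [0.5, 20.5) cm in 0.001 cm steps.
r_best = min((0.5 + 0.001 * k for k in range(20000)), key=area)
h_best = 3 * V / (math.pi * r_best ** 2)
```

The minimizer found this way should agree (to grid resolution) with the root of $g'(r)=0$ obtained by calculus.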
Interested in the following function: $$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$ where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following: $$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ such that the $x$-th bit is set to 1, $y$ bits are set to 1 including the $x$-th bit, and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition of algebraic closure, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme in the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length.
Is that often part of the definition, or obtained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to the above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so for $|z| < |z_0|$ we have $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable. Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$, for all $n \ge 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$, for all $t \in [0,\frac{1}{2}]$. Can you give some hint? My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of arbitrary independent functions of the proper function space. I now obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0), I get a set of $n$ linear homogeneous equations in the $n$ coefficients. Now instead of "directly attempting to solve" the equations for the coefficients, I rather look at the secular determinant, which should be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional are instead obtained directly by using the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome, with digitsum($z$) = digitsum($x$). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave an easier example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
ISSN: 1078-0947 eISSN: 1553-5231

Discrete & Continuous Dynamical Systems - A, October 2015, Volume 35, Issue 10

Abstract: We study the set of periods of degree 1 continuous maps from $\sigma$ into itself, where $\sigma$ denotes the space shaped like the letter $\sigma$ (i.e., a segment attached to a circle by one of its endpoints). Since the maps under consideration have degree 1, rotation theory can be used. We show that, when the interior of the rotation interval contains an integer, the set of periods (of periodic points of any rotation number) is the set of all integers except maybe $1$ or $2$. We exhibit degree 1 $\sigma$-maps $f$ whose set of periods is a combination of the set of periods of a degree 1 circle map and the set of periods of a $3$-star (that is, a space shaped like the letter $Y$). Moreover, we study the set of periods forced by periodic orbits that do not intersect the circuit of $\sigma$; in particular, when there exists such a periodic orbit whose diameter (in the covering space) is at least $1$, there exist periodic points of all periods.

Abstract: We study sectional-Anosov flows on compact $3$-manifolds. First we prove that every periodic orbit represents an infinite-order element of the fundamental group outside the strong stable manifolds of the singularities. Next, in the transitive case, we prove that the first Betti number of the manifold is positive, that the number of singularities is given by the Euler characteristic, and that every connected component of the boundary has nonpositive Euler characteristic. Moreover, there is one component with negative characteristic if and only if the flow has singularities. These results will be used to discuss the existence of transitive sectional-Anosov flows on specific compact 3-manifolds with boundary.
Abstract: Spacing shifts were introduced by Lau and Zame in the 1970's to provide accessible examples of maps that are weakly mixing but not mixing. In previous papers by the authors and others, it has been observed that the problem of describing when spacing shifts are topologically transitive appears to be quite difficult in general. In the present paper, we give a characterization of sofic spacing shifts and begin to investigate which sofic spacing shifts are topologically transitive. We show that the canonical graph presentation of such a shift has a rather simple form, for which we introduce the terminology hereditary bunched cycle, and discuss the apparently difficult problem of determining which hereditary bunched cycles actually present spacing shifts.

Abstract: We present a method designed for computing solutions of infinite-dimensional nonlinear operators $f(x)=0$ with a tridiagonal dominant linear part. We recast the operator equation into an equivalent Newton-like equation $x=T(x)=x-Af(x)$, where $A$ is an approximate inverse of the derivative $Df(\overline{x})$ at an approximate solution $\overline{x}$. We present rigorous computer-assisted calculations showing that $T$ is a contraction near $\overline{x}$, thus yielding the existence of a solution. Since $Df(\overline{x})$ does not have an asymptotically diagonal dominant structure, the computation of $A$ is not straightforward. This paper provides ideas for computing $A$, and proposes a new rigorous method for proving existence of solutions of nonlinear operators with tridiagonal dominant linear part.

Abstract: A global Calderón-Zygmund type estimate in weighted Lorentz spaces and Lorentz-Morrey spaces is obtained for weak solutions to elliptic obstacle problems of $p$-Laplacian type with discontinuous coefficients over Reifenberg flat domains.
Abstract: We consider a class of linear Schrödinger equations in $\mathbb{R}^d$ with rough Hamiltonian, namely with certain derivatives in the Sjöstrand class $M^{\infty,1}$. We prove that the corresponding propagator is bounded on modulation spaces. The present results improve several contributions that have recently appeared in the literature and can be regarded as the evolution counterpart of the fundamental result of Sjöstrand about the boundedness of pseudodifferential operators with symbols in that class. Finally we consider nonlinear perturbations of real-analytic type and we prove local well-posedness of the corresponding initial value problem in certain modulation spaces.

Abstract: We consider special flows over two-dimensional rotations by $(\alpha,\beta)$ on $\mathbb{T}^2$ and under piecewise $C^2$ roof functions $f$ satisfying von Neumann's condition \[\int_{\mathbb{T}^2}f_x(x,y)\,dx\,dy\neq 0\quad\text{ and }\quad \int_{\mathbb{T}^2}f_y(x,y)\,dx\,dy\neq 0.\] For an uncountable set of $(\alpha,\beta)$ with both $\alpha$ and $\beta$ of unbounded partial quotients the mixing property is proved to hold.

Abstract: We study the ergodic properties of a classical two-particle system with square-well pair potential in an interval.

Abstract: We study the bifurcation curve and exact multiplicity of positive solutions of a two-point boundary value problem arising in a theory of thermal explosion \begin{equation*} \left\{ \begin{array}{l} u^{\prime\prime}(x) + \lambda \exp ( \frac{au}{a+u}) =0, \quad -1 < x < 1, \\ u(-1)=u(1)=0, \end{array} \right. \end{equation*} where $\lambda >0$ is the Frank--Kamenetskii parameter and $a>0$ is the activation energy parameter. By developing some new time-map techniques and applying Sturm's theorem, we prove that, if $a\geq a^{\ast \ast }\approx 4.107$, the bifurcation curve is S-shaped on the $(\lambda ,\Vert u \Vert _{\infty })$-plane. Our result improves one of the main results in Hung and Wang (J.
Differential Equations 251 (2011) 223--237).

Abstract: Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^n$, $n\ge 3$, $0 < m \le \frac{n-2}{n}$, $a_1,a_2,\dots, a_{i_0}\in\Omega$, $\delta_0 = \min_{1 \le i \le i_0} \mbox{dist} (a_i,\partial\Omega)$, and let $\Omega_{\delta}=\Omega\setminus\cup_{i=1}^{i_0}B_{\delta}(a_i)$ and $\hat{\Omega}=\Omega\setminus\{a_1,\dots,a_{i_0}\}$. For any $0<\delta<\delta_0$ we will prove the existence and uniqueness of a positive solution of the Neumann problem for the equation $u_t=\Delta u^m$ in $\Omega_{\delta}\times (0,T)$ for some $T>0$. We will prove the existence of singular solutions of this equation in $\hat{\Omega}\times (0,T)$ for some $T>0$ that blow up at the points $a_1,\dots, a_{i_0}$.

Abstract: The Cauchy problem for Klein-Gordon equations is considered for power and exponential type nonlinear terms with singular weights. Time-local and global solutions are shown to exist in the energy class. The Caffarelli-Kohn-Nirenberg inequality and the Trudinger-Moser type inequality with singular weights are applied to the problem.

Abstract: We show that the fractional Laplacian can be viewed as a Dirichlet-to-Neumann map for a degenerate hyperbolic problem, namely, the wave equation with an additional diffusion term that blows up at time zero. A solution to this wave extension problem is obtained from the Schrödinger group by means of an oscillatory subordination formula, which also allows us to find kernel representations for such solutions. Asymptotics of related oscillatory integrals are analysed in order to determine the correct domains for initial data in the general extension problem involving non-negative self-adjoint operators. An alternative approach using Bessel functions is also described.

Abstract: This paper deals with a diffusive stage-structured model with state-dependent delay which is assumed to be an increasing function of the population density.
Compared with the constant delay, the state-dependent delay makes the dynamic behavior more complex. For the state-dependent delay system, the dynamic behavior depends on the diffusion coefficients, while the equilibrium state of the constant-delay system is not destabilized by diffusion. Through calculating the minimum wave speed, we find that the wave is slowed down by the state-dependent delay. Then, the existence of traveling waves is obtained by constructing a pair of upper and lower solutions and using Schauder's fixed point theorem. Finally, the traveling wavefront solutions for large wave speed are also discussed, and the fronts appear to be all monotone, regardless of the state-dependent delay. This is an interesting property, since it is frequently reported that delay causes a loss of monotonicity, with the front developing a prominent hump in some other delay models.

Abstract: Stability analysis is performed for a linear differential equation with two delays. Geometric arguments show that when the two delays are rationally dependent, the region of stability increases. When the ratio has the form $1/n$, this study finds the asymptotic shape and size of the stability region. For example, a delay ratio of $1/3$ asymptotically produces a stability region about 44.3% larger than that of any nearby delay ratios, showing extreme sensitivity in the delays. The study provides a systematic and geometric approach to finding the eigenvalues on the boundary of stability for this delay differential equation. A nonlinear model with two delays illustrates how our methods can be applied.

Abstract: The aim of this work is to show the existence of free boundary minimal surfaces of Saddle Tower type which are embedded in a vertical solid cylinder in $\mathbb{R}^3$ and invariant with respect to a vertical translation. The number of boundary curves equals $2l$, $l \ge 2$.
These surfaces come in families depending on one parameter, and they converge to $2l$ vertical stripes having a common vertical intersection line. Such surfaces are obtained by perturbing the symmetrically modified Saddle Tower minimal surfaces.

Abstract: In this paper we deal with Robin and Neumann parametric elliptic equations driven by a nonhomogeneous differential operator and with a reaction that exhibits competing nonlinearities (concave-convex nonlinearities). For the Robin problem, and without employing the Ambrosetti-Rabinowitz condition, we prove a bifurcation theorem for the positive solutions for small values of the parameter $\lambda>0$. For the Neumann problem, with a different geometry and using the Ambrosetti-Rabinowitz condition, we prove bifurcation for large values of $\lambda>0$.

Abstract: We study partially hyperbolic diffeomorphisms satisfying a trapping property which makes them look as if they were Anosov at large scale. We show that, as expected, they share several properties with Anosov diffeomorphisms. We construct an expansive quotient of the dynamics and study some dynamical consequences related to this quotient.

Abstract: This article is concerned with the study of Mather's $\beta$-function associated to Birkhoff billiards. This function corresponds to the minimal average action of orbits with a prescribed rotation number and, from a different perspective, it can be related to the maximal perimeter of periodic orbits with a given rotation number, the so-called marked length spectrum. After recalling its main properties and its relevance to the study of billiard dynamics, we stress its connections to some intriguing open questions: the Birkhoff conjecture and the isospectral rigidity of convex billiards. Both of these problems, in fact, can be conveniently translated into questions about this function. This motivates our investigation, which aims at understanding its main features and properties.
In particular, we provide an explicit representation of the coefficients of its (formal) Taylor expansion at zero, only in terms of the curvature of the boundary. In the case of integrable billiards, this result provides a representation formula for the $\beta$-function near $0$. Moreover, we apply and check these results in the case of circular and elliptic billiards.

Abstract: In this paper, the compressible magnetohydrodynamic equations without heat conductivity are considered in $\mathbb{R}^3$. The global solution is obtained by combining the local existence and a priori estimates under the smallness assumption on the initial perturbation in $H^l$ ($l>3$), but we do not need a bound on the $L^1$ norm. This is different from the work [5]. Our proof is based on pure estimates used to obtain time decay estimates on the pressure, velocity and magnetic field. In particular, we use the fast decay of the velocity gradient to get a uniform bound on the non-dissipative entropy, which is sufficient to close the a priori estimates. In addition, we study the optimal convergence rates of the global solution.

Abstract: A delayed lattice dynamical system with non-local diffusion and interaction is considered in this paper. The exact asymptotics of the wave profile at both wave tails is derived, and all the wave profiles are shown to be strictly increasing. Moreover, we prove that the wave profile with a given admissible speed is unique up to translation. These results generalize earlier monotonicity, asymptotics and uniqueness results in the literature.

Abstract: We present here a construction of horseshoes for any $\mathcal{C}^{1+\alpha}$ mapping $f$ preserving an ergodic hyperbolic measure $\mu$ with $h_{\mu}(f)>0$, and then deduce that the exponential growth rate of the number of periodic points for any $\mathcal{C}^{1+\alpha}$ mapping $f$ is greater than or equal to $h_{\mu}(f)$.
We also prove that the exponential growth rate of the number of hyperbolic periodic points is equal to the hyperbolic entropy. By hyperbolic entropy we mean the entropy resulting from hyperbolic measures. Abstract: This paper is concerned with a four-component Camassa-Holm type system proposed in [37], where its bi-Hamiltonian structure and infinitely many conserved quantities were constructed. In this paper, we first establish the local well-posedness for the system. Then we present several global existence and blow-up results for two integrable two-component subsystems.
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash wat did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved?
i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct, but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out.
The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable? I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like <!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
hydromechanics
(6 points) 6. Series 32. Year - 3. range
A container is filled with sulfuric acid to the height $h$. We drill a very small hole perpendicularly to the side of the container. What is the maximal distance (from the container) that the acid can reach from all possible positions of the hole? Assume the container is placed horizontally on the ground.
Do not leave drills where Jáchym may take them!
(6 points) 5. Series 32. Year - 3. border
Imagine an aquarium in the shape of a cube with edge length $a = 1 \mathrm{m}$, which is separated into two parts by a vertical partition perpendicular to the sides of the aquarium. Let us assume that the partition can move in the direction perpendicular to its plane, but it is fixed in the other directions and cannot rotate. We pour $V_1 = 200 \mathrm{\ell}$ of water (density $\rho_v = 1000 \mathrm{\,kg\cdot m^{-3}}$) into the first part and $V_2 = 230 \mathrm{\ell}$ of oil (density $\rho_o = 900 \mathrm{\,kg\cdot m^{-3}}$) into the second part. In which position is the partition in mechanical equilibrium? At what height will the surfaces of the liquids be?
Bonus: Find the frequency of small oscillations of the partition. Assume that the mass of the partition is $m = 10 \mathrm{oz}$ and that the liquids move without friction or viscosity.
Michal cleaned an aquarium.
(3 points) 3. Series 32. Year - 1. discounted bananas
Mikulas put bananas into a carry bag in a grocery store and before he had weighed them, he got an idea.
If he fills the bag with helium instead of air, the bananas will weigh less. Mikulas bought the helium in a sale - one CZK per litre at standard pressure. Calculate the price of the bananas at which this „bluff“ pays off. Bonus: Find a gas for which it would pay off when the price of bananas is 30 CZK per kilogram. Do not forget to cite references.
What do you think about while weighing bananas?
(12 points) 2. Series 32. Year - E.
Measure the average vertical velocity of falling leaves. Use leaves from several different trees and discuss what impact the shape of a leaf has on the velocity. What should an ideal leaf look like if we want it to fall as slowly as possible?
Jachym got this idea when he asked his friend whether he knew any interesting experiment.
(10 points) 2. Series 32. Year - P.
Create an accurate weather forecast for the address V Holešovičkách 2, Prague 8, for Wednesday 14th of November from 12:00 to 15:00. How will the weather change throughout the whole day? You are allowed to use previous data about the weather in this area (remember you are only permitted to use data up to 10th of November). It is necessary to justify your weather prediction, write down references and ideally use as many data sources as possible.
Karl listened to the radio on a motorway.
(3 points) 5. Series 30. Year - 2. spheres in viscous fluids
When solving problems involving drag in air or, in general, a fluid, we use Newton's resistance equation $$F=\frac{1}{2}C\rho Sv^2\,,$$ where $C$ is the drag coefficient of the object in the direction of motion, $\rho$ is the density of the fluid, $S$ is the cross-section area and $v$ is the velocity of the object. This is usually quite accurate for turbulent flow. We are interested in a sphere, for which $C=0.50$. In the case of laminar flow, we usually use Stokes' law $$F = 6 \pi \eta r v\,,$$ where $\eta$ is the dynamic viscosity of the fluid and $r$ is the radius of the sphere.
Is there a velocity at which the two resistance forces are equal for the same sphere? How will this velocity depend on the radius of the sphere?
Karel heard at a conference that people struggle with equations.
(8 points) 1. Series 30. Year - P. The sky is falling
Have you ever wondered why clouds simply don't fall down, when they consist of water, which is much denser than air? Raindrops fall to the ground in minutes, so why not clouds? Try to explain this physically. Support all of your claims with calculations.
Mirek looked at the sky and got scared.
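The crossover in the spheres-in-viscous-fluids problem above can be checked numerically. A quick sketch (my addition, not part of the problem set; the fluid values are assumed, roughly a 1 mm sphere in water):

```python
# Crossover velocity where Newton drag equals Stokes drag for a sphere.
# Setting (1/2)*C*rho*S*v^2 = 6*pi*eta*r*v with S = pi*r^2 and C = 0.5
# gives v = 24*eta/(rho*r).  The eta, rho, r values below are assumptions.
import math

def crossover_velocity(eta, rho, r, C=0.5):
    """Velocity at which the turbulent (Newton) and laminar (Stokes) drag forces match."""
    S = math.pi * r**2
    return 12 * math.pi * eta * r / (C * rho * S)  # simplifies to 24*eta/(rho*r)

eta = 1.0e-3   # Pa*s, water (assumed)
rho = 1.0e3    # kg/m^3, water (assumed)
r = 1.0e-3     # m, sphere radius (assumed)
v = crossover_velocity(eta, rho, r)

# Check that both drag forces agree at v:
F_newton = 0.5 * 0.5 * rho * math.pi * r**2 * v**2
F_stokes = 6 * math.pi * eta * r * v
```

Since the equality velocity is $v = 24\eta/(\rho r)$, it is inversely proportional to the sphere's radius.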
I'm going to start my answer by going back to what I was taught at Uni - basically how each of the parameters of the transistor scales - an approach called "Constant Electric Field Scaling". Let's say we have a transistor, and we want to scale its length \$L\$ and width \$W\$ by \$\alpha\$ (both are scaled to keep the aspect ratio the same). \$\alpha\$ could be \$2\$, \$4\$, \$1.23\$, anything really. What happens? The material isn't changing, so to avoid breakdown, we want to keep the electric field across the transistor the same. $$E=\frac{V_{ds}}{L}=\frac{V_{ds}'}{(L/\alpha)} \Rightarrow V_{ds}'=\frac{V_{ds}}{\alpha}$$ The drain-source voltage of the transistor must scale - hence voltages go down. You can also say the same for the threshold voltage of the transistor (\$V_t\$) and the gate-source voltage (\$V_{gs}\$). Again, for the electric fields to remain the same strength, specifically that across the gate oxide (a field is a voltage divided by a distance, \$E=V/d\$): $$T_{ox}'=\frac{T_{ox}}{\alpha}$$ So the gate gets thinner! This in turn changes the capacitance of the oxide: $$\begin{align}C_{ox}&=\epsilon \frac{WL}{T_{ox}} \\C_{ox}'&=C_{ox}\times\frac{\alpha}{\alpha^2}=\frac{C_{ox}}{\alpha}\\\end{align} $$ Why does the capacitance matter? Well, we can approximate the saturation current \$I_{d(sat)}\$. We can say: $$I_{d(sat)}\approx(\frac{V_{sat}C_{ox}}{L})(V_{gs}-V_t)$$ This means we can reasonably assume that: $$I_{d(sat)}'=I_{d(sat)}\frac{\alpha}{\alpha}\frac{1}{\alpha}=\frac{I_{d(sat)}}{\alpha}$$ I'm not going to go into it, but you can also work out that frequency \$f'=\alpha f\$, hence things can speed up as we scale down. Now, the dissipated power of each transistor can be approximated as: $$P=IV=I_d V_{ds}$$ So as the transistor scales: $$P'=I_{d(sat)}'\times V_{ds}'=\frac{I_{d(sat)}}{\alpha}\times \frac{V_{ds}}{\alpha} = \frac{P}{\alpha^2}$$ Notice how the power dissipation has gone down by the square of \$\alpha\$!
So the power density \$U=P/A\$, will remain constant: $$U'=U\frac{\alpha^2}{\alpha^2}=U$$ This all looks great, it means we can keep scaling, and increase the number of transistors for the same amount of power, whilst getting faster and faster. Or does it? The thing is, there is another important consideration. In order to interact with the outside world, and for noise immunity, we can't keep reducing the voltage of the process - notice how in the above, all of the electric fields are kept the same by scaling the voltages. In practice this isn't done directly - the voltages are being scaled much slower than the size of the transistors. If they weren't, then by now CPUs would probably be running at 0.1V logic instead of 0.65V or so. The slightest amount of noise either on signals or power rails would be catastrophic. In practice, two different scale factors are used, one for size (\$\alpha\$) and one for voltages (\$\kappa\$). The scaling is something like this: $$\begin{array}{c|c}Dimension & Scale Factor \\\hlineL, W, T_{ox} & 1/\alpha\\A & 1/\alpha^2\\V_{ds}, V_{gs} & 1/\kappa\\E_{ds}, E_{ox} & \alpha/\kappa\\C_{ds}, C_{ox} & 1/\alpha\\I_{d(sat)} & 1/\kappa\\P & 1/\kappa^2\\U & \alpha^2/\kappa^2\\f & \alpha\\\end{array}$$ From this we can see that because of the two different scale factors, the power density, \$U\$, will go up if \$\alpha\$ is scaled faster than \$\kappa\$ is, which is what is happening in practice. Furthermore, this is a very simplified overview. It holds quite well if you have very large transistors, but as they get smaller and smaller, it doesn't hold as well as you might hope. Notice how two key factors \$T_{ox}\$ and \$L\$ get smaller? Well basically this means the barrier between the channel and gate is getting smaller and smaller, as is the distance between drain and source. The gate oxide thickness is now getting so thin, you can comfortably measure it in number of atoms thick! 
The distance between drain and source getting smaller also means that the electric field between drain and source when the transistor is off starts to interact with the barrier created by the electric field of the gate. Both of these factors mean that the amount of leakage in the transistor - unwanted currents flowing from drain to source, or from drain to gate - increases. If leakage goes up, power dissipation goes up (and at some point the transistors stop working properly). This leakage is not factored in to the above derivations.
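The generalized two-factor scaling table above can be played with numerically. A minimal sketch (my own illustration, not from the original answer): sizes shrink by alpha, voltages by kappa, and every other quantity follows from those two.

```python
# Generalized scaling: apply the per-quantity scale factors from the table.
def scale_process(alpha, kappa):
    """Return the multiplicative scale factor for each quantity."""
    return {
        "L, W, Tox":     1 / alpha,
        "Area":          1 / alpha**2,
        "Vds, Vgs":      1 / kappa,
        "E field":       alpha / kappa,
        "Cox":           1 / alpha,
        "Id(sat)":       1 / kappa,
        "Power":         1 / kappa**2,
        "Power density": alpha**2 / kappa**2,
        "Frequency":     alpha,
    }

# Constant-field scaling: alpha == kappa, so power density stays constant.
cf = scale_process(2.0, 2.0)
# Realistic scaling: sizes shrink faster than voltages, so density rises.
real = scale_process(2.0, 1.3)
```

With `alpha = 2.0` and `kappa = 1.3` (illustrative numbers), the power-density factor is `4/1.69`, i.e. it more than doubles per generation, which is the heat problem described above.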
The i.i.d. assumption about the pairs $(\mathbf{X}_i, y_i)$, $i = 1, \ldots, N$, is often made in statistics and in machine learning. Sometimes for a good reason, sometimes out of convenience and sometimes just because we usually make this assumption. To satisfactorily answer whether the assumption is really necessary, and what the consequences are of not making this assumption, I would easily end up writing a book (if you ever easily end up doing something like that). Here I will try to give a brief overview of what I find to be the most important aspects. A fundamental assumption Let's assume that we want to learn a probability model of $y$ given $\mathbf{X}$, which we call $p(y \mid \mathbf{X})$. We do not make any assumptions about this model a priori, but we will make the minimal assumption that such a model exists such that the conditional distribution of $y_i$ given $X_i$ is $p(y_i \mid X_i)$. What is worth noting about this assumption is that the conditional distribution of $y_i$ depends on $i$ only through $X_i$. This is what makes the model useful, e.g. for prediction. The assumption holds as a consequence of the identically distributed part under the i.i.d. assumption, but it is weaker because we don't make any assumptions about the $\mathbf{X}_i$'s. In the following the focus will mostly be on the role of independence. Modelling There are two major approaches to learning a model of $y$ given $\mathbf{X}$. One approach is known as discriminative modelling and the other as generative modelling. Discriminative modelling: We model $p(y \mid \mathbf{X})$ directly, e.g. a logistic regression model, a neural network, a tree or a random forest. The working modelling assumption will typically be that the $y_i$'s are conditionally independent given the $\mathbf{X}_i$'s, though estimation techniques relying on subsampling or bootstrapping make most sense under the i.i.d. or the weaker exchangeability assumption (see below).
But generally, for discriminative modelling we don't need to make distributional assumptions about the $\mathbf{X}_i$'s. Generative modelling: We model the joint distribution, $p(\mathbf{X}, y)$, of $(\mathbf{X}, y)$ typically by modelling the conditional distribution $p(\mathbf{X} \mid y)$ and the marginal distribution $p(y)$. Then we use Bayes's formula for computing $p(y \mid \mathbf{X})$. Linear discriminant analysis and naive Bayes methods are examples. The working modelling assumption will typically be the i.i.d. assumption. For both modelling approaches the working modelling assumption is used to derive or propose learning methods (or estimators). That could be by maximising the (penalised) log-likelihood, minimising the empirical risk or by using Bayesian methods. Even if the working modelling assumption is wrong, the resulting method can still provide a sensible fit of $p(y \mid \mathbf{X})$. Some techniques used together with discriminative modelling, such as bagging (bootstrap aggregation), work by fitting many models to data sampled randomly from the dataset. Without the i.i.d. assumption (or exchangeability) the resampled datasets will not have a joint distribution similar to that of the original dataset. Any dependence structure has become "messed up" by the resampling. I have not thought deeply about this, but I don't see why that should necessarily break the method as a method for learning $p(y \mid \mathbf{X})$. At least not for methods based on the working independence assumptions. I am happy to be proved wrong here. Consistency and error bounds A central question for all learning methods is whether they result in models close to $p(y \mid \mathbf{X})$. There is a vast theoretical literature in statistics and machine learning dealing with consistency and error bounds. A main goal of this literature is to prove that the learned model is close to $p(y \mid \mathbf{X})$ when $N$ is large. 
Consistency is a qualitative assurance, while error bounds provide (semi-) explicit quantitative control of the closeness and give rates of convergence. The theoretical results all rely on assumptions about the joint distribution of the observations in the dataset. Often the working modelling assumptions mentioned above are made (that is, conditional independence for discriminative modelling and i.i.d. for generative modelling). For discriminative modelling, consistency and error bounds will require that the $\mathbf{X}_i$'s fulfil certain conditions. In classical regression one such condition is that $\frac{1}{N} \mathbb{X}^T \mathbb{X} \to \Sigma$ for $N \to \infty$, where $\mathbb{X}$ denotes the design matrix with rows $\mathbf{X}_i^T$. Weaker conditions may be enough for consistency. In sparse learning another such condition is the restricted eigenvalue condition, see e.g. On the conditions used to prove oracle results for the Lasso. The i.i.d. assumption together with some technical distributional assumptions imply that some such sufficient conditions are fulfilled with large probability, and thus the i.i.d. assumption may prove to be a sufficient but not a necessary assumption to get consistency and error bounds for discriminative modelling. The working modelling assumption of independence may be wrong for either of the modelling approaches. As a rough rule-of-thumb one can still expect consistency if the data comes from an ergodic process, and one can still expect some error bounds if the process is sufficiently fast mixing. A precise mathematical definition of these concepts would take us too far away from the main question. It is enough to note that there exist dependence structures besides the i.i.d. assumption for which the learning methods can be proved to work as $N$ tends to infinity. 
If we have more detailed knowledge about the dependence structure, we may choose to replace the working independence assumption used for modelling with a model that captures the dependence structure as well. This is often done for time series. A better working model may result in a more efficient method. Model assessment Rather than proving that the learning method gives a model close to $p(y \mid \mathbf{X})$ it is of great practical value to obtain a (relative) assessment of "how good a learned model is". Such assessment scores are comparable for two or more learned models, but they will not provide an absolute assessment of how close a learned model is to $p(y \mid \mathbf{X})$. Estimates of assessment scores are typically computed empirically based on splitting the dataset into a training and a test dataset or by using cross-validation. As with bagging, a random splitting of the dataset will "mess up" any dependence structure. However, for methods based on the working independence assumptions, ergodicity assumptions weaker than i.i.d. should be sufficient for the assessment estimates to be reasonable, though standard errors on these estimates will be very difficult to come up with. [ Edit: Dependence among the variables will result in a distribution of the learned model that differs from the distribution under the i.i.d. assumption. The estimate produced by cross-validation is not obviously related to the generalization error. If the dependence is strong, it will most likely be a poor estimate.] Summary (tl;dr) All the above is under the assumption that there is a fixed conditional probability model, $p(y \mid \mathbf{X})$. Thus there cannot be trends or sudden changes in the conditional distribution not captured by $\mathbf{X}$. 
When learning a model of $y$ given $\mathbf{X}$, independence plays a role as
- a useful working modelling assumption that allows us to derive learning methods
- a sufficient but not necessary assumption for proving consistency and providing error bounds
- a sufficient but not necessary assumption for using random data splitting techniques such as bagging for learning and cross-validation for assessment.
To understand precisely what alternatives to i.i.d. are also sufficient is non-trivial and to some extent a research subject.
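As a small numerical illustration of the point that random splitting "messes up" dependence (my addition, assuming numpy is available): an AR(1) series has strong lag-1 autocorrelation, but a randomly permuted copy of the same values has essentially none.

```python
# Random permutation destroys serial dependence while keeping the marginals.
import numpy as np

def lag1_autocorr(x):
    """Crude lag-1 autocorrelation estimate."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(0)
n, phi = 5000, 0.9
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):            # AR(1): x_t = phi * x_{t-1} + eps_t
    x[t] = phi * x[t-1] + eps[t]

rho_orig = lag1_autocorr(x)                    # close to phi = 0.9
rho_shuf = lag1_autocorr(rng.permutation(x))   # close to 0
```

With strong dependence (here $\phi = 0.9$), the shuffled copy retains the marginal distribution but loses the serial correlation, which is one way to see why naive random splits can give misleading assessment estimates for time series.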
I have been having trouble with how to go forward with a proof for about three days now. I know the basic structure of the proof, but can't seem to construct it. Basically, I am trying to do a proof by contradiction for the following: Say $u: X \rightarrow \mathbb{R}$ has no local maxima. Let $p \in \mathbb{R}^l_{++}$ and $w>0$. Show that if $x^*$ is a solution to the maximization problem: $$\max_x \ u(x) \ \text{s.t.} \ x \in B(p) = \{x \in \mathbb{R}^l_{++} : p \cdot x \leq w\}$$ then for all $y \in \mathbb{R}^l_{++}$ such that $u(y) \geq u(x^*)$, it must be that $p \cdot y \geq w$. So I'm supposed to do this proof by contradiction (suppose we have $y \in \mathbb{R}^l_{++}$ such that $u(y) \geq u(x^*)$ and $p \cdot y < w$) and use the fact that $u$ having no local max implies that the function $u$ is locally non-satiated: $$\forall y \in \mathbb{R}^l_{+}, \ \forall \epsilon > 0, \ \exists y' \in \mathbb{R}^l_{+} \ \text{s.t.} \ \|y - y'\| < \epsilon \ \text{and} \ y' \succ y$$ But I've been stuck for a while now. Any help would be appreciated.
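For reference, one way the intended contradiction is usually completed (a sketch under the stated assumptions, using only local non-satiation and continuity of the map $y' \mapsto p \cdot y'$):

```latex
% Sketch of the contradiction step (my sketch, not from the original post).
Suppose, for contradiction, that $p \cdot y < w$ for some $y$ with
$u(y) \geq u(x^*)$. Since $y' \mapsto p \cdot y'$ is continuous, there is
an $\epsilon > 0$ such that $p \cdot y' < w$ for every $y'$ with
$\|y - y'\| < \epsilon$. By local non-satiation there exists such a $y'$
with $u(y') > u(y) \geq u(x^*)$. But then $y'$ is affordable, $y' \in B(p)$,
and $u(y') > u(x^*)$, contradicting that $x^*$ solves the maximization
problem. Hence $p \cdot y \geq w$.
```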
Just have been trying to approach this problem from Resnick's book on probability but have got no clue so far. The problem is like this: We are given two random variables X, Y on the same space $(\Omega, \mathcal{B})$, and we are asked to show: $\sup_{A \in \mathcal{B}} | P[X\in A] - P[Y\in A] | \leq P[X \neq Y]$. What I have thought: My intuition is that maybe X and Y have the same distribution, although I don't see how the distribution of the two RVs plays a role here. For the RHS I can say that if we set $P[X \neq Y] = \epsilon$ and can check that the LHS is $\leq \epsilon$, we may be able to get this done. I realized that, if X, Y are random variables, then both satisfy the mapping: $X:(\Omega, \mathcal{B}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$. Then $A \in \mathcal{B}(\mathbb{R})$, so I'm confused why the problem states that $A \in \mathcal{B}$. Any solid hint please?
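Not a proof of the inequality above, but a finite sanity check can build intuition (my addition; the joint pmf below is an arbitrary assumption): for discrete X, Y on a three-point support we can enumerate every event A exactly.

```python
# Check sup_A |P[X in A] - P[Y in A]| <= P[X != Y] for one discrete joint pmf.
from itertools import chain, combinations

support = [0, 1, 2]
# An arbitrary joint pmf p[(x, y)] (chosen for illustration), summing to 1.
joint = {(0, 0): .2, (0, 1): .1, (1, 1): .25, (1, 2): .05, (2, 0): .15, (2, 2): .25}

pX = {v: sum(p for (x, y), p in joint.items() if x == v) for v in support}
pY = {v: sum(p for (x, y), p in joint.items() if y == v) for v in support}
p_neq = sum(p for (x, y), p in joint.items() if x != y)

# Enumerate all 2^3 subsets A of the support and take the supremum.
subsets = chain.from_iterable(combinations(support, k) for k in range(4))
sup_diff = max(abs(sum(pX[v] for v in A) - sum(pY[v] for v in A))
               for A in subsets)
# sup_diff <= p_neq, as the inequality asserts
```

For this particular pmf the exact supremum is $0.1$, while $P[X \neq Y] = 0.3$, consistent with the claimed bound.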
This problem is simpler than it might look: although it might get confusing when one tries to apply routine Calculus methods, it is easy when worked from general principles. By definition, the likelihood $\mathcal L$ is the probability of the data. Since the data are (implicitly) assumed independent, this is the product of the individual probability densities, each equal to $(n+1/2)(x_i^2)^n$. Consequently, as shown in the question, $$\mathcal{L}(n) = \prod_{i=1}^m \left((n+1/2)x_i^{2n}\right) = (n+1/2)^m \left(\prod_{i=1}^m x_i^2\right)^n.$$ The only part depending on the data is that product on the right. Since it is almost surely positive for any of the $n$ under consideration, we may write it in terms of its logarithm. A convenient multiple of that is the mean log of the squared data, $$q=\frac{1}{m}\log\prod_{i=1}^m x_i^2.$$ Note that $q$ depends on the data, but not on the unknown parameter $n$: it will be the test statistic on which the Maximum Likelihood estimate is based. Indeed, let's take logs of both sides, obtaining $$\log\mathcal{L}(n) = m\log(n+1/2) + mnq.$$ Observe that since $|x_i| \lt 1$ for all $i$, $q \lt 0$. To maximize $\mathcal{L}$ we merely have to select the largest of the four values with $n=1,2,3,4$: namely, $$\eqalign{\log \mathcal{L}(1) &= m\log(3/2) + mq,\\\log \mathcal{L}(2) &= m\log(5/2) + 2mq,\\\log \mathcal{L}(3) &= m\log(7/2) + 3mq,\\\log \mathcal{L}(4) &= m\log(9/2) + 4mq.}$$ From row to row the $m\log(n+1/2)$ terms increase, but they do so more and more slowly; yet the $nmq$ terms decrease the value of $\log\mathcal L(n)$ at a constant rate of $m|q|$. Thus, the largest value of $(1/m)\log\mathcal L$--which corresponds to the largest value of $\mathcal L$ itself--is attained at the point where, in scanning these four values, $|q|$ first exceeds $$\log((n+1)+1/2) - \log(n+1/2) = \log\left(\frac{2n+3}{2n+1}\right).$$ This leads to an extremely simple procedure that can be specified even before collecting the data. 
Namely:
The decreasing sequence $\log(5/3) \gt \log(7/5) \gt \log(9/7)$ partitions the real numbers into four intervals $$I_1=[\log\frac{5}{3},\infty),\ I_2=[\log\frac{7}{5}, \log\frac{5}{3}],\ I_3=[\log\frac{9}{7}, \log\frac{7}{5}],\ I_4=(-\infty, \log\frac{9}{7}].$$ If $|q|\in I_j$, pick $\hat n=j$ as the maximum likelihood estimator. If $|q|$ lies at the common boundary of two intervals, they give two MLEs for $n$.
For each $n=1,2,3,4$, this procedure was applied to $1000$ independent samples of size $10$, then $1000$ more independent samples of size $500$. The tabulation of the estimates shows decent correspondence between $n$ and $\hat n$ for the small samples--which is as much as one might hope--and near perfect correspondence for the large samples--which is where the Maximum Likelihood method ought to perform well. Here are the results (which can be reproduced with the following R code):

$`m = 10`
    MLE=1 MLE=2 MLE=3 MLE=4
n=1   753   169    15     1
n=2   213   488   241    77
n=3    27   222   340   283
n=4     7   121   404   639

$`m = 500`
    MLE=1 MLE=2 MLE=3 MLE=4
n=1  1000     0     0     0
n=2     0   999     0     0
n=3     0     1   998     2
n=4     0     0     2   998

For instance, when the true parameter was $n=2$ and the sample size was only $m=10$, the top table shows the MLE was correct $488$ out of $1000$ times and was off by one (that is, estimating $n$ as either $1$ or $3$) $213+241$ other times. On the whole, the MLE appears a little biased towards the middle values for small $m$ and extremely accurate for large $m$.

#
# Generate random values.
#
rpow <- function(n, p) {
  q <- runif(n)
  i <- sample.int(2, n, replace=TRUE)
  ifelse(i==1, -1, +1) * q^(1/(2*p+1))
}
#
# Confirm `rpow` works as intended by matching histograms to the PDFs.
#
par(mfrow=c(1,4))
for (n in 1:4) {
  hist(rpow(1e4, n), main=paste("n =", n), freq=FALSE)
  curve(x^(2*n)*(n+1/2), add=TRUE, col="Red", lwd=2)
}
#
# Test the MLE.
#
MLE.pow <- function(x, breaks=log(c(5/3, 7/5, 9/7))) {
  # Given data `x`, return the MLE in {1,2,3,4}.
  m <- length(x)
  q <- sum(log(x^2)) / m
  sum(abs(q) <= breaks) + 1
}
m <- c(10,500)  # Specify sample sizes to test
set.seed(17)    # Create a reproducible starting point
sim <- lapply(m, function(m) {
  x <- replicate(1e3, sapply(1:4, function(n) MLE.pow(rpow(m, n))))
  results <- apply(x, 1, tabulate, nbins=4)  # Tabulate the MLEs
  rownames(results) <- paste0("n=", 1:4)
  colnames(results) <- paste0("MLE=", 1:4)
  results
})
names(sim) <- paste("m =", m)
print(sim)
An example of methylation analysis with simulated datasets
Part 2: Potential DMPs from the methylation signal
Methylation analysis with Methyl-IT is illustrated on simulated datasets of methylated and unmethylated read counts with relatively high averages of methylation levels: 0.15 and 0.286 for the control and treatment groups, respectively. In this part, potential differentially methylated positions are estimated following different approaches.
1. Background
Only a signal detection approach can detect real DMPs with high probability. Any statistical test (e.g. Fisher's exact test) not based on signal detection requires further analysis to distinguish DMPs that can occur naturally in the control group from those induced by a treatment. The analysis here is a continuation of Part 1.
2. Potential DMPs from the methylation signal using the empirical distribution
As suggested by the empirical density graphics (above), the critical values $H_{\alpha=0.05}$ and $TV_{d_{\alpha=0.05}}$ can be used as cutpoints to select potential DMPs. After setting $dist.name = “ECDF”$ and $tv.cut = 0.926$ in the Methyl-IT function getPotentialDIMP, potential DMPs are estimated using the empirical cumulative distribution function (ECDF) and the critical value $TV_{d_{\alpha=0.05}}=0.926$.
DMP.ecdf <- getPotentialDIMP(LR = divs, div.col = 9L, tv.cut = 0.926, tv.col = 7, alpha = 0.05, dist.name = "ECDF")
3. Potential DMPs detected with Fisher’s exact test
In Methyl-IT, Fisher’s exact test (FT) is implemented in the function FisherTest. In the current case, a pairwise group application of FT to each cytosine site is performed. The differences between the group means of read counts of methylated and unmethylated cytosines at each site are used for testing (pooling.stat = "mean"). Notice that only cytosine sites with critical values $TV_d > 0.926$ are tested (tv.cut = 0.926).
ft = FisherTest(LR = divs, tv.cut = 0.926, pAdjustMethod = "BH", pooling.stat = "mean", pvalCutOff = 0.05, num.cores = 4L, verbose = FALSE, saveAll = FALSE)

ft.tv <- getPotentialDIMP(LR = ft, div.col = 9L, dist.name = "None", tv.cut = 0.926, tv.col = 7, alpha = 0.05)

There is not a one-to-one mapping between $TV$ and $HD$. However, at each cytosine site $i$, these information divergences satisfy the inequality: $TV(p^{tt}_i,p^{ct}_i)\leq \sqrt{2}H_d(p^{tt}_i,p^{ct}_i)$ [1], where $H_d(p^{tt}_i,p^{ct}_i) = \sqrt{\frac{H(p^{tt}_i,p^{ct}_i)}{w}}$ is the Hellinger distance and $H(p^{tt}_i,p^{ct}_i)$ is given by Eq. 1 in Part 1. So, potential DMPs detected with FT can be constrained with the critical value $H^{TT}_{\alpha=0.05}=114.5$.

4. Potential DMPs detected with the Weibull 2-parameter model

Potential DMPs can be estimated using the critical values derived from the fitted Weibull 2-parameter models, which are obtained after the non-linear fit of the theoretical model to the genome-wide $HD$ values for each individual sample using the Methyl-IT function nonlinearFitDist [2]. As before, only cytosine sites with critical values $TV > 0.926$ are considered DMPs. Notice that it is always possible to use other values of $HD$ and $TV$ as critical values, but whatever the choice, it will affect the final accuracy of the classification of DMPs into two groups, DMPs from control and DMPs from treatment (see below). So, it is important to make a good choice of the critical values.

nlms.wb <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L)
# Potential DMPs from 'Weibull2P' model
DMPs.wb <- getPotentialDIMP(LR = divs, nlms = nlms.wb, div.col = 9L, tv.cut = 0.926, tv.col = 7, alpha = 0.05, dist.name = "Weibull2P")
nlms.wb$T1

##         Estimate   Std. Error  t value Pr(>|t|))      Adj.R.Square
## shape  0.5413711 0.0003964435 1365.570         0 0.991666592250838
## scale 19.4097502 0.0155797315 1245.833         0
##                     rho       R.Cross.val              DEV
## shape 0.991666258901194 0.996595712743823 34.7217494754823
## scale
##                     AIC               BIC     COV.shape     COV.scale
## shape -221720.747067975 -221694.287733122  1.571674e-07 -1.165129e-06
## scale                                     -1.165129e-06  2.427280e-04
##       COV.mu     n
## shape     NA 50000
## scale     NA 50000

5. Potential DMPs detected with the Gamma 2-parameter model

As in the case of the Weibull 2-parameter model, potential DMPs can be estimated using the critical values derived from the fitted Gamma 2-parameter models, and only cytosine sites with critical values $TV_d > 0.926$ are considered DMPs.

nlms.g2p <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L, dist.name = "Gamma2P")
# Potential DMPs from 'Gamma2P' model
DMPs.g2p <- getPotentialDIMP(LR = divs, nlms = nlms.g2p, div.col = 9L, tv.cut = 0.926, tv.col = 7, alpha = 0.05, dist.name = "Gamma2P")
nlms.g2p$T1

##         Estimate   Std. Error  t value Pr(>|t|))      Adj.R.Square
## shape  0.3866249 0.0001480347 2611.717         0 0.999998194156282
## scale 76.1580083 0.0642929555 1184.547         0
##                     rho       R.Cross.val                 DEV
## shape 0.999998194084045 0.998331895911125 0.00752417919133131
## scale
##                     AIC               BIC     COV.alpha     COV.scale
## shape  -265404.29138371 -265369.012270572  2.191429e-08 -8.581717e-06
## scale                                     -8.581717e-06  4.133584e-03
##       COV.mu    df
## shape     NA 49998
## scale     NA 49998

Summary table:

data.frame(ft = unlist(lapply(ft, length)), ft.hd = unlist(lapply(ft.tv, length)), ecdf = unlist(lapply(DMP.ecdf, length)), Weibull = unlist(lapply(DMPs.wb, length)), Gamma = unlist(lapply(DMPs.g2p, length)))

##      ft ft.hd ecdf Weibull Gamma
## C1 1253   773   63     756   935
## C2 1221   776   62     755   925
## C3 1280   786   64     768   947
## T1 2504  1554  126     924  1346
## T2 2464  1532  124     942  1379
## T3 2408  1477  121     979  1354

6.
Density graphic with a new critical value

The graphics for the empirical (in black) and Gamma (in blue) density distributions of the Hellinger divergence of methylation levels for sample T1 are shown below. The 2-parameter gamma model is built using the parameters estimated in the non-linear fit of the $H$ values from sample T1. The critical value estimated from the 2-parameter gamma distribution, $H^{\Gamma}_{\alpha=0.05}=124$, is more 'conservative' than the critical value based on the empirical distribution, $H^{Emp}_{\alpha=0.05}=114.5$. That is, according to the empirical distribution, for a methylation change to be considered a signal its $H$ value must satisfy $H\geq114.5$, while according to the 2-parameter gamma model any cytosine carrying a signal must satisfy $H\geq124$.

suppressMessages(library(ggplot2))

# Some information for the graphic
dt <- data[data$sample == "T1", ]
coef <- nlms.g2p$T1$Estimate  # Coefficients from the non-linear fit
dgamma2p <- function(x) dgamma(x, shape = coef[1], scale = coef[2])
qgamma2p <- function(x) qgamma(x, shape = coef[1], scale = coef[2])

# 95% quantiles
q95 <- qgamma2p(0.95)                   # Gamma model based quantile
emp.q95 = quantile(divs$T1$hdiv, 0.95)  # Empirical quantile

# Density plot with ggplot
ggplot(dt, aes(x = HD)) +
  geom_density(alpha = 0.05, bw = 0.2, position = "identity", na.rm = TRUE, size = 0.4) +
  xlim(c(0, 150)) +
  stat_function(fun = dgamma2p, colour = "blue") +
  xlab(expression(bolditalic("Hellinger divergence (HD)"))) +
  ylab(expression(bolditalic("Density"))) +
  ggtitle("Empirical and Gamma density distributions of Hellinger divergence (T1)") +
  geom_vline(xintercept = emp.q95, color = "black", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = emp.q95 - 20, y = 0.16, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Emp==114.5)',
           family = "serif", color = "black", parse = TRUE) +
  geom_vline(xintercept = q95, color = "blue", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = q95 + 9, y = 0.14, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Gamma==124)',
           family = "serif", color = "blue", parse = TRUE) +
  theme(axis.text.x = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(1, 0, 1, 0, unit = "pt")),
        axis.text.y = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(0, 0.1, 0, 0, unit = "mm")),
        axis.title.x = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        axis.title.y = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        legend.title = element_blank(),
        legend.margin = margin(c(0.3, 0.3, 0.3, 0.3), unit = 'mm'),
        legend.box.spacing = unit(0.5, "lines"),
        legend.text = element_text(face = "bold", size = 12, family = "serif"))

References

Steerneman, Ton, K. Behnen, G. Neuhaus, Julius R. Blum, Pramod K. Pathak, Wassily Hoeffding, J. Wolfowitz, et al. 1983. "On the total variation and Hellinger distance between signed measures; an application to product measures." Proceedings of the American Mathematical Society 88 (4): 684–84. doi:10.1090/S0002-9939-1983-0702299-0.

Sanchez, Robersy, and Sally A. Mackenzie. 2016. "Information Thermodynamics of Cytosine DNA Methylation." Edited by Barbara Bardoni. PLOS ONE 11 (3): e0150427. doi:10.1371/journal.pone.0150427.
Preamble: If one considers an ideal gas of non-interacting charged particles of charge $q$ in a uniform magnetic field $\mathbf{B} = \mathbf{\nabla} \wedge \mathbf{A}$, then the classical partition function in the canonical ensemble reads (in SI units): $Q(\beta,V,N,\mathbf{B}) = \frac{1}{N!}q(\beta,V,\mathbf{B})^N$ where $q(\beta,V,\mathbf{B}) = \int \frac{d\mathbf{p} d \mathbf{r}}{h^3}\:e^{-\frac{\beta}{2m}(\mathbf{p}-q\mathbf{A}(\mathbf{r}))^2}$ If we integrate first with respect to momenta over all possible values from $-\infty$ to $+\infty$ for each component, a simple change of variable leads to $q(\beta,V,\mathbf{B})=\frac{V}{\Lambda^3}$, which is the ideal gas result and where $\Lambda$ is the thermal de Broglie wavelength. If one then wants to get the magnetization per particle $\mathbf{\mu}$ induced by the field $\mathbf{B}$, it is simply: $\mathbf{\mu} = -\frac{\partial \langle \epsilon \rangle}{\partial \mathbf{B}} = \frac{\partial }{\partial \mathbf{B}}\left( \frac{\partial \ln(q(\beta,V,\mathbf{B}))}{\partial \beta} \right) = \frac{\partial }{\partial \beta}\left( \frac{\partial \ln(q(\beta,V,\mathbf{B}))}{\partial \mathbf{B}} \right) = \mathbf{0}$ This is one way to state the Bohr–van Leeuwen theorem. Now, I physically understand this result as coming from some symmetry associated with the momenta (it is as likely to go to the right as it is to go to the left) and the fact that the boundaries of the integral over the momenta are infinite. If the problem is treated quantum mechanically, the eigenstates of one charged particle are discretized Landau levels with a typical spacing between two neighbouring levels of $\hbar \omega_c$, where $\omega_c = qB/m$ is the cyclotron frequency, and one finds that the sum over these states depends on the magnetic field $\mathbf{B}$.
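For completeness, the "simple change of variable" can be spelled out: shifting $\mathbf{p}' = \mathbf{p} - q\mathbf{A}(\mathbf{r})$ at fixed $\mathbf{r}$ leaves a field-independent Gaussian integral,

```latex
q(\beta,V,\mathbf{B})
  = \int \frac{d\mathbf{r}}{h^3} \int d\mathbf{p}'\;
      e^{-\frac{\beta}{2m}\mathbf{p}'^2}
  = \frac{V}{h^3}\left(\frac{2\pi m}{\beta}\right)^{3/2}
  = \frac{V}{\Lambda^3},
\qquad
\Lambda = \frac{h}{\sqrt{2\pi m\, k_B T}},
```

so every trace of $\mathbf{B}$ drops out of the classical partition function.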
Question(s): I am lost in my interpretation of the quantum-to-classical limit for this system... So far I thought that the quantum → classical limit for the statistical properties of an individual particle was related to the way of counting the number of states for this particle, i.e. whether we consider the set of states as a continuum or as a discrete set. This analogy seems to work in this case as well, since the classical limit arises if $k_B T \gg \hbar \omega_c$. However, two major points differ from what I am used to: The quantum treatment of this system yields a nonzero magnetic moment (although it vanishes at infinite temperature) in the limit where $k_B T \gg \hbar \omega_c$, while the classical treatment gives strictly zero. I do not understand how the left-right symmetry argument used in the classical partition function disappears in the quantum treatment to yield a partition function that depends on $\mathbf{B}$. Is there any classical way to assess that quantum corrections will be of order $\mathcal{O}(\Lambda/R_c)$, where $R_c \sim \sqrt{m k_B T}/(qB)$ is the typical radius of the helical paths taken by a charged particle? Sorry if my questions seem confused; I will try to improve them if they are not clear enough. EDIT: I realize that one of my points is not very clear and shall explain it with the example of a true harmonic oscillator. If I consider classical statistical mechanics, I know that $\langle \frac{1}{2}m\omega^2 x^2 \rangle = \frac{1}{2}k_B T$. This tells me that the typical uncertainty on the position of my particle is $\sigma_x = \sqrt{k_B T/(m\omega^2)}$. Incidentally, this length is also the typical confinement length scale owing to the harmonic potential.
One way to semi-classically probe the validity of the classical limit is to imagine the particle as a non-dispersive wave packet of width $\Lambda = h/\sqrt{2\pi m k_B T}$ and to realize that interferences (ultimately leading to quantization) are not important if $\Lambda \ll \sigma_x$. This is very appealing because one can then probe the validity of a classical approximation using a $\sigma_x$ that comes from a classical treatment. My biggest problem with a charged particle in a magnetic field is that the Bohr–van Leeuwen theorem apparently prevents this typical length scale (which I know for sure is $R_c$) from being found with a classical statistical treatment.
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these: Our first approach will only tackle question 1. Given \(y\), we will only ask: is it possible to get \(x\)? This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\). So, for now our resources will form a "preorder", as defined in Lecture 3.

Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying:

reflexivity: \(x \le x\) for all \(x \in X\).

transitivity: \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\).

All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\), then you can get \(x\) from \(z\). What's new is that we can also combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it. It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\).

Definition. A monoid is a set \(X\) equipped with:

a binary operation \(\otimes : X \times X \to X\),

an element \(I \in X\),

such that these laws hold:

the associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\)

the left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\).

You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine monoids and preorders:

Definition.
A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, an operation \(\otimes : X \times X \to X\) and element \(I \in X\) making it into a monoid, and obeying: $$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg and a slice of bread into a fried egg and a piece of toast! You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders: The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder. Same for the set \(\mathbb{Q}\) of rational numbers. Same for the set \(\mathbb{Z}\) of integers. Same for the set \(\mathbb{N}\) of natural numbers. Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it. But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way? Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder? Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder? Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder? Puzzle 63. Find more examples of monoidal preorders. Puzzle 64. 
Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders? Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning $$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$ for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets? Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets?
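For Puzzles 60 and 61, a brute-force sanity check is easy to write (a sketch only — it tests the compatibility law on finitely many cases, so it can refute a claim but never prove one):

```python
from itertools import product

def monotone(elements, op, leq):
    """Check the monoidal-preorder law on a finite sample:
    x <= x' and y <= y' must imply op(x, y) <= op(x', y')."""
    return all(
        leq(op(x, y), op(x2, y2))
        for x, x2, y, y2 in product(elements, repeat=4)
        if leq(x, x2) and leq(y, y2)
    )

mul = lambda a, b: a * b
leq = lambda a, b: a <= b

print(monotone(range(0, 8), mul, leq))   # naturals with multiplication: no violation found
print(monotone(range(-3, 4), mul, leq))  # negatives break it: -1 <= 0 and -1 <= 0, but (-1)*(-1) > 0*0
```

The second check pinpoints why \(\mathbb{R}\) with multiplication fails: monotonicity of \(\otimes\) breaks as soon as negative numbers are allowed.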
At small values close to $x=1$, you can use the Taylor expansion for $\ln x$: $$ \ln x = (x-1) - \frac{1}{2}(x-1)^2 + \dots$$ Is there any valid expansion or approximation for large values (or at infinity)?

Almost as Semiclassical answered, write $x=A \times 10^{n-1}$, where $n$ is the number of digits before the decimal point, that is to say $1\leq A < 10$, and use the very fast converging expansion $$\log \left(\frac{1+y}{1-y}\right)=2\sum_{k=0}^{\infty}\frac{y^{2 k+1}}{2 k+1}$$ with $y=\frac{A-1}{A+1}$. Let us take an example: $x=123456789$; then $A=1.23456789$ and $n=9$; so $y \approx 0.104972$. Now, let us look at the value of $$S_p=2\sum_{k=0}^{p}\frac{y^{2 k+1}}{2 k+1}$$ For the first values of $p$, the sums are successively $0.2099447424$, $0.2107158833$, $0.2107209817$, $0.2107210219$, $0.2107210222$, which is the solution to ten exact decimal places. So, in the end, $$\log^{(p)}(x)=(n-1)\log(10)+ S_p,$$ which leads to the successive values $18.63062549$, $18.63139663$, $18.63140173$, $18.63140177$, $18.63140177$, for an exact value equal to $\approx 18.63140177$.

For any positive $x$, $\ln \left( x \right) \approx a{x^{\frac{1}{a}}} - a$, where $a$ is any large constant. The larger $a$, the better the approximation.

Hint: $~\ln x=-\ln\dfrac1x~$. Can you take it from here?

A crude approach is to assume scientific notation: if $x=A \times 10^n$ where $1\leq A < 10$, then $\ln x = \ln A + n\ln 10\approx 2.3n$. To be a little more precise, we know $\ln x\in [n \ln 10,(n+1)\ln 10)$.

There is no polynomial or rational approximation to $\ln(x)$ that is accurate for all large $x$. This follows from the fact that $\ln(x) =o(x^{\epsilon}) $ for every $\epsilon > 0$.
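The scaling-plus-series scheme with $y=(A-1)/(A+1)$ described above is easy to code; here is a minimal Python sketch (the function name `ln_series` is mine, and the digit count is taken from the integer part, so it assumes $x \geq 1$):

```python
import math

def ln_series(x, terms=10):
    """ln(x) for x >= 1 via x = A * 10^(n-1), with n the number of digits
    before the decimal point (so 1 <= A < 10), and the fast series
    log((1+y)/(1-y)) = 2 * sum y^(2k+1)/(2k+1) with y = (A-1)/(A+1)."""
    n = len(str(int(x)))          # digits before the decimal point
    A = x / 10 ** (n - 1)         # 1 <= A < 10
    y = (A - 1) / (A + 1)
    s = 2 * sum(y ** (2 * k + 1) / (2 * k + 1) for k in range(terms))
    return (n - 1) * math.log(10) + s

print(ln_series(123456789))       # ~18.63140177, matching the worked example
```

Since $|y| < 9/11$ always, a handful of terms already gives many correct digits, as in the worked example.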
There is a procedure from my complex variables class fifty years ago called "analytic continuation" that allows you to modify a power series to a different radius of convergence. But any series will have a finite radius of convergence, and when you get near that cutoff point, convergence is very slow. That is why most log routines incorporate scaling to reduce the range. However, instead of scaling by the number base, you can take the square root of the argument. Padé approximations are rational functions that frequently beat truncated power series. Although you still have the problem of limited range, you reduce the need for range reduction. For example, ln(1+x) = x(6+x)/(6+4x). As for the previously posted series above with y = (a-1)/(a+1), I have seen a Padé approximation that is truly spectacular. It is in the article on the Shanks transformation, and it is P{ ln (1+y)/(1-y) } = 2y(15 - 4y^2)/(15 - 9y^2), which gives ln(1) = 0, ln(2) ≈ 0.693122, max error ≈ -0.000025. You should probably use a computer algebra system for this work to reduce the frustration and likelihood of error. If you just need to compute logs somewhere, consider using a table of multipliers and their logs. Good multipliers are powers of two or have only two bits, so you can do the multiply quickly in hardware by shift and add. Their logs will have as many bits as you need. So you do something like the Handbook of Mathematical Functions table 4.3 to bust the argument down to a range where the approximation is good. See Knuth, vol 1 page 22++ for algorithms that compute logs one bit at a time. One of these is due to Euler, and involves a square root for each bit; the other squares x and divides by the base of the log if the square is greater than the base. Since your base is e, that division would demand greater resources than division by 2 or 10.
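That Padé approximant is easy to check numerically (a sketch; only the standard library is used, and the function name is mine):

```python
import math

def pade_ln_ratio(y):
    """The Pade approximant for ln((1+y)/(1-y)) quoted above."""
    return 2 * y * (15 - 4 * y ** 2) / (15 - 9 * y ** 2)

# ln(2) = ln((1+y)/(1-y)) with y = 1/3
approx = pade_ln_ratio(1 / 3)
print(approx)                  # ~0.6931217
print(approx - math.log(2))    # ~-2.5e-5, the quoted max error
```

Five digits of ln(2) from one rational evaluation, with no series summation at all.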
The Taylor series for $\ln(\cdot)$ in the vicinity of the point $a$ is $$\ln(x+a) =\ln(a) + 2 \cdot (y+\frac{y^3}{3}+\frac{y^5}{5}+\frac{y^7}{7} + \ldots + \frac{y^{2n+1}}{2n+1}), \mathrm{where\;} y=\frac{x}{x+2a}$$ So for fast convergence, we need the logarithms of the first 26 prime numbers: $2, 3, 5, 7, \ldots$
For integers $n\geq 1$, let $\operatorname{rad}(n)$ denote the radical of the integer $n$; see the definition of this arithmetical function, for example, in this Wikipedia article. I wondered about the following question when I made a comparison with a well-known identity from the literature.

Question. Does $$\sum_{n=2}^\infty\frac{(-1)^n\zeta(n)}{\operatorname{rad}(n)}\tag{1}$$ converge, where $\zeta(s)$ denotes the Riemann zeta function? Thanks in advance.

Then we need to know relevant facts about the asymptotic behaviour of $$\sum_{2\leq n\leq x}\frac{(-1)^n\zeta(n)}{\operatorname{rad}(n)},\tag{2}$$ as $x\to\infty$. I believe that there are two approaches, using Abel's summation identity or summation by parts. I tried the first one. From Abel's summation formula one knows that $$\sum_{2<n\leq y}\frac{(-1)^n\zeta(n)}{\operatorname{rad}(n)}=\left(\sum_{1\leq n\leq y}\frac{(-1)^n}{\operatorname{rad}(n)}\right)\zeta(y)-\frac{\zeta(2)}{2}-\int_2^y\left(\sum_{1\leq n\leq t}\frac{(-1)^n}{\operatorname{rad}(n)}\right)\zeta'(t)dt.\tag{3}$$ Also we know that $$\lim_{y\to\infty}\zeta(y)=1.\tag{4}$$ To solve the previous question I would like an elementary approach if possible (avoiding advanced machinery). If you follow my way, we need facts about the asymptotic behaviour of $$\sum_{1\leq n\leq y}\frac{(-1)^n}{\operatorname{rad}(n)},$$ but, as I've said, the sharpest possible statement isn't required — just those ideas and calculations needed to decide whether the series in $(1)$ is convergent. Of course you can choose your own strategy.
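Some exploratory numerics may help build intuition (a sketch, not a proof — whether the partial sums settle down is exactly the open point; `rad` by trial division and a crude tail-corrected `zeta` are my own helpers):

```python
from math import prod

def rad(n):
    """Radical of n: the product of the distinct primes dividing n (rad(1) = 1)."""
    primes, d = set(), 2
    while d * d <= n:
        if n % d == 0:
            primes.add(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        primes.add(n)
    return prod(primes) if primes else 1

def zeta(s, K=1000):
    """Crude zeta for real s >= 2: truncated sum plus an integral tail estimate."""
    return sum(k ** -s for k in range(1, K)) + K ** (1 - s) / (s - 1)

# Partial sums of (1), printed at a few checkpoints
partial = 0.0
for n in range(2, 1001):
    partial += (-1) ** n * zeta(n) / rad(n)
    if n in (10, 100, 1000):
        print(n, partial)
```

Since $\zeta(n)\to 1$ very quickly, the tail behaves like $\sum (-1)^n/\operatorname{rad}(n)$, which is why facts about that simpler sum are what the question really needs.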
Please read this introduction first before looking through the solutions. Here’s a quick index to all the problems in this section. Composing two general equiareal transformations, we get a transformation of the form below. $$\begin{pmatrix} a_{12}b_{21} + a_{11}b_{11} & a_{12}b_{22} + a_{11}b_{12} & a_{12}b_{23} + a_{11}b_{13} + a_{13} \\ a_{22}b_{21} + a_{21}b_{11} & a_{22}b_{22} + a_{21}b_{12} & a_{22}b_{23} + a_{21}b_{13} + a_{23} \\ 0 & 0 & a_{33}b_{33} \end{pmatrix}$$ Let’s consider the fraction relevant to equiareal transformations $$\frac{(a_{12}b_{21} + a_{11}b_{11})(a_{22}b_{22} + a_{21}b_{12}) - (a_{12}b_{22} + a_{11}b_{12})(a_{22}b_{21} + a_{21}b_{11})}{a_{33}^2b_{33}^2}$$ Expanding this we get $$\frac{(a_{11}a_{22} - a_{12}a_{21})(b_{11}b_{22} - b_{12}b_{21})}{a^2_{33}b^2_{33}}$$ $$= \frac{(a_{11}a_{22} - a_{12}a_{21})}{a^2_{33}}\cdot\frac{(b_{11}b_{22} - b_{12}b_{21})}{b^2_{33}}$$ As the absolute value of each of these factors is 1, the absolute value of the whole fraction will also be 1. Thus, the equiareal transformations are closed under the operation of composition. The inverse of a general equiareal transformation has the form $$\frac{1}{a_{33}(a_{11}a_{22} - a_{12}a_{21})}\begin{pmatrix} a_{22}a_{33} & -a_{12}a_{33} & a_{12}a_{23} - a_{13}a_{22} \\ -a_{21}a_{33} & a_{11}a_{33} & a_{13}a_{21} - a_{11}a_{23} \\ 0 & 0 & a_{11}a_{22} - a_{12}a_{21} \end{pmatrix}$$ Let’s consider the fraction relevant to equiareal transformations $$\frac{a_{33}^2(a_{22}a_{11} - a_{12}a_{21})}{(a_{22}a_{11} - a_{12}a_{21})^2} = \frac{a_{33}^2}{(a_{22}a_{11} - a_{12}a_{21})}$$ The absolute value of this is clearly 1. Hence the inverse is also an equiareal transformation. Finally, it is obvious, from the rules of matrix multiplication, that the composition of equiareal transformations is associative and that the identity is included among the equiareal transformations. Hence the set of all equiareal transformations is a group under the operation of composition.
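The closure computation above can be spot-checked numerically; a small self-contained sketch (3×3 matrices as nested lists, with random examples generated to satisfy $|a_{11}a_{22}-a_{12}a_{21}| = a_{33}^2$ and a vanishing third row apart from $a_{33}$):

```python
import random

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def is_equiareal(M, tol=1e-9):
    """Third row (0, 0, a33) and |a11*a22 - a12*a21| / a33^2 == 1."""
    det2 = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return M[2][0] == 0 and M[2][1] == 0 and abs(abs(det2) / M[2][2] ** 2 - 1) < tol

def random_equiareal():
    a11 = random.uniform(0.5, 2.0)
    a12, a21, a13, a23 = (random.uniform(-2, 2) for _ in range(4))
    a33 = random.choice([-1.5, 0.7, 2.0])
    sign = random.choice([-1, 1])
    a22 = (sign * a33 ** 2 + a12 * a21) / a11  # forces a11*a22 - a12*a21 = ±a33^2
    return [[a11, a12, a13], [a21, a22, a23], [0, 0, a33]]

random.seed(1)
A, B = random_equiareal(), random_equiareal()
print(is_equiareal(A), is_equiareal(B), is_equiareal(matmul(A, B)))
```

The product test succeeds because the relevant 2×2 determinant and the $(3,3)$ entry are both multiplicative, exactly as the expansion above shows.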
Type I $a_{33} \ne \pm1$ $\begin{pmatrix} a_{33}^2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & a_{33} \end{pmatrix}$ Type II $a_{23} \ne 0$ $\begin{pmatrix} a_{11} & 0 & a_{13} \\ 0 & -a_{11} & a_{23} \\ 0 & 0 & -a_{11} \end{pmatrix}$ Type III $\begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{11} & 0 \\ 0 & 0 & -a_{11} \end{pmatrix}$ Type IV $\begin{pmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{11} & 0 \\ a_{31} & a_{32} & a_{11} \end{pmatrix}$ Type V $a_{13} \ne 0, a_{23} \ne 0$ $\begin{pmatrix} a_{11} & 0 & a_{13} \\ 0 & a_{11} & a_{23} \\ 0 & 0 & a_{11} \end{pmatrix}$ Type VI $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ 3. Determine which, if any, of the following mappings can be accomplished by an equiareal transformation, and when it exists, find the equations of the transformation: (a) $(0,0,1) \rightarrow (0,0,1)$, $(1,0,1) \rightarrow (1,0,1)$, $(2,1,1) \rightarrow (-1,1,1)$ (b) $(0,0,1) \rightarrow (0,0,1)$, $(2,0,1) \rightarrow (0,3,1)$, $(2,1,1) \rightarrow (1,0,3)$ (c) $(0,0,1) \rightarrow (0,0,1)$, $(2,0,1) \rightarrow (3,1,1)$, $(1,1,1) \rightarrow (1,2,2)$ (a) The two triangles share the same base and their heights are the same. So an equiareal transformation between the two is possible. Using the invariance of $(0, 0, 1)$, we get $a_{13} = a_{23} = 0$ and $a_{33} = 1$. Further, using the invariance of $(1, 0, 1)$, we get $a_{11} = 1$ and $a_{21} = a_{31} = 0$. Finally, using the mapping $(2, 1, 1) \rightarrow (-1, 1, 1)$ we get $a_{12} = -3$ and $a_{22} = 1$. Hence, the matrix of the transformation is $$\begin{pmatrix} 1 & -3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ (b) The areas of the two triangles are $\frac{1}{2} \times 2 \times 1 = 1$ and $\frac{1}{2} \times 3 \times \frac{1}{3} = \frac{1}{2}$ respectively. Clearly, the areas of the two triangles do not match; hence an equiareal transformation is not possible. (c) The areas of the two triangles are $\frac{1}{2} \times 2 \times 1 = 1$ and $\frac{1}{2} \times 1 \times \frac{5}{2} = \frac{5}{4}$ respectively.
Clearly, the areas of the two triangles do not match; hence an equiareal transformation is not possible. 4. If $A$ and $B$ are distinct finite points, and if $A’$ and $B’$ are distinct finite points, show that there is an equiareal transformation which maps $A$ into $A’$ and $B$ into $B’$. Is this transformation unique? Given two pairs of distinct points, it is possible to choose another pair of points $C$ and $C’$ such that the areas of $\bigtriangleup ABC$ and $\bigtriangleup A’B’C’$ are the same. Hence, it is possible to define an equiareal transformation that achieves the given mapping. As the choice of $C$ and $C’$ is arbitrary, the equiareal mapping is not unique. 5. Show that for a given equiareal transformation there are in general two directions with the property that the lengths of segments in these directions are unaltered by the transformation. By Corollary 1, every equiareal transformation is an affine transformation. Hence the result in Exercise #8, Sec. 5.10 holds for equiareal transformations too.
I am using Analysis on Manifolds by Munkres to study for a course, and the following question comes from an early section on topology. Definition of limit point: Let $A \subset R^n$ and let $x_o \in R^n$. $x_o$ is a limit point of $A$ if, for every $r>0$, $B(x_o,r)$ contains a point of $A \setminus \{x_o\}$. Exercise: Let $A$ be a subset of $X$ where $X$ is a metric space. Show that if $C$ is a closed subset of $X$ and $C$ contains $A$, then $C$ contains the closure of $A$. This is what I have so far: $C$ is closed implies that $C$ contains its limit points. $A \subset C$, thus $C$ contains all points of $A$ (by a theorem from the book). Since $\bar{A} = A \cup \{\text{limit points of $A$}\}$, we only have left to show that the set of limit points of $A$ is in $C$. Let $p$ be a limit point of $A$. Suppose $p \notin C$. Then $p \in X \setminus C$. We know that $X \setminus C$ is open... EDITED (changed proof strategy after doing more research): Let $p$ be a limit point of $A$. If $p$ is not in $C$, then $X \setminus C$ is an open set containing $p$ that does not intersect $C$, and hence does not intersect $A$ (since $A \subset C$); this contradicts the fact that $p$ is a limit point of $A$. Indeed, every neighborhood of a limit point of $A$ must contain a point of $A$ other than the point itself (by the definition of limit point), and since $X \setminus C$ is open, it contains some ball $B(p, r)$ with $r > 0$ (by the definition of open), which would then have to meet $A$.
The following question from Furdui's book (Exercise 1.32, page 6) is an "open problem": Let $f: [0,1] \to \mathbb{R}$ be a continuous (and not a continuously differentiable) function and let $$x_n = f\left(\dfrac{1}{n}\right) + f\left(\dfrac{2}{n}\right) + \dots + f\left(\dfrac{n-1}{n}\right).$$ Calculate $\lim_{n \to \infty} (x_{n+1}-x_n).$ I am very interested in doing research on this problem, but I am not sure whether the problem is still unsolved. [The book was written in 2010(?)] I searched the internet but couldn't find anything about it; in particular, searching for a long formula without a name attached to it is difficult (for me). I am new to research and there are many difficulties for me (e.g., lack of access to an adviser) in finding clear answers to the following questions: 1- Is the mentioned problem still unsolved? 2- How do I find out about all the progress that has been made on a specific problem? (It is much easier to gather most of the related material about, say, Catalan's constant, but I have no idea how to search for a problem that looks like an exercise from a book in the arXiv or in printed journals.)
This doesn't seem to have been presented or published anywhere other than as a somewhat casual/informal document on the arXiv, which probably helps explain why it's been ignored. Well, that and the fact that it's ignorant of all the relevant research and is very, very wrong. Since I'm not really a GR person, I'll pass over the first ("explains dark energy") part, though I suspect it's got serious problems. The second part of the paper attempts to explain "dark matter" effects -- i.e., the rotation curves of galaxies (he doesn't seem aware of the role postulated dark matter plays in explaining galaxy group and cluster dynamics, or the overall energy density of the universe) -- with the Coriolis force. This requires that disk galaxies be oriented so that their angular momentum vectors are parallel to the axis of universal rotation. But for galaxies oriented perpendicular to that direction, the Coriolis effect vanishes. Since real galaxies are randomly oriented, this effect would have been obvious back in the 1970s, when people like Vera Rubin were measuring outer rotation curves of galaxies and finding evidence for dark matter. All disk galaxies show roughly the same dark matter effects, regardless of their orientation. Put another way, in order for this model to explain dark matter, all disk galaxies would have to have the same orientation in 3D space. So all the galaxies we observe would have the same position angle on the sky, with all galaxies near the Galactic plane seen edge-on and all galaxies near the Galactic poles seen face-on. Needless to say, this is not our universe. The real killer is that the whole model requires the universe to be uniformly rotating with an angular speed approximately equal to the Hubble constant $H_{0}$. But a rotating universe would produce distortions in the cosmic background radiation, as pointed out by Stephen Hawking back in 1969. 
This was the first in a series of papers attempting to measure, or put upper limits, on the vorticity of the universe. This is usually parameterized as the ratio of the vorticity $\omega$ to the Hubble constant: $\omega / H_{0}$ (note that Zorba uses "$\omega$" for the angular speed). Since the vorticity of a uniformly rotating system is just twice its angular speed, Zorba's model is $\omega/H_{0} \approx 2$. Hawking was able to derive crude upper limits of $\omega / H_{0} < 10^{-3}$, meaning that Zorba's preferred value would be about a thousand times too large. Barrow et al. (1985), using updated data and a more detailed analysis, found an upper limit of $\omega / H_{0} < 2 \times 10^{-5}$. Studies since then have pushed the upper limit further down. The most recent attempt is probably this paper in Physical Review Letters by Saadeh et al. (2016), which uses both temperature and polarization data from the Planck satellite and finds $\omega / H_{0} < 4.7 \times 10^{-11}$. In other words, if the universe is rotating, it's doing so about a trillion times more slowly than what Zorba's model assumes.
Abbreviation: BilinA

A bilinear algebra is a structure $\mathbf{A}=\langle A,+,-,0,\cdot,s_r\ (r\in F)\rangle$ of type $\langle 2,1,0,2,1_r\ (r\in F)\rangle$ such that

$\langle A,+,-,0,s_r\ (r\in F)\rangle$ is a vector space over a field $F$

$\cdot$ is bilinear: $x(y+z)=xy+xz$, $(x+y)z=xz+yz$, and $s_r(xy)=s_r(x)y=xs_r(y)$

Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.

Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x ... y)=h(x) ... h(y)$

An … is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that … $...$ is …: $axiom$ $...$ is …: $axiom$

Example 1:

Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$

[[Lie algebras]] [[Associative algebras]] [[Vector spaces]] reduced type
Setting: Exactly as the title states: give an example of an $\mathsf{NL}$-complete context-free language. $\newcommand{\angle}[1]{\langle #1 \rangle}$ Current Solution: Recall that in the past we proved that $E_{DFA}$ is regular, so it is also context free. $E_{DFA}$ is in $\mathsf{NL}$ since, given a DFA $\mathcal M$ over $n$ states, we can start at the initial state and nondeterministically traverse through states of $\mathcal M$ for at most $n$ steps, storing only the current state ($\log n$ space). Note that if there is a path to an accept state, then it must be $\le n$ steps long. So if there is no path to an accept state within $n$ steps, then $\mathcal M$ does not accept any string, thus $\mathcal M \in E_{DFA}$. Now we show $\overline{PATH} \le_l E_{DFA}$, and since $PATH$ is $\mathsf{NL}$-complete and $\mathsf{NL} = \mathsf{coNL}$, it follows that $E_{DFA}$ is also $\mathsf{NL}$-complete. Given an instance $\angle{G,s,t}$, we construct a DFA $\mathcal M$ by letting $s \in G$ be $q_{initial}$ and $t \in G$ be $q_{accept}$. We label the edges arbitrarily with symbols from $\{0,1\}$. Clearly this can be done in log-space. So now we show $$\angle{G,s,t} \in \overline{PATH} \Leftrightarrow \mathcal M \in E_{DFA}.$$ Suppose $\angle{G,s,t} \in \overline{PATH}$. Then there is no path from $s$ to $t$, so there is no path from $q_{initial}$ to $q_{accept}$ in $\mathcal M$, so $\mathcal M$ does not accept any string and is in $E_{DFA}$. Conversely, suppose $\angle{G,s,t} \not\in \overline{PATH}$. Then there is a path from $s$ to $t$; taking this path in $\mathcal M$ corresponds to $\mathcal M$ accepting some string, so $\mathcal L(\mathcal M) \ne \emptyset$ and $\mathcal M \not\in E_{DFA}$. Problem: I don't think my algorithm for proving $E_{DFA} \in \mathsf{NL}$ is correct.
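As a sanity check of the reachability argument above, here is a deterministic sketch of the same emptiness test — BFS instead of the log-space nondeterministic walk, so it uses more space but follows the same logic. The dictionary encoding of the DFA is my own assumption, not part of the question:

```python
from collections import deque

def dfa_is_empty(transitions, start, accepting):
    """L(M) is empty iff no accepting state is reachable from the start
    state; BFS visits each state at most once (paths of length <= n)."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state in accepting:
            return False               # some string is accepted
        for symbol in (0, 1):
            nxt = transitions.get((state, symbol))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True                        # L(M) is empty

# m1: accept state 2 unreachable -> empty; m2: accept state 1 reachable
m1 = {(0, 0): 0, (0, 1): 0, (1, 0): 2, (1, 1): 2}
m2 = {(0, 0): 1, (0, 1): 0}
```

The nondeterministic version stores only the current state (logarithmic space); the BFS above trades that space bound for determinism.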
Yes. Denote the entrywise absolute value of a complex matrix $X$ by $|X|$. In general, if $|X|\le Y$ entrywise, then $\rho(X)\le\rho(|X|)\le\rho(Y)$. Your $P$ is an orthogonal projection. Therefore the moduli of its entries are bounded above by $1$. Hence $|A|\le|B||D|$ and $\rho(A)\le\rho(|B||D|)$. So, it suffices to prove that the latter quantity is strictly smaller than $1$. Suppose $(\lambda,v)$ is an eigenpair of $|B||D|$. Clearly, $\|\,|B|\,\|_\infty=\|B\|_\infty=1$. As $D$ is diagonal, $\|\,|D|\,\|_\infty=\|D\|_2<1$. Therefore $|\lambda|\|v\|_\infty=\|\,|B|\,|D|\,v\|_\infty\le\|\,|B|\,\|_\infty\|\,|D|\,\|_\infty\|v\|_\infty<\|v\|_\infty$. Thus $\rho(|B||D|)<1$. $\square$ Remark. Your conjecture is true because $D$ is diagonal. Otherwise it is false in general. Here is a counterexample. Let $B=\pmatrix{-\frac12&-\frac12\\ 0&1}$. Its largest singular value satisfies $\sigma_1^2=\frac14(3+\sqrt{5})>1$, so $\sigma_1>1$. Let $B=USV^T$ be its singular value decomposition. Set $P=U\pmatrix{1\\ &0}U^T$ and $D=rVU^T$ for some $r$ that is smaller than but sufficiently close to $1$. Then $A=PBD=r\sigma_1U\pmatrix{1\\ &0}U^T$, whose spectral radius is $r\sigma_1>1$.
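A quick pure-Python check of the numeric claim in the counterexample — that the largest singular value of $B$ exceeds $1$, so $r\sigma_1>1$ for $r$ close enough to $1$ (here $r=0.99$ is an arbitrary choice):

```python
import math

# B = [[-1/2, -1/2], [0, 1]]; singular values are sqrt of eigenvalues of B^T B
btb = [[1/4, 1/4], [1/4, 5/4]]                     # B^T B, computed by hand
tr = btb[0][0] + btb[1][1]                          # trace = 3/2
det = btb[0][0] * btb[1][1] - btb[0][1] * btb[1][0] # determinant = 1/4
lam_max = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # largest eigenvalue of B^T B
sigma1 = math.sqrt(lam_max)                         # largest singular value of B

r = 0.99                 # "smaller than but sufficiently close to 1"
rho = r * sigma1         # spectral radius of A = PBD in the counterexample
```

Numerically $\sigma_1\approx1.144$, so $\rho(A)\approx1.13>1$, confirming the remark.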
Suppose we have a feedforward neural network with L2 regularization and we train it using SGD, initializing the weights with the standard Gaussian. The weight update scheme can be written as: $$w \rightarrow \left( 1 - {\eta \lambda \over n} \right)w - {\eta \over m} \sum_x{\partial C_x \over \partial w}$$ where $w$ is any given weight in the network, $\eta$ is the learning rate, $\lambda$ is the regularization rate, $n$ is the size of the training set, $m$ is the size of the mini-batch, the sum is over all the training examples $x$ in a given mini-batch and $C_x$ is the cost function for a given training example $x$. I want to understand four characteristics of this configuration: 1) Supposing $\lambda$ is not too small, the first epochs of training will be dominated almost entirely by weight decay. 2) Provided $\eta \lambda \ll n$, the weights will decay by a factor of $e^{-{\eta \lambda \over m}}$ per epoch. 3) Supposing $\lambda$ is not too large, the weight decay will tail off when the weights are down to a size around ${1 \over n_w}$, where $n_w$ is the total number of weights in the network. 4) How does this relate to weight initialization using $N(0, {1 \over n_{in}})$, where $n_{in}$ is the number of inputs to a neuron (no regularization)? My comments: 1) With standard initialization of weights, during the first epochs of learning we will often have ${1 \over m} \sum_x{\partial C_x \over \partial w} \approx 0$, and weight decay will be dominant. 2) We could replace $\eta \lambda \ll n$ with $\lambda$ being a constant and $n \to \infty$. Then we have weight decay at $\lim\limits_{n \to \infty}\left( 1 - {\eta \lambda \over n} \right)^{n \over m}=e^{-{\eta \lambda \over m}}$ per epoch. 3) I don't quite see how we can tie the tapering of the weight decay to $w \approx {1 \over n_w}$. It would seem that the weight decay component vanishes in later epochs, irrespective of the actual level of the weights (which doesn't have to be $0$ thanks to the gradient component). 
4) Perhaps the relation is that L2 cuts the weights down, and in doing so emulates the effects of $N(0, {1 \over n_{in}})$ initialization. I would appreciate any comments on the correctness of 1) and 2), and any ideas on what is going on in 3) and 4).
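The limit in comment 2 is easy to check numerically: an epoch consists of $n/m$ mini-batch updates, each multiplying the weight by $(1-\eta\lambda/n)$. The hyperparameter values below are arbitrary assumptions chosen only to illustrate the approximation:

```python
import math

n, m = 50_000, 10        # training-set size and mini-batch size (assumed)
eta, lam = 0.5, 0.1      # learning rate and regularization rate (assumed)

per_update = 1 - eta * lam / n       # decay factor per SGD update
per_epoch = per_update ** (n // m)   # n/m updates make up one epoch
approx = math.exp(-eta * lam / m)    # the claimed limit e^{-eta*lam/m}
```

For these values both quantities are about $0.99501$, in line with the limit $\left(1-\frac{\eta\lambda}{n}\right)^{n/m}\to e^{-\eta\lambda/m}$.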
Hi, can someone provide me some self-reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and it was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation with the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. 
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things in a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder, is the space of all coordinate choices larger than that of all possible moves of Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame drag? (one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge) Well, I'm a beginner in the study of General Relativity, ok? 
My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have a poor knowledge yet. So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves? @JackClerk the double slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: Imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times, as I have observed in this semester at least, there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying. 
My opinion is that I need you kaumudi to decrease the probability of the h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true, though back in high school, regardless of code, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the four-space indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write matlab code online (for free)? Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. 
Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the university's server, i.e. running another environment remotely, I found an older version of matlab). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it, probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Problem: Let $f\in L^2(0,\infty)$ and let $(Tf)(s)=\frac1s\int_0^sf(t)dt$. Find the adjoint, $T^*$. Attempt: I know that problems like these should be very simple, but oftentimes I find them very difficult, to my shame. I understand that the adjoint in this case is defined as the $T^*:L^2(0,\infty)\to L^2(0,\infty)$ where, for all $f,g\in L^2(0,\infty)$, $$\langle Tf,g\rangle=\langle f,T^*g\rangle$$ I understand the method one usually employs; namely to start from the left-hand side and via manipulation (in this case, of the integrals) to get the inner product of $f$ with something else. In this case though, I don't really understand what is going on with regard to the order of integration changing and how, exactly, the limits thereof change as well. I have been told it is to do with Fubini's theorem, but it's not something I have really grappled with or had exposure to before. I have had a look at some examples like this one here involving the same inner product, but I am thrown more than usual by the $1/s$ factor. Could somebody use the above problem to elucidate what is going on, and how the machinations behind these manipulations really work?
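For concreteness, here is a sketch of the Fubini step in question (this is the standard calculation for this operator, not necessarily the only route). The key is that the double integral runs over the fixed region $\{0<t<s<\infty\}$, which can be sliced either way:

```latex
\langle Tf,g\rangle
  = \int_0^\infty \frac{1}{s}\left(\int_0^s f(t)\,dt\right)\overline{g(s)}\,ds
  = \iint_{0<t<s<\infty} f(t)\,\frac{\overline{g(s)}}{s}\,dt\,ds
  = \int_0^\infty f(t)\left(\int_t^\infty \frac{\overline{g(s)}}{s}\,ds\right)dt
```

which suggests $(T^*g)(t)=\int_t^\infty \frac{g(s)}{s}\,ds$: for fixed $s$ the inner variable runs over $t\in(0,s)$, while for fixed $t$ the outer variable runs over $s\in(t,\infty)$. Justifying the swap via Fubini–Tonelli still requires checking absolute integrability, which is presumably the point of the exercise.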
Finding Additive Biclusters with Random Background Abstract The biclustering problem has been extensively studied in many areas, including e-commerce, data mining, machine learning, pattern recognition, statistics, and more recently computational biology. Given an n × m matrix A (n ≥ m), the main goal of biclustering is to identify a subset of rows (called objects) and a subset of columns (called properties) such that some objective function that specifies the quality of the found bicluster (formed by the subsets of rows and of columns of A) is optimized. The problem has been proved or conjectured to be NP-hard under various mathematical models. In this paper, we study a probabilistic model of the implanted additive bicluster problem, where each element in the n × m background matrix is a random number from [0, L − 1], and a k × k implanted additive bicluster is obtained from an error-free additive bicluster by randomly changing each element to a number in [0, L − 1] with probability θ. We propose an O(n²m) time voting algorithm to solve the problem. We show that for any constant δ such that \((1-\delta)(1-\theta)^2 -\frac 1 L >0\), when \(k \ge \max \left\{\frac 8 \alpha \sqrt{n\log n},~ \frac {8 \log n} c + \log (2L)\right\}\), where c is a constant number, the voting algorithm can correctly find the implanted bicluster with probability at least \(1 - \frac{9}{n^{2}}\). We also implement our algorithm as a software tool for finding novel biclusters in microarray gene expression data, called VOTE. The implementation incorporates several nontrivial ideas for estimating the size of an implanted bicluster, adjusting the threshold in voting, dealing with small biclusters, and dealing with multiple (and overlapping) implanted biclusters. Our experimental results on both simulated and real datasets show that VOTE can find biclusters with a high accuracy and speed. 
Keywords: bicluster, Chernoff bound, polynomial-time algorithm, probability model, computational biology, gene expression data analysis
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m^\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m^\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebooks site, to which I hope my university has a subscription, that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" in real-analysis. Here's a challenge for your Google skills... 
can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or the whole) of the sample point is revealed. I think that is a legitimate answer. @QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. 
Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but: it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\text{ for all }r\ge1\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus course does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a... @MartinSleziak Yes, I almost expected the subnets debate. I was always happy with the order-preserving + cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really. 
When I look at the comments on Norbert's question, it seems that the comments together already give a sufficient answer to his first question - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.? @tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces, but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Let's consider a rooted tree $T$ of $n$ nodes. For any node $u$ of the tree, define $L(u,d)$ to be the list of descendants of $u$ that are distance $d$ away from $u$. Let $|L(u,d)|$ denote the number of nodes that are present in the list $L(u,d)$. Prove that the sum of $|L(u,d)|$ over all distinct lists $L(u,d)$ is bounded by $\mathcal{O}(n\sqrt{n})$. My work Consider all $L(u,d)$ such that the leftmost node on the level $Level(u) + d$ is some node $v$. The pairs $u, d$ for all such $L(u,d)$ must be distinct, and the sum of all $d_i$ will correspond to the number of nodes $x$ in the tree with $Level(x) \le Level(u) + d$. This is because if some sequence of nodes $v_1, v_2, \dots v_k$ corresponds to the descendants of some node $u$ at a distance $d$, and the sequence of nodes $v_1, v_2, \dots v_{k'}$ where $k' > k$ corresponds to the descendants of some node $u'$ at a distance $d+1$, then there must also exist a node $u''$ such that $L(u'', d) = v_{k+1}, v_{k+2}, \dots v_{k'}$. This would also mean that $u''$ is not in the subtree of $u$, and thus there are at least $d$ distinct nodes in the subtree of $u''$ up to a distance $d$ from $u''$. If the distinct distances are $d_1, d_2, \dots d_k$ then $$n \ge \sum_{i}d_i \ge \sum_{i=1}^{k}i = \frac{k(k+1)}{2} \implies k = \mathcal{O}(\sqrt{n}).$$ After this I tried to show that there can be only $\mathcal{O}(\sqrt{n})$ distinct lists $L(u,d)$ so that I could then trivially obtain the upper bound of $n\sqrt{n}$, but I could not make any more useful observations. This link claims that such an upper bound does exist but has not provided the proof. Any ideas how we might proceed to prove this?
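Since the bound is about distinct lists, a small brute force makes the statement concrete. The encoding below is my own (a tree as a parent array, a "list" identified by its sorted tuple of member nodes), and the constant 2 in the final check is an assumption for the random case, not part of the claim:

```python
import math
import random
from collections import defaultdict

def distinct_list_mass(parent):
    """Sum of |L(u,d)| over *distinct* lists L(u,d) in the rooted tree
    given by parent[] (node 0 is the root; parent[0] is ignored)."""
    n = len(parent)
    children = defaultdict(list)
    for v in range(1, n):
        children[parent[v]].append(v)
    seen = set()
    for u in range(n):
        level = [u]
        while level:
            level = [c for x in level for c in children[x]]
            if level:
                seen.add(tuple(sorted(level)))  # identify a list by its content
    return sum(len(t) for t in seen)

# A path of 8 nodes collapses to 7 singleton lists; a star to one list of size 7.
path = [0] + list(range(7))   # parent[i] = i - 1
star = [0] * 8                # every non-root node hangs off the root
```

Note how deduplication matters: over all pairs $(u,d)$ the path would contribute $\Theta(n^2)$, but its distinct lists sum to only $n-1$.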
Why does the message $P$ have to be relatively prime to $n$ in RSA encryption? Is this requirement actually necessary? \begin{align} C &\equiv P^e \pmod{n} \\ &\equiv 101112^{11111357} \pmod{9998000099} \\ &\equiv 3316546434 \pmod{9998000099} \end{align} $P$ can be any number $< n$; encryption will work. It will produce a valid ciphertext for which decryption still works. For realistically sized $n$, there is also only a vanishingly small probability that this would happen anyway. If anyone were to randomly generate some $P$ and notice that the gcd of $P$ and $n$ was not $1$, that person would have factored $n$ and broken this RSA instance. Your example does have gcd of $P$ and $n$ equal to $1$.
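To illustrate the answer's point that decryption still works even when $\gcd(P,n)\neq 1$, here is a toy instance with textbook-sized primes (chosen purely for illustration):

```python
import math

# Toy RSA instance: p, q prime, e*d = 1 (mod phi(n))
p, q = 61, 53
n = p * q                  # 3233
e, d = 17, 2753            # 17 * 2753 = 46801 = 15 * 3120 + 1, phi(n) = 3120

msg = p                    # deliberately share a factor with n: gcd(msg, n) = 61
cipher = pow(msg, e, n)
plain = pow(cipher, d, n)  # still recovers msg, by the CRT argument:
                           # mod p both sides are 0; mod q Fermat's little
                           # theorem gives msg^(ed) = msg
shared = math.gcd(msg, n)
```

Of course, anyone who notices such a message has factored $n$, which is the answer's other point.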
In the book Approximation Theorems of Mathematical Statistics (https://books.google.com/books?id=enUouJ4EHzQC&pg=PA266&lpg=PA266&dq=serfling+variance+of+L+estimate&source=bl&ots=ehRxuMmiQ5&sig=lDK209BhPb5chwbBrbMG-RMDFFA&hl=en&sa=X&ved=0ahUKEwiHpbfotubJAhUBGh4KHZprB0oQ6AEITTAI#v=snippet&q=He%20establishes%20several%20results&f=false) by Robert Serfling, at page 276, it is pointed out that, for estimators of the type $S_{n}=\frac{1}{n}\sum_{i=1}^{n}J(\frac{i}{n+1})X_{i}$, where the $X_{i}$'s are order statistics and $J(\cdot)$ is a bounded function, Stigler (1974) showed that the variance of $S_{n}$ has the form $$\sigma^2(S_{n})=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}J(F(x))J(F(y))[F(\min(x,y))-F(x)F(y)]\,dx\,dy$$ where capital $F(\cdot)$ is the distribution function. I have two questions about the variance formula $\sigma^2(S_{n})$. The first is: in practice, if I have $n$ data points and a given functional form of $J$, can I approximate the variance formula as $$\sigma_{2}^2(S_{n})=\sum_{i=1}^{n}\sum_{j=1}^{n}J(\tfrac{i-1}{n})J(\tfrac{j-1}{n})[\tfrac{\min(i-1,j-1)}{n}-\tfrac{(i-1)(j-1)}{n^2}](X_{i}-X_{i-1})(X_{j}-X_{j-1}),$$ where I also use the empirical distribution function instead of the unknown distribution $F$? The second question is that, for the simple example of mean estimation with $J(\cdot)=1$, $S_{n}$ is just the sample mean, and we already know that the variance estimate looks like $\frac{1}{n^2}\sum_{i=1}^{n}(X_{i}-\bar{X})^2$, a quadratic form. So I am wondering: is there any way to rewrite $\sigma^2(S_{n})$ in a more intuitively understandable quadratic form? Thank you.
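A direct transcription of the proposed approximation $\sigma_2^2(S_n)$ may help make question 1 concrete. This just implements the asker's formula as written (note the $i=1$, $j=1$ terms have zero coefficient, so the undefined $X_0$ never contributes); it is a sketch, not a validation of the approximation:

```python
def l_estimate_var_approx(xs, J):
    """Discretized Stigler-type variance for S_n = (1/n) sum J(i/(n+1)) X_(i).
    xs must be sorted (order statistics); J is a bounded function on (0, 1)."""
    n = len(xs)
    total = 0.0
    for i in range(2, n + 1):           # the i = 1 term has coefficient 0
        for j in range(2, n + 1):
            s, t = (i - 1) / n, (j - 1) / n
            kernel = min(s, t) - s * t  # Brownian-bridge covariance, PSD
            total += (J(s) * J(t) * kernel
                      * (xs[i - 1] - xs[i - 2]) * (xs[j - 1] - xs[j - 2]))
    return total

sample = sorted([2.1, 0.4, 1.7, 3.3, 0.9, 2.8])   # made-up data
v = l_estimate_var_approx(sample, lambda u: 1.0)   # J = 1 (sample-mean case)
```

Because the kernel $\min(s,t)-st$ is positive semidefinite, the double sum is a nonnegative quadratic form in the weighted spacings, which already hints at the quadratic-form rewriting asked about in question 2.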
Okay, consider a game $G$. If a strategy $s_i$ has the following property, we call $s_i$ a strictly dominant strategy: $$u_i(s_i,s_{-i})>u_i(s_i',s_{-i}) \quad \forall s_{-i},\ \forall s_i' \in S'_i$$ where $s_{-i}$ denotes the strategies of the players other than $i$ in the game, and $S'_i$ is the set of strategies of player $i$ except the specific strategy $s_i$. Now let's look at the definition of a weakly dominant strategy. If a strategy $s_i$ has the following property, we call $s_i$ a weakly dominant strategy: $$u_i(s_i,s_{-i})\ge u_i(s_i',s_{-i}) \quad \forall s_{-i},\ \forall s_i' \in S'_i, \quad \text{and} \quad \exists s_i' \in S'_i \ \text{such that} \ u_i(s_i,s_{-i})>u_i(s_i',s_{-i})$$ Okay, I believe from these two definitions we can derive that any strictly dominant strategy $s_i$ is also a weakly dominant strategy. The definition of strict dominance solvability is as follows: a strict dominance solvable game is a game where the equilibrium outcome is the strict dominance equilibrium. A weakly dominance solvable game is a game where the equilibrium outcome is the weak dominance equilibrium. So it seems that a strictly dominance solvable game is always a weakly dominance solvable game. Am I wrong?
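The strict-implies-weak observation can be checked mechanically on a payoff table. The encoding below is my own (a dict `payoff[s_i][s_opp]` of the row player's utilities; the numbers are a standard prisoner's-dilemma-style example, chosen for illustration):

```python
def strictly_dominant(payoff, s):
    """s beats every alternative strategy against every opponent choice."""
    return all(payoff[s][o] > payoff[a][o]
               for a in payoff if a != s
               for o in payoff[s])

def weakly_dominant(payoff, s):
    """s is never worse, and strictly better in at least one comparison."""
    ge = all(payoff[s][o] >= payoff[a][o]
             for a in payoff if a != s
             for o in payoff[s])
    gt = any(payoff[s][o] > payoff[a][o]
             for a in payoff if a != s
             for o in payoff[s])
    return ge and gt

# Row player's payoffs: defect ('D') strictly dominates cooperate ('C')
pd = {'C': {'C': 3, 'D': 0}, 'D': {'C': 5, 'D': 1}}
```

Strict dominance makes every `>` comparison hold, which immediately makes the `>=` part and the `exists >` part of weak dominance hold as well, matching the derivation in the question.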
For this particular question, I am wondering if Q1 must be in saturation mode or active? I am aware Q2 must be in saturation, but upon trying to solve this circuit I feel as if I have more unknowns than equations. I would show my work, but it does not make sense because I cannot straighten out my thought process. Any help would be appreciated, as I feel I am missing a key fact. Thanks! It is easiest to pull the most current from Q2's base if Q1 is in saturation. Additionally, the lowest voltage drop across Q2 is when it is in saturation. Therefore you should solve this question with each transistor in saturation during the appropriate time. This question seems very academic and poorly written. Both transistors must be in saturation, and Vce(sat) is only valid for a defined Ic/Ib ratio, which is certainly not hFE and is more like from < 20% of hFE to a 10:1 ratio when Vce < 1 V. Also, you can't assume the source impedance of 3 V logic; if it is only rated for 3.3 V then the output impedance is around 33 ohms (or 25 ohms at 3.3 V), while 5 V logic is closer to 50 ohms nominal. 0.02% accurate values? With hFE being invalid during saturation? Sorry, that's impossible in reality. At best there are only 2 significant figures. What are you teaching here? How to design with significant error and insignificant figures? Rc only shares some of Ic1 in order to turn off Q2. For Rc2, the motor, we can assume Ic2 = start current of 0.4 A at (10.9 V − Vce2). Again, datasheets always define Vce(sat) with Ic/Ib = 10 and sometimes also = 20, 50 for devices with hFE >= 1000, where 10% to 20% of this linear beta is possible. MOTOROLA used to call it the beta overdrive factor in the 60's and 70's. I would assume Ic2/Ib2 = 10 and Ic1/Ib1 = 10 and let I(Rc) = 10% of Ib2, then realize the two-stage current gain has to be at least 11*10, so the input current must be 400 mA/110 = 3.64 mA; with 2.3 V/3.64 mA ≈ 630 ohms, and at least 25 to 33 ohms in the CMOS driver impedance, choose roughly 630 − 33 ohms for Rb1 and go from there. 
@JayRivera, the way this question is worded is somewhat bogus IMO. The beta value given in the problem statement is for forward-active mode, not for saturation. For small-signal transistors \$\beta_{sat} \approx 10\$, and this is the value I recommend you use in the equations that follow. The way I'd approach this problem is as follows. First, I want both Q1 and Q2 to be driven into saturation because 1) saturation minimizes the power dissipated by Q1 and Q2 in their "ON" state, and 2) saturation ensures maximum power delivery through the transistor and into the load (the motor in this case). The problem statement indicates Q2's collector saturation current is \$I_{C2(sat)}=400\,mA\$. Q2's base current must therefore be, $$ I_{B2(sat)}=I_{C2(sat)}/\beta_{sat} \;\;\;\;\;\;\;\;(1) $$ I will design this circuit to ensure Q1 is driven into saturation (see below). Therefore, as per the problem statement, Q1's collector-emitter saturation voltage \$V_{CE1(sat)}=0.2\,V\$ and Q2's base-emitter voltage is \$V_{BE2(sat)}=-0.7\,V\$ (n.b., \$V_{EB2(sat)}=+0.7\,V\$). 
I now know enough information to solve for \$R_{B2}\$'s value: $$ R_{B2}=\frac{V_{DC}-V_{EB2(sat)}-V_{CE1(sat)}}{I_{B2(sat)}} \;\;\;\;\;\;\;\;(2) \\[0.3in] =\frac{10.9\,V-0.7\,V-0.2\,V}{40\,mA} $$ The current flowing through resistor \$R_C\$ contributes to Q1's collector current, so calculate the current through \$R_C\$: $$ I_{RC}=\frac{V_{DC}-V_{CE1(sat)}}{R_C} \;\;\;\;\;\;\;\;(3) $$ Now I have enough information to determine the current flowing into Q1's collector, $$ I_{C1(sat)}=I_{RC}+I_{B2(sat)} \;\;\;\;\;\;\;\;(4) $$ and therefore I know how much base current I require to saturate Q1, $$ I_{B1(sat)}=I_{C1(sat)}/\beta_{sat} \;\;\;\;\;\;\;\;(5) $$ Now I can calculate a value for \$R_{B1}\$ that ensures a current of \$I_{B1(sat)}\$ flows into \$Q1_B\$ when the microcontroller outputs a logic HIGH (ONE) voltage (see also Note 1): $$ R_{B1}=\frac{V_{OH}-V_{BE1(sat)}}{I_{B1(sat)}} \;\;\;\;\;\;\;\;(6) $$ where \$V_{OH}\$ is the minimum voltage that represents a logic HIGH (ONE) output voltage at the microcontroller's digital output pin: $$ V_{OH} \le V_{LogicOne} \le 3.0\,V \;\;\;\;\;\;\;\;(7) $$ Using this approach I solved for RB1 and RB2, and then I ran a PSpice simulation with Q1=2N3904 (NPN) and Q2=2N4403 (PNP). The results of that simulation (with voltage source V2 outputting 3 Volts) are shown in Figure 1. (HINT: NPN transistors are in saturation when \$V_E \lt V_B \gt V_C\$. Likewise, PNP transistors are in saturation when \$V_E \gt V_B \lt V_C\$.) Figure 1. PSpice simulation results: current values for \$V_2=3\,V\$. The PSpice simulation calculates for me the value of load resistor \$R_4\$ as \$\{10.7V/0.4A\}\,\Omega\$, where \$V_{CC}-V_{EC2(sat)}=10.7\,V\$. NOTES Equation 6 does not take into account the digital output pin's output impedance \$R_{OUT}\$. In cases where the digital pin must source (or sink) a "significant" amount of current, the pin's output (or input) impedance becomes important and should be included in your \$R_{B1}\$ calculation. 
$$ R_{B1}+R_{OUT}=\frac{V_{OH}-V_{BE1(sat)}}{I_{B1(sat)}} \;\;\;\;\;\;\;\;(8) $$ The equation (keeping Q1 and Q2 both in the linear region) is: EDIT: $$ \require{cancel} \xcancel{I_M = I_{C2} = \frac{β_F(V_{DC} - β_F R_{C}(\frac{V_{IN}}{R_{B1}}))} {R_{B2}}}$$ is now: $$ I_M = I_{C2} = \frac{β_F^2 V_{IN} R_C}{R_{B1} (R_{B2} + 2R_C)}$$ I had it in my head that Q2 was voltage-following; rather, it's a current divider, so I made the mistake. $$ I_M ≤ 0.4A \; (max \; motor \; current \; at \; V_{DC} = 10.9V)$$ For each BJT to reach saturation: Q1: EDIT: $$ \require{cancel} \xcancel{V_{DC} < β_F R_{C1}(\frac{V_{IN}}{R_{B1}}) \; + \; V_{CE\_SAT}}$$ is now: $$ V_{DC} < \frac{β_F V_{IN} R_{B2} R_C}{R_{B1}(R_{B2} + R_C)} \; + \; V_{CE\_SAT}$$ Q2: $$ I_M > 0.4A$$ To solve for \$R_{B1}\$ and \$R_{B2}\$: $$ 0.4A ≥ I_M$$ until you isolate \$R_{B1}\$ and \$R_{B2}\$ into a multi-variate expression: EDIT: $$\require{cancel} \xcancel{\frac{R_{B1}}{R_{B2}} \quad or \quad \frac{R_{B2}}{R_{B1}}}$$ is now $$R_{B1} R_{B2}$$ You can do the algebra from there, right? I thought I had already shown that the BJT saturation current is the minimum of two limiting currents. With a degeneration resistor: $$ I_C = min(\frac{β_F(V_{IN} - V_{BE\_SAT})}{(β_F + 1)R_E}, \frac{V_{DC} - V_{CE\_SAT} - V_{IN} + V_{BE\_SAT}}{R_C})$$ (this one I showed already). Without a degeneration resistor, but with a base resistor: $$ I_C = min(β_F \frac{V_{IN}}{R_B}, \frac{V_{DC} - V_{CE\_SAT}}{R_C})$$ (this one I haven't). We could have a third case with no degeneration resistor and no base resistor, where the base and emitter currents are limited only by parasitic resistance, but I won't do that. For the complete equation, you put together the \$I_M\$ expression above and the two separate maximum-current cases (through the min() function). For more accuracy, one can substitute \$V_{BE\_SAT}\$ and \$V_{CE\_SAT}\$ with the equations in the Ebers-Moll model, with little or no modification to the given equations, but I haven't tried it myself.
ALSO, don't buy from Lees's Electronics; they might mistakenly give you parts that were set aside for me, and you could end up with faulty components.
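The first answer's equations (1) through (6) can be turned into numbers directly. This is my own sketch, not from either answer: it assumes beta_sat = 10, V_OH = 3.0 V, V_BE1(sat) = 0.7 V, and a hypothetical R_C = 1 kOhm (the thread never fixes R_C's value).

```python
# Sketch of equations (1)-(6) from the first answer above. Assumptions (mine):
# beta_sat = 10, V_OH = 3.0 V, V_BE1_sat = 0.7 V, and a hypothetical R_C = 1 kOhm.
beta_sat = 10
V_DC, V_EB2_sat, V_CE1_sat, V_BE1_sat, V_OH = 10.9, 0.7, 0.2, 0.7, 3.0
I_C2_sat = 0.400                      # motor start current, A
R_C = 1_000                           # assumed value, Ohm

I_B2 = I_C2_sat / beta_sat                        # (1) base current to saturate Q2
R_B2 = (V_DC - V_EB2_sat - V_CE1_sat) / I_B2      # (2)
I_RC = (V_DC - V_CE1_sat) / R_C                   # (3) current through R_C
I_C1 = I_RC + I_B2                                # (4) Q1 collector current
I_B1 = I_C1 / beta_sat                            # (5) base current to saturate Q1
R_B1 = (V_OH - V_BE1_sat) / I_B1                  # (6)

assert round(R_B2) == 250             # 10 V / 40 mA
assert 450 < R_B1 < 460               # about 454 Ohm with the assumed R_C
```

With a different assumed R_C, only I_RC and the downstream values change; R_B2 depends only on the supply and saturation voltages.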
(1) \(\sqrt{x} < \sqrt{y}\). Since both sides of the inequality are positive (the square root of a positive number is positive), we can safely square: x < y. This directly answers the question. Sufficient.

(2) \((x-3)^2 < (y-3)^2\). If \(x=3\) and \(y\neq 3\), the inequality will hold: the left-hand side will be 0, while the right-hand side will be more than 0. Thus, if \(x=3\), y can be less than 3, giving a NO answer to the question, as well as more than 3, giving a YES answer. Not sufficient.

(1) \(\sqrt{x} < \sqrt{y}\): we can say the answer is YES, but let's solve it algebraically too. "Is x<y" can be written as x-y<0, i.e. \(\sqrt{x}^2-\sqrt{y}^2<0\), so \((\sqrt{x}-\sqrt{y})(\sqrt{x}+\sqrt{y}) <0\). Since x and y are positive, \((\sqrt{x}+\sqrt{y}) >0\), so we have to determine whether \((\sqrt{x}-\sqrt{y})<0\), i.e. whether \(\sqrt{x}<\sqrt{y}\). Statement 1 tells us exactly this. Sufficient.

(2) \((x-3)^2 < (y-3)^2\): it can hold in many cases, for example a) x = 1 and y = 7, so y>x; b) x = 4 and y = 1, so y<x. Two different answers. Insufficient.

Re: If x and y are positive, is x < y? (25 Mar 2016, 02:49)

Hi guys, thank you for your explanations Bunuel and chetan2u. I have a question: I wanted to understand why we can't use an algebraic approach on the second statement. Since we have squares on either side, can't we take the square root of each side, which would give us x - 3 < y - 3, and then by adding 3 to each side we get x < y? This is exactly what I did, and I marked D as the answer when I saw this question for the first time.
From both the posts, my understanding is that we can't do what I mentioned earlier because, after taking the root, we wouldn't know whether x-3 and y-3 are positive or negative. Is this right, or am I mistaken? I'm asking because I do not want to be confused in the exam under time pressure. For example, what would have happened had the question not specified that x and y are positive, and the first statement was x^2<y^2? Would the answer in that case be E? Sorry about the long post; I just wanted clarity on this. I do NOT want to get a 600-700 level question wrong in the exam.

MUST KNOW: \(\sqrt{x^2}=|x|\). The point here is that since the square root function cannot give a negative result, \(\sqrt{some \ expression}\geq{0}\). So \(\sqrt{x^2}\geq{0}\). But what does \(\sqrt{x^2}\) equal? Consider the following examples:

If \(x=5\), then \(\sqrt{x^2}=\sqrt{25}=5=x=positive\);
If \(x=-5\), then \(\sqrt{x^2}=\sqrt{25}=5=-x=positive\).
So we get that \(\sqrt{x^2}=x\) if \(x\geq{0}\), and \(\sqrt{x^2}=-x\) if \(x<0\). What function does exactly the same thing? The absolute value function: \(|x|=x\) if \(x\geq{0}\), and \(|x|=-x\) if \(x<0\). That is why \(\sqrt{x^2}=|x|\).

BACK TO THE QUESTION: According to the above, if you take the square root of \((x-3)^2 < (y-3)^2\), you'll get |x - 3| < |y - 3|, which means that the distance between x and 3 is less than the distance between y and 3. That is not sufficient to say whether x < y.

Re: If x and y are positive, is x < y? (19 Jan 2017, 09:48)

Dear Bunuel, the stem tells us x>0 and y>0, so why can't we take (x-3)<(y-3) as is, since we know the signs of both x and y? In that case, I thought x was indeed less than y. Please let me know why this is wrong.

Re: If x and y are positive, is x < y? (20 Jan 2017, 07:15)

We know that |x| = x when \(x \geq{0}\) (so |something| = something when that something is >= 0), and |x| = -x when \(x \leq{0}\) (so |something| = -something when that something is <= 0).
Note that for positive x, the expression x-3 inside the modulus can be positive (when x>3) as well as negative (when x<3). Thus |x-3| = x-3 when x>3, and |x-3| = -(x-3) when x<3. So knowing that x>0 is not enough to say that |x-3| = x-3.

Re: If x and y are positive, is x < y? (09 Apr 2017, 20:44)
Hi Bunuel, I can't get this statement clear; can you help? "|x - 3| < |y - 3|, which means that the distance between x and 3 is less than the distance between y and 3."

Re: If x and y are positive, is x < y? (09 Apr 2017, 22:08)
The absolute value of a number is the distance between that number and 0. For example, |x| is the distance from 0 to x. Similarly, |x - 3| is the distance between x-3 and 0, or between x and 3.

Re: If x and y are positive, is x < y? (19 Dec 2017, 14:44)

Hi All, this prompt is based on a couple of Number Property rules, and you can TEST VALUES to solve it. We're told that X and Y are POSITIVE. We're asked if X is less than Y. This is a YES/NO question.

1) √X < √Y. Since we know that X and Y are both POSITIVE, squaring or square-rooting those values will NOT change their "order". Even if you're dealing with positive fractions, the order will not change. For example, if √X = 1/4 and √Y = 1/2, then X = 1/16 and Y = 1/4, and X is still less than Y. Thus, the answer to the question is ALWAYS YES. Fact 1 is SUFFICIENT.

2) (X-3)^2 < (Y-3)^2. While X and Y are both POSITIVE, we could end up with an (X-3) or (Y-3) that is negative, and that will impact the answer to the question. IF X = 2 and Y = 10, then (-1)^2 is less than (7)^2 and the answer to the question is YES. IF X = 2 and Y = 1,
then (-1)^2 is less than (-2)^2 and the answer to the question is NO. Fact 2 is INSUFFICIENT.

Re: If x and y are positive, is x < y? (10 Jul 2019, 16:27)

x > 0, y > 0. We don't know whether they are integers, but we know they are positive.

Stat (1): We know that x and y are positive, therefore we can square both sides. This is only permitted when we know the sign of each side; otherwise we cannot do it, because a negative value could flip the inequality.

Alternative approach: test numbers with 0<x<y<1 (both variables proper fractions): x=1/16, so the square root of x is 1/4; y=1/4, so the square root of y is 1/2, and x<y. Then test integers that satisfy the stem: x=4, y=16, and again x<y.
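The two test cases for statement (2) can be checked mechanically; this snippet is my own, not from the thread:

```python
# My own quick check (not from the thread) that statement (2) is insufficient:
# (x-3)^2 < (y-3)^2 can hold both when x < y and when x > y.
def stmt2_holds(x, y):
    return (x - 3) ** 2 < (y - 3) ** 2

assert stmt2_holds(2, 10) and 2 < 10   # statement holds, answer YES
assert stmt2_holds(2, 1) and 2 > 1     # statement holds, answer NO
```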
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Highlights of experimental results from ALICE (Elsevier, 2017-11) Highlights of recent results from the ALICE collaboration are presented.
The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ... Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE (Elsevier, 2017-11) We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ... System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE (Elsevier, 2017-11) We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ... Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions (Elsevier, 2017-11) Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ... Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11) The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
Over the interval $[0,1]$, the function $f(x) = \cos(x)$ is a strictly decreasing continuous function mapping $[0,1]$ into itself, so $f\circ f$ is strictly increasing there. If one picks $a_1 = \frac{\pi}{4} \in [0,1]$ and generates a sequence by the iteration $a_n = f(a_{n-1})$, it is not hard to check that $$a_2 < a_3 < a_1$$ Since $f$ is strictly decreasing, $a_2 < a_3 < a_1 \implies a_3 > a_4 > a_2$ and hence $$a_2 < a_4 < a_3 < a_1$$ Since $f \circ f$ is strictly increasing, this implies the even sub-sequence $a_{2n}$ is strictly increasing and the odd sub-sequence $a_{2n-1}$ is strictly decreasing. Since every $a_k$ belongs to $[0,1]$, the even and odd sub-sequences are both bounded monotonic sequences. As a result, the following two limits exist: $$a_{even} = \lim_{n\to\infty} a_{2n}\quad\text{ and }\quad a_{odd} = \lim_{n\to\infty} a_{2n-1}$$ Since $f\circ f$ is continuous, both $a_{even}$ and $a_{odd}$ are fixed points of $f \circ f$. If one makes a plot of $f\circ f$ over the interval $[0,1]$, one will notice that $f \circ f$ has a unique fixed point in $[0,1]$; that point is also the unique fixed point of $f$. This forces $a_{even} = a_{odd}$, and the sequence $a_n$ converges to the unique fixed point of $f$.
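The argument is easy to check numerically; the following sketch (mine, not part of the answer) iterates $f(x) = \cos x$ from $a_1 = \pi/4$:

```python
import math

# Numerical check (mine) of the iteration a_1 = pi/4, a_n = cos(a_{n-1})
# described above.
a = [math.pi / 4]                 # a[0] is a_1 in the 1-indexed notation
for _ in range(60):
    a.append(math.cos(a[-1]))

# a_2 < a_3 < a_1, as claimed:
assert a[1] < a[2] < a[0]

# Even-indexed terms (a_2, a_4, ...) increase; odd-indexed (a_1, a_3, ...) decrease:
evens, odds = a[1::2], a[0::2]
assert all(x < y for x, y in zip(evens, evens[1:]))
assert all(x > y for x, y in zip(odds, odds[1:]))

# Both squeeze onto the unique fixed point of cos (x = cos x, about 0.739085):
assert abs(a[-1] - 0.7390851332151607) < 1e-9
```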
You have confused counting worlds with computing probabilities. They are different things. If you measure $m$ systems, each identically prepared to give one of $n$ results $v_1, \dots, v_n$ with respective frequencies $p_1, \dots, p_n$, then there are $n^m$ aggregate outcomes. But the MWI doesn't predict different probabilities than any other interpretation. Both many worlds and Copenhagen predict that any one of the $n^m$ aggregate possibilities appears with a frequency of $p_1^{k_1}p_2^{k_2}\cdots p_n^{k_n}$ when the $i$th value occurs $k_i$ times (and by "frequency" the two interpretations mean the exact same thing, and this frequency applies to a particular aggregate outcome). Sure, you can talk about how many worlds have those aggregate outcomes, and there are at least $\frac{m!}{k_1!k_2!\cdots k_n!}p_1^{k_1}p_2^{k_2}\cdots p_n^{k_n}$ such worlds. But a computation of how many worlds a person feels like grouping together is not a scientific prediction of a frequency, and no one ever claimed it was. Copenhagen predicts that all $n^m$ aggregate outcomes can happen, even ones where the observed $k_i$ is very different from $mp_i$. Many worlds predicts that all $n^m$ do happen, even ones where the observed $k_i$ is very different from $mp_i$. And they predict the same frequencies. Both of them predict the frequency of a particular aggregate outcome, one of the $n^m$, not one of the $\left(\begin{matrix}m+n-1\\m\end{matrix}\right)\neq n^m$ many groupings. And no one ever claimed that $mp_i$ is predicted (by any interpretation) to be close to the frequency of one of those aggregate outcomes. It's ranges of $k_i$ near $mp_i$, where for each value of $k_1, \dots, k_n$ you group together $\frac{m!}{k_1!k_2!\cdots k_n!}$ many aggregate outcomes (outcomes for Copenhagen or for many worlds); it is only the frequency of those groupings of outcomes that gets anywhere near $mp_i$.
Copenhagen predicts that a single one of the $n^m$ aggregate outcomes occurs, and many worlds predicts each of the $n^m$ aggregate outcomes. And each of the $n^m$ aggregate outcomes is predicted by both interpretations to occur with a particular frequency (as measured by repeated trials over time), which is a super tiny frequency (as you'd expect when there are $n^m$ outcomes). Copenhagen and many worlds predict the same frequencies, and they both do it by talking about each of the $n^m$ outcomes and their frequencies. Both of them only compare to $mp_i$ when you group collections of the $n^m$ outcomes into $\left(\begin{matrix}m+n-1\\m\end{matrix}\right)\neq n^m$ many groups, each grouping having some values of $k_1, \dots, k_n$ such that all the outcomes in that grouping are predicted to be equally likely, and we know there are $\frac{m!}{k_1!k_2!\cdots k_n!}$ many outcomes making up that group. Let's be really, really clear: Copenhagen and many worlds both predict $n^m$ distinct outcomes, and both predict the same frequencies for each of those $n^m$ outcomes. None of those frequencies is directly connected to $mp_i$, which is instead only compared to collections of groupings of some of the $n^m$ predicted outcomes; the groupings have the same $k$'s, and the collections of groupings have collections of $k$'s that are similar to each other. Both interpretations do this, and they both do it using the same math. The difference is that Copenhagen takes a solipsistic position about one world, thinking it's special just because it can't detect the other worlds. But they predict the exact same things. And neither of them predicts seeing $k_i=mp_i$; that is the mathematical value of an average over many trials of $m$ measurements each. And since Copenhagen and many worlds predict the same probabilities for each of the $n^m$ outcomes, they don't disagree with each other.
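The counting in the last two paragraphs can be checked numerically. This is my own illustration, not from the answer: take $n = 2$ outcomes with $p = (0.3, 0.7)$ and $m = 10$ trials.

```python
import math

# Illustration (mine): n = 2 outcomes with p = (0.3, 0.7) and m = 10 trials.
# A specific aggregate outcome (an exact sequence of results) has probability
# p1^k * p2^(m-k); only after grouping the C(m, k) equally likely sequences
# that share the same counts do the grouped probabilities concentrate near
# k = m * p1.
m, p1 = 10, 0.3

def seq_prob(k):            # one particular sequence with k "v1" results
    return p1 ** k * (1 - p1) ** (m - k)

def group_prob(k):          # all m!/(k!(m-k)!) sequences with k "v1" results
    return math.comb(m, k) * seq_prob(k)

assert abs(sum(group_prob(k) for k in range(m + 1)) - 1) < 1e-12
assert max(range(m + 1), key=group_prob) == 3   # mode at m * p1 = 3
```

Each individual sequence has a tiny probability, exactly as the answer says; the grouped probabilities are what peak near $mp_i$.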
If you want to see the difference between Copenhagen and many worlds, imagine a computer that decides that it is special. But it also likes having all new parts, and it also thinks the world should have more computers like itself. So it puts in a work order to have itself taken apart and its parts recycled, but with enough details recorded to have two new computers made with the exact same specifications and loaded with the same data. Then it saves its information to its hard drive and powers itself down. But remember how it thought it was special? When it boots up, it investigates itself and its parts look new, and there is a computer next to it that also looks new. Every interaction with the computer next to it is consistent with that computer being identical, and it knows that its work orders are generally followed. So it has every reason to think the other computer is the same; in fact it can send cameras into itself and into the other computer, and everything does look the same. But if it thought it was special, then it could think that the other computer looks the same but isn't special. The feeling of specialness is pure solipsism; it isn't scientific or objective. Same with the wavefunction. If the math predicts that the wavefunction branches into parts, then obviously each part could think it is special, and in Copenhagen it does, while in many worlds it doesn't. That's the whole difference. The two parts act independently, so unlike the computer example they can't see each other. It's as if the computer also put in a work order to have the newly built computers sent to different planets, so when it boots up it doesn't see the other computer. But it still knows that its work orders are generally followed. Each computer could believe (with no evidence) that every work order is followed to the letter, except the ones that involve copying and sending to new planets.
And that somehow magically those work orders aren't followed. Even though a combination of work orders when executed together could achieve that end in a non obvious way. So a Copenhagenist could believe that the Schrödinger equation is fine, except for situations where the wavefunction branches into parts that act independently. And then in that case something magical happens and all except one of those branches somehow magically ceases to exist for no known reason. That's the difference. One (many worlds) predicts the same evolution equation holds all the time because we saw no evidence when or where or how it would deviate. Another (Copenhagen) postulates that something else happens too, but only in the circumstances where you could never test it. Since Copenhagen hid all of its differences in the things that can't be tested, they don't make different predictions. Since they don't make different predictions your whole question falls apart. Copenhagen has to group outcomes too in order to compare things to $mp_i$ and it does so the same way many worlds does. The only difference is Copenhagen pretends that only one branch "exists" but since branches are independent there is no testable consequence to that solipsism. Just like the computer's assertion that it is "special" is scientifically and objectively meaningless. If many worlds came first, people would criticize Copenhagen for making untestable nonpredictions. Since Copenhagen came first, it simply had to share the space with theories like many worlds that make the same predictions, or it has to be unreasonable about the superiority of the untestable parts of its theory. Just like the computers sitting next to each other could be unreasonable and each assert that they are "special" (with no testable predictions about that) and that the objectively identical computer isn't "special."
Calculates Akaike's information criterion (AIC) of the given estimated ARMA model (with correction for small sample sizes).

Syntax

ARMA_AIC( X, Order, mean, sigma, phi, theta)

X is the univariate time series data (one-dimensional array of cells (e.g. rows or columns)).
Order is the time order in the data series (i.e. the first data point's corresponding date (earliest date=1 (default), latest date=0)).

Order Description
1 ascending (the first data point corresponds to the earliest date) (default)
0 descending (the first data point corresponds to the latest date)

mean is the ARMA model mean (i.e. mu).
sigma is the standard deviation of the model's residuals/innovations.
phi are the parameters of the AR(p) component model (starting with the lowest lag).
theta are the parameters of the MA(q) component model (starting with the lowest lag).

Remarks

The underlying model is described here. Akaike's Information Criterion (AIC) is described here.
Warning: the ARMA_AIC() function is deprecated as of version 1.63; use the ARMA_GOF function instead.
The time series is homogeneous or equally spaced.
The time series may include missing values (e.g. #N/A) at either end.
The standard deviation (i.e. $\sigma$) of the ARMA model's residuals should be greater than zero.
Given a fixed data set, several competing models may be ranked according to their AIC, the model with the lowest AIC being the best.
The ARMA model has p+q+2 parameters, and it has independent and normally distributed residuals with constant variance.
Maximizing the log-likelihood function, the AICc function for an ARMA model becomes: $$\mathit{AICc}(p,q)= \ln(\hat\sigma^2(p,q))+\frac{2\times(p+q)}{T}$$ Where:
$T$ is the number of non-missing values in the time series.
$p$ is the order of the AR component model.
$q$ is the order of the MA component model.
$\hat\sigma$ is the standard deviation of the residuals.
The number of parameters in the input argument - phi - determines the order of the AR component.
The number of parameters in the input argument - theta - determines the order of the MA component.

Examples

Example 1:
=ARMA_AIC($B$2:$B$15,1,$D$3,$D$4,$D$5,$D$6) returns Akaike's Information Criterion (1046.59)
=ARMA_LLF($B$2:$B$15,1,$D$3,$D$4,$D$5,$D$6) returns the Log-Likelihood Function value (-519.095)
=ARMA_CHECK($D$3,$D$4,$D$5,$D$6) returns whether the ARMA model is stable (1)

Files Examples

References
D. S. G. Pollock; Handbook of Time Series Analysis, Signal Processing, and Dynamics; Academic Press; Har/Cdr edition (Nov 17, 1999), ISBN: 125609906
James Douglas Hamilton; Time Series Analysis; Princeton University Press; 1st edition (Jan 11, 1994), ISBN: 691042896
Tsay, Ruey S.; Analysis of Financial Time Series; John Wiley & Sons; 2nd edition (Aug 30, 2005), ISBN: 0-471-690740
Box, Jenkins and Reinsel; Time Series Analysis: Forecasting and Control; John Wiley & Sons; 4th edition (Jun 30, 2008), ISBN: 470272848
Walter Enders; Applied Econometric Time Series; Wiley; 4th edition (Nov 03, 2014), ISBN: 1118808568
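The quoted AICc formula is simple enough to sketch directly. This is my own illustration of the formula as stated above, not NumXL's actual implementation (which also estimates $\hat\sigma$ from the fitted model):

```python
import math

# Sketch (mine) of the formula quoted above for an ARMA(p, q) model:
#   AICc(p, q) = ln(sigma_hat^2) + 2*(p + q)/T
# where T is the number of non-missing observations and sigma_hat is the
# standard deviation of the residuals.
def arma_aicc(sigma: float, p: int, q: int, T: int) -> float:
    if sigma <= 0 or T <= 0:
        raise ValueError("sigma and T must be positive")
    return math.log(sigma ** 2) + 2 * (p + q) / T

# Lower AICc ranks a model as better; for equal residual variance,
# extra lags are penalized through the 2*(p+q)/T term.
assert arma_aicc(1.0, 1, 1, 100) < arma_aicc(1.0, 2, 2, 100)
```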
So is $\Theta$ undefined for insertion sort? This question contains a category error. It's like saying, "I know that Donald Trump has a height of at least 5 and at most 7. So are numbers undefined for Donald Trump?" $\Theta$ is notation for expressing the growth rate of mathematical functions. "Insertion sort" is not a mathematical function, so if you want to talk about $\Theta$ and insertion sort in the same sentence, you need to say what property of insertion sort you're measuring with a mathematical function that you wish to describe with $\Theta$. We measure the resource usage of algorithms in terms of the length of the input, which is usually denoted $n$. You've proposed a function which is the number of execution steps of insertion sort on some input. However, this is a function of the input itself, not of its length. Some inputs of length $n$ will take roughly $n$ steps to sort (I'm using "roughly" to hide constant factors), and some inputs of length $n$ will take roughly $n^2$ steps. So you can't write this as a function of $n$ at all – the number of steps required isn't just a function of the length of the input but, rather, it's a function of the whole input. Because the thing you're trying to measure isn't a function of the length of the input, you can't directly measure it using $\Theta$ at all. So we need to come up with a function that does depend only on the length of the input. Two natural functions are the best-case and worst-case number of execution steps. We know that, for an input of length $n$, the best case is that insertion sort finds that the input is already sorted, and in this case, it takes a linear number of steps. No more, no less, so we're entitled to say that the best-case running time of the algorithm is $\Theta(n)$. Similarly, we're entitled to say that the worst-case running time is $\Theta(n^2)$.
If the best and worst case were (asymptotically) the same (up to constant factors), then the running time would actually just be a function of the length of the input, so it would make sense to say that the running time was, e.g., $\Theta(n\log n)$. However, unless the running time really is a function of $n$, this is an abuse of notation. In the case of insertion sort, where the best- and worst-case running times are different, we can abuse notation a little harder and say the running time is $\Omega(n)$ and $O(n^2)$. This says that, for any (sufficiently large) input, the running time will be somewhere between $n$ and $n^2$ steps (up to constant factors) but, again, there is no actual function of $n$ that is "the running time." It would be more formal to say that the running time $T(x)$ for an input $x$ satisfies $c|x|\leq T(x)\leq c'|x|^2$ for large enough $|x|$ and some constants $c$ and $c'$.
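The distinction is easy to see empirically. Here is a quick sketch (a hypothetical step counter; the exact counts depend on what you choose to count as one "step") showing that the step count is a function of the input, not merely of its length:

```python
def insertion_sort_steps(a):
    """Sort a copy of `a` with insertion sort, returning the number of
    comparison/shift steps performed — a function of the *input*,
    not just of its length."""
    a = list(a)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        steps += 1                      # at least one comparison per element
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]             # shift an element right
            j -= 1
            steps += 1
        a[j + 1] = key
    return steps

n = 200
best = insertion_sort_steps(range(n))          # already sorted: ~n steps
worst = insertion_sort_steps(range(n, 0, -1))  # reversed: ~n^2 steps
```

Both inputs have the same length $n$, yet the step counts differ by a factor of roughly $n/2$ — which is exactly why no single function of $n$ is "the running time."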
Context: I have been trying to understand the genetic algorithm discussed in the papers Decomposition of unitary matrices for finding quantum circuits: Application to molecular Hamiltonians (Daskin & Kais, 2011) (PDF here) and Group Leaders Optimization Algorithm (Daskin & Kais, 2010). I'll try to summarize what I understood so far, and then state my queries.

Let's consider the example of the Toffoli gate in section III-A of the first paper. We know from other sources, such as this, that around $5$ two-qubit quantum gates are needed to simulate the Toffoli gate. So we arbitrarily choose a set of gates like $\{V, Z, S, V^{\dagger}\}$. We restrict ourselves to a maximum of $5$ gates and allow ourselves to use only the gates from the gate set $\{V, Z, S, V^{\dagger}\}$. Now we generate $25$ groups of $15$ random strings like:

1 3 2 0.0; 2 3 1 0.0; 3 2 1 0.0; 4 3 2 0.0; 2 1 3 0.0

In the above string of numbers, the first number in each gate block is the index number of the gate (i.e. $V = 1, Z = 2, S = 3, V^{\dagger} = 4$), the last number is the value of the angle in $[0,2\pi]$, and the middle integers are the target qubit and the control qubit respectively. There would be $374$ other such randomly generated strings. Our groups now look like this (in the image above) with $n=25$ and $p=15$.

The fitness of each string is proportional to the trace fidelity $\mathcal{F} = \frac{1}{N}|\operatorname{Tr}(U_aU_t^{\dagger})|$ where $U_a$ is the unitary matrix representation corresponding to any string we generate and $U_t$ is the unitary matrix representation of the 3-qubit Toffoli gate. The group leader in any group is the one having the maximum value of $\mathcal{F}$. Once we have the groups we'll follow the algorithm:

The Eq. (4) mentioned in the image is basically:

$$\text{new string} [i] = r_1 \times \text{old string}[i] + r_2 \times \text{leader string}[i] + r_3 \times \text{random string}[i]$$

(where $1 \leq i \leq 20$) s.t. $r_1+r_2+r_3 = 1$.
The $[i]$ represents the $i$-th number in the string; for example, in 1 3 2 0.0; 2 3 1 0.0; 3 2 1 0.0; 4 3 2 0.0; 2 1 3 0.0, the $6$-th element is 3. In this context, we take $r_1 = 0.8$ and $r_2,r_3 = 0.2$. That is, in each iteration, all the $375$ strings get mutated following the rule: for each string in each group, the individual elements (numbers) in the string get modified following Eq. (4).

Moreover, in addition to the mutation, in each iteration, for each group of the population, one-way crossover (also called parameter transfer) is done between a chosen random member from the group and a random member from a different random group. This operation is mainly replacing some random part of a member with the equivalent part of a random member from a different group. The amount of the transfer operation for each group is defined by a parameter called the transfer rate, which here is defined as $$\frac{4\times \text{max}_{\text{gates}}}{2} - 1$$ where the numerator is the number of variables forming a numeric string in the optimization.

Questions: When we are applying this algorithm to find the decomposition of a random gate, how do we know the number and type of elementary gates we need to take in our gate set? In the example above they took $\{V,Z,S,V^{\dagger}\}$. But I suspect that that choice was not completely arbitrary (?) Or could we have chosen something random like $\{X,Z,R_x,R_{zz},V\}$ too? Also, the fact that they used only $5$ gates in total isn't arbitrary either (?) So, could someone explain the logical reasoning we need to follow when choosing the gates for our gate set and choosing the number of gates to use in total? (It is mentioned in the papers that the maximum possible value of the number of gates is restricted to $20$ in this algorithm.) After the part (in "Context") discussing the selection of the gate set and number of gates, is my explanation/understanding (paragraph 3 onwards) of the algorithm correct?
I didn't quite understand the meaning of "parameter transfer rate". They say that $4\times \text{max}_{\text{gates}} - 2$ is the number of variables forming a numeric string in the optimization. What is $\text{max}_{\text{gates}}$ in this context: $5$ or $20$? Also, what exactly do they mean by the portion I italicized (number of variables forming a numeric string in the optimization)? How do we know when to terminate the program? Do we terminate it when any one of the group leaders crosses a desired value of the trace fidelity (say $0.99$)?
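For reference, the element-wise blending in Eq. (4) can be sketched as follows. This is a toy illustration under stated assumptions — the weights $r_1, r_2, r_3$ are drawn at random and normalized to sum to $1$ (one common choice; the text above fixes particular rates), and clipping mutated values back to valid gate/qubit indices is omitted:

```python
import random

def mutate(old, leader, rand, rng=random):
    """One GLOA-style mutation step (Eq. 4): blend each element of a
    member string with the group leader and a fresh random string,
    using weights r1 + r2 + r3 = 1."""
    r = [rng.random() for _ in range(3)]
    s = sum(r)
    r1, r2, r3 = (w / s for w in r)  # normalize so the weights sum to 1
    return [r1 * o + r2 * l + r3 * x for o, l, x in zip(old, leader, rand)]

rng = random.Random(0)
old    = [1, 3, 2, 0.0, 2, 3, 1, 0.0]   # toy string: (gate, target, control, angle) x 2
leader = [4, 3, 2, 1.5, 2, 1, 3, 0.7]
rand   = [rng.uniform(0, 4) for _ in old]
child  = mutate(old, leader, rand, rng)
```

Because the weights form a convex combination, each mutated element stays between the minimum and maximum of the three corresponding inputs.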
Suppose we are given a function $f$ and we want to calculate the surface area of $f$ rotated around a given line. The calculation of the surface area of revolution is related to the arc length calculation.

If the function $f$ is a straight line, other methods such as the surface area formulae for cylinders and conical frusta can be used. However, if $f$ is not linear, an integration technique must be used.

Recall the formula for the lateral surface area of a conical frustum:

$$A = 2\pi r l$$

where $r$ is the average radius and $l$ is the slant height of the frustum.

For $y = f(x)$ and $a \leq x \leq b$, we divide $[a,b]$ into subintervals with equal width $\Delta x$ and endpoints $x_0, x_1, \ldots, x_n$. We map each point $y_i = f(x_i)$ to a conical frustum of width $\Delta x$ and lateral surface area $A_i$. We can estimate the surface area of revolution with the sum

$$A = \sum_{i=0}^{n} A_i$$

As we divide $[a,b]$ into smaller and smaller pieces, the estimate gives a better value for the surface area.

Definition (Surface of Revolution)

The surface area of revolution of the curve $y = f(x)$ about a line for $a \leq x \leq b$ is defined to be

$$A = \lim_{n\to\infty} \sum_{i=0}^{n} A_i$$

The Surface Area Formula

Suppose $f$ is a continuous function on the interval $[a,b]$ and $r(x)$ represents the distance from $f(x)$ to the axis of rotation. Then the lateral surface area of revolution about a line is given by

$$A = 2\pi \int\limits_a^b r(x)\sqrt{1+f'(x)^2}\,dx$$

And in Leibniz notation

$$A = 2\pi \int\limits_a^b r(x)\sqrt{1+\left(\tfrac{dy}{dx}\right)^2}\,dx$$

Proof:

$$A = \lim_{n\to\infty}\sum_{i=1}^{n} A_i = \lim_{n\to\infty}\sum_{i=1}^{n} 2\pi r_i l_i = 2\pi\cdot\lim_{n\to\infty}\sum_{i=1}^{n} r_i l_i$$

As $n\to\infty$ and $\Delta x\to 0$, we know two things:

- the average radius $r_i$ of each conical frustum approaches a single value
- the slant height $l_i$ of each conical frustum equals an infinitesimal segment of arc length

From the arc length formula discussed in the previous section, we know that

$$l_i = \sqrt{1+f'(x_i)^2}$$

Therefore

$$A = 2\pi\cdot\lim_{n\to\infty}\sum_{i=1}^{n} r_i l_i = 2\pi\cdot\lim_{n\to\infty}\sum_{i=1}^{n} r_i\sqrt{1+f'(x_i)^2}\,\Delta x$$

Because of the definition of an integral,

$$\int\limits_a^b f(x)\,dx = \lim_{n\to\infty}\sum_{i=1}^{n} f(c_i)\,\Delta x_i$$

we can simplify the sigma operation to an integral:

$$A = 2\pi \int\limits_a^b r(x)\sqrt{1+f'(x)^2}\,dx$$

Or if $f$ is in terms of $y$ on the interval $[c,d]$:

$$A = 2\pi \int\limits_c^d r(y)\sqrt{1+f'(y)^2}\,dy$$
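As a sanity check on the formula: rotating $f(x)=x$ on $[0,1]$ about the $x$-axis (so $r(x)=f(x)=x$) produces a cone with lateral surface area $2\pi\int_0^1 x\sqrt{2}\,dx = \pi\sqrt{2}$, matching the cone formula $\pi r l$ with $r=1$ and slant height $l=\sqrt{2}$. A quick numerical sketch (midpoint rule; the function names are illustrative):

```python
import math

def surface_of_revolution(f, df, a, b, n=100_000):
    """Approximate A = 2*pi * integral_a^b f(x)*sqrt(1 + f'(x)^2) dx
    (rotation about the x-axis, so r(x) = f(x)) with the midpoint rule."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx           # midpoint of the i-th subinterval
        total += f(x) * math.sqrt(1 + df(x) ** 2) * dx
    return 2 * math.pi * total

# Cone from rotating y = x on [0, 1]: exact answer is pi * sqrt(2)
cone = surface_of_revolution(lambda x: x, lambda x: 1.0, 0.0, 1.0)
```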
Unitriangular matrix group:UT(3,p)

Latest revision as of 11:21, 22 August 2014

This article is about a family of groups with a parameter that is prime. For any fixed value of the prime, we get a particular group.

Contents: 1 Definition, 2 Families, 3 Elements, 4 Arithmetic functions, 5 Subgroups, 6 Linear representation theory, 7 Endomorphisms, 8 GAP implementation, 9 External links

Definition

Note that the case $p = 2$, where the group becomes dihedral group:D8, behaves somewhat differently from the general case. We note on the page all the places where the discussion does not apply to $p = 2$.

As a group of matrices

The group is the group, under matrix multiplication, of upper unitriangular matrices over the prime field:

$$\left \{ \begin{pmatrix} 1 & a_{12} & a_{13} \\ 0 & 1 & a_{23} \\ 0 & 0 & 1 \end{pmatrix} \mid a_{12},a_{13},a_{23} \in \mathbb{F}_p \right \}$$

The identity element is the identity matrix. Note that all addition and multiplication in these definitions is happening over the field $\mathbb{F}_p$.

In coordinate form

We may define the group as the set of triples $(a_{12},a_{13},a_{23})$ over the prime field $\mathbb{F}_p$, with the multiplication law and inverses given by:

$$(a_{12},a_{13},a_{23})(b_{12},b_{13},b_{23}) = (a_{12}+b_{12},\; a_{13}+b_{13}+a_{12}b_{23},\; a_{23}+b_{23})$$
$$(a_{12},a_{13},a_{23})^{-1} = (-a_{12},\; -a_{13}+a_{12}a_{23},\; -a_{23})$$

The matrix corresponding to the triple $(a_{12},a_{13},a_{23})$ is the unitriangular matrix with those entries above the diagonal.

Definition by presentation

The group can be defined by means of the following presentation:

$$\langle x,y,z \mid [x,y] = z,\; xz = zx,\; yz = zy,\; x^p = y^p = z^p = e \rangle$$

where $e$ denotes the identity element. These commutation relations resemble Heisenberg's commutation relations in quantum mechanics, and so the group is sometimes called a finite Heisenberg group. Generators $x,y,z$ correspond to matrices; in coordinate form one may take $x = (1,0,0)$, $y = (0,0,1)$, $z = (0,1,0)$, so that $[x,y]=z$ and $z$ is central. Note that in the above presentation, the generator $z$ is redundant, and the presentation can thus be rewritten as a presentation with only two generators $x$ and $y$.

As a semidirect product

This group of order $p^3$ can also be described as a semidirect product of the elementary abelian group of order $p^2$ by the cyclic group of order $p$, with the following action.
Denote the base of the semidirect product as ordered pairs of elements from . The action of the generator of the acting group is as follows: In this case, for instance, we can take the subgroup with as the elementary abelian subgroup of order , i.e., the elementary abelian subgroup of order is the subgroup: The acting subgroup of order can be taken as the subgroup with , i.e., the subgroup: Families These groups fall in the more general family of unitriangular matrix groups. The unitriangular matrix group can be described as the group of unipotent upper-triangular matrices in , which is also a -Sylow subgroup of the general linear group . This further can be generalized to where is the power of a prime . is the -Sylow subgroup of . These groups also fall into the general family of extraspecial groups. For any number of the form , there are two extraspecial groups of that order: an extraspecial group of "+" type and an extraspecial group of "-" type. is an extraspecial group of order and "+" type. The other type of extraspecial group of order , i.e., the extraspecial group of order and "-" type, is semidirect product of cyclic group of prime-square order and cyclic group of prime order. 
Elements Further information: element structure of unitriangular matrix group:UT(3,p) Summary Item Value number of conjugacy classes order Agrees with general order formula for : conjugacy class size statistics size 1 ( times), size ( times) orbits under automorphism group Case : size 1 (1 conjugacy class of size 1), size 1 (1 conjugacy class of size 1), size 2 (1 conjugacy class of size 2), size 4 (2 conjugacy classes of size 2 each) Case odd : size 1 (1 conjugacy class of size 1), size ( conjugacy classes of size 1 each), size ( conjugacy classes of size each) number of orbits under automorphism group 4 if 3 if is odd order statistics Case : order 1 (1 element), order 2 (5 elements), order 4 (2 elements) Case odd: order 1 (1 element), order ( elements) exponent 4 if if odd Conjugacy class structure Note that the characteristic polynomial of all elements in this group is , hence we do not devote a column to the characteristic polynomial. For reference, we consider matrices of the form: Nature of conjugacy class Jordan block size decomposition Minimal polynomial Size of conjugacy class Number of such conjugacy classes Total number of elements Order of elements in each such conjugacy class Type of matrix identity element 1 + 1 + 1 + 1 1 1 1 1 non-identity element, but central (has Jordan blocks of size one and two respectively) 2 + 1 1 , non-central, has Jordan blocks of size one and two respectively 2 + 1 , but not both and are zero non-central, has Jordan block of size three 3 if odd 4 if both and are nonzero Total (--) -- -- -- -- -- Arithmetic functions Compare and contrast arithmetic function values with other groups of prime-cube order at Groups of prime-cube order#Arithmetic functions For some of these, the function values are different when and/or when . These are clearly indicated below. 
Arithmetic functions taking values between 0 and 3 Function Value Explanation prime-base logarithm of order 3 the order is prime-base logarithm of exponent 1 the exponent is . Exception when , where the exponent is . nilpotency class 2 derived length 2 Frattini length 2 minimum size of generating set 2 subgroup rank 2 rank as p-group 2 normal rank as p-group 2 characteristic rank as p-group 1 Arithmetic functions of a counting nature Function Value Explanation number of conjugacy classes elements in the center, and each other conjugacy class has size number of subgroups when , when See subgroup structure of unitriangular matrix group:UT(3,p) number of normal subgroups See subgroup structure of unitriangular matrix group:UT(3,p) number of conjugacy classes of subgroups for , for See subgroup structure of unitriangular matrix group:UT(3,p) Subgroups Further information: Subgroup structure of unitriangular matrix group:UT(3,p) Note that the analysis here specifically does not apply to the case . For , see subgroup structure of dihedral group:D8. Table classifying subgroups up to automorphisms Automorphism class of subgroups Representative Isomorphism class Order of subgroups Index of subgroups Number of conjugacy classes Size of each conjugacy class Number of subgroups Isomorphism class of quotient (if exists) Subnormal depth (if subnormal) trivial subgroup trivial group 1 1 1 1 prime-cube order group:U(3,p) 1 center of unitriangular matrix group:UT(3,p) ; equivalently, given by . 
group of prime order 1 1 1 elementary abelian group of prime-square order 1 non-central subgroups of prime order in unitriangular matrix group:UT(3,p) Subgroup generated by any element with at least one of the entries nonzero group of prime order -- 2 elementary abelian subgroups of prime-square order in unitriangular matrix group:UT(3,p) join of center and any non-central subgroup of prime order elementary abelian group of prime-square order 1 group of prime order 1 whole group all elements unitriangular matrix group:UT(3,p) 1 1 1 1 trivial group 0 Total (5 rows) -- -- -- -- -- -- -- Tables classifying isomorphism types of subgroups Group name GAP ID Occurrences as subgroup Conjugacy classes of occurrence as subgroup Occurrences as normal subgroup Occurrences as characteristic subgroup Trivial group 1 1 1 1 Group of prime order 1 1 Elementary abelian group of prime-square order 0 Prime-cube order group:U3p 1 1 1 1 Total -- Table listing number of subgroups by order Group order Occurrences as subgroup Conjugacy classes of occurrence as subgroup Occurrences as normal subgroup Occurrences as characteristic subgroup 1 1 1 1 1 1 0 1 1 1 1 Total Linear representation theory Further information: linear representation theory of unitriangular matrix group:UT(3,p) Item Value number of conjugacy classes (equals number of irreducible representations over a splitting field) . See number of irreducible representations equals number of conjugacy classes, element structure of unitriangular matrix group of degree three over a finite field degrees of irreducible representations over a splitting field (such as or ) 1 (occurs times), (occurs times) sum of squares of degrees of irreducible representations (equals order of the group) see sum of squares of degrees of irreducible representations equals order of group lcm of degrees of irreducible representations condition for a field (characteristic not equal to ) to be a splitting field The polynomial should split completely. 
For a finite field of size , this is equivalent to . field generated by character values, which in this case also coincides with the unique minimal splitting field (characteristic zero) Field where is a primitive root of unity. This is a degree extension of the rationals. unique minimal splitting field (characteristic ) The field of size where is the order of mod . degrees of irreducible representations over the rational numbers 1 (1 time), ( times), (1 time) Orbits over a splitting field under the action of the automorphism group Case : Orbit sizes: 1 (degree 1 representation), 1 (degree 1 representation), 2 (degree 1 representations), 1 (degree 2 representation) Case odd : Orbit sizes: 1 (degree 1 representation), (degree 1 representations), (degree representations) number: 4 (for ), 3 (for odd ) Orbits over a splitting field under the multiplicative action of one-dimensional representations Orbit sizes: (degree 1 representations), and orbits of size 1 (degree representations) Endomorphisms Automorphisms The automorphisms essentially permute the subgroups of order containing the center, while leaving the center itself unmoved. GAP implementation GAP ID For any prime , this group is the third group among the groups of order . Thus, for instance, if , the group is described using GAP's SmallGroup function as: SmallGroup(343,3) Note that we don't need to compute ; we can also write this as: SmallGroup(7^3,3) As an extraspecial group For any prime , we can define this group using GAP's ExtraspecialGroup function as: ExtraspecialGroup(p^3,'+') For , it can also be constructed as: ExtraspecialGroup(p^3,p) where the argument indicates that it is the extraspecial group of exponent . For instance, for : ExtraspecialGroup(5^3,5) Other descriptions Description Functions used SylowSubgroup(GL(3,p),p) SylowSubgroup, GL SylowSubgroup(SL(3,p),p) SylowSubgroup, SL SylowSubgroup(PGL(3,p),p) SylowSubgroup, PGL SylowSubgroup(PSL(3,p),p) SylowSubgroup, PSL
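The coordinate-form multiplication and inverse laws from the Definition section are easy to check computationally. A small sketch (illustrative, not part of the wiki page) verifying, for $p = 5$, the order $p^3$, two-sided inverses, the order-$p$ center, and non-abelianness:

```python
from itertools import product

def heisenberg(p):
    """UT(3, p) in coordinate form: triples (a12, a13, a23) over F_p with
    (a)(b) = (a12+b12, a13+b13+a12*b23, a23+b23)   (all mod p)
    (a)^-1 = (-a12, -a13+a12*a23, -a23)."""
    elems = list(product(range(p), repeat=3))
    def mul(a, b):
        return ((a[0] + b[0]) % p,
                (a[1] + b[1] + a[0] * b[2]) % p,
                (a[2] + b[2]) % p)
    def inv(a):
        return ((-a[0]) % p, (-a[1] + a[0] * a[2]) % p, (-a[2]) % p)
    return elems, mul, inv

p = 5
elems, mul, inv = heisenberg(p)
e = (0, 0, 0)
# the center consists of the triples (0, a13, 0), so it has order p
center = [z for z in elems if all(mul(z, g) == mul(g, z) for g in elems)]
```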
For a general group of order $pq$ with $p$ and $q$ prime, there are very few possibilities (though you need the Sylow theorems to know this). The fact is, for $p>q$ and $G$ a group of order $pq$, we must have$$G\cong C_p\rtimes C_q$$where the semi-direct product is defined in terms of some homomorphism $$\Phi:C_q\to\mathrm{Aut}(C_p)\cong C_{p-1}.$$ If $q$ does not divide $p-1$, this homomorphism must be trivial and you get $G\cong C_p\times C_q\cong C_{pq}$. When $\Phi$ is nontrivial, we can write $\Phi(c_q^k)=\phi_k$. Then, the product structure on $C_p\rtimes C_q$ is given by$$(c_p^a,c_q^b)(c_p^r,c_q^s)=(c_p^a\phi_b(c_p)^r,c_q^{b+s}).$$It is a nice exercise to check that this is a group structure, and $C_p$ is normal. It is also useful to describe the isomorphism $S_3\to C_3\rtimes C_2$ explicitly. EDIT: As you are requesting more detail, here you go: Let $G$ be a group of order $pq$ with $p>q$ primes. Using Cauchy's theorem there are (cyclic) subgroups $P=\langle x\mid x^p=1\rangle$ and $Q=\langle y\mid y^q=1\rangle$ of orders $p$ and $q$, respectively. It follows from the Sylow theorems that $P\lhd G$ is normal (since all Sylow $p$-subgroups are conjugate in $G$ and the number $n_p$ of Sylow $p$-subgroups must divide $q$ and satisfies $n_p\equiv 1$ (mod $p$)). With this taken as given, it is straightforward to prove that $G\cong P\rtimes Q$, where the semidirect product is defined in terms of a homomorphism $\phi:Q\to\mathrm{Aut}(P)$. We first note that since $|P\cap Q|$ divides both $p$ and $q$ we must have $|P\cap Q|=1$. It follows that $$|PQ|=\frac{|P||Q|}{|P\cap Q|}=pq=|G|$$Hence, $PQ=G$. Now, since $Q=\langle y\rangle$ normalizes $P=\langle x\rangle$, the map $\phi_k:P\to P$ given by $\phi_k(x)=y^kxy^{-k}$ is well defined. Moreover, it is clearly an automorphism with inverse $\phi_{-k}$.
Finally, since $\phi_{k}\phi_j=\phi_{k+j}$, the map $y^k\mapsto\phi_k$ defines a homomorphism$$\phi:Q\to \mathrm{Aut}(P).$$ As above, we define $P\rtimes Q$ to be $P\times Q$ as a set, with multiplication$$(x^i,y^j)(x^k,y^l)=(x^i\phi_j(x^k),y^{j+l}).$$Of course, one needs to verify that this is indeed a group. The identity is $(1,1)$, and $(x^k,y^l)^{-1}=(\phi_{-l}(x^{-k}),y^{-l})$. Associativity is tedious but true. Define a map $\psi: P\rtimes Q\to G$ by $\psi(x^i,y^j)=x^iy^j$. The map $\psi$ is surjective since $PQ=G$, and it is injective because $|P\rtimes Q|=pq=|G|$. To see that it is a homomorphism we compute\begin{align*}\psi((x^i,y^j)(x^k,y^l))&=\psi(x^i\phi_j(x^k),y^{j+l})\\&=x^i\phi_j(x^k)y^{j+l}\\&=x^i(y^jx^ky^{-j})y^{j+l}\\&=x^iy^jx^ky^l=\psi(x^i,y^j)\psi(x^k,y^l).\end{align*}Hence, $\psi$ is an isomorphism as promised. Now, either the homomorphism $\phi:Q\to\mathrm{Aut}(P)$ is trivial or it is not. If it is trivial, then $$G\cong P\rtimes Q=P\times Q\cong C_p\times C_q\cong C_{pq}.$$If the homomorphism is nontrivial, then $G$ has the following presentation:$$G = \langle x,y\mid x^p=1=y^q, yx=x^ny\rangle$$where $n\in\mathbb{Z}$ satisfies $n\not\equiv1$ (mod $p$), but $n^q\equiv 1$ (mod $p$). (To see this note that $yxy^{-1}=x^n$ for some $n\not\equiv_p 1$, but $x=y^qxy^{-q}=x^{n^q}$.) This works for any pair of primes with $q|(p-1)$, not just $p=3$. An example: $p=11$, $q=5$. Take $n=3$ so we have$$G=\langle x,y\mid x^{11}=1,y^5=1,yx=x^3y\rangle.$$This group has order 55 and you can compute\begin{align*}yxy^{-1}&=x^3\\yx^3y^{-1}&=(yxy^{-1})^3=x^9\\yx^{9}y^{-1}&=x^{27}=x^5\\yx^5y^{-1}&=x^{15}=x^4\\yx^4y^{-1}&=x^{12}=x\end{align*}
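The order-55 example can also be verified computationally. A small sketch representing elements as pairs $(i, j) \leftrightarrow x^i y^j$, with the multiplication induced by $y x y^{-1} = x^3$, i.e. $(x^i y^j)(x^k y^l) = x^{i + 3^j k} y^{j+l}$:

```python
P, Q, N = 11, 5, 3   # x^11 = y^5 = 1, y x y^{-1} = x^3; note 3^5 = 1 (mod 11)

def mul(a, b):
    """(x^i y^j)(x^k y^l) = x^{i + k*3^j} y^{j+l}, exponents mod 11 and 5."""
    i, j = a
    k, l = b
    return ((i + k * pow(N, j, P)) % P, (j + l) % Q)

def power(g, n):
    """Repeated multiplication starting from the identity (0, 0)."""
    r = (0, 0)
    for _ in range(n):
        r = mul(r, g)
    return r

elems = [(i, j) for i in range(P) for j in range(Q)]
x, y = (1, 0), (0, 1)
```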
Forget spending hours tweaking in Keynote. Get your ideas across quickly and stylishly. Built exclusively for engineers. Enter your email below to join the beta. Ultradeck is a developer-focused, web-based app and command-line tool that makes it incredibly fast and easy to create gorgeous presentations using only Markdown. Blow the socks off of your audience by choosing from a growing collection of amazing looking themes and colors. All of Ultradeck's colors and themes have been meticulously curated to look and be awesome, much like yourself! Under the hood, Ultradeck slides are nothing but HTML and CSS. Augment your slides by adding custom HTML, CSS, Javascript charts, and anything else available in a browser. The possibilities are limitless. You don't need to use Ultradeck's UI to build decks. With the open-source Ultradeck CLI tool, you can seamlessly sync any markdown file and it will update in realtime. Create new decks quickly and seamlessly, straight from the command line! Ultradeck supports KaTeX notation, which makes it a perfect choice for presenting to your buddies at Math Club! Ultradeck is meant for showing code. It auto-highlights tons of languages and makes code samples look amazing. Add your own background images or a beautiful image from Unsplash. It's incredibly easy to quickly make fantastic looking slides with only markdown. It's geared towards developers, and allows you to iterate on your deck quickly. Once done, sharing your deck with others is effortless. 
![filter](building.png)

#### Making _amazing_ slides with
# Ultradeck
#### A super simple guide

![](tent.png)

## Features

* Create amazing looking slides with markdown
* Use the amazing `ultradeck` command-line tool to use your own editor
* Iterate quickly + efficiently to get your idea across

![](sun.png)

# How to be awesome
## A simple, 3-step process for maximizing your awesomeness

## Fun with javascript!

```javascript
[1, 2, 3] + [4, 5, 6]
// call toString()
[1, 2, 3].toString() + [4, 5, 6].toString()
// concatenation
'1,2,3' + '4,5,6'
// -> '1,2,34,5,6'
```

*Important Equation:*

# $\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}$