Group cohomology of dihedral group:D8
Latest revision as of 00:27, 29 May 2013

This article gives specific information, namely, group cohomology, about a particular group, namely: dihedral group:D8.
View group cohomology of particular groups | View other specific information about dihedral group:D8

==Family contexts==

{| class="sortable" border="1"
! Family name !! Parameter value !! Information on group cohomology of family
|-
| dihedral group of degree <math>n</math>, order <math>2n</math> || degree <math>4</math>, order <math>8</math> || group cohomology of dihedral groups
|}

==Homology groups for trivial group action==

FACTS TO CHECK AGAINST (homology group for trivial group action):
* First homology group: first homology group for trivial group action equals tensor product with abelianization
* Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization | Hopf's formula for Schur multiplier
* General: universal coefficients theorem for group homology | homology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group homology

===Over the integers===

The homology groups over the integers are given as follows (this is the specialization of the formula over an abelian group, below, to <math>M = \mathbb{Z}</math>):

<math>H_q(D_8;\mathbb{Z}) = \left\lbrace \begin{array}{rl} \mathbb{Z}, & q = 0 \\ (\mathbb{Z}/2\mathbb{Z})^{(q+3)/2}, & q \equiv 1 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{q/2}, & q \equiv 2 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{(q+1)/2} \oplus \mathbb{Z}/4\mathbb{Z}, & q \equiv 3 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{q/2}, & q \equiv 0 \pmod 4, q > 0 \end{array}\right.</math>

The first few homology groups are given below:

{| class="sortable" border="1"
! <math>q</math> !! <math>0</math> !! <math>1</math> !! <math>2</math> !! <math>3</math> !! <math>4</math> !! <math>5</math> !! <math>6</math> !! <math>7</math>
|-
| <math>H_q</math> || <math>\mathbb{Z}</math> || <math>(\mathbb{Z}/2\mathbb{Z})^2</math> || <math>\mathbb{Z}/2\mathbb{Z}</math> || <math>(\mathbb{Z}/2\mathbb{Z})^2 \oplus \mathbb{Z}/4\mathbb{Z}</math> || <math>(\mathbb{Z}/2\mathbb{Z})^2</math> || <math>(\mathbb{Z}/2\mathbb{Z})^4</math> || <math>(\mathbb{Z}/2\mathbb{Z})^3</math> || <math>(\mathbb{Z}/2\mathbb{Z})^4 \oplus \mathbb{Z}/4\mathbb{Z}</math>
|}

===Over an abelian group===

The homology groups over an abelian group <math>M</math> are given as follows:

<math>H_q(D_8;M) = \left\lbrace \begin{array}{rl} M, & q = 0 \\ (M/2M)^{(q + 3)/2} \oplus (\operatorname{Ann}_M(2))^{(q - 1)/2}, & q \equiv 1 \pmod 4\\ (M/2M)^{q/2} \oplus (\operatorname{Ann}_M(2))^{(q + 2)/2}, & q \equiv 2 \pmod 4 \\ (M/2M)^{(q + 1)/2} \oplus M/4M \oplus (\operatorname{Ann}_M(2))^{(q - 1)/2}, & q \equiv 3 \pmod 4 \\ (M/2M)^{q/2} \oplus (\operatorname{Ann}_M(2))^{q/2} \oplus \operatorname{Ann}_M(4), & q \equiv 0 \pmod 4, q > 0 \end{array}\right.</math>

The first few homology groups with coefficients in an abelian group <math>M</math> are given below:

{| class="sortable" border="1"
! <math>q</math> !! <math>0</math> !! <math>1</math> !! <math>2</math> !! <math>3</math> !! <math>4</math> !! <math>5</math> !! <math>6</math> !! <math>7</math>
|-
| <math>H_q</math> || <math>M</math> || <math>(M/2M)^2</math> || <math>M/2M \oplus (\operatorname{Ann}_M(2))^2</math> || <math>(M/2M)^2 \oplus M/4M \oplus \operatorname{Ann}_M(2)</math> || <math>(M/2M)^2 \oplus (\operatorname{Ann}_M(2))^2 \oplus \operatorname{Ann}_M(4)</math> || <math>(M/2M)^4 \oplus (\operatorname{Ann}_M(2))^2</math> || <math>(M/2M)^3 \oplus (\operatorname{Ann}_M(2))^4</math> || <math>(M/2M)^4 \oplus M/4M \oplus (\operatorname{Ann}_M(2))^3</math>
|}

==Cohomology groups for trivial group action==

FACTS TO CHECK AGAINST (cohomology group for trivial group action):
* First cohomology group: first cohomology group for trivial group action is naturally isomorphic to group of homomorphisms
* Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization
* In general: dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficients to homology with coefficients in the integers | cohomology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group cohomology

===Over the integers===

The cohomology groups over the integers are given as follows (obtained from the integral homology via the dual universal coefficients theorem):

<math>H^q(D_8;\mathbb{Z}) = \left\lbrace \begin{array}{rl} \mathbb{Z}, & q = 0 \\ (\mathbb{Z}/2\mathbb{Z})^{(q-1)/2}, & q \equiv 1 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{(q+2)/2}, & q \equiv 2 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{(q-1)/2}, & q \equiv 3 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{q/2} \oplus \mathbb{Z}/4\mathbb{Z}, & q \equiv 0 \pmod 4, q > 0 \end{array}\right.</math>

The first few cohomology groups are given below:

{| class="sortable" border="1"
! <math>q</math> !! <math>0</math> !! <math>1</math> !! <math>2</math> !! <math>3</math> !! <math>4</math> !! <math>5</math> !! <math>6</math> !! <math>7</math>
|-
| <math>H^q</math> || <math>\mathbb{Z}</math> || <math>0</math> || <math>(\mathbb{Z}/2\mathbb{Z})^2</math> || <math>\mathbb{Z}/2\mathbb{Z}</math> || <math>(\mathbb{Z}/2\mathbb{Z})^2 \oplus \mathbb{Z}/4\mathbb{Z}</math> || <math>(\mathbb{Z}/2\mathbb{Z})^2</math> || <math>(\mathbb{Z}/2\mathbb{Z})^4</math> || <math>(\mathbb{Z}/2\mathbb{Z})^3</math>
|}

===Over an abelian group===

The cohomology groups over an abelian group <math>M</math> are given as follows:

<math>H^q(D_8;M) = \left\lbrace \begin{array}{rl} M, & q = 0 \\ (M/2M)^{(q-1)/2} \oplus (\operatorname{Ann}_M(2))^{(q+3)/2}, & q \equiv 1 \pmod 4 \\ (M/2M)^{(q+2)/2} \oplus (\operatorname{Ann}_M(2))^{q/2}, & q \equiv 2 \pmod 4 \\ (M/2M)^{(q-1)/2} \oplus (\operatorname{Ann}_M(2))^{(q+1)/2} \oplus \operatorname{Ann}_M(4), & q \equiv 3 \pmod 4 \\ (M/2M)^{q/2} \oplus M/4M \oplus (\operatorname{Ann}_M(2))^{q/2}, & q \equiv 0 \pmod 4, q > 0 \end{array}\right.</math>

The first few cohomology groups with coefficients in an abelian group <math>M</math> are:

{| class="sortable" border="1"
! <math>q</math> !! <math>0</math> !! <math>1</math> !! <math>2</math> !! <math>3</math> !! <math>4</math> !! <math>5</math> !! <math>6</math> !! <math>7</math>
|-
| <math>H^q</math> || <math>M</math> || <math>(\operatorname{Ann}_M(2))^2</math> || <math>(M/2M)^2 \oplus \operatorname{Ann}_M(2)</math> || <math>M/2M \oplus (\operatorname{Ann}_M(2))^2 \oplus \operatorname{Ann}_M(4)</math> || <math>(M/2M)^2 \oplus M/4M \oplus (\operatorname{Ann}_M(2))^2</math> || <math>(M/2M)^2 \oplus (\operatorname{Ann}_M(2))^4</math> || <math>(M/2M)^4 \oplus (\operatorname{Ann}_M(2))^3</math> || <math>(M/2M)^3 \oplus (\operatorname{Ann}_M(2))^4 \oplus \operatorname{Ann}_M(4)</math>
|}

==Cohomology ring with coefficients in integers==

PLACEHOLDER FOR INFORMATION TO BE FILLED IN

==Second cohomology groups and extensions==

===Schur multiplier===

The Schur multiplier of dihedral group:D8 is cyclic group:Z2. This has implications for the projective representation theory of dihedral group:D8.

===Schur covering groups===

The three possible Schur covering groups for dihedral group:D8 are: dihedral group:D16, semidihedral group:SD16, and generalized quaternion group:Q16. For more, see second cohomology group for trivial group action of D8 on Z2, where these correspond precisely to the stem extensions.

===Second cohomology groups for trivial group action===

{| class="sortable" border="1"
! Group acted upon !! Order !! Second part of GAP ID !! Second cohomology group for trivial group action (as an abstract group) !! Order of second cohomology group !! Extensions !! Number of extensions up to pseudo-congruence, i.e., number of orbits under automorphism group actions !! Cohomology information
|-
| cyclic group:Z2 || 2 || 1 || elementary abelian group:E8 || 8 || direct product of D8 and Z2, SmallGroup(16,3), nontrivial semidirect product of Z4 and Z4, dihedral group:D16, semidihedral group:SD16, generalized quaternion group:Q16 || 6 || second cohomology group for trivial group action of D8 on Z2
|-
| cyclic group:Z4 || 4 || 1 || elementary abelian group:E8 || 8 || direct product of D8 and Z4, nontrivial semidirect product of Z4 and Z8, SmallGroup(32,5), central product of D16 and Z4, SmallGroup(32,15), wreath product of Z4 and Z2 || 6 || second cohomology group for trivial group action of D8 on Z4
|-
| Klein four-group || 4 || 2 || elementary abelian group:E64 || 64 || || 11 || second cohomology group for trivial group action of D8 on V4
|}

==Baer invariants==

{| class="sortable" border="1"
! Subvariety of the variety of groups !! General name of Baer invariant !! Value of Baer invariant for this group
|-
| abelian groups || Schur multiplier || cyclic group:Z2
|-
| groups of nilpotency class at most two || 2-nilpotent multiplier ||
|-
| groups of nilpotency class at most three || 3-nilpotent multiplier ||
|-
| any variety of groups containing all groups of nilpotency class at most three || -- ||
|}

==GAP implementation==

===Computation of integral homology===

The homology groups for trivial group action with coefficients in <math>\mathbb{Z}</math> can be computed in GAP using the GroupHomology function in the HAP package, which can be loaded by the command LoadPackage("hap"); if it is installed but not loaded. The function outputs the orders of cyclic groups such that the homology or cohomology group is the direct product of these (more technically, it outputs the elementary divisors of the homology or cohomology group that we are trying to compute). Here are computations of the first few homology groups:

====Computation of first homology group====

gap> GroupHomology(DihedralGroup(8),1);
[ 2, 2 ]

The way this is to be interpreted is that the first homology group (the abelianization) is the direct sum of cyclic groups of the orders listed, so in this case we get that <math>H_1</math> is <math>\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}</math>, which is the Klein four-group.

====Computation of second homology group====

gap> GroupHomology(DihedralGroup(8),2);
[ 2 ]

====Computation of first few homology groups====

To compute the first eight homology groups, do:

gap> List([1,2,3,4,5,6,7,8],i->[i,GroupHomology(DihedralGroup(8),i)]);
[ [ 1, [ 2, 2 ] ], [ 2, [ 2 ] ], [ 3, [ 2, 2, 4 ] ], [ 4, [ 2, 2 ] ],
  [ 5, [ 2, 2, 2, 2 ] ], [ 6, [ 2, 2, 2 ] ], [ 7, [ 2, 2, 2, 2, 4 ] ],
  [ 8, [ 2, 2, 2, 2 ] ] ]
The best way to see the differences is to use the Thevenin equivalent for a resistor divider set up between two ideal (no source resistance of their own) voltage sources. Often, this is just some supply voltage and ground. Let's look at the obvious case: (schematic: a resistor divider on the left and its Thevenin equivalent on the right, created using CircuitLab) The left side has a resistor divider between an ideal voltage source and ground and, without any load hanging off of \$V_\text{OUT}\$ (it's just open, as you can see), the voltage is easy to compute as \$V_\text{OUT}=V_\text{IN}\cdot\frac{R_2}{R_1+R_2}\$. However, what's missing from that simple calculation is the fact that \$V_\text{OUT}\$ is no longer ideal. It now has a source resistance that makes it non-ideal. That's because any current required by a load (currently not present) attached between \$V_\text{OUT}\$ and ground must cause an additional voltage drop across \$R_1\$, and that changes the voltage that the load experiences. So, again, \$V_\text{OUT}\$ is no longer ideal. The effective non-ideality of \$V_\text{OUT}\$ is expressed by first setting up a fictional \$V_\text{TH}\$ which is equal to the unloaded \$V_\text{OUT}\$ and then inserting a series resistor between this fictional \$V_\text{TH}\$ and \$V_\text{OUT}\$. This is shown on the right side, above. This resistor that represents the non-ideality of the voltage source is \$R_\text{TH}=\frac{R_1\cdot R_2}{R_1+R_2}\$. The upshot of all this is that you now have a simpler way to view the resistor divider and you can easily see exactly how non-ideal it is by simply examining the value of \$R_\text{TH}\$. The closer this value is to zero, the more ideal is the voltage source. But the price you pay for getting closer to zero is a rapidly increasing power dissipation wasted in the resistor divider itself. Just to completely generalize the above, let's look at a resistor divider that sits between two different ideal voltage sources, where one is NOT zero volts.
(That's just an arbitrary reference point, anyway.) (schematic: the same divider placed between two ideal voltage sources, created using CircuitLab) The only difference here is that now both voltages can be non-zero. In this case, the only new computation is the more general version: \$V_\text{TH}=\frac{V_\text{B}\cdot R_1+V_\text{A}\cdot R_2}{R_1+R_2}\$. That reduces to the equation I gave earlier, above, when \$V_\text{B}=0\:\text{V}\$. The choice of resistor values will depend on the range of load impedances you want to allow attached to \$V_\text{OUT}\$ and how much voltage variation your loads can tolerate. For example, suppose you have a power supply rail of \$5\:\text{V}\$ and want to use a voltage divider to create a voltage source at \$3.3\:\text{V}\$. Suppose also that the maximum current required by the device you'll attach to \$V_\text{OUT}\$ is \$10\:\text{mA}\$. Suppose that the device must not experience more than \$3.6\:\text{V}\$ nor less than \$3.1\:\text{V}\$ or else it won't work properly. And finally that the worst-case minimum current required by the device is \$100\:\mu\text{A}\$. Given these specifications, we want a worst-case \$\Delta V=3.6\:\text{V}-3.1\:\text{V}=500\:\text{mV}\$ with a worst-case current variation of \$\Delta I=10\:\text{mA}-100\:\mu\text{A}=9.9\:\text{mA}\$. This suggests an effective source impedance of \$R_\text{TH}=R_\text{SRC}=\frac{500\:\text{mV}}{9.9\:\text{mA}}\approx 50.5\:\Omega\$. You now have two equations and two unknowns: $$\begin{align*}50.5\:\Omega &= \frac{R_1\cdot R_2}{R_1+R_2}\\\\5\:\text{V}\cdot\frac{R_2}{R_1+R_2} &=3.6\:\text{V}+100\:\mu\text{A}\cdot 50.5\:\Omega\end{align*}$$ Roughly speaking, you'd need \$R_1\approx 70\:\Omega\$ and \$R_2\approx 181\:\Omega\$. Note that just operating this divider requires \$\frac{\left(5\:\text{V}\right)^2}{70\:\Omega+181\:\Omega}\approx 100\:\text{mW}\$. (Also note that the output voltage, if the device didn't draw any current at all, might reach about \$5\frac12 \:\text{mV}\$ above the maximum \$3.6\:\text{V}\$ spec, which may be acceptable.)
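The two simultaneous equations above can be checked numerically. Here's a quick sketch in Python (the numbers are the ones from the example; the variable names are mine):

```python
# Divider design from the example:
#   R_th = R1*R2/(R1+R2) = 50.5 ohm
#   V_in * R2/(R1+R2) = 3.6 V + 100 uA * R_th
V_in = 5.0
R_th = 0.5 / 9.9e-3            # 500 mV swing / 9.9 mA swing ~ 50.5 ohm
V_th = 3.6 + 100e-6 * R_th     # unloaded output needed at minimum load current

ratio = V_th / V_in            # ratio = R2/(R1+R2)
R1 = R_th / ratio              # since R_th = R1 * R2/(R1+R2) = R1 * ratio
R2 = R1 * ratio / (1 - ratio)  # solve ratio = R2/(R1+R2) for R2

print(round(R1, 1), round(R2, 1))   # 70.0 181.0
```

Plugging the results back in confirms both the 50.5 Ω source resistance and the unloaded output of about 3.605 V.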
Intuition: A way to see what is going on is to use the affine approximation of $e^x$ around $0$: $$e^u \simeq e^0 + \exp'(0)\, u = 1 + u$$ (this can be made formal by Taylor approximations to order $1$, for instance). This implies that your quantity is roughly $\left(x+ 1+ \frac{x}{3}\right)^{3/x} = \left(1+ \frac{4x}{3}\right)^{3/x}$, where you recognize, setting $t = \frac{3}{x}\to \infty$, the limit $$\left(1+\frac{4}{t}\right)^t \xrightarrow[t\to\infty]{} e^4.$$ The only key is to make this first approximation $\simeq$ rigorous, which is done below. An approach based on Taylor expansions (but which requires no knowledge of them besides the Landau notation $o(\cdot)$, justifying what is needed as we go): Start (as often when you have both a base and an exponent depending on $x$) by rewriting it in exponential form: $$\left(x+e^{\frac{x}{3}}\right)^\frac{3}{x} = e^{\frac{3}{x}\ln\left(x+e^{\frac{x}{3}}\right)} $$ Now, when $u\to 0$, we have $\frac{e^u-1}{u}\to \exp^\prime(0) = e^0 = 1$, so that $e^u = 1+u + o(u)$; which gives $$x+e^{\frac{x}{3}} = x+1+ \frac{x}{3} + o(x) = 1+\frac{4}{3}x + o(x).$$ Similarly, since $\frac{\ln(1+u)}{u}\xrightarrow[u\to 0]{} 1$, we have $\ln(1+u) = u + o(u)$. Combining the two, we get $$\ln\left(x+e^{\frac{x}{3}}\right) = \ln\left(1+\frac{4}{3}x + o(x)\right) = \frac{4}{3}x + o(x).$$ Putting it together, $$\frac{3}{x}\ln\left(x+e^{\frac{x}{3}}\right) = \frac{3}{x}\left(\frac{4}{3}x + o(x)\right) = 4 + o(1) \xrightarrow[x\to 0]{} 4$$ and, by continuity of $\exp$, $$e^{\frac{3}{x}\ln\left(x+e^{\frac{x}{3}}\right)} \xrightarrow[x\to 0]{} e^4.$$
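The limit is also easy to check numerically. Here's a small sketch (the `log1p`/`expm1` trick is mine, used to keep precision for tiny $x$):

```python
import math

# Numerically check that (x + e^(x/3))^(3/x) -> e^4 as x -> 0.
def f(x):
    # log1p(expm1(x/3) + x) = ln(x + e^(x/3)), computed without cancellation
    return math.exp((3.0 / x) * math.log1p(x + math.expm1(x / 3.0)))

for x in (1e-2, 1e-4, 1e-6):
    print(x, f(x))

print(math.exp(4))  # ~ 54.598
```

The printed values approach $e^4 \approx 54.598$ as $x$ shrinks, matching the analysis above.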
For any finite cyclic group $H$ with generator $t$, an automorphism of $H$ is completely determined by its effect on $t$ (which it must take to $t^k$ for some $k$ prime to the order of $t$). Composing two such automorphisms multiplies the corresponding exponents, and that multiplication is commutative, so Aut($H$) is abelian. Next we show that if $H$ is any subgroup of $G$, then $N_G(H)/C_G(H)$ is isomorphic to a subgroup of Aut($H$). Define $\theta:N_G(H)\to$ Aut($H$) by $\theta(x)(h)=x^{-1}hx$. Evidently it has kernel $C_G(H)$, so the claim follows. Applying these two observations to $H=G''$, where $G''$ is cyclic, we conclude that $N_G(G'')/C_G(G'')$ is isomorphic to a subgroup of an abelian group and hence abelian. Finally, repeated application of $(xy)^g=x^gy^g$ shows that $[x,y]^g=[x^g,y^g]$, and repeated application of that shows that if $k\in G''$ and $g\in G$, then $k^g\in G''$, and so $N_G(G'')=G$. Take any $x,y\in G$. Then, since $G/C_G(G'')=N_G(G'')/C_G(G'')$ is abelian, $xyC_G(G'')=yxC_G(G'')$ and hence $x^{-1}y^{-1}xyC_G(G'')=C_G(G'')$, so $[x,y]\in C_G(G'')$ and hence $G'\subseteq C_G(G'')$. In other words, any element of $G'$ commutes with any element of $G''$, and so $G''\subseteq Z(G')$.
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A \times A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all the facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers. But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
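The two adjunction facts above are easy to machine-check on a small example. Here's a sketch in Python, using the powerset of a three-element set ordered by inclusion (so join is union and meet is intersection; the choice of example is mine):

```python
from itertools import chain, combinations

# The powerset of {1, 2, 3}, ordered by inclusion (frozenset's <= operator).
base = [1, 2, 3]
elements = [frozenset(s) for s in
            chain.from_iterable(combinations(base, r) for r in range(4))]

# Join is left adjoint to the diagonal:  a v a' <= b  iff  a <= b and a' <= b
for a in elements:
    for a2 in elements:
        for b in elements:
            assert ((a | a2) <= b) == (a <= b and a2 <= b)

# Meet is right adjoint to the diagonal:  b <= a ^ a'  iff  b <= a and b <= a'
for a in elements:
    for a2 in elements:
        for b in elements:
            assert (b <= (a & a2)) == (b <= a and b <= a2)

print("both adjunctions hold on all", len(elements) ** 3, "triples")
```

Of course, a finite check is no proof, but it's a pleasant way to see the defining inequalities of Puzzles 45 and 46 in action.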
If there are 4 people involved, and every two of them should be able to know the secret (the polynomial is just a line), and you are given $f(x)$ and $x$ for each of those people, and you know one of them is a cheater, how do you find the cheater? Shamir's secret-sharing scheme has $n$ shares of a secret. The shares are of the form $(x_0,f(x_0)), (x_1,f(x_1)), \ldots , (x_{n-1},f(x_{n-1}))$ where the $x_i$ are $n$ distinct nonzero elements of a finite field $\mathbb F$, and $f(x)$ is a polynomial of degree $k-1$ with coefficients in $\mathbb F$. One coefficient, say $f_0$, of $f(x)$ is the secret being shared, and the remaining $k-1$ coefficients are chosen at random. Note that while the $i$-th share is constructed as $(x_i,f(x_i))$, what the owner of the $i$-th share sees is just a pair $(x_i,y_i)$ of elements of $\mathbb F$; that is, while the owner knows that $y_i = f(x_i)$, the owner has no other knowledge about $f$. Given any set of $k$ shares $\{(x_{i_j},y_{i_j}) \colon 1 \leq j \leq k\}$, there is a unique polynomial $g(x)$ of degree at most $k-1$ that interpolates through these $k$ points, that is, enjoys the property $g(x_{i_j}) = y_{i_j}$ for $1 \leq j \leq k$, and this polynomial can be found, for example, through Lagrange interpolation. Since $f(x)$ is of degree $k-1$ (or less, if the random choice of $f_{k-1}$ resulted in $f_{k-1}=0$) and also has the property that $f(x_{i_j}) = y_{i_j}$ for $1 \leq j \leq k$, it follows that the $g(x)$ that has been interpolated is the same as $f(x)$, and thus $g_0$ is the secret. If fewer than $k$ shares are available, the $g(x)$ that is recovered is not the same as $f(x)$. If $m > k$ shares are available, we can interpolate through any $k$ of them and recover the secret. For the application on hand, if it is known that there is one (or at most one) cheater, and the cheater's share may have the wrong value for $x_i$ or $y_i$ or both, proceed as follows.
If two shares are identical, one of the two shares belongs to the cheater, but it is not possible to determine who the cheater is. Fortunately, the secret is recoverable if at least $k+1$ shares are available (that is, $k$ distinct correct shares are available). Of course, in a different scenario, the reconstructor may know the identity of each share owner, and so know that the $j$-th owner should not be "turning in" a share in which the first symbol is $x_i$, and thereby identify the cheater right away. If there are two shares $(x_i,y_i)$ and $(x_i,y_j)$, then one of them belongs to the cheater. If $m \geq k+2$, interpolate through a set of $k$ shares excluding these two to recover $f(x)$. Calculate $f(x_i)$ and check which of the two possible cheater shares matches $(x_i,f(x_i))$: the other belongs to the cheater. Again, in the different scenario, $(x_i,y_j)$ could be identified right away as the cheater's share since the $j$-th owner should not be turning in a share with $x_i$. If all $m \geq k+2$ shares have different (and correct) $x_i$ values, interpolate through the $\binom{m}{k}$ different subsets of $k$ shares. $\binom{m-1}{k}$ of these subsets will not contain the cheater's share, and all these interpolations will recover the same $g(x) = f(x)$. The other interpolations will be through the cheater's share and will give different interpolating polynomials. It can be shown that fewer than $\binom{m-1}{k}$ of these interpolations can result in the same (but incorrect) $\hat{g}(x)$; that is, of the $\binom{m}{k}$ interpolating polynomials resulting from the $\binom{m}{k}$ different subsets, the correct polynomial $f(x)$ will occur most often (win by a plurality but not necessarily a majority). Having found $f(x)$, simply reconstruct the $m$ shares $(x_i,f(x_i))$ and compare to the ones "turned in" to identify the cheater.
Alternatively, the union of the $\binom{m-1}{k}$ subsets of shares that gave $g(x) = f(x)$ is the set of $m-1$ correct shares, and the complement is the unique cheater's share. More generally, if there can be as many as $t$ cheaters, it can be shown that the secret can be recovered and the cheaters identified if $k+2t$ shares are available for reconstruction. Note that as many as $t$ of these $k+2t$ shares might be cheaters' shares. Put another way, if there can be as many as $t$ cheaters, we need at least $k+t$ honest shares available (in addition to the $t$ possible cheaters) to be able to recover the secret and identify the cheaters. The proofs of the assertions about plurality can be found in the paper I. S. Reed and G. Solomon, "Polynomial codes over certain finite fields," SIAM J. Appl. Math., v. 8, pp. 300-304, 1960. The original application of Reed-Solomon codes to secret sharing (and suggestions for more efficient recovery than multiple Lagrange interpolations) can be found in R. J. McEliece and D. V. Sarwate, "On sharing secrets and Reed-Solomon codes," Communications of the ACM, v. 24, issue 9, Sep. 1981. In the scenario you describe, any of the non-cheating participants can contact each of the others and arrange to swap shares and reconstruct the secret. (Equivalently, all the participants can agree to publish their shares, at which point any of them can pair their share with each of the others.) If there's only one cheater, the participant who does this will obtain three reconstructed secrets, of which two will agree. The one that does not agree will be the one constructed using the cheater's share. For a generalization of this method, see "Detection and identification of cheaters in $(t, n)$ secret sharing scheme" by Lein Harn and Changlu Lin (2009), Designs, Codes and Cryptography 52(1), pp. 15–24. (But see also this response by Hossein Ghodosi.) Not enough information was provided in the question, so I'm going to assume something to fill in the hole.
Let me know if this is not what you envisioned. Assumption: The party trying to detect the cheater knows the original polynomial used to share the secret. In the initialization phase, each party $p_i$ is given a pair $(x_i, y_i)$ where $y_i = f(x_i)$. In the reconstruction phase, the cheating party (say $p_c$) would submit $(x_c, y'_c)$ (where $y'_c\neq y_c$). The party trying to detect the cheater could easily compute $y_c=f(x_c)$ and see that what was submitted ($y'_c$) is not equal to $y_c$. Note: In the scenario you gave, two non-cheating parties could reconstruct the original polynomial and catch the cheater. Those two parties would have to either 1) be able to verify that they correctly reconstructed the secret (e.g., by producing a valid decryption of something), or 2) trust each other enough to believe that the other isn't cheating. Edit: Given the additional information in the comments below, I chose to expand my answer. Let $R$ be a function which maps two pairs of input to an alleged secret using the reconstruction step. Since you have all 4 pairs $\{(x_1,y_1),(x_2,y_2),(x_3,y_3),(x_c,y_c)\}$, you can compute the following:

$S_{1,2}=R(x_1,y_1,x_2,y_2)$
$S_{1,3}=R(x_1,y_1,x_3,y_3)$
$S_{1,c}=R(x_1,y_1,x_c,y_c)$
$S_{2,3}=R(x_2,y_2,x_3,y_3)$
$S_{2,c}=R(x_2,y_2,x_c,y_c)$
$S_{3,c}=R(x_3,y_3,x_c,y_c)$

Comparing these values, you would notice three important things:

1. $S_{1,2}=S_{1,3}=S_{2,3}$
2. $S_{1,c}\neq S_{2,c}\neq S_{3,c}$
3. $S_{i,c}$ is not equal to the value in (1)

Since when $c$ participates, the output changes depending on the other party, clearly $c$ is the cheater. The best answer is to use verifiable secret sharing (VSS), as I describe here: https://crypto.stackexchange.com/a/6618/351 VSS gives the best parameters and best solution to this problem.
If you have a $k$-out-of-$n$ secret sharing scheme, VSS can detect any cheater and enable you to reconstruct the secret as long as you have at least $k$ good shares (even if they are mixed together with any number of bogus shares, even if you don't know which are which). This is a better, more effective solution than any of the other schemes proposed here.
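The pairwise-reconstruction idea from the earlier answer can be sketched in a few lines of Python. All concrete values here (the prime $p=97$, the secret, the slope, the $x$-coordinates, and which party cheats) are assumptions chosen for illustration:

```python
from itertools import combinations

# Toy 2-out-of-4 Shamir scheme over GF(p) with one cheater.
p = 97
secret, slope = 42, 17
f = lambda x: (secret + slope * x) % p       # degree-1 sharing polynomial

shares = [(x, f(x)) for x in (1, 2, 3, 4)]
shares[3] = (4, (shares[3][1] + 5) % p)      # party 4 turns in a bad y-value

def reconstruct(s1, s2):
    """Lagrange interpolation at x = 0 from two shares (k = 2)."""
    (x1, y1), (x2, y2) = s1, s2
    inv = pow(x2 - x1, -1, p)                # modular inverse (Python 3.8+)
    return (y1 * x2 - y2 * x1) * inv % p

# Reconstruct from every pair; the plurality value is the true secret.
votes = {}
for a, b in combinations(range(4), 2):
    votes.setdefault(reconstruct(shares[a], shares[b]), []).append((a, b))

winner = max(votes, key=lambda v: len(votes[v]))
# The share that appears in no agreeing pair belongs to the cheater.
cheater = set(range(4)) - set().union(*votes[winner])
print(winner, cheater)                       # prints: 42 {3}
```

The three honest pairs all reconstruct 42, while every pair involving party 4 yields a different wrong value, exactly the voting pattern described above.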
There are two ways that lookaheads 'come into being'. The first is that the start production $S' \rightarrow S$ has lookahead $\$$ in the initial state of the $LR(1)$ automaton. Hence, $S' \rightarrow \bullet S, \$$ is an item in the initial state. Pedantry note: we therefore accept in the state containing the item $S' \rightarrow S \bullet, \$$ on lookahead $\$$, and we pad any input with $\$$. The rest of the lookaheads are computed from lookaheads which we have already computed. If we have in some state an item $A \rightarrow \alpha \bullet X \beta, l$ (so $l$ is the lookahead and $X$ is a nonterminal) and a production $X \rightarrow \gamma$, then for every $a \in \mathsf{FIRST}(\beta l)$ we add in the completion step for that state the item $X \rightarrow \bullet \gamma, a$. In other words, the lookahead is some terminal that can appear as the first terminal in a string derived from $\beta l$. As $l$ is a terminal and appears at the end of $\beta l$, this means that $\beta l$ does not derive the empty string (so we don't need $\mathsf{FOLLOW}$). Also note that we only ever derive items and therefore new lookaheads from items that already have lookaheads, so there is no problem there. The precise definition of $\mathsf{FIRST}$ is: $\mathsf{FIRST}(a \alpha) = \{a\}$ if $a$ is a terminal, $\mathsf{FIRST}(A \alpha) = \mathsf{FIRST}(A)$ if $A$ is a nonterminal and $A$ does not derive the empty string, and $\mathsf{FIRST}(A \alpha) = \mathsf{FIRST}(A) \cup \mathsf{FIRST}(\alpha)$ if it does. $\mathsf{FIRST}(A)$ is the well-known relation denoting the first terminals in the strings derivable from $A$ (which is also used in $LL(1)$).
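To make the completion-step computation concrete, here is a small sketch of $\mathsf{FIRST}$ over a string of grammar symbols. The toy grammar and the symbol encoding are my own, not from the question, and the recursion assumes the grammar is not left-recursive:

```python
# Toy grammar (assumed): S -> A B,  A -> 'a' | epsilon,  B -> 'b'.
# Uppercase strings are nonterminals, lowercase are terminals, [] is epsilon.
grammar = {
    "S": [["A", "B"]],
    "A": [["a"], []],
    "B": [["b"]],
}

def nullable(sym, seen=frozenset()):
    """Can sym derive the empty string? (seen guards against cycles)"""
    if sym not in grammar or sym in seen:
        return False                  # terminals are never nullable
    return any(all(nullable(s, seen | {sym}) for s in prod)
               for prod in grammar[sym])

def first(symbols):
    """FIRST of a string of grammar symbols, e.g. the beta-l of an item."""
    out = set()
    for sym in symbols:
        if sym not in grammar:        # terminal: it starts every derivation
            out.add(sym)
            return out
        for prod in grammar[sym]:     # nonterminal: union over its productions
            out |= first(prod)
        if not nullable(sym):         # cannot be skipped, so stop here
            return out
    return out

# For an item with beta-l = A B $: A may vanish, so 'b' is reachable too,
# and since '$' terminates the string, the result is never empty.
print(sorted(first(["A", "B", "$"])))   # ['a', 'b']
```

Because the string always ends in the terminal lookahead $l$, `first` always returns before falling off the end, matching the remark that $\mathsf{FOLLOW}$ is never needed.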
The parameter $\frac{1}{12}$ is a little curious: it carries a factor of $3$, and since the problem is about integration and differentiation, we may guess that a quadratic function is extremal. Moreover, as $f(0) = f(1) = 0$, we expect equality when the function is symmetric about $x = \frac{1}{2}$. Trying candidates, we see that $f(x) = x(1-x)$ attains equality, with $f'(x) = 1-2x = -2(x-\frac{1}{2})$. In light of this, we substitute $\frac{1}{12} = \int_{0}^{1}(x-\frac{1}{2})^2dx$; using the Cauchy–Schwarz inequality, we have: $$\frac{1}{12}\int_{0}^{1}|f'(x)|^2dx = \int_{0}^{1}|x-\frac{1}{2}|^2dx\int_{0}^{1}|f'(x)|^2dx \geq \left(\int_{0}^{1}|f'(x)(x-\frac{1}{2})|dx\right)^2$$ $$\int_{0}^{1}|f'(x)(x-\frac{1}{2})|dx \geq \left|\int_{0}^{1}f'(x)(x-\frac{1}{2})dx\right| = \left|\int_{0}^{1}f(x)dx\right|$$ (the last equality is integration by parts, using $f(0)=f(1)=0$), so $$\left(\int_{0}^{1}f(x)dx\right)^2 \leq \frac{1}{12}\int_{0}^{1}|f'(x)|^2dx$$
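As a numeric sanity check (mine, not part of the argument): for $f(x)=x(1-x)$ both sides equal $\frac{1}{36}$, since $\int_0^1 f = \frac16$ and $\int_0^1 (f')^2 = \frac13$. A quick midpoint-rule verification:

```python
# Midpoint-rule quadrature on [0, 1]; accurate enough for polynomials here.
def integrate(g, n=10000):
    h = 1.0 / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * (1 - x)
fp = lambda x: 1 - 2 * x   # f'

lhs = integrate(f) ** 2                         # (integral of f)^2
rhs = integrate(lambda x: fp(x) ** 2) / 12      # (1/12) * integral of f'^2
```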
I'm studying the Hopf–Rinow theorem and I don't see a step in the proof. Could someone help me, please? (Definition) Let $(M, \langle,\rangle)$ be a second countable ("ANII") and Hausdorff Riemannian manifold. If $M$ is connected and $p,q \in M$, we define: $$d_L:M\times M \longrightarrow [0,\infty)$$ $$(p,q)\longmapsto \inf\{l(C)\}$$ where $C$ ranges over piecewise differentiable curves joining $p$ and $q$, and $l$ is the length of the curve. I've proved that $d_L$ is a distance and that the topology induced by $d_L$ is the original topology on $M$. Now let $(M, \langle,\rangle)$ be a second countable and Hausdorff Riemannian manifold, with $M$ connected and $p \in M$. Prove that the following statements are equivalent: (a) If $A$ is a closed and bounded subset of $M$, then $A$ is compact. (b) There exist compact sets $K_n \subset M$, $n\in \mathbb{N}$, with $K_n \subset K_{n+1}$ for all $n \in \mathbb{N}$ and $\bigcup_n K_n=M$, with the following property: if $(q_n)_{n \in \mathbb{N}}\subset M$ is a sequence with $q_n\notin K_n$ for all $n \in \mathbb{N}$, then $\lim_{n\rightarrow \infty} d_L(q_n,p)=\infty$. Thanks.
SIR model: swine flu From JSXGraph Wiki Revision as of 14:46, 8 June 2011 by Michael The SIR model (see also Epidemiology: The SIR model) tries to predict influenza epidemics. Here, we try to model the spreading of the H1N1 virus, aka swine flu. According to the CDC (Centers for Disease Control and Prevention): "Adults shed influenza virus from the day before symptoms begin through 5-10 days after illness onset. However, the amount of virus shed, and presumably infectivity, decreases rapidly by 3-5 days after onset in an experimental human infection model." So, here we set [math]\gamma=1/7\approx 0.1428[/math] as the recovery rate. This means that, on average, an infected person sheds the virus for 7 days. In Modeling influenza epidemics and pandemics: insights into the future of swine flu (H1N1) the authors estimate the reproduction rate [math]R_0[/math] of the virus to be about [math]2[/math]. For the SIR model this means: the reproduction rate [math]R_0[/math] for influenza is equal to the infection rate of the strain ([math]\beta[/math]) multiplied by the duration of the infectious period ([math]1/\gamma[/math]), i.e. [math]\beta = R_0\cdot \gamma[/math]. Therefore, we set [math]\beta = 2\cdot 1/7 \approx 0.2857.[/math] For the 1918–1919 pandemic [math]R_0[/math] is estimated to be between 2 and 3, whereas for the seasonal flu the range for [math]R_0[/math] is 0.9 to 2.1. In [1] the mortality is estimated to be approximately 0.4 per cent. We run the simulation for a population of 1 million people, where 1 person is infected initially, i.e. [math]s=10^{-6}[/math]. Thus, [math]S(0) = 1-10^{-6}[/math], [math]I(0) = 10^{-6}[/math], [math]R(0) = 0[/math].
The lines in the JSXGraph simulation below have the following meaning:
* Blue: Rate of susceptible population
* Red: Rate of infected population
* Green: Rate of recovered population
Clinical Signs and Symptoms of Influenza
Modeling influenza epidemics and pandemics: insights into the future of swine flu (H1N1)
First analysis of swine flu spread supports pandemic plan
JSXGraph Homepage
The underlying JavaScript code
<html><form><input type="button" value="clear and run a simulation of 200 days" onClick="clearturtle();run()"><input type="button" value="stop" onClick="stop()"><input type="button" value="continue" onClick="goOn()"></form></html>

var brd = JXG.JSXGraph.initBoard('jxgbox', {boundingbox: [-6.66, 1.2, 226.66, -0.8], axis: true});
var S = brd.create('turtle', [], {strokeColor: 'blue', strokeWidth: 3});
var I = brd.create('turtle', [], {strokeColor: 'red', strokeWidth: 3});
var R = brd.create('turtle', [], {strokeColor: 'green', strokeWidth: 3});

var s = brd.create('slider', [[0,-0.3], [60,-0.3], [0, 1E-6, 1]], {name: 's'});
var beta = brd.create('slider', [[0,-0.4], [60,-0.4], [0, 0.2857, 1]], {name: 'β'});
var gamma = brd.create('slider', [[0,-0.5], [60,-0.5], [0, 0.1428, 0.5]], {name: 'γ'});
var mort = brd.create('slider', [[0,-0.6], [60,-0.6], [0, 0.4, 10.0]], {name: '% mortality'});
brd.create('text', [90, -0.3, "initially infected population rate"]);
brd.create('text', [90, -0.4, function(){ return "β: infection rate, R<sub>0</sub>="+(beta.Value()/gamma.Value()).toFixed(2);}]);
brd.create('text', [90, -0.5, function(){ return "γ: recovery rate = 1/(days of infection), days of infection = "+(1/gamma.Value()).toFixed(1);}]);

var t = 0; // global
brd.create('text', [100, -0.2, function() {
  return "Day "+t+": infected="+(1000000*I.Y()).toFixed(1)+
         " recovered="+(1000000*R.Y()).toFixed(1)+
         " dead="+(1000000*R.Y()*mort.Value()*0.01).toFixed(0);
}]);
S.hideTurtle(); I.hideTurtle(); R.hideTurtle();

function clearturtle() {
  S.cs(); I.cs(); R.cs();
  S.hideTurtle(); I.hideTurtle(); R.hideTurtle();
}

function run() {
  S.setPos(0, 1.0 - s.Value());
  R.setPos(0, 0);
  I.setPos(0, s.Value());
  delta = 1; // global
  t = 0;     // global
  loop();
}

function turtleMove(turtle, dx, dy) {
  turtle.moveTo([dx + turtle.X(), dy + turtle.Y()]);
}

function loop() {
  var dS = -beta.Value()*S.Y()*I.Y();
  var dR = gamma.Value()*I.Y();
  var dI = -(dS + dR);
  turtleMove(S, delta, dS);
  turtleMove(R, delta, dR);
  turtleMove(I, delta, dI);
  t += delta;
  if (t < 200.0) {
    active = setTimeout(loop, 10);
  }
}

function stop() {
  if (active) clearTimeout(active);
  active = null;
}

function goOn() {
  if (t > 0) {
    if (active == null) {
      active = setTimeout(loop, 10);
    }
  } else {
    run();
  }
}
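The same integration can be written compactly in Python (one-day Euler steps, mirroring the simulation loop, with the article's parameter values; this sketch is mine, not part of the wiki page):

```python
# SIR model with Euler steps of dt = 1 day, beta = R0 * gamma for R0 ≈ 2.
beta, gamma = 0.2857, 0.1428   # infection and recovery rates
S, I, R = 1.0 - 1e-6, 1e-6, 0.0  # initially 1 infected per million

for day in range(200):
    dS = -beta * S * I
    dR = gamma * I
    dI = -(dS + dR)   # population is conserved: dS + dI + dR = 0
    S, I, R = S + dS, I + dI, R + dR
```

With $R_0 \approx 2$ the epidemic runs its course well within the 200 simulated days, leaving roughly 20% of the population never infected.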
Classic examples of Taylor polynomials Some of the most famous (and important) examples are the expansions of ${1\over 1-x}$, $e^x$, $\cos x$, $\sin x$, and $\log(1+x)$ at $0$: right from the formula, although simplifying a little, we get \begin{align*}{1\over 1-x}&=1+x+x^2+x^3+x^4+x^5+x^6+\ldots\\e^x&=1+{x\over 1!}+{x^2\over 2!}+{x^3\over 3!}+{x^4\over 4!}+\ldots\\\cos x&=1-{x^2\over 2!}+{x^4\over 4!}-{x^6\over 6!}+{x^8\over 8!}-\ldots\\\sin x&={x\over 1!}-{x^3\over 3!}+{x^5\over 5!}-{x^7\over 7!}+\ldots\\\log(1+x)&=x-{x^2\over 2}+{x^3\over 3}-{x^4\over 4}+{x^5\over 5}-{x^6\over 6}+\ldots\end{align*} where here the dots mean to continue to whatever term you want, then stop, and stick on the appropriate remainder term. It is entirely reasonable if you can't really see that these are what you'd get, but in any case you should do the computations to verify that these are right. It's not so hard. Note that the expansion for cosine has no odd powers of $x$ (meaning that the coefficients are zero), while the expansion for sine has no even powers of $x$ (meaning that the coefficients are zero). At this point it is worth repeating that we are not talking about infinite sums (series) at all here, although we do allow arbitrarily large finite sums. Rather than worry over an infinite sum that we can never truly evaluate, we use the error or remainder term instead. Thus, while in other contexts the dots would mean 'infinite sum', that's not our concern here. The first of these formulas you might recognize as being a geometric series, or at least a part of one. The other three patterns might be new to you. A person would want to learn to recognize these on sight, as if by reflex!
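A quick way to do the suggested verification is to compare a truncated expansion against the built-in function; a small check of the sine and cosine patterns (this snippet is mine, not part of the text):

```python
# Partial sums of the sin and cos expansions above, compared with math.sin
# and math.cos; `terms` counts the nonzero terms kept.
import math

def sin_poly(x, terms=5):
    # x/1! - x^3/3! + x^5/5! - ...
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

def cos_poly(x, terms=5):
    # 1 - x^2/2! + x^4/4! - ...
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))
```

Near $0$ a handful of terms already agrees with the true function to many decimal places, which is exactly what the remainder term quantifies.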
Definition:Balanced Incomplete Block Design Definition A Balanced Incomplete Block Design or BIBD with parameters $v, b, r, k, \lambda$ is a block design where: $v$ is the number of points in the design, $b$ is the number of blocks, $k$ is the size of each block, $r$ is the number of blocks any point can be in, and $\lambda$ is the number of times any two points occur together in the same block, and it has the following properties: Each block is of size $k$; each of the $\displaystyle \binom v 2$ pairs of points occurs together in exactly $\lambda$ blocks. A BIBD with parameters $v, b, r, k, \lambda$ is commonly written in several ways, for example: $\operatorname{BIBD} \left({v, k, \lambda}\right)$ or $\left ({v, k, \lambda}\right)$-$\operatorname{BIBD}$. Properties For any $\left({v, k, \lambda}\right)$-$\operatorname{BIBD}$ the following are true: $b k = r v$; $\lambda (v-1) = r (k-1)$; $\displaystyle b = \frac{\binom v 2}{\binom k 2} \lambda = \frac{v(v-1)\lambda} {k(k-1)}$; $k < v$; $r > \lambda$. Note: all of the above are integers. See Necessary Condition for Existence of BIBD for proofs of the above.
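As a concrete illustration (mine, not from the page): the Fano plane is a $(7, 3, 1)$-BIBD with $b = 7$ and $r = 3$, and one can verify the parameter identities directly:

```python
# The Fano plane as a (7, 3, 1)-BIBD: 7 points, 7 blocks ("lines") of size 3,
# each point on 3 blocks, each pair of points on exactly 1 block.
from itertools import combinations

blocks = [{0,1,2}, {0,3,4}, {0,5,6}, {1,3,5}, {1,4,6}, {2,3,6}, {2,4,5}]
v, b, k = 7, len(blocks), 3
r = sum(0 in B for B in blocks)          # blocks through a fixed point
lam = sum({0, 1} <= B for B in blocks)   # blocks containing a fixed pair

# balance: every pair occurs in exactly lam blocks
assert all(sum(set(p) <= B for B in blocks) == lam
           for p in combinations(range(v), 2))
```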
I now introduce the problem: assume $\mathbf{z} = (z_1, z_2, z_3)$ is a trivariate normal variable. I want to find the covariance matrix of $\mathbf{z}$. I know that the density of $(z_1, z_2)$ is a bivariate normal with mean vector $(\mu_1, \mu_2)$ and covariance matrix $$ \left( \begin{array}{cc} \sigma_1^2 & \sigma_1\sigma_2 \rho \\ \sigma_1\sigma_2 \rho & \sigma_2^2 \\ \end{array} \right) $$ and I know that $z_3 \mid z_1, z_2 = \beta_0+\beta_1z_1+\beta_2 z_2+\epsilon$ where $\epsilon \sim N(0, \sigma_3^2)$, so the distribution of $z_3 \mid z_1, z_2$ is normal with mean $\beta_0+\beta_1z_1+\beta_2 z_2$ and variance $\sigma_3^2$. I want to find the distribution of $(z_1, z_2, z_3)$. I think that it is a trivariate normal with mean $(\mu_1, \mu_2, \beta_0)$ and covariance matrix $$ \left( \begin{array}{ccc} \sigma_1^2 & \sigma_1\sigma_2 \rho& \beta_1\sigma_1^2+\beta_2 \sigma_1\sigma_2 \rho \\ \sigma_1\sigma_2 \rho & \sigma_2^2 & \beta_1\sigma_1\sigma_2 \rho +\beta_2\sigma_2^2 \\ \beta_1\sigma_1^2+\beta_2 \sigma_1\sigma_2 \rho & \beta_1\sigma_1\sigma_2 \rho +\beta_2\sigma_2^2 & \beta_1^2\sigma_1^2+\beta_2^2\sigma_2^2+\sigma_3^2 \end{array} \right) $$ I think that this is right, but with $\sigma_1^2=0.4$, $\sigma_2^2=1$, $\sigma_3^2=0.4$, $\rho=-0.1$, $\beta_1=-2$ and $\beta_2 = 3$ the matrix is not positive definite, i.e. if I try to simulate a trivariate normal variable in R I get the following message: Error in chol.default(V): the leading minor of order 3 is not positive definite. Where is the error?
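For reference, a quick numeric check (mine) of the proposed matrix with the stated values reproduces R's complaint: its determinant is negative, so it cannot be a covariance matrix.

```python
# Build the (3,3)-entry exactly as written in the question and check the
# determinant of the proposed matrix.
import math

s1, s2, s3 = math.sqrt(0.4), 1.0, math.sqrt(0.4)
rho, b1, b2 = -0.1, -2.0, 3.0

c12 = s1 * s2 * rho
c13 = b1 * s1**2 + b2 * c12
c23 = b1 * c12 + b2 * s2**2
v3 = b1**2 * s1**2 + b2**2 * s2**2 + s3**2   # Var(z3) as written above

A = [[s1**2, c12, c13],
     [c12, s2**2, c23],
     [c13, c23, v3]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
```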
I'm having trouble proving the following inequality: $$\forall p>1 \quad \forall m\geq 0 \quad \dfrac{m^2\Gamma(\dfrac{2m}{p})\Gamma(\dfrac{2m}{q})}{\Gamma(\dfrac{2m+2}{p})\Gamma(\dfrac{2m+2}{q})}\geq\dfrac{1}{4}p^2(p-1)^{\frac{2}{p}-2},$$ where as usual $q=\dfrac{p}{p-1}$. In fact, it seems clear from Mathematica that for a fixed $p$, the LHS is a decreasing function of $m$ (strictly unless $p=2$, in which case it's constant). The RHS can be seen to be the limit as $m\to \infty$. I actually only care about integer $m\geq 0$, but I don't find that helpful. I have tried both a direct approach (three known inequalities that are nice enough to apply here, but lead to wrong inequalities) and working with the derivative, which naturally involves instances of the digamma function. Proving that the LHS is decreasing is equivalent to the following inequality: $$\forall p>1 \quad \forall m\geq 0 \quad \dfrac{1}{m}+\dfrac{1}{p}(\psi(\dfrac{2m}{p})-\psi(\dfrac{2m+2}{p}))+\dfrac{1}{q}(\psi(\dfrac{2m}{q})-\psi(\dfrac{2m+2}{q}))\leq0,$$ which again seems to be correct (if you're wondering, the limit as $m\to 0$ is negative for $p\neq2$). Much like before, I tried using two inequalities (for the digamma function), as well as the series representation. They seemed promising at first, but the inequalities gave me positive upper bounds, while the series converges too slowly to be useful (I suspect that any partial sum is positive for large enough $m$). Any advice would be much appreciated. I'll be glad to explain more about the inequalities I've tried if requested.
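For what it's worth, a numeric illustration (mine, using `math.gamma`) of the claimed behaviour at $p = 3$: the left-hand side decreases in $m$ and stays above the stated limit, and at $p = 2$ both sides equal $1$:

```python
# LHS and RHS of the conjectured inequality, with q = p/(p-1) as usual.
import math

def lhs(p, m):
    q = p / (p - 1)
    return (m**2 * math.gamma(2*m/p) * math.gamma(2*m/q)
            / (math.gamma((2*m + 2)/p) * math.gamma((2*m + 2)/q)))

def rhs(p):
    return 0.25 * p**2 * (p - 1)**(2/p - 2)
```

This only checks a few sample points, of course; it does not address the proof.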
There is a tradition of formulating axiom schemes such as induction, comprehension, or replacement in the form "for each formula $\phi(y,\overline{x})$ with free variables $y$ and $\overline{x}=x_1,...,x_n$ we have $\forall \overline{x} ( \text{text of axiom scheme which includes } \phi(y,\overline x))$". To not be vague, take as an archetypal example the comprehension scheme for second-order arithmetic from the nLab article: $ \forall \overline m \forall \overline X \exists Z \forall n (n \in Z \leftrightarrow \phi(n,\overline m, \overline X))$ I chose that example because I don't know whether it is equivalent to the axiom scheme without parameters, whereas for $ZFC$ and first-order $PA$ the equivalence with the parameter-free versions is a classical result. I understand that a priori allowing parameters could increase the expressive power of our system, but I don't understand why we allow only $\Pi_1$-style parameters and not parameters of arbitrary quantifier type, as in, for example, the following $\Sigma_3$ axiom scheme: $\exists \overline m_1 \exists \overline X_1 \forall \overline m_2 \forall \overline X_2 \exists \overline m_3 \exists \overline X_3 \exists Z \forall n (n \in Z \leftrightarrow \phi(n,\overline m_1, \overline X_1, \overline m_2, \overline X_2, \overline m_3, \overline X_3))$ It seems to me that this could increase our expressive power a lot, so why is it not traditional?
The expected value of colMeans will always be unbiased, no matter whether you increase NObs or Niter, but the difficult part of your question is whether the variances will change. We know that estimates $\sim N_2(\mu = \beta, ~ \Sigma = \sigma_\varepsilon^2 \cdot (X_{NObs}^T X_{NObs})^{-1})$ with $\beta$ being beta, $X_{NObs}$ being cbind(1, x) with length(x) == NObs, and $\sigma_\varepsilon^2$ being 1. So colMeans(estimates) is a linear combination of normally distributed random variables. In formulas, colMeans(estimates) $\sim N_2(\mu = \beta,~\Sigma = \sigma_\varepsilon^2 \cdot (X_{NObs}^T X_{NObs})^{-1} /Niter )$. In your case $(X_{NObs}^T X_{NObs})^{-1} = (n \cdot \sum(x^2) - (\sum(x))^2)^{-1} \cdot \begin{pmatrix} \sum (x^2) & -\sum(x) \\ -\sum(x) & n \end{pmatrix}$, which can be written as $(n^2 \cdot var(x))^{-1} \cdot \begin{pmatrix} \sum (x^2) & -\sum(x) \\ -\sum(x) & n \end{pmatrix}$ with $var(x)$ being the uncorrected empirical variance — here $1$. For reasons of clarity I use $n = NObs$. So your question is whether the change in $Niter$ from $100$ to $101$ changes $\begin{pmatrix} \sum (x^2)/n^2 & -\sum(x)/n^2 \\ -\sum(x)/n^2 & 1/n \end{pmatrix}/Niter$ in the same way as a change in $NObs$ from $1000$ to $1010$. If we assume that $\sum_1^{1010}(x^2) = \sum_1^{1000}(x^2)+ \sum_{1001}^{1010}(x^2) = 1.01 \cdot \sum_{1}^{1000} (x^2)$ and $-\sum_1^{1010}(x) = -\sum_1^{1000}(x) - \sum_{1001}^{1010}(x) = - 1.01 \cdot \sum_{1}^{1000} (x)$, increasing NObs by 10 is equivalent to multiplying the covariance matrix by a factor $1.01^{-1}$ (note that $n_{new} = 1010 = 1.01 \cdot n$ for $n=1000$). So $\begin{pmatrix} \sum (x^2)/n^2 & -\sum(x)/n^2 \\ -\sum(x)/n^2 & 1/n \end{pmatrix}/(100+1)$ is equivalent to $\begin{pmatrix} \sum (x^2)/n^2 & -\sum(x)/n^2 \\ -\sum(x)/n^2 & 1/n \end{pmatrix}\cdot (1.01)^{-1} /100$ (because $1/(100+1) = 1/101 = 1/(1.01 \cdot 100)$). I did a small "meta"-simulation study, where I ran your simulation 100 times under both settings.
Setting 1 was to increase Niter by 1, and setting 2 was to increase NObs by 10. In each meta-simulation run, I saved the colMeans in matrices colmeans1 and colmeans2. Plotting the entries from both matrices doesn't contradict the theoretical finding that both settings are equivalent.

set.seed(1)
S <- 100
colmeans1 <- matrix(NA, nrow = S, ncol = 2)
colmeans2 <- matrix(NA, nrow = S, ncol = 2)

# Setting 1: increasing Niter by 1
for(s in 1:S){
  beta <- c(1, 2)
  Niter <- 100 + 1
  NObs <- 1000
  estimates <- matrix(as.numeric(NA), nrow = Niter, ncol = 2)
  for(i in 1:Niter) {
    x <- rnorm(n = NObs)
    u <- rnorm(n = NObs)
    y <- beta[1] + beta[2] * x + u
    reg <- lm(y ~ x)
    estimates[i, ] <- coef(reg)
  }
  colmeans1[s,] <- colMeans(estimates)
}

# Setting 2: increasing NObs by 10
for(s in 1:S){
  beta <- c(1, 2)
  Niter <- 100
  NObs <- 1000 + 10
  estimates <- matrix(as.numeric(NA), nrow = Niter, ncol = 2)
  for(i in 1:Niter) {
    x <- rnorm(n = NObs)
    u <- rnorm(n = NObs)
    y <- beta[1] + beta[2] * x + u
    reg <- lm(y ~ x)
    estimates[i, ] <- coef(reg)
  }
  colmeans2[s,] <- colMeans(estimates)
}
Journal of Differential Geometry J. Differential Geom. Volume 99, Number 1 (2015), 77-123. On the algebra of parallel endomorphisms of a pseudo-Riemannian metric Abstract On a (pseudo-)Riemannian manifold $(\mathcal{M}, g)$, some fields of endomorphisms, i.e. sections of $\mathrm{End}(T \mathcal{M})$, may be parallel for $g$. They form an associative algebra $\mathfrak{e}$, which is also the commutant of the holonomy group of $g$. As any associative algebra, $\mathfrak{e}$ is the sum of its radical and of a semi-simple algebra $\mathfrak{s}$. This $\mathfrak{s}$ may be of eight different types; see C. Boubel, The algebra of the parallel endomorphisms of a pseudo-Riemannian metric: semi-simple part. Then, for any self-adjoint nilpotent element $N$ of the commutant of such an $\mathfrak{s}$ in $\mathrm{End}(T \mathcal{M})$, the set of germs of metrics such that $\mathfrak{e} \supset \mathfrak{s} \cup \{ N \}$ is non-empty. We parametrize it. Generically, the holonomy algebra of those metrics is the full commutant $\mathfrak{o}(g)^{\mathfrak{s} \cup \{ N \} }$ and then, apart from some “degenerate” cases, $\mathfrak{e} = \mathfrak{s} \oplus (N)$, where $(N)$ is the ideal spanned by $N$. To prove it, we introduce an analogy with complex differential calculus, the ring $\mathbb{R}[X]/(X^n)$ replacing the field $\mathbb{C}$. This treats the case where the radical of $\mathfrak{e}$ is principal and consists of self-adjoint elements. We add a glimpse of the case where this radical is not principal. Article information Source J. Differential Geom., Volume 99, Number 1 (2015), 77-123. Dates First available in Project Euclid: 12 December 2014 Permanent link to this document https://projecteuclid.org/euclid.jdg/1418345538 Digital Object Identifier doi:10.4310/jdg/1418345538 Mathematical Reviews number (MathSciNet) MR3299823 Zentralblatt MATH identifier 1321.53023 Citation Boubel, Charles. On the algebra of parallel endomorphisms of a pseudo-Riemannian metric. J.
Differential Geom. 99 (2015), no. 1, 77--123. doi:10.4310/jdg/1418345538. https://projecteuclid.org/euclid.jdg/1418345538
I want to find a non-deterministic $2$-tape Turing machine that accepts the language $L$ over $\Sigma=\{0,1,\#\}$ in linear time, $$L=\{x\#y \mid x,y \in \{0,1\}^\star, x \text{ is contained in } y\}$$ $$$$ Does the Turing machine look as follows? Tape $1$ contains the input $w$ and tape $2$ contains blanks. The machine reads the symbols on tape $1$ before the symbol $\#$ and writes all these symbols onto tape $2$. Then, after having reached the symbol $\#$ on tape $1$, the machine checks whether the symbols of tape $2$ appear on tape $1$. $$$$ EDIT: For the example $11\# 100011$ we have the following: The machine compares the symbol on tape $1$ after the symbol $\#$ with the first symbol of tape $2$. Since they are the same, both heads go one step to the right: Since these symbols are not the same, the head of the first tape goes one step to the right and the head of the second tape goes back to the beginning: These symbols are not the same. So the head of the first tape goes one step to the right and the head of the second tape stays at the first symbol: These symbols are again not the same. So the head of the first tape goes one step to the right and the head of the second tape stays at the first symbol: Since these two symbols are the same, both heads go one step to the right: These symbols are again the same. Now, when the heads go one step to the right, they both point to a blank symbol. Therefore, this input is accepted. Is this correct?
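The comparison phase described above is essentially naive substring matching: on a mismatch, advance the tape-1 head and reset the tape-2 head. A Python sketch (mine) of that control flow, to make the head movements concrete:

```python
def accepts(w):
    """Mimics the described machine: copy x to tape 2, then scan y on tape 1,
    restarting the tape-2 head after each mismatch."""
    x, sep, y = w.partition('#')
    if sep != '#':          # no separator: reject
        return False
    i = 0                   # tape-1 head position within y
    while i + len(x) <= len(y):
        j = 0               # tape-2 head position within the copy of x
        while j < len(x) and y[i + j] == x[j]:
            j += 1          # symbols match: both heads move right
        if j == len(x):     # tape-2 head reached a blank: accept
            return True
        i += 1              # mismatch: advance tape-1 head, reset tape-2 head
    return False
```

Note that this restart strategy is quadratic in the worst case; the nondeterminism of the machine (guessing where the occurrence of $x$ starts in $y$ and checking it once) is what brings it down to linear time.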
Let $(V, \| \cdot \|)$ be a normed vector space and $W$ be a linear subspace. Prove that $T: V^*/W^{\perp} \to W^*, \ T(x^* + W^{\perp}) = y^*$ where $y^*(x) = x^*(x)$ for all $x \in W$, is an isometric isomorphism. $\perp$ denotes the annihilator, and $*$ the dual. There was a hint included that said "First show that $W^{\perp}$ is a closed linear subspace of $V^*$. Prove that $T$ is a well-defined linear operator. To show that $T$ is an isometric isomorphism apply the Hahn-Banach theorem." I'm stuck at the last part. Let $y^* \in W^*$; then from Hahn-Banach we have that $\exists \ x^* \in V^*$ s.t. $x^* = y^*$ on $W$, and $\|x^*\|_{V^*} = \|y^* \|_{W^*}$. But how can I arrive at $\|T(x^* + W^{\perp})\|_{W^*} = \|x^*+ W^{\perp}\|_{V^*/W^{\perp}}$?
Probabilists often work with Polish spaces, though it is not always very clear where this assumption is needed. Question: What can go wrong when doing probability on non-Polish spaces? One simple thing that can go wrong is purely related to the size of the space (Polish spaces are all size $\leq 2^{\aleph_0}$). When spaces are large enough, product measures become surprisingly badly behaved. Consider Nedoma's pathology: Let $X$ be a measurable space with $|X| > 2^{\aleph_0}$. Then the diagonal in $X^2$ is not measurable. We'll prove this by way of a theorem: Let $U \subseteq X^2$ be measurable. $U$ can be written as a union of at most $2^{\aleph_0}$ subsets of the form $A \times B$. Proof: First note that we can find some countable collection $(A_i)_{i\ge 0}$ of subsets of $X$, such that $U \in \sigma(\{A_i \times A_j:i,j\ge 0\})$, where $\sigma(\cdot)$ denotes the $\sigma$-algebra generated by the given subsets (proof: the set of $V$ such that we can find such $A_i$ is a $\sigma$-algebra containing the basis sets). For $x \in \{0, 1\}^\mathbb{N}$ define $B_x = \bigcap \{ A_i : x_i = 1 \} \cap \bigcap \{ A_i^c : x_i = 0 \}$. Consider all subsets of $X^2$ which can be written as a (possibly uncountable) union of sets $B_x \times B_y$. This is a $\sigma$-algebra and obviously contains all the $A_i \times A_j$, so contains $U$. But now we're done. There are at most $2^{\aleph_0}$ of the $B_x$, and each is certainly measurable in $X$, so $U$ can be written as a union of $2^{\aleph_0}$ subsets of the form $A \times B$. QED Corollary: The diagonal is not measurable. Evidently the diagonal cannot be written as a union of at most $2^{\aleph_0}$ rectangles, as they would all have to be single points, and the diagonal has size $|X| > 2^{\aleph_0}$.
Separability is a key technical property used to avoid measure-theoretic difficulties for processes with uncountable index sets. The general problem is that measures are only countably additive and $\sigma$-algebras are closed under only countably many primitive set operations. In a variety of scenarios, uncountable collections of measure zero events can bite you; separability ensures you can use a countable sequence as a proxy for the entire process without losing probabilistic content. Here are two examples. Weak convergence: the classical theory of weak convergence utilizes Borel-measurable maps. When dealing with some function-valued random elements, such as cadlag functions endowed with the supremum norm, Borel-measurability fails to hold. See the motivation for Weak Convergence and Empirical Processes. The $J1$ topology is basically a hack which ensures the function space is separable and thereby avoids measurability issues. The parallel theory of weak convergence described in the book embraces non-measurability. Existence of stochastic processes with nice properties: a key property of Brownian motion is continuity of the sample paths. Continuity, however, is a property involving uncountably many indices. The existence of a continuous version of a process can be ensured with separable modifications. See this lecture and the one that follows. Metrizability allows us to introduce concepts such as convergence in probability. Completeness (the Cauchy convergence kind, not the null subsets kind) makes it easier to conduct analysis. There's already been some good responses, but I think it's worth adding a very simple example showing what can go wrong if you don't use Polish spaces. Consider $\mathbb{R}$ under its usual topology, and let X be a non-Lebesgue measurable set. e.g., a Vitali set. Using the subspace topology on X, the diagonal $D\subseteq\mathbb{R}\times X$, $D=\{(x,x)\colon x\in X\}$ is Borel Measurable. 
However, its projection onto $\mathbb{R}$ is X, which is not Lebesgue measurable. Problems like this are avoided by keeping to Polish spaces. A measurable function between Polish spaces always takes Borel sets to analytic sets, which are, at least, universally measurable. The space X in this example is a separable metrizable space, whereas Polish spaces are separable completely metrizable spaces. So things can go badly wrong if just the completeness requirement is dropped. Below is a copy of an answer I gave here https://stats.stackexchange.com/questions/2932/metric-spaces-and-the-support-of-a-random-variable/20769#20769 Here are some technical conveniences of separable metric spaces: (a) If $X$ and $X'$ take values in a separable metric space $(E,d)$ then the event $\{X=X'\}$ is measurable, and this allows one to define random variables in an elegant way: a random variable is the equivalence class of $X$ for the "almost surely equals" relation (note that the normed vector space $L^p$ is a set of equivalence classes) (b) The distance $d(X,X')$ between the two $E$-valued r.v. $X, X'$ is measurable; in passing this allows one to define the space $L^0$ of random variables equipped with the topology of convergence in probability (c) Simple r.v. (those taking only finitely many values) are dense in $L^0$ And some technical conveniences of complete separable (Polish) metric spaces: (d) Existence of the conditional law of a Polish-valued r.v. (e) Given a morphism between probability spaces, a Polish-valued r.v. on the first probability space always has a copy in the second one (f) Doob-Dynkin functional representation: if $Y$ is a Polish-valued r.v. measurable w.r.t. the $\sigma$-field $\sigma(X)$ generated by a random element $X$ in any measurable space, then $Y = h(X)$ for some measurable function $h$. We know by Ulam's theorem that a Borel measure on a Polish space is necessarily tight.
If we just assume that the metric space is separable, we have that each Borel probability measure on $X$ is tight if and only if $X$ is universally measurable (that is, given a probability measure $\mu$ on the metric completion $\widehat X$, there are two measurable subsets $S_1$ and $S_2$ of $\widehat X$ such that $S_1\subset X\subset S_2$ and $\mu(S_1)=\mu(S_2)$). So a probability measure is not necessarily tight (take $S\subset [0,1]$ of inner Lebesgue measure $0$ and outer measure $1$); see Dudley's book Real Analysis and Probability. Another issue related to tightness: we know by Prokhorov's theorem that if $(X,d)$ is Polish and from every sequence of Borel probability measures $\{\mu_n\}$ we can extract a subsequence which converges in law, then $\{\mu_n\}$ is necessarily uniformly tight. This may fail if we remove the assumption of "Polishness". And it may be problematic when we want results such as "$\mu_n\to \mu$ in law if and only if there is uniform tightness and convergence of finite dimensional laws." Google "image measure catastrophe" with quotation marks. It can also be useful to have the set of Borel probability measures on $X$ (with weak* convergence, a.k.a. convergence in law) be metrizable, for instance to be able to treat the convergences sequentially. For this you need the space $X$ to be separable and metrizable (see the Lévy–Prokhorov metric). Fun fact: you can find a non-separable Banach space and a Gaussian probability measure on it which gives measure $0$ to every ball of radius $1$. (In particular your intuition about notions like the "support" of a measure goes pretty badly wrong.) Consider i.i.d. $\xi_n$ and take as your norm $|\xi|^2 = \sup_{k\ge 0}2^{-k}\sum_{n=1}^{2^{k}} |\xi_n|^2$. This is almost surely finite by Borel-Cantelli and almost surely at least $1$ by the law of large numbers. The fact that it gives measure $0$ to every ball of radius $1$ is left as an exercise.
This norm isn't even very exotic: if you interpret the $\xi_n$'s as Fourier coefficients, then $B$ is really just the Besov space $B^{1/2}_{2,\infty}$. As far as I remember, the projection of a measurable set may fail to be measurable, so something very natural may fail to be an event. Besides, constructing conditional probabilities as measures on sections becomes problematic. Perhaps there are more reasons, but these two are already good enough.
Let $X$ be a smooth compact 4-manifold. Then every element of $H_2(X;\mathbb{Z})$ can be represented by a smooth embedded orientable surface, and we have the so-called genus function $G: H_2(X; \mathbb{Z}) \to \mathbb{Z}_{\geq 0}$ which assigns to a homology class the smallest genus of such a smooth surface needed to represent it. Suppose that $x$ is a nontorsion element of $H_2(X; \mathbb{Z})$. Does the sequence $G(x), G(2x), G(3x),...$ tend to infinity? Can there be arbitrarily large zeros? Is there always a limit? In the case that $x\cdot x \neq 0$, topological methods based on the G-signature show that the genus goes to infinity more or less quadratically in $n$. (I'll be more specific below.) This goes back to Rochlin (Two-dimensional submanifolds of four-dimensional manifolds) and Hsiang-Szczarba (On embedding surfaces in 4-manifolds) in the 1970s. Following Rochlin's version (since I don't have the other at hand): if a homology class $\xi$ is divisible by $h$, an odd prime power, then $$ g \geq \left|\frac{(h^2-1)(\xi \cdot \xi)- \sigma(X)}{4 h^2}\right| - \frac{b_2(X)}{2}. $$ Writing $\xi = h \alpha$ we see that the right side grows quadratically in such $h$. (Generally this grows as the square of the largest prime power dividing $n$ where $\xi = n \alpha$; presumably the growth rate of that quantity in $n$ is known.) By looking in a neighborhood (and sticking to prime powers), you can see that you'd expect quadratic growth, but the estimate above looks off by a factor of two. For instance, when it holds, the adjunction formula (as quoted by Marco above) gives a bound that is roughly twice the G-signature bound. Work of Strle (Bounds on genus and geometric intersections from cylindrical end moduli spaces) gives stronger results for surfaces of positive self-intersection in the case that $b_2^+(X) =1$, without the assumption of non-vanishing Seiberg-Witten invariants.
See also recent work of Konno (Bounds on genus and configurations of embedded surfaces in 4-manifolds). Finally, in the case of self-intersection $0$, the growth is at most linear (and possibly $0$, as Marco notes). This follows by tubing together parallel copies of a given surface. Sometimes the function $G$ can be constantly 0: consider the class $x = [S^2\times\{p\}]$ in $H_2(S^2\times F)$, where $F$ is a surface. Then $G(nx)$ can be realised by an embedded sphere for all $n$: just pick $n$ distinct points $p_1,\dots,p_n$ in $F$, and tube $S^2\times\{p_i\}$ to $S^2\times\{p_{i+1}\}$ (using pairwise disjoint tubes). As for the existence of a limit, to me this is a lot less clear. Certainly something is known when $b^+(X) > 1$ and some Seiberg–Witten invariant of $X$ does not vanish, at least in the case when $x\cdot x > 0$. Then there is the adjunction inequality (Kronheimer–Mrowka), telling you that (for some second cohomology class $K$, corresponding to a non-vanishing SW invariant) $$ 2G(nx) - 2 \ge |\langle K, x\rangle| + n^2x\cdot x. $$ The right-hand side of the inequality grows quadratically, so $G(nx)$ goes to $\infty$. I'd be very curious to know of "interesting" behaviours of the function $n \mapsto G(nx)$ (e.g. non-monotonicity, frequent non-monotonicity, eventual constant non-zero behaviour, periodicity/aperiodicity).
Answer $$\cos t=\sqrt{1-\sin^2t}$$ The equation is not an identity, which can be shown by trying $t=\pi$. Work Step by Step $$\cos t=\sqrt{1-\sin^2t}$$ To show that this equation is not an identity, we can take $t=\pi$. At $t=\pi$, $$\cos t=\cos\pi=-1$$ and $$\sqrt{1-\sin^2t}=\sqrt{1-\sin^2\pi}=\sqrt{1-0^2}=\sqrt1=1$$ As $-1\ne1$, at $t=\pi$, $\cos t\ne\sqrt{1-\sin^2t}$. The equation, therefore, is not an identity.
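The counterexample can also be checked numerically in a couple of lines of Python:

```python
import math

t = math.pi
lhs = math.cos(t)                        # cos(pi) = -1
rhs = math.sqrt(1 - math.sin(t)**2)      # sqrt(1 - sin^2(pi)) = 1
print(lhs, rhs)                          # the two sides disagree at t = pi
```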
Answer a) The car can navigate the curve. b) $18.4\ m/s$ Work Step by Step a) We know that 27.78 meters per second equals 100 kilometers per hour. Using the equation on page 209, we find: $\frac{v^2}{rg}=(27.7778^2)/(85\times 9.81)\approx0.93$ This is less than the SSF, so the car can navigate the curve. b) We use the equation on page 209 to find: $\frac{v^2}{rg}=\frac{t}{2h}$ $ v = \sqrt{\frac{trg}{2h}}$ $ v = \sqrt{\frac{(9.81)(85)(1.71)}{2(2.1)}}=18.4\ m/s$
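A quick numerical check of both parts (100 km/h is converted exactly as 100/3.6 m/s; the track width and center-of-mass height are the problem's given values):

```python
import math

g, r = 9.81, 85.0          # gravity (m/s^2) and curve radius (m)
v = 100 / 3.6              # 100 km/h in m/s, about 27.78

ratio = v**2 / (r * g)     # the quantity compared against the SSF
print(round(ratio, 2))

t, h = 1.71, 2.1           # track width and center-of-mass height (m)
v_max = math.sqrt(t * r * g / (2 * h))
print(round(v_max, 1))     # about 18.4 m/s
```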
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Search Now showing items 1-5 of 5 Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment." So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that has 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force. For the buoyancy do I: density of water * volume of water displaced * gravity acceleration? So: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$?
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room. An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. Just we need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth. Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else, I'm not sure. Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts on the fields he tried to comment on. I personally do not know much about postmodernist philosophy, so I shall not comment on it myself. I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would be leaned towards that idea. I do too.
There are indeed several approximations, depending on the shape of the wing. Generally, the lift curve slope is $2\pi$ only for a flat plate in inviscid 2D flow (with Kutta condition fulfilled). With thicker airfoils, the lift curve slope in 2D increases slightly. It also increases with Mach number proportional to the Prandtl-Glauert factor $\frac{1}{\sqrt{1-Ma^2}}$ and the Reynolds number. Now to 3D flow: Once you move away from infinite aspect ratios, the lift curve slope drops. With very small aspect ratios $AR$ the lift curve slope becomes $c_{L\alpha} = \frac{\pi \cdot AR}{2}$. See the plot below for the ideal lift curve slope of an unswept wing: Please note that the red line is only valid for AR = 0! Then the lift curve slope increases up to $c_{L\alpha} = 2\cdot\pi$ for $AR = \infty$ (and zero airfoil thickness and no friction effect), as shown by the blue line. If you know your airfoil lift curve slope, modify the result from the plot above by the ratio between the airfoil lift curve slope and $2\pi$. Now your lift coefficient will become: $$c_L = c_{L\alpha_{3D}}\frac{c_{L\alpha_{2D}}}{2\pi}\cdot\alpha$$ with your angle of attack $\alpha$ in radians. For an analytic approach you may use the formulas below, but stay away from the region close to Mach 1. 
If those (rather precise) approximations look too daunting, feel free to simplify them.

Nomenclature:
$c_{L\alpha}$: lift coefficient gradient over angle of attack
$c_{L\alpha\:ic}$: lift coefficient gradient over angle of attack in incompressible flow
$\pi$: 3.14159$\dots$
$AR$: aspect ratio of the wing
$\nu$: the wing's dihedral angle
$\varphi_m$: sweep angle of wing at mid chord
$\varphi_{LE}$: sweep angle of wing at leading edge
$\lambda$: taper ratio (ratio of tip chord to root chord)
$(\frac{x}{l})_{d\:max}$: chordwise position of maximum airfoil thickness
$Ma$: Mach number

Note that you do not need the planform efficiency (Oswald factor) $\epsilon$ for calculating lift curve slope. That only comes into play when you compute the induced drag of the wing.
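The analytic formulas referred to above did not survive into this text; as a stand-in, here is a sketch of a commonly used DATCOM-style estimate for the subsonic lift curve slope (an assumption on my part, not necessarily the author's exact formulas). It reproduces both limits quoted above: $\frac{\pi \cdot AR}{2}$ for very small aspect ratios and $2\pi$ as $AR \to \infty$.

```python
import math

def lift_curve_slope(AR, mach=0.0, sweep_mid_deg=0.0):
    """Per-radian lift curve slope of a finite wing (DATCOM-style estimate).

    Assumes subsonic flow (Ma < 1) and an airfoil efficiency factor of 1;
    this is a stand-in for the formulas missing from the text above.
    """
    beta_sq = 1.0 - mach**2
    tan_sw = math.tan(math.radians(sweep_mid_deg))
    return (2 * math.pi * AR) / (2 + math.sqrt(AR**2 * (beta_sq + tan_sw**2) + 4))

# Limits quoted in the answer:
print(lift_curve_slope(0.5))    # small AR: close to pi*AR/2
print(lift_curve_slope(1000))   # large AR: approaches 2*pi
```

As the text warns, this kind of estimate should not be trusted close to Mach 1, where the Prandtl-Glauert factor blows up.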
In my last post, I multiplied a 2 × 2 matrix by another 2 × 2 matrix. Now let’s multiply a 2 × 2 matrix by a 2 × 1 matrix. If you were paying attention last time, this is possible because the inside dimensions are the same (2 = 2) and the resulting matrix will be a 2 × 1 matrix, that is, the outside dimensions. This is done exactly the same way as illustrated in my last post, except that there is only one column in the second matrix to multiply with the first matrix:\[ \left[{\begin{array}{cc}{1}&{{-}{2}}\\{3}&{{-}{4}}\end{array}}\right]\hspace{0.33em}\times\hspace{0.33em}\left[{\begin{array}{c}{5}\\{6}\end{array}}\right]\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{c}{{(}{5}\times{1}{)}{+}{(}{6}\times{(}{-}{2}{))}}\\{{(}{5}\times{3}{)}{+}{(}{6}\times{(}{-}{4}{))}}\end{array}}\right]\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{c}{{-}{7}}\\{{-}{9}}\end{array}}\right] \] Now let the second matrix be composed of variables. This does not change the method at all. It just means that the result is a matrix with algebraic expressions:\[ \left[{\begin{array}{cc}{1}&{{-}{2}}\\{3}&{{-}{4}}\end{array}}\right]\hspace{0.33em}\times\hspace{0.33em}\left[{\begin{array}{c}{x}\\{y}\end{array}}\right]\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{c}{{(}{x}\times{1}{)}{+}{(}{y}\times{(}{-}{2}{))}}\\{{(}{x}\times{3}{)}{+}{(}{y}\times{(}{-}{4}{))}}\end{array}}\right]\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{c}{{x}{-}{2}{y}}\\{{3}{x}{-}{4}{y}}\end{array}}\right] \] Please keep this example in mind for my next post when I use matrices to solve a system of equations. The last skill I need to present is matrix division. When using matrices, you do not actually divide a matrix by another matrix. Rather, you multiply by the inverse of a matrix.
In scalar arithmetic, you can think of dividing a number, say 4, by another number, say 2, as multiplying the 4 by the inverse (reciprocal) of 2:\[ \frac{4}{2}\hspace{0.33em}{=}\hspace{0.33em}{4}\hspace{0.33em}\times\hspace{0.33em}\frac{1}{2}\hspace{0.33em}{=}\hspace{0.33em}{2} \] The same thing is done with matrices. However, finding the inverse of a matrix is a little involved and I will not cover that in this set of posts. Rather, I will just give you the result when needed. However, I will say a few things about the properties of matrix inverses. In scalar arithmetic, multiplying a number by its reciprocal (inverse) equals 1:\[ \frac{2}{2}\hspace{0.33em}{=}\hspace{0.33em}{2}\hspace{0.33em}\times\hspace{0.33em}\frac{1}{2}\hspace{0.33em}{=}\hspace{0.33em}{1} \] The same thing is true with matrices, only what is “1” in the matrix world? The equivalent of “1” for matrices is the Identity Matrix. This is a square (rows = columns) matrix with 1’s down its diagonal and 0’s everywhere else:\[ \left[{\begin{array}{cc}{1}&{0}\\{0}&{1}\end{array}}\right] \] This is the identity matrix for a 2 × 2 matrix. The inverse of a matrix is that matrix where multiplying it by the original matrix results in the identity matrix. The inverse of a matrix $\mathbf{A}$ is denoted as $\mathbf{A}^{-1}$.
\[ {\mathbf{A}}\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{cc}{1}&{2}\\{3}&{4}\end{array}}\right]\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}{\mathbf{A}}^{{-}{1}}\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{cc}{{-}{2}}&{1}\\{\frac{3}{2}}&{{-}\frac{1}{2}}\end{array}}\right] \] \[ \begin{array}{l} {{\mathbf{A}}\times{\mathbf{A}}^{{-}{1}}{=}\left[{\begin{array}{cc}{1}&{2}\\{3}&{4}\end{array}}\right]\times\left[{\begin{array}{cc}{{-}{2}}&{1}\\{\frac{3}{2}}&{{-}\frac{1}{2}}\end{array}}\right]{=}\left[{\begin{array}{cc}{{(}{-}{2}\times{1}{)}{+}{(}\frac{3}{2}\times{2}{)}}&{{(}{1}\times{1}{)}{+}{(}{-}\frac{1}{2}\times{2}{)}}\\{{(}{-}{2}\times{3}{)}{+}{(}\frac{3}{2}\times{4}{)}}&{{(}{1}\times{3}{)}{+}{(}{-}\frac{1}{2}\times{4}{)}}\end{array}}\right]}\\ {{=}\hspace{0.33em}\left[{\begin{array}{cc}{1}&{0}\\{0}&{1}\end{array}}\right]} \end{array} \] And it turns out that when multiplying a matrix by its inverse, order does not matter: $\mathbf{A} \times \mathbf{A}^{-1} = \mathbf{A}^{-1} \times \mathbf{A}$. In my next post, I will put all this talk about matrices in practice and use them to solve a system of equations.
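A minimal sketch of these matrix facts in plain Python (no libraries), using the same $\mathbf{A}$ and $\mathbf{A}^{-1}$ as above:

```python
# Matrices are lists of rows; matmul is the row-by-column rule from the post.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A     = [[1, 2], [3, 4]]
A_inv = [[-2.0, 1.0], [1.5, -0.5]]   # the inverse quoted in the post

print(matmul(A, A_inv))   # the 2 x 2 identity matrix
print(matmul(A_inv, A))   # same result: order does not matter for an inverse
```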
I think Ced is correct here; parabolas are going to get you the most bang for your buck where minimizing the second derivative is concerned. This answer gives a slightly more generalized solution for achieving that parabolic shape, and other shapes (in case your requirements change a bit, maybe). I'm going to make a few assumptions based on the image in your question: Your input is something like a step function; that is, the same value appears several times in a row before a different value appears. Your maximum allowed second derivative is large enough that your output signal can usually "catch up" to the input signal before that value changes. You don't want any overshoot (ringing/clipping). Parabolic shape First, let's express the parabolic shape as a simple equation. We'll have it begin at $(-1,-1)$ and end at $(1,1)$; in other words we'll have it cover two distance units (Y) over two time units (X). $$y = x\left(2-\left|x\right|\right)$$ It looks like this, along with its first and second derivatives. From now on I'll refer to the first derivative as "velocity" and the second as "acceleration." The maximum acceleration here is $2$; call that $A_0$. The distance along Y is also $2$; call that $D_0$. We can scale this thing along X and Y to get a maximum acceleration of your choosing (call it $A_1$), over a desired distance (call it $D_1$). The X-scale and Y-scale can be calculated like this: $$x_s = \sqrt{\frac{A_1D_0}{A_0D_1}}\\y_s = \frac{D_1}{D_0}$$ Now we'll modify that first equation to multiply all $x$ by $x_s$, and multiply the entire thing by $y_s$. $$y = (x_sx)\left(2-\left|x_sx\right|\right)y_s$$ So, let's say you want the maximum acceleration to be $\frac{1}{4}$, and need to travel a distance of $1$. 
$$x_s = \sqrt{\frac{A_1D_0}{A_0D_1}} = \sqrt{\frac{\frac{1}{4}\cdot2}{2\cdot1}} = \sqrt{\frac{1}{4}} = 0.5$$ $$y_s = \frac{D_1}{D_0} = \frac{1}{2} = 0.5$$ At this rate of acceleration, it takes four units of time (X axis) to travel one unit of distance (Y axis). You can calculate the time it will take like this: $$t = 2\sqrt{\frac{A_0D_1}{A_1D_0}}$$ Other shapes The nice thing about this is you can use pretty much any shape you want and those calculations will still work. Let's say you want to use a sinusoidal curve instead of parabolic (for aesthetic reasons, maybe). $$y = \sin\left(\frac{\pi}{2}x\right)$$ Again, we have it begin at $(-1,-1)$ and end at $(1,1)$. The maximum acceleration here (our $A_0$) is $\frac{\pi^2}{4}$, or $\approx 2.4674$ as we can see from the plot. Our $D_0$ is always $2$. You can calculate the X-scale, Y-scale and time just like we did for the parabola; it'll still work fine. Or instead of the sine shape, try something like this: $$y = \frac{15x-10x^3+3x^5}{8}$$ Acceleration is zero at each end; no jerky "takeoffs" or "landings." (work in progress...)
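The scaling recipe above can be sketched in a few lines of Python, using the same numbers as the worked example ($A_1 = \frac{1}{4}$, $D_1 = 1$):

```python
import math

A0, D0 = 2.0, 2.0   # max acceleration and travel of the base parabola
A1, D1 = 0.25, 1.0  # desired max acceleration and distance

x_s = math.sqrt((A1 * D0) / (A0 * D1))        # X-scale
y_s = D1 / D0                                 # Y-scale
t_total = 2 * math.sqrt((A0 * D1) / (A1 * D0))  # total travel time
print(x_s, y_s, t_total)                      # 0.5, 0.5 and 4 time units

def y(x):
    """Scaled parabolic ramp; x runs from -t_total/2 to +t_total/2."""
    u = x_s * x
    return u * (2 - abs(u)) * y_s

# The endpoints are exactly D1 apart:
print(y(t_total / 2) - y(-t_total / 2))
```

The same scaffolding works unchanged for the sinusoidal and quintic shapes, with their own $A_0$ in place of the parabola's $2$.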
In many studies, we measure more than one variable for each individual. For example, we measure precipitation and plant growth, or number of young with nesting habitat, or soil erosion and volume of water. We collect pairs of data and instead of examining each variable separately (univariate data), we want to find ways to describe bivariate data, in which two variables are measured on each subject in our sample. Given such data, we begin by determining if there is a relationship between these two variables. As the values of one variable change, do we see corresponding changes in the other variable? We can describe the relationship between these two variables graphically and numerically. We begin by considering the concept of correlation. Definition: Correlation Correlation is defined as the statistical association between two variables. A correlation exists between two variables when one of them is related to the other in some way. A scatterplot is the best place to start. A scatterplot (or scatter diagram) is a graph of the paired (x, y) sample data with a horizontal x-axis and a vertical y-axis. Each individual (x, y) pair is plotted as a single point. Figure 1. Scatterplot of chest girth versus length. In this example, we plot bear chest girth (y) against bear length (x). When examining a scatterplot, we should study the overall pattern of the plotted points. In this example, we see that the value for chest girth does tend to increase as the value of length increases. We can see an upward slope and a straight-line pattern in the plotted data points. A scatterplot can identify several different types of relationships between two variables. A relationship has no correlation when the points on a scatterplot do not show any pattern. A relationship is non-linear when the points on a scatterplot follow a pattern but not a straight line. A relationship is linear when the points on a scatterplot follow a somewhat straight line pattern.
This is the relationship that we will examine. Linear relationships can be either positive or negative. Positive relationships have points that incline upwards to the right. As x values increase, y values increase. As x values decrease, y values decrease. For example, when studying plants, height typically increases as diameter increases. Figure 2. Scatterplot of height versus diameter. Negative relationships have points that decline downward to the right. As x values increase, y values decrease. As x values decrease, y values increase. For example, as wind speed increases, wind chill temperature decreases. Figure 3. Scatterplot of temperature versus wind speed. Non-linear relationships have an apparent pattern, just not linear. For example, as age increases, height increases up to a point, then levels off after reaching a maximum height. Figure 4. Scatterplot of height versus age. When two variables have no relationship, there is no straight-line relationship or non-linear relationship. When one variable changes, it does not influence the other variable. Figure 5. Scatterplot of growth versus area. Linear Correlation Coefficient Because visual examinations are largely subjective, we need a more precise and objective measure to define the correlation between the two variables. To quantify the strength and direction of the relationship between two variables, we use the linear correlation coefficient: $$r = \dfrac {\sum \dfrac {(x_i-\bar x)}{s_x} \dfrac {(y_i - \bar y)}{s_y}}{n-1}$$ where \(\bar x\) and \(s_x\) are the sample mean and sample standard deviation of the x’s, and \(\bar y\) and \(s_y\) are the mean and standard deviation of the y’s. The sample size is n.
An alternate computation of the correlation coefficient is: $$r = \dfrac {S_{xy}}{\sqrt {S_{xx}S_{yy}}}$$ where $$S_{xx} = \sum x^2 - \dfrac {(\sum x)^2}{n}$$ $$S_{xy} = \sum xy - \dfrac {(\sum x)(\sum y )}{n}$$ $$S_{yy} = \sum y^2 - \dfrac {(\sum y)^2}{n}$$ The linear correlation coefficient is also referred to as Pearson’s product moment correlation coefficient in honor of Karl Pearson, who originally developed it. This statistic numerically describes how strong the straight-line or linear relationship is between the two variables and the direction, positive or negative. The properties of “r”: It is always between -1 and +1. It is a unitless measure, so “r” would be the same value whether you measured the two variables in pounds and inches or in grams and centimeters. Positive values of “r” are associated with positive relationships. Negative values of “r” are associated with negative relationships. Examples of Positive Correlation Figure 6. Examples of positive correlation. Examples of Negative Correlation Figure 7. Examples of negative correlation. Note Correlation is not causation!!! Just because two variables are correlated does not mean that one variable causes another variable to change. Examine these next two scatterplots. Both of these data sets have an r = 0.01, but they are very different. Plot 1 shows little linear relationship between x and y variables. Plot 2 shows a strong non-linear relationship. Pearson’s linear correlation coefficient only measures the strength and direction of a linear relationship. Ignoring the scatterplot could result in a serious mistake when describing the relationship between two variables. Figure 8. Comparison of scatterplots. When you investigate the relationship between two variables, always begin with a scatterplot. This graph allows you to look for patterns (both linear and non-linear). The next step is to quantitatively describe the strength and direction of the linear relationship using “r”.
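Both formulas for $r$ give the same answer, which is easy to verify on a small sample (the x and y values below are illustrative, not the bear measurements from the figures):

```python
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)

# Shortcut sums:
Sxx = sum(v * v for v in x) - sum(x)**2 / n
Syy = sum(v * v for v in y) - sum(y)**2 / n
Sxy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n
r_short = Sxy / math.sqrt(Sxx * Syy)

# Definition via standardized scores:
xbar, ybar = sum(x) / n, sum(y) / n
sx = math.sqrt(Sxx / (n - 1))
sy = math.sqrt(Syy / (n - 1))
r_def = sum((a - xbar) / sx * (b - ybar) / sy
            for a, b in zip(x, y)) / (n - 1)

print(r_short, r_def)   # the two computations agree and lie in [-1, 1]
```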
Once you have established that a linear relationship exists, you can take the next step in model building.
Reindeer are a feature of the Christmas season. They were added with the introduction of the Christmas season in version 1.04. Reindeer will occasionally jump across the screen (from left to right) and give cookies when clicked. As of v2.002, they vary occasionally in size and now wobble while bouncing across the screen. There are five achievements related to Reindeer, as well as three upgrades obtainable from Santa which make Reindeer twice as common, move twice as slow, and give twice as many cookies.

Reward

You will get one minute's worth of cookie production or 25 cookies, whichever is higher (if you have Wrinklers, the unwithered production rate is used for the calculation). With the "Ho ho ho-flavored frosting" upgrade, you will receive twice as many cookies (two minutes' worth of production). The amount dropped is furthermore affected by Golden Cookie effects like Frenzy, Clot, Dragon Harvest or Elder Frenzy. Note that Reindeer rewards are, in contrast to Golden Cookie rewards, not limited by your current cookies in the bank. A deer during a Frenzy gives cookies equal to 10 minutes and 30 seconds of regular production (two minutes * (7 * 0.75)). A deer during an Elder Frenzy, also known as "Eldeer" (itself an achievement), is one of the highest-rewarding effect combos in the game, giving cookies equal to 11 hours and 6 minutes of regular production (two minutes * (666 * 0.5)). Therefore, it's often advised to "synchronize" Golden Cookies and Reindeer during the Christmas season, meaning to not click the Golden Cookie (or Wrath Cookie) until a Reindeer appears or might appear before a potential Elder Frenzy would end. Since the 2.002 update, it is possible for Elder Frenzies to overlap with Frenzies or Dragon Harvests (or even Building Specials, if the Distilled Essence of Redoubled Luck heavenly upgrade makes two Golden/Wrath Cookies spawn at once), making it possible to get boosted Eldeers.
Frenzy + Elder Frenzy Reindeer can produce almost 2.5 days of regular production.

Spawn rates

Reindeer follow the general spawning mechanism, which can be approximated by the following functions:

$ \bar{p}(t)=\begin{cases}0&T_\text{min}>t\\ p(t-T_\text{min})&T_\text{max}>t\ge T_\text{min}\\ 0&t\ge T_\text{max}\\ \end{cases} $ $ \bar{P}(t)=\begin{cases}0&T_\text{min}>t\\ P(t-T_\text{min})&T_\text{max}>t\ge T_\text{min}\\ 1&t\ge T_\text{max}\\ \end{cases} $ $ \bar{P}^{-1}(c)=\begin{cases} \left(\frac{-T^5}{5}\ln(1-c)\right)^{\frac{1}{6}}+T_\text{min}&1-\exp(-5T)\ge c> 0\\ T_\text{max}&c> 1-\exp(-5T)\\ \end{cases} $

where $p$, $P$ and $P^{-1}$ are the probability density function, cumulative distribution function and inverse cumulative distribution function, respectively. All times are in seconds. For reindeer, $T_\text{min} = 180$ seconds and $T_\text{max} = 360$ seconds initially, but several upgrades alter $T_\text{max}$ and $T_\text{min}$: both times are halved by the "Reindeer baking grounds" upgrade, and the heavenly upgrade "Starsnow" reduces them by a further 5%. See the table below for some common probabilities, assuming 30 FPS.

Time between Reindeer spawns (seconds):

| Chance | Without upgrade | With Starsnow | With Reindeer baking grounds | With both |
|---|---|---|---|---|
| 0% | 180 | 171 | 90 | 85.5 |
| 0.1% | 198.3 | 188.5 | 100.3 | 95.3 |
| 1% | 206.9 | 196.8 | 105.1 | 100.0 |
| 10% | 219.8 | 209.1 | 112.3 | 106.9 |
| 25% | 227.1 | 216.1 | 116.4 | 110.8 |
| 33.3% | 229.8 | 218.7 | 117.9 | 112.3 |
| Average | 233.733 | 222.484 | 120.153 | 114.391 |
| 50% | 234.5 | 223.2 | 120.6 | 114.8 |
| 66.6% | 238.8 | 227.4 | 123.0 | 117.1 |
| 75% | 241.2 | 229.6 | 124.3 | 118.4 |
| 90% | 246.5 | 234.7 | 127.3 | 121.3 |
| 99% | 254.7 | 242.5 | 131.9 | 125.6 |
| 99.9% | 259.9 | 247.5 | 134.8 | 128.4 |
| 100% | 360 | 342 | 180 | 171 |

Some have reported that reindeer and Golden Cookies do not appear unless the browser is refreshed, due to a bug.

Upgrades

When clicking a Reindeer there is also a chance that it will drop one of the seven Christmas-themed flavored cookies. Initially, there is a 20% chance of receiving one of the cookies. The "Santa's bottomless bag" upgrade, the heavenly upgrade "Starsnow" and the "Let it snow" achievement each increase the chance.

Base rate for Christmas-themed cookie drops:

| | Without "Santa's bottomless bag" | With "Santa's bottomless bag" |
|---|---|---|
| Without "Let it snow" | 20% | 28% |
| With "Let it snow" | 40% | 46% |

Like the Halloween-themed cookies, if you already have the cookie chosen at random, it will not unlock a new cookie. So on each Reindeer click, the actual chance of unlocking a new cookie type is equal to:

$ r \cdot \left(1-\frac{N}{7}\right) $

where $N$ is the number of upgrades already unlocked and $r$ is the rate above.

Probabilities for Christmas-themed cookies:

| Cookies unlocked | r = 20% | r = 28% | r = 40% | r = 46% |
|---|---|---|---|---|
| 0 | 20% | 28% | 40% | 46% |
| 1 | 17.1% | 24% | 34.3% | 39.4% |
| 2 | 14.3% | 20% | 28.6% | 32.9% |
| 3 | 11.4% | 16% | 22.9% | 26.3% |
| 4 | 8.57% | 12% | 17.1% | 19.7% |
| 5 | 5.71% | 8% | 11.4% | 13.1% |
| 6 | 2.86% | 4% | 5.71% | 6.57% |

These are the flavored cookies that can be dropped (each has a minimum 20% drop chance from finding a reindeer during the Christmas season):

| Name | Effect | Description | ID |
|---|---|---|---|
| Christmas tree biscuits | Cookie production multiplier +2%. | "Whose pine is it anyway?" | 143 |
| Snowflake biscuits | Cookie production multiplier +2%. | "Mass-produced to be unique in every way." | 144 |
| Snowman biscuits | Cookie production multiplier +2%. | "It's frosted. Doubly so." | 145 |
| Holly biscuits | Cookie production multiplier +2%. | "You don't smooch under these ones. That would be the mistletoe (which, botanically, is a smellier variant of the mistlefinger)." | 146 |
| Candy cane biscuits | Cookie production multiplier +2%. | "It's two treats in one! (Further inspection reveals the frosting does not actually taste like peppermint, but like mundane sugary frosting.)" | 147 |
| Bell biscuits | Cookie production multiplier +2%. | "What do these even have to do with christmas? Who cares, ring them in!" | 148 |
| Present biscuits | Cookie production multiplier +2%. | "The prequel to future biscuits. Watch out!" | 149 |

Achievements

There are five achievements related to Reindeer:

| Name | Description | ID |
|---|---|---|
| Let it snow [note 1] | Unlock every Christmas-themed cookie. Owning this achievement makes Christmas-themed cookies drop more frequently in future playthroughs. | 111 |
| Oh deer | Pop 1 reindeer. | 112 |
| Sleigh of hand [note 2] | Pop 50 reindeer. | 113 |
| Reindeer sleigher [note 2] | Pop 200 reindeer. | 114 |
| Eldeer | Pop a reindeer during an elder frenzy. | 265 |

Debug

You may use the debug upgrade "Reindeer season" to spawn reindeer faster (every 0.6 seconds). You can check the Cheating page for details on how to activate it.

Notes

[note 2] Unlike most "number of actions" achievements, the Wrinkler and Reindeer achievements are counted in a single game, not all time: ascending will reset the counter.

Trivia

Reindeer have random names which will show up when clicked. They are the traditional names of Santa's reindeer: Dasher and Dancer, Prancer and Vixen, Comet and Cupid and Donner and Blitzen, and the most famous reindeer of all, Rudolph, which is a relatively modern one (first appeared in a 1939 booklet written by Robert L. May). The flavor text for Christmas tree biscuits refers to the improv TV show Whose Line is it Anyway?, hosted by Clive Anderson (original UK version) and Drew Carey or Aisha Tyler (US version).
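The unlock-chance formula $r \cdot (1 - N/7)$ reproduces the probability table above; a quick Python check:

```python
# Chance that clicking a reindeer unlocks a *new* Christmas cookie.

def new_cookie_chance(r, n_unlocked, total=7):
    """r = base drop rate, n_unlocked = flavored cookies already owned."""
    return r * (1 - n_unlocked / total)

for n in range(7):
    row = [round(100 * new_cookie_chance(r, n), 2)
           for r in (0.20, 0.28, 0.40, 0.46)]
    print(n, row)   # matches the probability table row by row
```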
The Holly biscuits were temporarily renamed Mistletoe biscuits in version 1.0403, before being changed back in the Valentine's Day update. The icon still had red berries during this time, even though mistletoe berries are white. Reindeer are not affected by the Golden Switch, and will still spawn even when it's on. Despite this, Distilled Essence of Redoubled Luck can make 2 reindeer appear at once, just like it does with Golden/Wrath Cookies. If you click a reindeer while Holobore is slotted, Holobore's negative effect does not trigger. Shortly after loading the game, it is impossible for a reindeer to appear without any form of cheating. Using the command 'Game.seasonPopup.spawn()' would spawn a bouncing golden cookie instead of a reindeer. The golden cookie would do nothing whatsoever when clicked, as it counts as neither a reindeer nor a golden cookie. In order to spawn a reindeer in that amount of time, use the command: Game.seasonPopup.time = Game.seasonPopup.maxTime;
Search Now showing items 1-10 of 26 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... 
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ...

K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...

Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...

Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...

Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
This elliptic curve has the smallest conductor amongst elliptic curves over $\Q$ of rank 3.

magma: E := EllipticCurve("5077a1"); sage: E = EllipticCurve("5077a1"); gp: E = ellinit("5077a1")

Weierstrass equation: \( y^2 + y = x^{3} - 7 x + 6 \)

Mordell-Weil group structure: free of rank 3 (every non-zero point has infinite order)

Mordell-Weil generators and heights:
\(P\) = \( \left(-2, 3\right) \), \( \left(-1, 3\right) \), \( \left(0, 2\right) \)
\(\hat{h}(P)\) ≈ 1.36857250535, 1.20508110419, 0.990906333153

Integral points: \( \left(-3, 0\right) \), \( \left(-2, 3\right) \), \( \left(-1, 3\right) \), \( \left(0, 2\right) \), \( \left(1, 0\right) \), \( \left(2, 0\right) \), \( \left(3, 3\right) \), \( \left(4, 6\right) \), \( \left(8, 21\right) \), \( \left(11, 35\right) \), \( \left(14, 51\right) \), \( \left(21, 95\right) \), \( \left(37, 224\right) \), \( \left(52, 374\right) \), \( \left(93, 896\right) \), \( \left(342, 6324\right) \), \( \left(406, 8180\right) \), \( \left(816, 23309\right) \)

Invariants
Conductor: \( 5077 \) (prime)
Discriminant: \( 5077 \)
j-invariant: \( \frac{37933056}{5077} \) = \(2^{12} \cdot 3^{3} \cdot 7^{3} \cdot 5077^{-1}\)
Endomorphism ring: \(\Z\) (no complex multiplication)
Sato-Tate group: $\mathrm{SU}(2)$

BSD invariants
Rank: \(3\)
Regulator: \(0.417143558758\)
Real period: \(4.15168798309\)
Tamagawa product: \( 1 \)
Torsion order: \(1\)
Analytic order of Ш: \(1\) (rounded)
Modular degree: 1984
\( \Gamma_0(N) \)-optimal: yes
Manin constant: 1
\( L^{(3)}(E,1)/3! \) ≈ \( 1.73184990012 \)

Local data
prime \(5077\): Tamagawa number \(1\), Kodaira symbol \( I_{1} \), non-split multiplicative reduction, root number 1, ord(\(N\)) = 1, ord(\(\Delta\)) = 1, ord\((j)_{-}\) = 1

The 2-adic representation attached to this elliptic curve is surjective.

$p$-adic data
Note: \(p\)-adic regulator data only exists for primes \(p\ge5\) of good ordinary reduction. For the primes up to 47, reduction is supersingular at $p$ = 2, 3, 37, 41 and ordinary at the remaining primes; at $p$ = 5077 it is non-split multiplicative. The $\lambda$-invariants are 4,3 at $p$ = 2, 3,3 at the other supersingular primes, and 3 at each ordinary prime; the $\mu$-invariants are all 0. An entry ? indicates that the invariants have not yet been computed (here, at $p$ = 5077).

Isogenies
This curve has no rational isogenies. Its isogeny class 5077a consists of this curve only.

Growth of torsion in number fields
The number fields $K$ of degree up to 7 such that $E(K)_{\rm tors}$ is strictly larger than $E(\Q)_{\rm tors}$ (which is trivial) are as follows:
degree 3: $K$ = 3.3.20308.1, $E(K)_{\rm tors}$ = \(\Z/2\Z\) (base-change curve not in database)
degree 6: $K$ = 6.6.2093830264528.1, $E(K)_{\rm tors}$ = \(\Z/2\Z \times \Z/2\Z\) (base-change curve not in database)
We only show fields where the torsion growth is primitive. For each field $K$ we either show its label, or a defining polynomial when $K$ is not in the database.

Additional information: Historical information about the Gauss elliptic curve
In 1985, Buhler, Gross and Zagier used the celebrated Gross-Zagier Theorem on heights of Heegner points (see Gross, Benedict H.; Zagier, Don B.
(1986), "Heegner points and derivatives of L-series", Inventiones Mathematicae 84 (2): 225–320, [10.1007/BF01388809]) to prove that the L-function of this curve has a zero of order 3 at its critical point $s=1$, thus establishing the first part of the Birch and Swinnerton-Dyer conjecture for this curve (see Math. Comp. 44 (1985), 473-481: [10.1090/S0025-5718-1985-0777279-X]). This was the first time that BSD had been established for any elliptic curve of rank $3$. To this day, it is not possible, even in principle, to establish BSD for any curve of rank $4$ or greater, since there is no known method for rigorously establishing the value of the analytic rank when it is greater than $3$. Via Goldfeld's method, which required the use of an L-function of analytic rank at least $3$, this elliptic curve also found an application in the context of obtaining explicit lower bounds for the class numbers of imaginary quadratic fields. This solved Gauss's class number problem, first posed by Gauss in 1801 in his book Disquisitiones Arithmeticae (Section V, Articles 303 and 304).
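As a quick sanity check, the integral points listed above can be verified directly against the Weierstrass equation \( y^2 + y = x^3 - 7x + 6 \). A minimal self-contained Python sketch (plain exact integer arithmetic, not the magma/sage/gp interfaces):

```python
# Check that every listed integral point satisfies y^2 + y = x^3 - 7x + 6,
# the Weierstrass equation of curve 5077a1.
def on_curve(x, y):
    return y * y + y == x**3 - 7 * x + 6

integral_points = [
    (-3, 0), (-2, 3), (-1, 3), (0, 2), (1, 0), (2, 0), (3, 3), (4, 6),
    (8, 21), (11, 35), (14, 51), (21, 95), (37, 224), (52, 374),
    (93, 896), (342, 6324), (406, 8180), (816, 23309),
]

assert all(on_curve(x, y) for x, y in integral_points)
print("all", len(integral_points), "integral points verified")
```

Exact integer arithmetic avoids any floating-point concerns, so the check is conclusive for these points.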
Let $$z(x,y)=\int_{1}^{x^{2}-y^{2}}\left[\int_{0}^{u}\sin(t^{2})dt\right]du.$$ Calculate $$\frac{\partial^{2}z}{\partial x\partial y}$$ I tried to solve this using the Fundamental Theorem of Calculus. I also found a solution like this: using the Fundamental Theorem of Calculus, we get $$\frac{\partial z}{\partial y}=\left[\int_{0}^{x^{2}-y^{2}}\sin(t^{2})dt\right]\cdot(-2y)$$ I can't understand why the limits of integration have changed, or why I have to multiply by the partial derivative with respect to $y$ of $x^{2}-y^{2}$. Thanks.
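For completeness, the step the question asks about is the Fundamental Theorem of Calculus combined with the chain rule, applied twice. Writing \(F(u)=\int_0^u \sin(t^2)\,dt\), a worked sketch:

```latex
% z(x,y) = \int_1^{x^2 - y^2} F(u)\, du, where F(u) = \int_0^u \sin(t^2)\, dt.
% The FTC gives \frac{d}{dv}\int_1^v F(u)\,du = F(v); the upper limit v = x^2 - y^2
% depends on y, so the chain rule supplies the factor \partial v/\partial y = -2y:
\frac{\partial z}{\partial y}
  = F(x^2 - y^2)\cdot\frac{\partial}{\partial y}\,(x^2 - y^2)
  = -2y \int_0^{x^2 - y^2} \sin(t^2)\, dt.
% Differentiating in x uses the FTC and chain rule again
% (now \partial v/\partial x = 2x, and F'(v) = \sin(v^2)):
\frac{\partial^2 z}{\partial x\,\partial y}
  = -2y\,\sin\!\bigl((x^2 - y^2)^2\bigr)\cdot 2x
  = -4xy\,\sin\!\bigl((x^2 - y^2)^2\bigr).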
$\mathbf{Question:}$ $(H,+)$ is a subgroup of $(\mathbb{R},+)$ such that $H \cap [-1,1]$ is finite and contains elements other than $0$. Show that $(H,+)$ must be cyclic. $\mathbf{Attempt:}$ Since $\{0\}\cup T= H \cap [-1,1]$ is finite [$0 \not\in T$, $\emptyset \subsetneq T$], we can completely enumerate $T$. Let $T =\{a_1, a_2, ..., a_m\}$ be the complete list of elements. Let $Q= \{a_i \in T: a_i>0 \}$. Let $a_t$ be the minimal element of $Q$. Claim: $H=\langle a_t \rangle$. Suppose the claim is false. Then $h \notin \langle a_t \rangle$ for some $h \in H$ [we pick $h$ such that $h>0$; if not, we pick $-h$ instead]. So we can find a positive integer $p$ such that $p a_t< h <(p+1)a_t \implies 0<h-pa_t<a_t$. Since $h-pa_t \in H$ and $0<h-pa_t<a_t \le 1$, the element $h-pa_t$ lies in $H \cap (0,1] \subseteq Q$, which contradicts the fact that $a_t$ is the minimal element of $Q$. Is this proof valid? Kindly verify.
Hypothesis Test about the Population Mean (μ) when the Population Standard Deviation (σ) is Known

We are going to examine two equivalent ways to perform a hypothesis test: the classical approach and the p-value approach.

The classical approach is based on standard deviations. This method compares the test statistic (a Z-score) to a critical value (also a Z-score) from the standard normal table. If the test statistic falls in the rejection zone, you reject the null hypothesis.

The p-value approach is based on area under the normal curve. This method compares the area associated with the test statistic to alpha (α), the level of significance (which is also an area under the normal curve). If the p-value is less than alpha, you would reject the null hypothesis. As a past student poetically said: If the p-value is a wee value, Reject Ho.

Both methods must have: Data from a random sample. Verification of the assumption of normality. A null and alternative hypothesis. A criterion that determines if we reject or fail to reject the null hypothesis. A conclusion that answers the question.

There are four steps required for a hypothesis test: State the null and alternative hypotheses. State the level of significance and the critical value. Compute the test statistic. State a conclusion.

The Classical Method for Testing a Claim about the Population Mean (μ) when the Population Standard Deviation (σ) is Known

Example \(\PageIndex{1}\): A Two-sided Test A forester studying diameter growth of red pine believes that the mean diameter growth will be different from the known mean growth of 1.35 inches/year if a fertilization treatment is applied to the stand. He conducts his experiment, collects data from a sample of 32 plots, and gets a sample mean diameter growth of 1.6 in./year. The population standard deviation for this stand is known to be 0.46 in./year. Does he have enough evidence to support his claim?

Solution

Step 1) State the null and alternative hypotheses.
Ho: μ = 1.35 in./year H1: μ ≠ 1.35 in./year

Step 2) State the level of significance and the critical value. We will choose a level of significance of 5% (α = 0.05). For a two-sided question, we need a two-sided pair of critical values, \(-Z_{\alpha/2}\) and \(+Z_{\alpha/2}\). The level of significance is divided by 2, since the "not equal" alternative allows an extreme outcome in either direction. We must have two rejection zones that can deal with either a greater-than or less-than outcome (to the right (+) or to the left (−)). We need to find the Z-score associated with the area of 0.025. The red areas are equal to α/2 = 0.05/2 = 0.025, or 2.5% of the area under the normal curve. Go into the body of values and find the negative Z-score associated with the area 0.025.

Figure 1. The rejection zone for a two-sided test.

The negative critical value is −1.96. Since the curve is symmetric, we know that the positive critical value is 1.96. ±1.96 are the critical values. These values set up the rejection zones. If the test statistic falls within these red rejection zones, we reject the null hypothesis.

Step 3) Compute the test statistic. The test statistic is the number of standard deviations the sample mean is from the known mean. It is also a Z-score, just like the critical value. $$z = \frac {\bar {x} -\mu}{\frac {\sigma}{\sqrt {n}}}$$ For this problem, the test statistic is $$z = \frac {1.6-1.35}{\frac {0.46}{\sqrt {32}}} =3.07$$

Step 4) State a conclusion. Compare the test statistic to the critical value. If the test statistic falls into the rejection zones, reject the null hypothesis. In other words, if the test statistic is greater than +1.96 or less than −1.96, reject the null hypothesis.

Figure 2. The critical values for a two-sided test when α = 0.05.

In this problem, the test statistic falls in the red rejection zone. The test statistic of 3.07 is greater than the critical value of 1.96. We will reject the null hypothesis.
We have enough evidence to support the claim that the mean diameter growth is different from (not equal to) 1.35 in./year.

Example \(\PageIndex{2}\): A Right-sided Test A researcher believes that there has been an increase in the average farm size in his state since the last study five years ago. The previous study reported a mean size of 450 acres with a population standard deviation (σ) of 167 acres. He samples 45 farms and gets a sample mean of 485.8 acres. Is there enough information to support his claim?

Solution

Step 1) State the null and alternative hypotheses. Ho: μ = 450 acres H1: μ > 450 acres

Step 2) State the level of significance and the critical value. We will choose a level of significance of 5% (α = 0.05). For a one-sided question, we need a one-sided positive critical value \(Z_{\alpha}\). The level of significance is all in the right tail (the rejection zone is just on the right side). We need to find the Z-score associated with the 5% area in the right tail.

Figure 3. Rejection zone for a right-sided hypothesis test.

Go into the body of values in the standard normal table and find the Z-score that separates the lower 95% from the upper 5%. The critical value is 1.645. This value sets up the rejection zone.

Step 3) Compute the test statistic. The test statistic is the number of standard deviations the sample mean is from the known mean. It is also a Z-score, just like the critical value. $$z = \frac {\bar {x} -\mu}{\frac {\sigma}{\sqrt {n}}}$$ For this problem, the test statistic is $$z = \frac {485.8-450}{\frac {167}{\sqrt {45}}} =1.44$$

Step 4) State a conclusion. Compare the test statistic to the critical value.

Figure 4. The critical value for a right-sided test when α = 0.05.

The test statistic does not fall in the rejection zone. It is less than the critical value. We fail to reject the null hypothesis. We do not have enough evidence to support the claim that the mean farm size has increased from 450 acres.
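The arithmetic in Steps 3 and 4 of the two examples above can be sketched in a few lines of Python; the critical values 1.96 and 1.645 are the table values quoted in the text:

```python
import math

def z_statistic(xbar, mu, sigma, n):
    """Number of standard errors between the sample mean and the claimed mean."""
    return (xbar - mu) / (sigma / math.sqrt(n))

# Two-sided test (red pine diameter growth): alpha = 0.05, critical values +/-1.96
z1 = z_statistic(1.6, 1.35, 0.46, 32)
print(round(z1, 2), abs(z1) > 1.96)    # falls in the rejection zone -> reject Ho

# Right-sided test (farm size): alpha = 0.05, critical value 1.645
z2 = z_statistic(485.8, 450, 167, 45)
print(round(z2, 2), z2 > 1.645)        # not in the rejection zone -> fail to reject Ho
```

The rounded statistics match the hand computations above (3.07 and 1.44), and the boolean comparisons reproduce the two conclusions.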
Example \(\PageIndex{3}\): A Left-sided Test A researcher believes that there has been a reduction in the mean number of hours that college students spend preparing for final exams. A national study stated that students at a 4-year college spend an average of 23 hours preparing for 5 final exams each semester, with a population standard deviation of 7.3 hours. The researcher sampled 227 students and found a sample mean study time of 19.6 hours. Does this indicate that the average study time for final exams has decreased? Use a 1% level of significance to test this claim.

Solution

Step 1) State the null and alternative hypotheses. Ho: μ = 23 hours H1: μ < 23 hours

Step 2) State the level of significance and the critical value. This is a left-sided test, so alpha (0.01) is all in the left tail.

Figure 9. The rejection zone for a left-sided hypothesis test.

Go into the body of values in the standard normal table and find the Z-score that defines the lower 1% of the area. The critical value is −2.33. This value sets up the rejection zone.

Step 3) Compute the test statistic. The test statistic is the number of standard deviations the sample mean is from the known mean. It is also a Z-score, just like the critical value. $$z = \frac {\bar {x} - \mu}{\frac {\sigma} {\sqrt {n}}}$$ For this problem, the test statistic is $$z= \frac {19.6-23}{\frac {7.3}{\sqrt {227}}} = -7.02$$

Step 4) State a conclusion. Compare the test statistic to the critical value.

Figure 10. The critical value for a left-sided test when α = 0.01.

The test statistic falls in the rejection zone. The test statistic of −7.02 is less than the critical value of −2.33. We reject the null hypothesis. We have sufficient evidence to support the claim that the mean final exam study time has decreased below 23 hours.

Testing a Hypothesis using P-values

The p-value is the probability of observing our sample mean given that the null hypothesis is true. It is the area under the curve to the left or right of the test statistic.
If the probability of observing such a sample mean is very small (less than the level of significance), we would reject the null hypothesis. Computations for the p-value depend on whether it is a one- or two-sided test.

Steps for a hypothesis test using p-values: State the null and alternative hypotheses. State the level of significance. Compute the test statistic and find the area associated with it (this is the p-value). Compare the p-value to alpha (α) and state a conclusion.

Instead of comparing the Z-score test statistic to the Z-score critical value, as in the classical method, we compare the area of the test statistic to the area of the level of significance.

Note: The Decision Rule. If the p-value is less than alpha, we reject the null hypothesis.

Computing P-values: If it is a two-sided test (the alternative claim is ≠), the p-value is equal to two times the area beyond the absolute value of the test statistic. If the test is a left-sided test (the alternative claim is "<"), then the p-value is equal to the area to the left of the test statistic. If the test is a right-sided test (the alternative claim is ">"), then the p-value is equal to the area to the right of the test statistic.

Let's look at Example \(\PageIndex{1}\) again. A forester studying diameter growth of red pine believes that the mean diameter growth will be different from the known mean growth of 1.35 in./year if a fertilization treatment is applied to the stand. He conducts his experiment, collects data from a sample of 32 plots, and gets a sample mean diameter growth of 1.6 in./year. The population standard deviation for this stand is known to be 0.46 in./year. Does he have enough evidence to support his claim?

Step 1) State the null and alternative hypotheses. Ho: μ = 1.35 in./year H1: μ ≠ 1.35 in./year

Step 2) State the level of significance. We will choose a level of significance of 5% (α = 0.05).

Step 3) Compute the test statistic.
For this problem, the test statistic is: $$z=\frac{1.6-1.35}{\frac{0.46}{\sqrt {32}}}=3.07$$ The p-value is two times the area beyond the absolute value of the test statistic (because the alternative claim is "not equal").

Figure 11. The p-value compared to the level of significance.

Look up the area for the Z-score 3.07 in the standard normal table. The area (probability) is equal to 1 − 0.9989 = 0.0011. Multiply this by 2 to get the p-value = 2 × 0.0011 = 0.0022.

Step 4) Compare the p-value to alpha and state a conclusion. Use the Decision Rule (if the p-value is less than α, reject H0). In this problem, the p-value (0.0022) is less than alpha (0.05). We reject the H0. We have enough evidence to support the claim that the mean diameter growth is different from 1.35 inches/year.

Let's look at Example \(\PageIndex{2}\) again. A researcher believes that there has been an increase in the average farm size in his state since the last study five years ago. The previous study reported a mean size of 450 acres with a population standard deviation (σ) of 167 acres. He samples 45 farms and gets a sample mean of 485.8 acres. Is there enough information to support his claim?

Step 1) State the null and alternative hypotheses. Ho: μ = 450 acres H1: μ > 450 acres

Step 2) State the level of significance. We will choose a level of significance of 5% (α = 0.05).

Step 3) Compute the test statistic. For this problem, the test statistic is $$z= \frac {485.8-450}{\frac {167}{\sqrt {45}}}=1.44$$ The p-value is the area to the right of the Z-score 1.44 (the hatched area). This is equal to 1 − 0.9251 = 0.0749. The p-value is 0.0749.

Figure 12. The p-value compared to the level of significance for a right-sided test.

Step 4) Compare the p-value to alpha and state a conclusion. Use the Decision Rule. In this problem, the p-value (0.0749) is greater than alpha (0.05), so we fail to reject the H0. The area of the test statistic is greater than the area of alpha (α). We fail to reject the null hypothesis.
We do not have enough evidence to support the claim that the mean farm size has increased.

Let's look at Example \(\PageIndex{3}\) again. A researcher believes that there has been a reduction in the mean number of hours that college students spend preparing for final exams. A national study stated that students at a 4-year college spend an average of 23 hours preparing for 5 final exams each semester, with a population standard deviation of 7.3 hours. The researcher sampled 227 students and found a sample mean study time of 19.6 hours. Does this indicate that the average study time for final exams has decreased? Use a 1% level of significance to test this claim.

Step 1) State the null and alternative hypotheses. H0: μ = 23 hours H1: μ < 23 hours

Step 2) State the level of significance. This is a left-sided test, so alpha (0.01) is all in the left tail.

Step 3) Compute the test statistic. For this problem, the test statistic is $$z=\frac {19.6-23}{\frac {7.3}{\sqrt {227}}}=-7.02$$ The p-value is the area to the left of the test statistic (the little black area to the left of −7.02). The Z-score of −7.02 is not on the standard normal table. The smallest probability on the table is 0.0002. We know that the area for the Z-score −7.02 is smaller than this area (probability). Therefore, the p-value is < 0.0002.

Figure 13. The p-value compared to the level of significance for a left-sided test.

Step 4) Compare the p-value to alpha and state a conclusion. Use the Decision Rule. In this problem, the p-value (p < 0.0002) is less than alpha (0.01), so we reject the H0. The area of the test statistic is much less than the area of alpha (α). We reject the null hypothesis. We have enough evidence to support the claim that the mean final exam study time has decreased below 23 hours.

Both the classical method and the p-value method for testing a hypothesis will arrive at the same conclusion.
In the classical method, the critical Z-score is the number on the z-axis that defines the level of significance (α). The test statistic converts the sample mean to units of standard deviation (a Z-score). If the test statistic falls in the rejection zone defined by the critical value, we will reject the null hypothesis. In this approach, two Z-scores, which are numbers on the z-axis, are compared.

In the p-value approach, the p-value is the area associated with the test statistic. In this method, we compare α (which is also an area under the curve) to the p-value. If the p-value is less than α, we reject the null hypothesis. The p-value is the probability of observing such a sample mean when the null hypothesis is true. If the probability is too small (less than the level of significance), then we believe we have enough statistical evidence to reject the null hypothesis and support the alternative claim.

Software Solutions

Minitab (referring to Example \(\PageIndex{3}\))

One-Sample Z
Test of mu = 23 vs < 23
The assumed standard deviation = 7.3

                       99% Upper
  N    Mean  SE Mean     Bound      Z      P
227  19.600    0.485    20.727  -7.02  0.000

Excel

Excel does not offer 1-sample hypothesis testing.
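The table lookups in the p-value examples above can also be reproduced without a statistics package, since the standard normal CDF is available through the error function in Python's math module. A sketch using the same three test statistics:

```python
import math

def phi(z):
    """Standard normal CDF, written in terms of the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_value(z, tail):
    """tail is 'two', 'right', or 'left', matching the three rules in the text."""
    if tail == "two":
        return 2.0 * (1.0 - phi(abs(z)))
    if tail == "right":
        return 1.0 - phi(z)
    return phi(z)  # left-sided

# Two-sided, z = 3.07: p is about 0.002 (the 4-digit table gives 0.0022); reject at 0.05
print(p_value(3.07, "two"))
# Right-sided, z = 1.44: p is about 0.0749; fail to reject at 0.05
print(p_value(1.44, "right"))
# Left-sided, z = -7.02: far below 0.0002, the smallest table entry; reject at 0.01
print(p_value(-7.02, "left") < 0.0002)
```

Small differences from the table values (for example 0.0021 versus 0.0022 in the two-sided case) come from the table's four-decimal rounding of the CDF, not from the method.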
Aldehydes, Ketones and Carboxylic Acids: Properties and Uses of Carboxylic Acids

Chemical properties: (ii) Reduction of Tollens' reagent: only formic acid gives this test (iii) Fehling test (iv) Formic acid does not react with NH2OH

PHYSICAL PROPERTIES

Carboxylic acids with up to nine carbon atoms are liquids with an unpleasant odour. The higher members of the series are waxy solids. Carboxylic acids have higher boiling points than aldehydes, ketones and even alcohols, owing to their strong intermolecular hydrogen bonds. The first four members of the series are highly soluble in water. They undergo dimerisation in non-polar solvents like benzene or CCl4, and ionisation in polar solvents like water.

Acidic nature of carboxylic acids

When carboxylic acids are treated with Na metal, they liberate dihydrogen. Similarly, when treated with a base like NaOH, they form a salt and water. When treated with NaHCO3 (sodium hydrogen carbonate), they liberate a brisk effervescence of CO2; this reaction is used to distinguish carboxylic acids from other organic compounds. All of the above reactions show that carboxylic acids are acidic in nature.

\tt R-COOH + Na \rightarrow RCOO^{-}Na^{+} + \frac{1}{2}H_{2} \uparrow

\tt RCOOH + NaOH \rightarrow RCOO^{-}Na^{+} + H_{2}O

\tt RCOOH + NaHCO_{3} \rightarrow RCOO^{-}Na^{+} + H_{2}O + CO_{2} \uparrow

Acidic strength

The acidic strength of a carboxylic acid can be understood with the help of the acid dissociation constant Ka, or its negative logarithm pKa. From the dissociation equilibrium it is clear that

\tt k = \frac{\left[RCOO^{-}\right]\left[H_{3}O^{+}\right]}{\left[RCOOH\right]\left[H_{2}O\right]}

The higher the Ka value, the stronger the acid.
\tt k\left[H_{2}O\right] = \frac{\left[RCOO^{-}\right]\left[H_{3}O^{+}\right]}{\left[RCOOH\right]}

so the acid dissociation constant is

\tt k_{a} = \frac{\left[RCOO^{-}\right]\left[H_{3}O^{+}\right]}{\left[RCOOH\right]}

with pKa = −log Ka. The lower the pKa value, the stronger the acid: acidic strength increases with Ka and decreases as pKa increases.

Carboxylic acids are weaker acids than mineral acids (HCl, HNO3, H2SO4) but stronger acids than alcohols and most phenols. Carboxylic acids are stronger acids than phenol because the two resonance structures of the carboxylate ion are identical, whereas the resonance structures of the phenoxide ion are non-identical.

Acidic strength can also be understood from the type of group attached near the carboxyl carbon. Compare CH3COOH with (i) F-CH2-COOH, (ii) Cl-CH2-COOH and (iii) Br-CH2-COOH: the acidic strength order is (i) > (ii) > (iii). When an electron-withdrawing group is present near the carboxyl carbon, it increases the polarity of the O-H bond and increases the acidic strength; as a result the pKa value decreases. If an electron-releasing group is present, the polarity of the O-H bond decreases and the acidic strength decreases; consequently the pKa value increases. The effect of the following groups, in increasing order of the acidity they confer, is: Ph < I < Br < Cl < F < CN < NO2 < CF3.

As the distance between the electron-withdrawing group and the carboxyl group increases, the acidic strength decreases.

Formic acid is more acidic than benzoic acid, and benzoic acid is more acidic than simple aliphatic acids such as acetic acid: HCOOH > C6H5COOH > CH3COOH. An electron-withdrawing group at the para position of benzoic acid makes it more acidic than benzoic acid itself; an electron-releasing (donating) group at the para position makes it less acidic.
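Since Ka = 10^(−pKa), the acidity orderings quoted above can be checked numerically. A Python sketch; the pKa values used here are approximate literature values assumed for illustration, not taken from this text:

```python
# Approximate literature pKa values (assumed for illustration only).
pKa = {
    "HCOOH (formic)": 3.75,
    "C6H5COOH (benzoic)": 4.20,
    "CH3COOH (acetic)": 4.76,
    "F-CH2-COOH": 2.59,
    "Cl-CH2-COOH": 2.87,
    "Br-CH2-COOH": 2.90,
}

# Lower pKa <=> larger Ka <=> stronger acid.
Ka = {name: 10.0 ** (-p) for name, p in pKa.items()}
by_strength = sorted(Ka, key=Ka.get, reverse=True)
print(by_strength)
# The halogen-substituted acids come out strongest, in the order F > Cl > Br,
# and HCOOH > C6H5COOH > CH3COOH, matching the orderings stated in the text.
```

The sort by Ka is equivalent to an ascending sort by pKa, which is why the two rules ("higher Ka, stronger acid" and "lower pKa, stronger acid") always agree.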
ORTHO EFFECT

Whether an electron-releasing group or an electron-withdrawing group is present at the ortho position of benzoic acid, the resulting acid is more acidic than benzoic acid itself. This effect is known as the ortho effect.

Special case: between para-fluorobenzoic acid and para-chlorobenzoic acid, para-chlorobenzoic acid is the more acidic. This is due to the more effective back-donation of fluorine than of chlorine, as shown above.

Carboxylic acids are more acidic than alcohols, but picric acid is more acidic than benzoic acid.
Skills to Develop

To understand what \(F\)-distributions are. To understand how to use an \(F\)-test to judge whether two population variances are equal.

\(F\)-Distributions

Another important and useful family of distributions in statistics is the family of \(F\)-distributions. Each member of the \(F\)-distribution family is specified by a pair of parameters called degrees of freedom and denoted \(df_1\) and \(df_2\). Figure \(\PageIndex{1}\) shows several \(F\)-distributions for different pairs of degrees of freedom. An \(F\) random variable is a random variable that assumes only positive values and follows an \(F\)-distribution.

Figure \(\PageIndex{1}\): Many \(F\)-distributions

The parameter \(df_1\) is often referred to as the numerator degrees of freedom and the parameter \(df_2\) as the denominator degrees of freedom. It is important to keep in mind that they are not interchangeable. For example, the \(F\)-distribution with degrees of freedom \(df_1=3\) and \(df_2=8\) is a different distribution from the \(F\)-distribution with degrees of freedom \(df_1=8\) and \(df_2=3\).

Definition: critical value

The value of the \(F\) random variable \(F\) with degrees of freedom \(df_1\) and \(df_2\) that cuts off a right tail of area \(c\) is denoted \(F_c\) and is called a critical value (Figure \(\PageIndex{2}\)).

Figure \(\PageIndex{2}\):

Tables containing the values of \(F_c\) are given in Chapter 11. Each of the tables is for a fixed collection of values of \(c\), either \(0.900,\; 0.950,\; 0.975,\; 0.990,\; \text{and}\; 0.995\) (yielding what are called "lower" critical values), or \(0.005,\; 0.010,\; 0.025,\; 0.050,\; \text{and}\; 0.100\) (yielding what are called "upper" critical values). In each table critical values are given for various pairs \((df_1,\: df_2)\). We illustrate the use of the tables with several examples.
Example \(\PageIndex{1}\): an \(F\) random variable

Suppose \(F\) is an \(F\) random variable with degrees of freedom \(df_1=5\) and \(df_2=4\). Use the tables to find \(F_{0.10}\) and \(F_{0.95}\).

Solution:

The column headings of all the tables contain \(df_1=5\). Look for the table for which \(0.10\) is one of the entries on the extreme left (a table of upper critical values) and that has a row heading \(df_2=4\) in the left margin of the table. A portion of the relevant table is provided. The entry in the intersection of the column with heading \(df_1=5\) and the row with the headings \(0.10\) and \(df_2=4\), which is shaded in the table provided, is the answer, \(F_{0.10}=4.05\).

\(F\) Tail Area \(\frac{df_1}{df_2}\) \(1\) \(2\) \(\cdots\) \(5\) \(\cdots\)
\(0.005\) \(4\) \(\cdots\) \(\cdots\) \(\cdots\) \(22.5\) \(\cdots\)
\(0.01\) \(4\) \(\cdots\) \(\cdots\) \(\cdots\) \(15.5\) \(\cdots\)
\(0.025\) \(4\) \(\cdots\) \(\cdots\) \(\cdots\) \(9.36\) \(\cdots\)
\(0.05\) \(4\) \(\cdots\) \(\cdots\) \(\cdots\) \(6.26\) \(\cdots\)
\(0.10\) \(4\) \(\cdots\) \(\cdots\) \(\cdots\) \(4.05\) \(\cdots\)

Look for the table for which \(0.95\) is one of the entries on the extreme left (a table of lower critical values) and that has a row heading \(df_2=4\) in the left margin of the table. A portion of the relevant table is provided. The entry in the intersection of the column with heading \(df_1=5\) and the row with the headings \(0.95\) and \(df_2=4\), which is shaded in the table provided, is the answer, \(F_{0.95}=0.19\).
\(F\) Tail Area \(\frac{df_1}{df_2}\) \(1\) \(2\) \(\cdots\) \(5\) \(\cdots\)
\(0.90\) \(4\) \(\cdots\) \(\cdots\) \(\cdots\) \(0.28\) \(\cdots\)
\(0.95\) \(4\) \(\cdots\) \(\cdots\) \(\cdots\) \(0.19\) \(\cdots\)
\(0.975\) \(4\) \(\cdots\) \(\cdots\) \(\cdots\) \(0.14\) \(\cdots\)
\(0.99\) \(4\) \(\cdots\) \(\cdots\) \(\cdots\) \(0.09\) \(\cdots\)
\(0.995\) \(4\) \(\cdots\) \(\cdots\) \(\cdots\) \(0.06\) \(\cdots\)

Example \(\PageIndex{2}\)

Suppose \(F\) is an \(F\) random variable with degrees of freedom \(df_1=2\) and \(df_2=20\). Let \(α=0.05\). Use the tables to find \(F_{\alpha }\), \(F_{\alpha /2}\), \(F_{1-\alpha }\), and \(F_{1-\alpha /2}\).

Solution:

The column headings of all the tables contain \(df_1=2\). Look for the table for which \(\alpha =0.05\) is one of the entries on the extreme left (a table of upper critical values) and that has a row heading \(df_2=20\) in the left margin of the table. A portion of the relevant table is provided. The shaded entry, in the intersection of the column with heading \(df_1=2\) and the row with the headings \(0.05\) and \(df_2=20\), is the answer, \(F_{0.05}=3.49\).

\(F\) Tail Area \(\frac{df_1}{df_2}\) \(1\) \(2\) \(\cdots\)
\(0.005\) \(20\) \(\cdots\) \(6.99\) \(\cdots\)
\(0.01\) \(20\) \(\cdots\) \(5.85\) \(\cdots\)
\(0.025\) \(20\) \(\cdots\) \(4.46\) \(\cdots\)
\(0.05\) \(20\) \(\cdots\) \(3.49\) \(\cdots\)
\(0.10\) \(20\) \(\cdots\) \(2.59\) \(\cdots\)

Look for the table for which \(\alpha /2=0.025\) is one of the entries on the extreme left (a table of upper critical values) and that has a row heading \(df_2=20\) in the left margin of the table. A portion of the relevant table is provided.
The shaded entry, in the intersection of the column with heading \(df_1=2\) and the row with the headings \(0.025\) and \(df_2=20\), is the answer, \(F_{0.025}=4.46\).

\(F\) Tail Area   \(df_2\)   \(df_1=1\)   \(df_1=2\)   \(\cdots\)
\(0.005\)        \(20\)     \(\cdots\)   \(6.99\)     \(\cdots\)
\(0.01\)         \(20\)     \(\cdots\)   \(5.85\)     \(\cdots\)
\(0.025\)        \(20\)     \(\cdots\)   \(4.46\)     \(\cdots\)
\(0.05\)         \(20\)     \(\cdots\)   \(3.49\)     \(\cdots\)
\(0.10\)         \(20\)     \(\cdots\)   \(2.59\)     \(\cdots\)

3. Look for the table for which \(1-\alpha =0.95\) is one of the entries on the extreme left (a table of lower critical values) and that has a row heading \(df_2=20\) in the left margin of the table. A portion of the relevant table is provided. The shaded entry, in the intersection of the column with heading \(df_1=2\) and the row with the headings \(0.95\) and \(df_2=20\), is the answer, \(F_{0.95}=0.05\).

\(F\) Tail Area   \(df_2\)   \(df_1=1\)   \(df_1=2\)   \(\cdots\)
\(0.90\)         \(20\)     \(\cdots\)   \(0.11\)     \(\cdots\)
\(0.95\)         \(20\)     \(\cdots\)   \(0.05\)     \(\cdots\)
\(0.975\)        \(20\)     \(\cdots\)   \(0.03\)     \(\cdots\)
\(0.99\)         \(20\)     \(\cdots\)   \(0.01\)     \(\cdots\)
\(0.995\)        \(20\)     \(\cdots\)   \(0.01\)     \(\cdots\)

4. Look for the table for which \(1-\alpha /2=0.975\) is one of the entries on the extreme left (a table of lower critical values) and that has a row heading \(df_2=20\) in the left margin of the table. A portion of the relevant table is provided. The shaded entry, in the intersection of the column with heading \(df_1=2\) and the row with the headings \(0.975\) and \(df_2=20\), is the answer, \(F_{0.975}=0.03\).
\(F\) Tail Area   \(df_2\)   \(df_1=1\)   \(df_1=2\)   \(\cdots\)
\(0.90\)         \(20\)     \(\cdots\)   \(0.11\)     \(\cdots\)
\(0.95\)         \(20\)     \(\cdots\)   \(0.05\)     \(\cdots\)
\(0.975\)        \(20\)     \(\cdots\)   \(0.03\)     \(\cdots\)
\(0.99\)         \(20\)     \(\cdots\)   \(0.01\)     \(\cdots\)
\(0.995\)        \(20\)     \(\cdots\)   \(0.01\)     \(\cdots\)

A fact that sometimes allows us to find a critical value from a table that we could not read otherwise is:

If \(F_u(r,s)\) denotes the value of the \(F\)-distribution with degrees of freedom \(df_1=r\) and \(df_2=s\) that cuts off a right tail of area \(u\), then

\[F_u(r,s)=\frac{1}{F_{1-u}(s,r)}\]

Example \(\PageIndex{3}\)

Use the tables to find

1. \(F_{0.01}\) for an \(F\) random variable with \(df_1=13\) and \(df_2=8\)
2. \(F_{0.975}\) for an \(F\) random variable with \(df_1=40\) and \(df_2=10\)

Solution:

1. There is no table with \(df_1=13\), but there is one with \(df_1=8\). Thus we use the fact that \[F_{0.01}(13,8)=\frac{1}{F_{0.99}(8,13)}\] Using the relevant table we find that \(F_{0.99}(8,13)=0.18\), hence \(F_{0.01}(13,8)=0.18^{-1}=5.556\).

2. There is no table with \(df_1=40\), but there is one with \(df_1=10\). Thus we use the fact that \[F_{0.975}(40,10)=\frac{1}{F_{0.025}(10,40)}\] Using the relevant table we find that \(F_{0.025}(10,40)=3.31\), hence \(F_{0.975}(40,10)=3.31^{-1}=0.302\).

\(F\)-Tests for Equality of Two Variances

In Chapter 9 we saw how to test hypotheses about the difference between two population means \(\mu_1\) and \(\mu_2\). In some practical situations the difference between the population standard deviations \(\sigma_1\) and \(\sigma_2\) is also of interest. Standard deviation measures the variability of a random variable. For example, if the random variable measures the size of a machined part in a manufacturing process, the size of the standard deviation is one indicator of product quality.
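The reciprocal fact is easy to check numerically. The sketch below (an illustration only, assuming SciPy is available; `f.isf(u, df1, df2)` returns the critical value cutting off a right tail of area \(u\)) verifies that \(F_{0.01}(13,8)\) and \(1/F_{0.99}(8,13)\) agree:

```python
from scipy.stats import f

# F_u(r, s): critical value of the F-distribution with df1 = r, df2 = s
# that cuts off a right tail of area u.
def F_crit(u, r, s):
    return f.isf(u, r, s)

# Check the identity F_u(r, s) = 1 / F_{1-u}(s, r) for Example 3, part 1.
direct = F_crit(0.01, 13, 8)
reciprocal = 1.0 / F_crit(0.99, 8, 13)
print(round(direct, 3), round(reciprocal, 3))  # the two values agree (about 5.6)
```

The small discrepancy with the hand computation (\(5.556\)) comes only from the table's rounding of \(F_{0.99}(8,13)\) to \(0.18\).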
A smaller standard deviation among items produced in the manufacturing process is desirable, since it indicates consistency in product quality.

For theoretical reasons it is easier to compare the squares of the population standard deviations, the population variances \(\sigma _{1}^{2}\) and \(\sigma _{2}^{2}\). This is not a problem, since \(\sigma_1=\sigma_2\) precisely when \(\sigma _{1}^{2}=\sigma _{2}^{2}\), \(\sigma_1<\sigma_2\) precisely when \(\sigma _{1}^{2}<\sigma _{2}^{2}\), and \(\sigma_1>\sigma_2\) precisely when \(\sigma _{1}^{2}>\sigma _{2}^{2}\).

The null hypothesis always has the form \(H_0: \sigma _{1}^{2}=\sigma _{2}^{2}\). The three forms of the alternative hypothesis, with the terminology for each case, are:

Form of \(H_a\)                               Terminology
\(H_a: \sigma _{1}^{2}>\sigma _{2}^{2}\)      Right-tailed
\(H_a: \sigma _{1}^{2}<\sigma _{2}^{2}\)      Left-tailed
\(H_a: \sigma _{1}^{2}\neq \sigma _{2}^{2}\)  Two-tailed

Just as when we test hypotheses concerning two population means, we take a random sample from each population, of sizes \(n_1\) and \(n_2\), and compute the sample standard deviations \(s_1\) and \(s_2\). In this context the samples are always independent. The populations themselves must be normally distributed.

Test Statistic for Hypothesis Tests Concerning the Difference Between Two Population Variances

\[F=\frac{s_{1}^{2}}{s_{2}^{2}}\]

If the two populations are normally distributed and if \(H_0: \sigma _{1}^{2}=\sigma _{2}^{2}\) is true, then under independent sampling \(F\) follows an \(F\)-distribution with degrees of freedom \(df_1=n_1-1\) and \(df_2=n_2-1\). A test based on the test statistic \(F\) is called an \(F\)-test.
A most important point is that while the rejection region for a right-tailed test is exactly as in every other situation that we have encountered, because of the asymmetry in the \(F\)-distribution the critical value for a left-tailed test and the lower critical value for a two-tailed test have the special forms shown in the following table:

Terminology    Alternative Hypothesis                        Rejection Region
Right-tailed   \(H_a: \sigma _{1}^{2}>\sigma _{2}^{2}\)      \(F\geq F_\alpha\)
Left-tailed    \(H_a: \sigma _{1}^{2}<\sigma _{2}^{2}\)      \(F\leq F_{1-\alpha }\)
Two-tailed     \(H_a: \sigma _{1}^{2}\neq \sigma _{2}^{2}\)  \(F\leq F_{1-\alpha /2}\; \text{or}\; F\geq F_{\alpha /2}\)

Figure \(\PageIndex{3}\) illustrates these rejection regions.

Figure \(\PageIndex{3}\): Rejection Regions: (a) Right-Tailed; (b) Left-Tailed; (c) Two-Tailed

The test is performed using the usual five-step procedure described at the end of Section 8.1.

Example \(\PageIndex{4}\)

One of the quality measures of blood glucose meter strips is the consistency of the test results on the same sample of blood. The consistency is measured by the variance of the readings in repeated testing. Suppose two types of strips, \(A\) and \(B\), are compared for their respective consistencies. We arbitrarily label the population of Type \(A\) strips Population \(1\) and the population of Type \(B\) strips Population \(2\). Suppose \(16\) Type \(A\) strips were tested with blood drops from a well-shaken vial and \(21\) Type \(B\) strips were tested with the blood from the same vial. The results are summarized in Table \(\PageIndex{3}\). Assume the glucose readings using Type \(A\) strips follow a normal distribution with variance \(\sigma _{1}^{2}\) and those using Type \(B\) strips follow a normal distribution with variance \(\sigma _{2}^{2}\). Test, at the \(10\%\) level of significance, whether the data provide sufficient evidence to conclude that the consistencies of the two types of strips are different.
Strip Type   Sample Size   Sample Variance
\(A\)        \(n_1=16\)    \(s_{1}^{2}=2.09\)
\(B\)        \(n_2=21\)    \(s_{2}^{2}=1.10\)

Solution:

Step 1. The test of hypotheses is \[H_0: \sigma _{1}^{2}=\sigma _{2}^{2}\\ vs.\\ H_a: \sigma _{1}^{2}\neq \sigma _{2}^{2}\; @\; \alpha =0.10\]

Step 2. The distribution is the \(F\)-distribution with degrees of freedom \(df_1=16-1=15\) and \(df_2=21-1=20\).

Step 3. The test is two-tailed. The left or lower critical value is \(F_{1-\alpha /2}=F_{0.95}=0.43\). The right or upper critical value is \(F_{\alpha /2}=F_{0.05}=2.20\). Thus the rejection region is \([0,0.43]\cup [2.20,\infty )\), as illustrated in Figure \(\PageIndex{4}\).

Figure \(\PageIndex{4}\): Rejection Region and Test Statistic for "Example \(\PageIndex{4}\)"

Step 4. The value of the test statistic is \[F=\frac{s_{1}^{2}}{s_{2}^{2}}=\frac{2.09}{1.10}=1.90\]

Step 5. As shown in Figure \(\PageIndex{4}\), the test statistic \(1.90\) does not lie in the rejection region, so the decision is not to reject \(H_0\). The data do not provide sufficient evidence, at the \(10\%\) level of significance, to conclude that there is a difference in the consistency, as measured by the variance, of the two types of test strips.

Example \(\PageIndex{5}\)

In the context of "Example \(\PageIndex{4}\)", suppose Type \(A\) test strips are the current market leader and Type \(B\) test strips are a newly improved version of Type \(A\). Test, at the \(10\%\) level of significance, whether the data given in Table \(\PageIndex{3}\) provide sufficient evidence to conclude that Type \(B\) test strips have better consistency (lower variance) than Type \(A\) test strips.

Solution:

Step 1. The test of hypotheses is now \[H_0: \sigma _{1}^{2}=\sigma _{2}^{2}\\ vs.\\ H_a: \sigma _{1}^{2}>\sigma _{2}^{2}\; @\; \alpha =0.10\]

Step 2. The distribution is the \(F\)-distribution with degrees of freedom \(df_1=16-1=15\) and \(df_2=21-1=20\).

Step 3.
The value of the test statistic is \[F=\frac{s_{1}^{2}}{s_{2}^{2}}=\frac{2.09}{1.10}=1.90\]

Step 4. The test is right-tailed. The single critical value is \(F_\alpha =F_{0.10}=1.84\). Thus the rejection region is \([1.84,\infty )\), as illustrated in Figure \(\PageIndex{5}\).

Figure \(\PageIndex{5}\): Rejection Region and Test Statistic for "Example \(\PageIndex{5}\)"

Step 5. As shown in Figure \(\PageIndex{5}\), the test statistic \(1.90\) lies in the rejection region, so the decision is to reject \(H_0\). The data provide sufficient evidence, at the \(10\%\) level of significance, to conclude that Type \(B\) test strips have better consistency (lower variance) than Type \(A\) test strips do.

Lower Critical Values of \(F\)-Distributions

Key Takeaways

Critical values of an \(F\)-distribution with degrees of freedom \(df_1\) and \(df_2\) are found in the tables above.

An \(F\)-test can be used to evaluate the hypothesis that two normal population variances are equal.

Contributor: Anonymous
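The two-tailed test of Example \(\PageIndex{4}\) can be reproduced in a few lines. This is a sketch, assuming SciPy is available, with `f.ppf`/`f.isf` standing in for the printed tables:

```python
from scipy.stats import f

n1, s1_sq = 16, 2.09   # Type A strips: sample size, sample variance
n2, s2_sq = 21, 1.10   # Type B strips
df1, df2 = n1 - 1, n2 - 1
alpha = 0.10

F = s1_sq / s2_sq                    # test statistic: 2.09/1.10 = 1.90
lower = f.ppf(alpha / 2, df1, df2)   # F_{1-alpha/2}, about 0.43
upper = f.isf(alpha / 2, df1, df2)   # F_{alpha/2},   about 2.20
reject = bool(F <= lower or F >= upper)
print(round(F, 2), round(lower, 2), round(upper, 2), reject)
```

Since \(F=1.90\) falls between the two critical values, `reject` is `False`, matching the hand computation.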
I've been thinking about, implementing and using the Extreme Learning Machine (ELM) paradigm for more than a year now, and the longer I do, the more I doubt that it is really a good thing. My opinion, however, seems to be in contrast with the scientific community, where -- when using citations and new publications as a measure -- it seems to be a hot topic.

The ELM was introduced by Huang et al. around 2003. The underlying idea is rather simple: start with a 2-layer artificial neural network and randomly assign the coefficients in the first layer. Thus, one transforms the non-linear optimization problem, which is usually handled via backpropagation, into a simple linear regression problem. In more detail, for $\mathbf x \in \mathbb R^D$, the model is $$ f(\mathbf x) = \sum_{i=1}^{N_\text{hidden}} w_i \, \sigma\left(v_{i0} + \sum_{k=1}^{D} v_{ik} x_k \right)\,.$$ Now, only the $w_i$ are adjusted (in order to minimize squared-error loss), whereas the $v_{ik}$'s are all chosen randomly. As compensation for the loss in degrees of freedom, the usual suggestion is to use a rather large number of hidden nodes (i.e. free parameters $w_i$).

From another perspective (not the one usually promoted in the literature, which comes from the neural network side), the whole procedure is simply linear regression, but one where you choose your basis functions $\phi$ randomly, for example $$ \phi_i(\mathbf x) = \sigma\left(v_{i0} + \sum_{k=1}^{D} v_{ik} x_k \right)\,.$$ (Many other choices besides the sigmoid are possible for the random functions. For instance, the same principle has also been applied using radial basis functions.)

From this viewpoint, the whole method becomes almost too simplistic, and this is also the point where I start to doubt that the method is really a good one (... whereas its scientific marketing certainly is). So, here are my questions: The idea to raster the input space using random basis functions is, in my opinion, good for low dimensions.
In high dimensions, I think it is just not possible to find a good choice using random selection with a reasonable number of basis functions. Therefore, does the ELM degrade in high dimensions (due to the curse of dimensionality)? Do you know of experimental results supporting or contradicting this opinion? In the linked paper there is only one 27-dimensional regression data set (PYRIM) where the method performs similarly to SVMs (whereas I would rather like to see a comparison to a backpropagation ANN). More generally, I would like to hear your comments about the ELM method.
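For concreteness, here is a minimal sketch of the ELM idea in the notation above (NumPy assumed; all function names are mine): frozen random $v_{ik}$, sigmoid hidden features, and plain least squares for the $w_i$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid_features(X, V):
    """Hidden layer: phi_i(x) = sigma(v_i0 + sum_k v_ik x_k), bias column included."""
    return 1.0 / (1.0 + np.exp(-np.hstack([np.ones((len(X), 1)), X]) @ V))

def elm_fit(X, y, n_hidden=100):
    """Random first layer (the v_ik), least-squares second layer (the w_i)."""
    V = rng.normal(size=(X.shape[1] + 1, n_hidden))  # frozen random weights
    H = sigmoid_features(X, V)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)        # simple linear regression
    return V, w

# 1-D toy problem: in low dimensions random basis functions cover the input well.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)
V, w = elm_fit(X, y)
mse = np.mean((sigmoid_features(X, V) @ w - y) ** 2)  # small training error here
```

In this 1-D setting the training error drops to roughly the noise level; the open question above is precisely whether anything comparable happens when $D$ is large.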
Pseudorapidity dependence of the anisotropic flow of charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-11)
We present measurements of the elliptic ($\mathrm{v}_2$), triangular ($\mathrm{v}_3$) and quadrangular ($\mathrm{v}_4$) anisotropic azimuthal flow over a wide range of pseudorapidities ($-3.5< \eta < 5$). The measurements ...

Correlated event-by-event fluctuations of flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2016-10)
We report the measurements of correlations between event-by-event fluctuations of amplitudes of anisotropic flow harmonics in nucleus–nucleus collisions, obtained for the first time using a new analysis method based on ...

Centrality dependence of $\mathbf{\psi}$(2S) suppression in p-Pb collisions at $\mathbf{\sqrt{{\textit s}_{\rm NN}}}$ = 5.02 TeV (Springer, 2016-06)
The inclusive production of the $\psi$(2S) charmonium state was studied as a function of centrality in p-Pb collisions at the nucleon-nucleon center of mass energy $\sqrt{s_{\rm NN}}$ = 5.02 TeV at the CERN LHC. The ...

Transverse momentum dependence of D-meson production in Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-03)
The production of prompt charmed mesons D$^0$, D$^+$ and D$^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb–Pb collisions at the centre-of-mass energy per nucleon pair, $\sqrt{s_{\rm NN}}$ of ...

Multiplicity and transverse momentum evolution of charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions at the LHC (Springer, 2016)
We report on two-particle charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions as a function of the pseudorapidity and azimuthal angle difference, $\mathrm{\Delta}\eta$ and $\mathrm{\Delta}\varphi$ respectively. ...
Charge-dependent flow and the search for the chiral magnetic wave in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2016-04)
We report on measurements of a charge-dependent flow using a novel three-particle correlator with ALICE in Pb–Pb collisions at the LHC, and discuss the implications for observation of local parity violation and the Chiral ...

Pseudorapidity and transverse-momentum distributions of charged particles in proton-proton collisions at $\mathbf{\sqrt{\textit s}}$ = 13 TeV (Elsevier, 2016-02)
The pseudorapidity ($\eta$) and transverse-momentum ($p_{\rm T}$) distributions of charged particles produced in proton-proton collisions are measured at the centre-of-mass energy $\sqrt{s}$ = 13 TeV. The pseudorapidity ...

Differential studies of inclusive J/$\psi$ and $\psi$(2S) production at forward rapidity in Pb-Pb collisions at $\mathbf{\sqrt{{\textit s}_{_{NN}}}}$ = 2.76 TeV (Springer, 2016-05)
The production of J/$\psi$ and $\psi(2S)$ was measured with the ALICE detector in Pb-Pb collisions at the LHC. The measurement was performed at forward rapidity ($2.5 < y < 4 $) down to zero transverse momentum ($p_{\rm ...

Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-02)
The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ...

Anisotropic flow of charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2016-04)
We report the first results of elliptic ($v_2$), triangular ($v_3$) and quadrangular flow ($v_4$) of charged particles in Pb--Pb collisions at $\sqrt{s_{_{\rm NN}}}=$ 5.02 TeV with the ALICE detector at the CERN Large ...
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why in def of algebraic closure, do we need $\overline F$ is algebraic over $F$? That is, if we remove '$\overline F$ is algebraic over $F$' condition from def of algebraic closure, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up units and reordering. It doesn't say anything about this factorization being finite in length. 
Is that often part of the definition, or attained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable. Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$, for all $n \geq 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$, for all $t \in [0,\frac{1}{2}]$. Can you give some hint? My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of some independent functions of the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0) I get a set of $n$ equations in the $n$ coefficients: a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I look at the secular determinant, which should be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity to solve for the coefficients. I have problems formulating the question, but it strikes me that a direct solution of the equations can be circumvented, and instead the values of the functional are directly obtained by using the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ in the midway of $x, y$, which is a palindrome and digitsum($z$) = digitsum($x$). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!
Hypothesis Test for a Population Proportion (p)

Frequently, the parameter we are testing is the population proportion. We are studying the proportion of trees with cavities for wildlife habitat. We need to know if the proportion of people who support green building materials has changed. Has the proportion of wolves that died last year in Yellowstone increased from the year before? Recall that the best point estimate of p, the population proportion, is given by

$$\hat {p} = \dfrac {x}{n}$$

where x is the number of individuals in the sample with the characteristic studied and n is the sample size. The sampling distribution of p̂ is approximately normal with mean \(\mu_{\hat {p}} = p\) and standard deviation

$$\sigma_{\hat {p}} = \sqrt {\dfrac {p(1-p)}{n}}$$

when np(1 – p) ≥ 10. We can use both the classical approach and the p-value approach for testing.

The steps for a hypothesis test are the same that we covered in Section 2.

1. State the null and alternative hypotheses.
2. State the level of significance and the critical value.
3. Compute the test statistic.
4. State a conclusion.

The test statistic follows the standard normal distribution. Notice that the standard error (the denominator) uses p instead of p̂, which was used when constructing a confidence interval about the population proportion. In a hypothesis test, the null hypothesis is assumed to be true, so the known proportion is used.

$$ z= \dfrac {\hat {p} - p} {\sqrt {\dfrac {p(1-p)}{n}}}$$

The critical value comes from the standard normal table, just as in Section 2. We will still use the same three pairs of null and alternative hypotheses as we used in the previous sections, but the parameter is now p instead of μ:

For a two-sided test, alpha will be divided by 2, giving a ± Zα/2 critical value.
For a left-sided test, alpha will be all in the left tail, giving a – Zα critical value.
For a right-sided test, alpha will be all in the right tail, giving a Zα critical value.
Example \(\PageIndex{1}\)

A botanist has produced a new variety of hybrid soy plant that is better able to withstand drought than other varieties. The botanist knows the seed germination for the parent plants is 75%, but does not know the seed germination for the new hybrid. He tests the claim that it is different from the parent plants. To test this claim, 450 seeds from the hybrid plant are tested and 321 have germinated. Use a 5% level of significance to test this claim that the germination rate is different from 75%.

Solution

Step 1) State the null and alternative hypotheses.

Ho: p = 0.75
H1: p ≠ 0.75

Step 2) State the level of significance and the critical value. This is a two-sided question, so alpha is divided by 2. Alpha is 0.05, so the critical values are ± Zα/2 = ± Z.025. Look on the negative side of the standard normal table, in the body of values, for 0.025. The critical values are ± 1.96.

Step 3) Compute the test statistic. The test statistic is the number of standard deviations the sample proportion is from the known proportion. It is also a Z-score, just like the critical value.

$$ z= \dfrac {\hat {p} - p} {\sqrt {\dfrac {p(1-p)}{n}}}$$

For this problem, with \(\hat{p} = 321/450 = 0.713\), the test statistic is

$$z=\dfrac {0.713-0.75}{\sqrt {\dfrac {0.75(1-0.75)}{450}}} = -1.81$$

Step 4) State a conclusion. Compare the test statistic to the critical value.

Figure 19. Critical values for a two-sided test when α = 0.05.

The test statistic does not fall in the rejection zone. We fail to reject the null hypothesis. We do not have enough evidence to support the claim that the germination rate of the hybrid plant is different from the parent plants.

Let's answer this question using the p-value approach. Remember, for a two-sided alternative hypothesis ("not equal"), the p-value is two times the area beyond the test statistic. The test statistic is -1.81 and we want to find the area to the left of -1.81 from the standard normal table. On the negative page, find the Z-score -1.81.
Find the area associated with this Z-score. The area = 0.0351. This is a two-sided test, so multiply the area by 2 to get the p-value = 0.0351 × 2 = 0.0702. Now compare the p-value to alpha. The Decision Rule states that if the p-value is less than alpha, reject H0. In this case, the p-value (0.0702) is greater than alpha (0.05), so we fail to reject H0. We do not have enough evidence to support the claim that the germination rate of the hybrid plant is different from the parent plants.

Example \(\PageIndex{2}\)

You are a biologist studying the wildlife habitat in the Monongahela National Forest. Cavities in older trees provide excellent habitat for a variety of birds and small mammals. A study five years ago stated that 32% of the trees in this forest had suitable cavities for this type of wildlife. You believe that the proportion of cavity trees has increased. You sample 196 trees and find that 79 trees have cavities. Does this evidence support your claim that there has been an increase in the proportion of cavity trees? Use a 10% level of significance to test this claim.

Solution

Step 1) State the null and alternative hypotheses.

Ho: p = 0.32
H1: p > 0.32

Step 2) State the level of significance and the critical value. This is a one-sided question, so all of alpha goes in one tail. Alpha is 0.10, so the critical value is Zα = Z.10. Look on the positive side of the standard normal table, in the body of values, for 0.90. The critical value is 1.28.

Figure 20. Critical value for a right-sided test where α = 0.10.

Step 3) Compute the test statistic. The test statistic is the number of standard deviations the sample proportion is from the known proportion. It is also a Z-score, just like the critical value.

$$ z= \dfrac {\hat {p} - p} {\sqrt {\dfrac {p(1-p)}{n}}}$$

For this problem, with \(\hat{p} = 79/196 = 0.403\), the test statistic is

$$z= \frac {0.403-0.32}{\sqrt {\frac {0.32(1-0.32)}{196}}}=2.49$$

Step 4) State a conclusion. Compare the test statistic to the critical value.

Figure 21.
Comparison of the test statistic and the critical value.

The test statistic is larger than the critical value (it falls in the rejection zone). We will reject the null hypothesis. We have enough evidence to support the claim that there has been an increase in the proportion of cavity trees.

Now use the p-value approach to answer the question. This is a right-sided question ("greater than"), so the p-value is equal to the area to the right of the test statistic. Go to the positive side of the standard normal table and find the area associated with the Z-score of 2.49. The area is 0.9936. Remember that this table is cumulative from the left. To find the area to the right of 2.49, we subtract from one.

p-value = (1 – 0.9936) = 0.0064

The p-value is less than the level of significance (0.10), so we reject the null hypothesis. We have enough evidence to support the claim that the proportion of cavity trees has increased.

Software Solutions

Minitab (referring to Example \(\PageIndex{2}\))

Test and CI for One Proportion
Test of p = 0.32 vs. p > 0.32

Sample   X    N     Sample p   90% Lower Bound   Z-Value   p-Value
1        79   196   0.403061   0.358160          2.49      0.006

Using the normal approximation.

Excel

Excel does not offer 1-sample hypothesis testing.
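Since Excel does not offer this test, it is easy to script directly. The following sketch (SciPy assumed for the normal tail areas; the helper name is mine) reproduces Example \(\PageIndex{2}\):

```python
from math import sqrt
from scipy.stats import norm

def one_proportion_ztest(x, n, p0, tail="right"):
    """One-sample z-test for a proportion; H0: p = p0."""
    p_hat = x / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)   # standard error uses p0, not p_hat
    if tail == "right":
        p_value = norm.sf(z)                     # area to the right of z
    elif tail == "left":
        p_value = norm.cdf(z)                    # area to the left of z
    else:
        p_value = 2 * norm.sf(abs(z))            # two-sided
    return z, p_value

# Cavity trees: x = 79, n = 196, H0: p = 0.32 vs H1: p > 0.32
z, p = one_proportion_ztest(79, 196, 0.32, tail="right")
print(round(z, 2), round(p, 4))  # z = 2.49; p close to the table value 0.0064
```

The exact p-value differs from the hand computation only because the table rounds the Z-score to two decimals.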
In standard $f(R)$ gravity we consider the Lagrangian of the form $L=\frac{1}{16\pi G}f(R)\epsilon$, where $\epsilon$ is the spacetime volume form and similarly, we consider the boundary term to be of the form $l=\frac{1}{8\pi G}f'(R)\epsilon_{\partial \mathcal{M}}$, where $\epsilon_{\partial \mathcal{M}}$ is the spacetime boundary form. Now, in $f(R,T)$ consider the following action $$S = \int_{\mathcal{M}}L +\int_{\partial \mathcal{M}}l$$ where $L= \frac{1}{16\pi G}f(R,T)\epsilon$ and $l=\frac{1}{8\pi G}f'(R,T)\epsilon_{\partial \mathcal{M}}$, where $f'(R,T)=f_{R}\delta R+f_{T}\frac{\delta\left(g_{\alpha \beta}T^{\alpha \beta}\right)}{\delta g_{\mu \nu}}\delta g_{\mu \nu}$. Upon varying the Lagrangian we would obtain the following form $$\delta L =\underbrace{\frac{1}{16 \pi G}\left(-R^{\mu\nu}f_{R}+f_{T}\frac{\delta\left(g_{\alpha \beta}T^{\alpha \beta}\right)}{\delta g_{\mu \nu}}\delta g_{\mu \nu} + \frac{1}{2}g^{\mu\nu}f\right)\epsilon\cdot \delta g_{\mu \nu}}_{=E^{\mu\nu}\delta g_{\mu \nu}} + d\Theta,$$ where $$\theta^{\mu} = \frac{1}{16\pi G}\left(g^{\mu\nu}\nabla^{\nu}g_{\alpha \nu} - g^{\alpha\beta}\nabla^{\mu}\delta g_{\alpha \beta}\right)f_{R},$$ such that $\Theta = \theta\cdot \epsilon$. Now, the variation of the boundary term $l$ is quite messy and considering $\delta f'(R, T) = f_{RR}\delta R +f_{TT}\delta T +f_{RT}\left(\delta R + \delta T \right)$, terms with $\left(\delta g_{\mu\nu}\right)^{2}$ appear which cause problems in trying to fix the pullback of the variation of the metric tensor to the spatial slice after decomposing the boundary term. Computing $\Theta|_{\partial \mathcal{M}}+\delta l$ and imposing Dirichlet's boundary condition, i.e., fixing the pullback of $g_{\mu\nu}$ to $\Gamma$, the stationarity requirement $\left(\Theta +\delta l \right)|_{\Gamma} = dC$, where $C$ is a local $(d-2)$-form on $\Gamma$, is to be fixed. 
Thus, finally we have the variation of the action to be $$\delta S = \int_{\mathcal{M}}E^{\mu\nu}\delta g_{\mu\nu} +\int_{\Sigma_{+}\cup\Sigma_{-}}\left(\Theta +\delta l -dC\right), $$ where the following decomposition has been done: $\partial \mathcal{M} = \Gamma\cup\Sigma_{-}\cup\Sigma_{+}$. Firstly, is my boundary term correct? And how am I to fix the squared metric-tensor variation term?
Hypothesis Test about a Variance

When people think of statistical inference, they usually think of inferences involving population means or proportions. However, the particular population parameter needed to answer an experimenter's practical questions varies from one situation to another, and sometimes a population's variability is more important than its mean. For example, product quality is often defined in terms of low variability. The sample variance \(s^2\) can be used for inferences concerning a population variance \(\sigma^2\). For a random sample of \(n\) measurements drawn from a normal population with mean \(\mu\) and variance \(\sigma^2\), the value \(s^2\) provides a point estimate for \(\sigma^2\). In addition, the quantity \(\frac {(n-1)s^2}{\sigma^2}\) follows a chi-square (\(\chi^{2}\)) distribution with \(df = n - 1\).

The properties of the chi-square (\(\chi^{2}\)) distribution are:
Unlike the Z and t distributions, the values in a chi-square distribution are all positive.
The chi-square distribution is asymmetric, unlike the Z and t distributions.
There are many chi-square distributions. We obtain a particular one by specifying the degrees of freedom \((df = n - 1)\) associated with the sample variance \(s^2\).

Figure 22. The chi-square distribution.

The one-sample \(\chi^{2}\) test for testing the hypotheses:

Null hypothesis: \(H_0: \sigma^{2} = \sigma^{2}_{0}\) (constant)

Alternative hypothesis:
\(H_a: \sigma^2 > \sigma_{0}^{2}\) (one-tailed): reject \(H_0\) if the observed \(\chi^2 > \chi_{U}^{2}\) (upper-tail value at \(\alpha\)).
\(H_a: \sigma^2 < \sigma_{0}^{2}\) (one-tailed): reject \(H_0\) if the observed \(\chi^2 < \chi_{L}^{2}\) (lower-tail value at \(\alpha\)).
\(H_a: \sigma^2 \neq \sigma_{0}^{2}\) (two-tailed): reject \(H_0\) if the observed \(\chi^2 > \chi_{U}^{2}\) or \(\chi^{2} < \chi_{L}^{2}\) at \(\alpha/2\).

where the \(\chi^2\) critical value in the rejection region is based on degrees of freedom \(df = n - 1\) and a specified significance level \(\alpha\).
Test statistic: $$\chi^2 = \frac{(n-1)s^2}{\sigma _{0}^{2}}$$

As in previous sections, if the test statistic falls in the rejection zone set by the critical value, you will reject the null hypothesis.

Example \(\PageIndex{1}\):

A forester wants to control a dense understory of striped maple that is interfering with desirable hardwood regeneration by using a mist blower to apply an herbicide treatment. She wants to make sure that the treatment has a consistent application rate, in other words, low variability not exceeding 0.25 gal./acre (0.06 gal.²). She collects sample data (n = 11) on this type of mist blower and gets a sample variance of 0.064 gal.² Using a 5% level of significance, test the claim that the variance is significantly greater than 0.06 gal.²

\(H_0: \sigma^{2} = 0.06\)

\(H_1: \sigma^{2} > 0.06\)

The critical value is 18.307. Any test statistic greater than this value will cause you to reject the null hypothesis. The test statistic is $$\chi^2 = \frac {(n-1)s^2}{\sigma_{0}^{2}}=\frac {(11-1)0.064}{0.06}=10.667$$ We fail to reject the null hypothesis. The forester does NOT have enough evidence to support the claim that the variance is greater than 0.06 gal.²

You can also estimate the p-value using the same method as for the student t-table. Go across the row for your degrees of freedom until you find the two values that your test statistic falls between. In this case, going across row 10, the two table values are 4.865 and 15.987. Now go up those two columns to the top row to estimate the p-value: it is greater than 0.1 and less than 0.9. Both bounds are greater than the level of significance (0.05), causing us to fail to reject the null hypothesis.

Software Solutions

Minitab (referring to Ex. 16)

Test and CI for One Variance

Method
Null hypothesis: Sigma-squared = 0.06
Alternative hypothesis: Sigma-squared > 0.06
The chi-square method is only for the normal distribution.
Tests

Method      Test Statistic  DF  P-Value
Chi-Square  10.67           10  0.384

Excel

Excel does not offer 1-sample \(\chi^2\) testing.
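The worked example above can be reproduced with a short script (a sketch; the critical value 18.307 is read from a chi-square table for df = 10 and α = 0.05, as in the text):

```python
# One-sample chi-square test for a variance, using the mist-blower example:
# H0: sigma^2 = 0.06 vs Ha: sigma^2 > 0.06 with n = 11, s^2 = 0.064.
n = 11
s2 = 0.064        # sample variance (gal.^2)
sigma0_sq = 0.06  # hypothesized variance (gal.^2)

chi2_stat = (n - 1) * s2 / sigma0_sq  # (n - 1) s^2 / sigma_0^2
critical = 18.307                     # upper-tail value, df = 10, alpha = 0.05

reject = chi2_stat > critical
print(round(chi2_stat, 3), reject)  # 10.667 False, so we fail to reject H0
```

A statistics package would also report the p-value (Minitab's 0.384 above); here only the critical-value comparison is shown.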
If a variable, say X, is normally distributed with mean ##\mu## and variance ##\sigma^2##, then mathematicians write ##X \sim \mathcal{N}(\mu, \sigma^2)##. If a variable, say Y, is log-normally distributed and the underlying normal distribution has mean ##\mu## and variance ##\sigma^2##, then mathematicians write ##Y \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)##. The three graphs below show the probability density functions (PDFs) of three different random variables Red, Green and Blue. Select the most correct statement:
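As a side note on the notation (my own illustration, not part of the quiz): exponentiating a normal sample produces a log-normal one, and taking logs recovers the underlying parameters.

```python
# If X ~ N(mu, sigma^2), then Y = exp(X) ~ lnN(mu, sigma^2): the log of a
# log-normal sample is normal with the underlying mean and variance.
import math
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 0.5
xs = [random.gauss(mu, sigma) for _ in range(100_000)]
ys = [math.exp(x) for x in xs]      # log-normally distributed sample
logs = [math.log(y) for y in ys]    # back to the underlying normal

print(abs(statistics.mean(logs) - mu) < 0.02)      # True
print(abs(statistics.stdev(logs) - sigma) < 0.02)  # True
```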
Waves can travel in groups; such groups are called wave packets, and the velocity with which a wave packet travels is called the group velocity. The velocity with which the phase of a wave travels is called the phase velocity. The two are related as follows.

Group Velocity and Phase Velocity

The relation between group velocity and phase velocity can be written mathematically as
\(V_{g}=V_{p}+k\frac{dV_{p}}{dk}\)
Where:
V g is the group velocity,
V p is the phase velocity,
k is the angular wave number.

For a dispersive wave, \(\frac{dV_{p}}{dk}\neq 0\) and so \(V_{p}\neq V_{g}\); for a non-dispersive wave, \(\frac{dV_{p}}{dk}=0\) and so \(V_{p}= V_{g}\). The equation shows that the group velocity is determined by the phase velocity together with how the phase velocity varies with wave number, and vice versa.

Relation Between Group Velocity and Phase Velocity: Derivation

For the amplitude of a wave packet, let:
ω be the angular frequency, given by ω = 2πf,
k be the angular wave number, given by \(k=\frac{2\pi }{\lambda }\),
t be time, x be position, V p the phase velocity and V g the group velocity.

For any propagating wave packet, \(\frac{\Delta \omega }{2}t-\frac{\Delta k}{2}x=\text{constant}\)
\(\Rightarrow x=\text{constant}+\frac{\Delta \omega /2}{\Delta k/2}\,t\) —–(1)

Velocity is the rate of change of displacement, so the group velocity is obtained by differentiating equation (1) with respect to time:
\(V_{g}=\frac{dx}{dt}=\frac{\Delta \omega /2}{\Delta k/2}=\frac{\Delta \omega }{\Delta k}\)
\(V_{g}=\lim_{\Delta k\rightarrow 0} \frac{\Delta \omega }{\Delta k}=\frac{d\omega }{dk}\) ——(2)

We know that the phase velocity is given by \(V_{p}=\frac{\omega }{k}\;\Rightarrow\; \omega =kV_{p}\). Substituting \(\omega = kV_{p}\) in equation (2), we arrive at the equation relating group velocity and phase velocity:
\(V_{g}=\frac{d(kV_{p})}{dk}=V_{p}+k\frac{dV_{p}}{dk}\)

Hope you understood the relation between group velocity and phase velocity of a progressive wave.
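The relation can be checked numerically for a concrete dispersion relation. A sketch (my own, using deep-water gravity waves, where ω = √(gk) and the group velocity comes out to half the phase velocity):

```python
# Check V_g = V_p + k dV_p/dk for deep-water gravity waves: omega = sqrt(g k),
# so V_p = omega/k = sqrt(g/k), and the relation should give V_g = V_p / 2.
import math

g = 9.81          # gravitational acceleration (m/s^2)
k = 2 * math.pi   # angular wave number for a 1 m wavelength

def v_phase(k):
    return math.sqrt(g / k)

h = 1e-6  # central difference for dV_p/dk
dvp_dk = (v_phase(k + h) - v_phase(k - h)) / (2 * h)

v_g = v_phase(k) + k * dvp_dk
print(round(v_g / v_phase(k), 3))  # 0.5, as expected for deep water
```

Here dV_p/dk < 0, so the group velocity is smaller than the phase velocity — a dispersive wave in the sense of the table above.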
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$. Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$... What if $\theta$ is irrational... what did I do wrong?

'cause I understand that second one but I'm having a hard time explaining it in words (Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.) DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
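Back to the circle-action claim: the non-free direction is easy to see numerically, since freeness fails when some nonzero $n$ fixes a point ($g\cdot z = z$), and for rational $a = p/q$ the element $n = q$ rotates every point through a whole number of turns (my own illustration; $p$, $q$ and the sample point are arbitrary choices).

```python
# For a = p/q rational, acting by n = q multiplies z by e^{2 pi i p} = 1,
# so every z is fixed by a nonzero group element: the action is not free.
import cmath
import math

p, q = 3, 7
a = p / q
z = cmath.exp(2j * math.pi * 0.123)          # an arbitrary point of S^1

moved = z * cmath.exp(2j * math.pi * a * q)  # the action of n = q
print(abs(moved - z) < 1e-9)  # True: z is fixed by n = q != 0
```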
he based much of his success on principles like this I cant believe ive forgotten it it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. 
but that is a very good book regardless of you attending Princeton university or not yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned @TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college. 
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now @BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even A proof of this uses (basically) Fourier analysis Even though it looks rather innocuous albeit surprising result in pure number theory @BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive absolutely me too but would we have it any other way? 
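As an aside, the four-square count mentioned above (Jacobi's theorem) is easy to check by brute force for small n; this is my own script, counting ordered, signed representations.

```python
# Jacobi's four-square theorem: r_4(n) = 8 * sigma(n) for odd n, and
# 24 * (sum of odd divisors of n) for even n, where r_4 counts ordered
# representations n = a^2 + b^2 + c^2 + d^2 with a, b, c, d in Z.
def r4(n):
    count = 0
    m = int(n ** 0.5)
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            for c in range(-m, m + 1):
                d2 = n - a * a - b * b - c * c
                if d2 < 0:
                    continue
                d = int(d2 ** 0.5)
                if d * d == d2:
                    count += 2 if d else 1  # +-d, or just d = 0
    return count

def divisor_sum(n, step=1):
    # step=1 sums all divisors; step=2 sums only the odd ones
    return sum(d for d in range(1, n + 1, step) if n % d == 0)

for n in range(1, 30):
    expected = 8 * divisor_sum(n) if n % 2 else 24 * divisor_sum(n, step=2)
    assert r4(n) == expected
print("verified for n = 1..29")
```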
i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about @Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ having no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$). The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero. I can try to recall more if you're interested. It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, \Im[z] > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps So it sort of makes sense Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp.
Indeed, one basically argues like the maximum value theorem in complex analysis @BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
Skills to Develop

To understand how to use an \(F\)-test to judge whether several population means are all equal

In Chapter 9, we saw how to compare two population means \(\mu _1\) and \(\mu _2\). In this section we will learn to compare three or more population means at the same time, which is often of interest in practical applications. For example, an administrator at a university may be interested in knowing whether student grade point averages are the same for different majors. In another example, an oncologist may be interested in knowing whether patients with the same type of cancer have the same average survival times under several different competing cancer treatments.

In general, suppose there are \(K\) normal populations with possibly different means, \(\mu_1 , \mu_2 , \ldots, \mu_K\), but all with the same variance \(\sigma^2\). The study question is whether all the \(K\) population means are the same. We formulate this question as the test of hypotheses \[H_0: \mu _1=\mu _2=\cdots =\mu _K\\ vs.\\ H_a: \text{not all K population means are equal}\]

To perform the test, \(K\) independent random samples are taken from the \(K\) normal populations. The \(K\) sample means, the \(K\) sample variances, and the \(K\) sample sizes are summarized in the table:

Population  Sample Size  Sample Mean  Sample Variance
\(1\)  \(n_1\)  \(\bar{x_1}\)  \(s_{1}^{2}\)
\(2\)  \(n_2\)  \(\bar{x_2}\)  \(s_{2}^{2}\)
\(\vdots \)  \(\vdots \)  \(\vdots \)  \(\vdots \)
\(K\)  \(n_K\)  \(\bar{x_K}\)  \(s_{K}^{2}\)

Define the following quantities:

Definitions

The combined sample size: \[n=n_1+n_2+\cdots +n_K\]

The mean of the combined sample of all \(n\) observations: \[\bar{x}=\dfrac{n_1\bar{x_1}+n_2\bar{x_2}+\cdots +n_K\bar{x_K}}{n}\]

The mean square for treatment: \[MST=\dfrac{n_1(\bar{x_1}-\bar{x})^2+n_2(\bar{x_2}-\bar{x})^2+\cdots +n_K(\bar{x_K}-\bar{x})^2}{K-1}\]

The mean square for error: \[MSE= \dfrac{(n_1−1)s^2_1 + (n_2−1)s^2_2 + \ldots + (n_K−1)s^2_K}{n−K}\]

\(MST\) can be thought of as the variance between the \(K\) individual independent random samples and \(MSE\) as the variance within the samples. This is the reason for the name “analysis of variance,” universally abbreviated ANOVA.
The adjective “one-way” has to do with the fact that the sampling scheme is the simplest possible, that of taking one random sample from each population under consideration. If the means of the \(K\) populations are all the same then the two quantities \(MST\) and \(MSE\) should be close to the same, so the null hypothesis will be rejected if the ratio of these two quantities is significantly greater than \(1\). This yields the following test statistic and conditions for its use.

Test Statistic for Testing the Null Hypothesis that \(K\) Population Means Are Equal

\[F =\dfrac{MST}{MSE}\]

If the \(K\) populations are normally distributed with a common variance and if \(H_0: \mu _1=\mu _2=\cdots =\mu _K\) is true, then under independent random sampling \(F\) follows an \(F\)-distribution with degrees of freedom \(df_1=K-1\) and \(df_2=n-K\). The test is right-tailed: \(H_0\) is rejected at level of significance \(\alpha\) if \(F\geq F_\alpha\). As always, the test is performed using the usual five-step procedure.

Example \(\PageIndex{1}\)

The average of grade point averages (GPAs) of college courses in a specific major is a measure of difficulty of the major. An educator wishes to conduct a study to find out whether the difficulty levels of different majors are the same. For such a study, a random sample of major grade point averages (GPA) of \(11\) graduating seniors at a large university is selected for each of the four majors mathematics, English, education, and biology. The data are given in Table \(\PageIndex{1}\). Test, at the \(5\%\) level of significance, whether the data contain sufficient evidence to conclude that there are differences among the average major GPAs of these four majors.
Mathematics  English  Education  Biology
2.59  3.64  4.00  2.78
3.13  3.19  3.59  3.51
2.97  3.15  2.80  2.65
2.50  3.78  2.39  3.16
2.53  3.03  3.47  2.94
3.29  2.61  3.59  2.32
2.53  3.20  3.74  2.58
3.17  3.30  3.77  3.21
2.70  3.54  3.13  3.23
3.88  3.25  3.00  3.57
2.64  4.00  3.47  3.22

Solution:

Step 1. The test of hypotheses is \[H_0: \mu _1=\mu _2=\mu _3=\mu _4\\ vs.\\ H_a: \text{not all four population means are equal}\; @\; \alpha =0.05\]

Step 2. The test statistic is \(F=MST/MSE\) with (since \(n=44\) and \(K=4\)) degrees of freedom \(df_1=K-1=4-1=3\) and \(df_2=n-K=44-4=40\).

Step 3. If we index the population of mathematics majors by \(1\), English majors by \(2\), education majors by \(3\), and biology majors by \(4\), then the sample sizes, sample means, and sample variances of the four samples in Table \(\PageIndex{1}\) are summarized (after rounding for simplicity) by:

Major  Sample Size  Sample Mean  Sample Variance
Mathematics  \(n_1=11\)  \(\bar{x_1}=2.90\)  \(s_{1}^{2}=0.188\)
English  \(n_2=11\)  \(\bar{x_2}=3.34\)  \(s_{2}^{2}=0.148\)
Education  \(n_3=11\)  \(\bar{x_3}=3.36\)  \(s_{3}^{2}=0.229\)
Biology  \(n_4=11\)  \(\bar{x_4}=3.02\)  \(s_{4}^{2}=0.157\)

The average of all \(44\) observations is (after rounding for simplicity) \(\overline{x}=3.15\). We compute (rounding for simplicity) \[MST=\dfrac{11(2.90-3.15)^2+11(3.34-3.15)^2+11(3.36-3.15)^2+11(3.02-3.15)^2}{4-1}\approx 0.585\] and \[MSE=\dfrac{10(0.188)+10(0.148)+10(0.229)+10(0.157)}{44-4}\approx 0.181\] so that \[F=\dfrac{MST}{MSE}\approx \dfrac{0.585}{0.181}\approx 3.232\]

Step 4. The test is right-tailed. The single critical value is (since \(df_1=3\) and \(df_2=40\)) \(F_\alpha =F_{0.05}=2.84\). Thus the rejection region is \([2.84,\infty )\), as illustrated in Figure \(\PageIndex{1}\).

Figure \(\PageIndex{1}\): Rejection Region

Step 5. Since \(F=3.232>2.84\), we reject \(H_0\). The data provide sufficient evidence, at the \(5\%\) level of significance, to conclude that the averages of major GPAs for the four majors considered are not all equal.

Example \(\PageIndex{2}\): Mice Survival Times

A research laboratory developed two treatments which are believed to have the potential of prolonging the survival times of patients with an acute form of thymic leukemia.
To evaluate the potential treatment effects \(33\) laboratory mice with thymic leukemia were randomly divided into three groups. One group received Treatment \(1\), one received Treatment \(2\), and the third was observed as a control group. The survival times of these mice are given in Table \(\PageIndex{2}\). Test, at the \(1\%\) level of significance, whether these data provide sufficient evidence to confirm the belief that at least one of the two treatments affects the average survival time of mice with thymic leukemia.

Treatment \(1\), Treatment \(2\), Control: 71 75 77 81 72 73 67 79 75 72 79 73 80 65 78 71 60 63 81 75 65 69 72 84 63 64 71 77 78 71 84 67 91

Solution:

Step 1. The test of hypotheses is \[H_0: \mu _1=\mu _2=\mu _3\\ vs.\\ H_a: \text{not all three population means are equal}\; @\; \alpha =0.01\]

Step 2. The test statistic is \(F=\dfrac{MST}{MSE}\) with (since \(n=33\) and \(K=3\)) degrees of freedom \(df_1=K-1=3-1=2\) and \(df_2=n-K=33-3=30\).

Step 3. If we index the population of mice receiving Treatment \(1\) by \(1\), Treatment \(2\) by \(2\), and no treatment by \(3\), then the sample sizes, sample means, and sample variances of the three samples in Table \(\PageIndex{2}\) are summarized (after rounding for simplicity) by:

Group  Sample Size  Sample Mean  Sample Variance
Treatment \(1\)  \(n_1=16\)  \(\bar{x_1}=69.75\)  \(s_{1}^{2}=34.47\)
Treatment \(2\)  \(n_2=9\)  \(\bar{x_2}=77.78\)  \(s_{2}^{2}=52.69\)
Control  \(n_3=8\)  \(\bar{x_3}=75.88\)  \(s_{3}^{2}=30.69\)

The average of all \(33\) observations is (after rounding for simplicity) \(\overline{x}=73.42\). We compute (rounding for simplicity) \[MST=\dfrac{16(69.75-73.42)^2+9(77.78-73.42)^2+8(75.88-73.42)^2}{3-1}\approx 217.50\] and \[MSE=\dfrac{15(34.47)+8(52.69)+7(30.69)}{33-3}\approx 38.45\] so that \[F=\dfrac{MST}{MSE}\approx \dfrac{217.50}{38.45}\approx 5.65\]

Step 4. The test is right-tailed. The single critical value is \(F_\alpha =F_{0.01}=5.39\). Thus the rejection region is \([5.39,\infty )\), as illustrated in Figure \(\PageIndex{2}\).

Figure \(\PageIndex{2}\): Rejection Region

Step 5. Since \(F=5.65>5.39\), we reject \(H_0\).
The data provide sufficient evidence, at the \(1\%\) level of significance, to conclude that a treatment effect exists at least for one of the two treatments in increasing the mean survival time of mice with thymic leukemia.

It is important to note that, if the null hypothesis of equal population means is rejected, the statistical implication is that not all population means are equal. It does not, however, tell us which population mean is different from which. The inference about where the suggested difference lies is most frequently made by a follow-up study.

Key Takeaway

An \(F\)-test can be used to evaluate the hypothesis that the means of several normal populations, all with the same standard deviation, are identical.

Contributor: Anonymous
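The computations in both examples can be sketched from the summary statistics alone (a minimal sketch; the small discrepancies against the text's 3.232 and 5.65 come from rounding the tabulated means and variances):

```python
# One-way ANOVA F statistic from group sizes, means, and variances.
def anova_f(sizes, means, variances):
    n, K = sum(sizes), len(sizes)
    grand = sum(ni * xi for ni, xi in zip(sizes, means)) / n
    mst = sum(ni * (xi - grand) ** 2
              for ni, xi in zip(sizes, means)) / (K - 1)   # between-groups
    mse = sum((ni - 1) * si
              for ni, si in zip(sizes, variances)) / (n - K)  # within-groups
    return mst / mse

# Example 1 (GPAs by major), rounded summaries from the table above:
F = anova_f([11, 11, 11, 11],
            [2.90, 3.34, 3.36, 3.02],
            [0.188, 0.148, 0.229, 0.157])
print(round(F, 2))  # 3.24

# Example 2 (mice survival times):
F2 = anova_f([16, 9, 8],
             [69.75, 77.78, 75.88],
             [34.47, 52.69, 30.69])
print(round(F2, 2))  # 5.66
```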
Can you tell me if my answer is correct: Show that the set $P^{\omega_\omega}(\omega)$ exists. My answer: Let $P^0 (\omega) = \omega$, $P^{\alpha + 1}(\omega) = P(P^\alpha (\omega))$ and for a limit ordinal $\lambda$ let $P^\lambda (\omega) =\bigcup_{\beta < \lambda} P^\beta (\omega)$. The transfinite recursion theorem tells us that if $G: V \to V$ is a class function from the class of all sets to the class of all sets then there exists a unique function $F: ON \to V$ with $F(\alpha) = G(F\mid_\alpha)$. Hence to show the existence of $P^{\omega_\omega}(\omega)$ we need to define a $G: V \to V$ with $G(F\mid_{\omega_\omega}) = P^{\omega_\omega}(\omega) = F(\omega_\omega)$. Also we know that for successor ordinals $\alpha$ we want $F\mid_{\beta + 1} = F(\beta) = P^{\beta}(\omega)$. So for successor ordinals $\alpha = \beta + 1$ we define $G(F\mid_{\alpha})= G(F\mid_{\beta + 1}) = G( P^{\beta}(\omega)) := P( P^{\beta}(\omega))$. We also know that $F\mid_\varnothing = \varnothing$ so $G(F\mid_\varnothing) = G(\varnothing) := P^0 (\omega)$. Finally, if $\lambda$ is a limit ordinal we want $G(F\mid_\lambda) := \bigcup_{\alpha < \lambda} P^\alpha (\omega)$. For all other sets we define $G$ to map to the empty set. Thanks for your help.
How to Select the Right Light Pipe Homogenizing Rod

Much like optical fibers, light pipe homogenizing rods utilize total internal reflection (TIR) to transmit light from the entrance to the exit of the light pipe. The substrate’s refractive index is the only factor that influences the light pipe’s critical angle, which defines the angle of acceptance at which TIR will occur. Light pipes designed specifically for high NA, standard NA, and low NA light sources will therefore have the same acceptance angle if made from the same substrate. The critical angle (θc) is calculated using Equation 1:

(1)$$ \theta_c= \sin^{-1} \! \left(\frac{1}{n}\right) $$

In the case of an N-BK7 light pipe, the index of refraction at the helium d-line of 587.6nm is 1.517, resulting in a θc of about 41°. If the incident angle is greater than θc, TIR will occur and light will be transmitted through the light pipe (Figure 1). Since light rays enter a light pipe at a variety of incident angles, the number of reflections inside the light pipe differs from ray to ray. A ray at the smallest incident angle, equal to θc, will undergo more reflections than light entering at an incident angle many times larger than θc within the same length of light pipe (Figure 2). Because low NA light sources contain a larger proportion of rays at incident angles well above θc than high NA light sources, low NA light pipes are longer than high NA light pipes, and are recommended for light sources with narrow beam divergence. Note: Light pipes are not designed for use with collimated laser sources. To homogenize collimated light sources, Microlens Arrays or flat top laser beam shapers are recommended. Light pipes are ideal for homogenizing polychromatic sources that emit non-collimated light.
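A one-line check of Equation 1 for the N-BK7 example (my own sketch):

```python
# Critical angle for TIR: theta_c = asin(1/n), with n = 1.517 for N-BK7.
import math

n = 1.517
theta_c = math.degrees(math.asin(1 / n))
print(round(theta_c, 1))  # 41.2, i.e. the ~41 degrees quoted above
```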
I have a question regarding a formula in the SURF article by Bay et al.

Theory

Given a point $p=(x,y)$ in an image $I$, the Hessian matrix $\mathcal{H}$ in $x$ at scale $\sigma$ is defined as follows $$ \mathcal{H}(p, \sigma) = \begin{bmatrix}L_{xx}(p, \sigma) & L_{xy}(p, \sigma)\\L_{xy}(p, \sigma)& L_{yy}(p, \sigma)\end{bmatrix}, $$ where $L_{xx}(p,\sigma)$ is the convolution of the Gaussian second order derivative $\frac{\partial^2}{\partial x^2}g(\sigma)$ with the image $I$ in point $p$, and similarly for $L_{xy}(p,\sigma), L_{yy}(p,\sigma)$. The $9\times9$ box filters in Fig. 2 are approximations of a Gaussian with $\sigma=1.2$ and represent the lowest scale (i.e. highest spatial resolution) for computing the blob response maps. We will denote them by $D_{xx}$, $D_{yy}$, and $D_{xy}$. The weights applied to the rectangular regions are kept simple for computational efficiency. This yields $$ \det(\mathcal{H}_{approx}) = D_{xx}D_{yy} - (wD_{xy})^2. $$ The relative weight $w$ of the filter responses is used to balance the expression for the Hessian’s determinant. This is needed for the energy conservation between the Gaussian kernels and the approximated Gaussian kernels, $$ w = \frac{\big\lvert L_{xy}(1.2)\big\rvert_F\big\lvert D_{yy}(9)\big\rvert_F}{\big\lvert L_{yy}(1.2)\big\rvert_F\big\lvert D_{xy}(9)\big\rvert_F} = 0.912\ldots \approx 0.9, $$ where $|X|_F$ is the Frobenius norm. Notice that for theoretical correctness, the weighting changes depending on the scale. In practice, we keep this factor constant, as this did not have a significant impact on the results in our experiments.

Questions

My main question is how this formula for $w$ is obtained. My other concern is why $w$ sits inside the brackets with $D_{xy}$: if it is just a balancing coefficient, wouldn't it make more sense to write $wD_{xy}^2$? So I guess there is some motivation behind it.
My general purpose is to generalize this method to 3D, so it would be great if anyone could share some thoughts about balancing coefficients in that case, or some useful links containing relevant information.
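On the main question: $w$ is the ratio of Frobenius norms that equalizes the "energy" of the true Gaussian-derivative kernels against their box approximations. Here is my own reconstruction of the computation (a sketch, not the authors' code; the box-filter norms follow from their integer weights, $|D_{yy}(9)|_F = \sqrt{90}$ and $|D_{xy}(9)|_F = 6$):

```python
# Recompute the SURF balancing weight w on a 9x9 grid with sigma = 1.2:
# w = |L_xy|_F |D_yy|_F / (|L_yy|_F |D_xy|_F), where L_* are sampled
# Gaussian second derivatives and D_* are the box-filter approximations.
import math

sigma = 1.2

def g(x, y):
    return math.exp(-(x * x + y * y) / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)

grid = range(-4, 5)  # the 9x9 support
L_yy = [g(x, y) * (y * y / sigma ** 4 - 1 / sigma ** 2) for x in grid for y in grid]
L_xy = [g(x, y) * (x * y / sigma ** 4) for x in grid for y in grid]

def fro(kernel):
    return math.sqrt(sum(v * v for v in kernel))

D_yy_fro = math.sqrt(15 * (1 + 4 + 1))  # three 5x3 bands with weights 1, -2, 1
D_xy_fro = math.sqrt(4 * 9)             # four 3x3 blocks with weights +/-1

w = fro(L_xy) * D_yy_fro / (fro(L_yy) * D_xy_fro)
print(round(w, 2))  # about 0.91, consistent with the paper's 0.912
```

For 3D, the same recipe should presumably carry over with 3D Gaussian second derivatives and the corresponding 3D box filters, giving a different numeric value per derivative pair.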
The Annals of Applied Probability
Ann. Appl. Probab., Volume 27, Number 3 (2017), 1452-1477.

Model-free superhedging duality

Abstract
In a model-free discrete time financial market, we prove the superhedging duality theorem, where trading is allowed with dynamic and semistatic strategies. We also show that the initial cost of the cheapest portfolio that dominates a contingent claim on every possible path $\omega \in \Omega$ might be strictly greater than the upper bound of the no-arbitrage prices. We therefore characterize the subset of trajectories on which this duality gap disappears and prove that it is an analytic set.

Article information
Source: Ann. Appl. Probab., Volume 27, Number 3 (2017), 1452-1477.
Dates: Received June 2015; revised May 2016; first available in Project Euclid 19 July 2017.
Permanent link: https://projecteuclid.org/euclid.aoap/1500451228
Digital Object Identifier: doi:10.1214/16-AAP1235
Mathematical Reviews number (MathSciNet): MR3678476
Zentralblatt MATH identifier: 1370.60004
Subjects: Primary 60B05 (probability measures on topological spaces); 60G42 (martingales with discrete parameter); 28A05 (classes of sets, measurable sets, Suslin sets, analytic sets); 28B20 (set-valued set functions and measures; measurable selections); 46A20 (duality theory); 91B70 (stochastic models); 91B24 (price theory and market structure)

Citation
Burzoni, Matteo; Frittelli, Marco; Maggis, Marco. Model-free superhedging duality. Ann. Appl. Probab. 27 (2017), no. 3, 1452-1477. doi:10.1214/16-AAP1235. https://projecteuclid.org/euclid.aoap/1500451228
Someone asked on another site about ways to evaluate $\int \ln x \ dx $ without using integration by parts. My response was the following: $$ \begin{align} \int\ln x \, dx & =\int\lim_{t\to 0}\frac{x^{t}-1}{t} \, dx \\ &= \lim_{t\to 0}\frac{1}{t}\int (x^{t}-1) \, dx \\ &=\lim_{t\to 0}\frac{1}{t}\left(\frac{x^{t+1}}{t+1}-x\right)+C \\ &=\lim_{t\to 0}\frac{x^{t+1}-x(t+1)}{t(t+1)}+C \\ &=\lim_{t\to 0}\frac{x^{t+1}\ln x-x}{2t+1}+C \\ &= x\ln x-x+C \end{align}$$ But I don't know how to justify moving the limit outside the indefinite integral.
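A quick numeric sanity check of both the limit identity and the final answer (not a justification of the interchange, just a consistency check):

```python
# Check (x^t - 1)/t -> ln x as t -> 0, and that d/dx (x ln x - x) = ln x.
import math

x = 2.7

t = 1e-8
assert abs((x ** t - 1) / t - math.log(x)) < 1e-6  # the limit identity

F = lambda v: v * math.log(v) - v  # the claimed antiderivative
h = 1e-6
deriv = (F(x + h) - F(x - h)) / (2 * h)  # central-difference derivative
print(abs(deriv - math.log(x)) < 1e-8)   # True
```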
So now that you know about matrices, we can use them to add a third way to solve a system of equations. You will need to read my previous 3 posts on matrices if you are unfamiliar with how to multiply matrices. In the last post on System of Equations, I looked at the system: 2 x + 3 y = 51 3 x + 2 y = 49 And in my last post on Matrices, I showed you how a 2×2 matrix of numbers and a 2×1 matrix of unknowns can be multiplied together to get a 2×1 matrix that looks suspiciously like the left sides of a system of equations. This is, in fact, true. If I form a matrix using the coefficients on the left side of the above system, I get a matrix which I will call A: \[ {\textbf{A}}\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{cc}{2}&{3}\\{3}&{2}\end{array}}\right] \] Let me now define a matrix x (which is different from the single variable x which is in italics and not bold): \[ {\textbf{x}}\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{c}{x}\\{y}\end{array}}\right] \] Now I will define a matrix b: \[ {\textbf{b}}\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{c}{51}\\{49}\end{array}}\right] \] Now see what happens if I multiply A by x: \[ {\textbf{A}}{\textbf{x}}\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{cc}{2}&{3}\\{3}&{2}\end{array}}\right]\times\left[{\begin{array}{c}{x}\\{y}\end{array}}\right]\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{c}{{2}{x}{+}{3}{y}}\\{{3}{x}{+}{2}{y}}\end{array}}\right] \] The rows of this result look just like the left side of our system of equations. And b is the right side. So the matrix equivalent of the system is \[ \begin{array}{c} {{\textbf{A}}{\textbf{x}}\hspace{0.33em}{=}\hspace{0.33em}{\textbf{b}}}\\ {\left[{\begin{array}{cc}{2}&{3}\\{3}&{2}\end{array}}\right]\left[{\begin{array}{c}{x}\\{y}\end{array}}\right]\hspace{0.33em}{=}\hspace{0.33em}\left[{\begin{array}{c}{51}\\{49}\end{array}}\right]} \end{array} \] This is easy to form directly.
You just form A as the matrix of coefficients (with the unknowns in the same order in each equation), x is the matrix of unknowns, and b is the matrix of the numbers on the right sides. So how do we solve this? In my last post, I defined the inverse of a matrix A as $\mathbf{A}^{-1}$: the matrix that, when I multiply A by it, gives the identity matrix, which is the equivalent of “1” in scalar maths. The process of isolating (solving for) variables in a matrix equation is exactly the same as for scalar equations: you do the same thing to both sides, with the goal of having the unknowns by themselves on one side. So if I pre-multiply (remember, order of multiplication in matrix maths is important) both sides of our matrix equation by $\mathbf{A}^{-1}$, the left side is the identity matrix times x, which is equal to just x. The right side multiplies out to form the solution. As I said before, finding $\mathbf{A}^{-1}$ is beyond the scope of this set of posts, so I will just tell you what it is. However, many modern calculators will do this for you, and you can also use the internet and search for “matrix inverse calculator”.
It turns out that $\mathbf{A}^{-1}$ is: \[ \mathbf{A}^{-1} = \left[\begin{array}{cc}\frac{-2}{5}&\frac{3}{5}\\\frac{3}{5}&\frac{-2}{5}\end{array}\right] \] So taking the matrix equation and pre-multiplying both sides by $\mathbf{A}^{-1}$ gives \[ \mathbf{A}^{-1}\mathbf{A}\mathbf{x} = \mathbf{A}^{-1}\mathbf{b} \;\Longrightarrow\; \mathbf{I}\mathbf{x} = \mathbf{A}^{-1}\mathbf{b} \;\Longrightarrow\; \mathbf{x} = \mathbf{A}^{-1}\mathbf{b} \] \[ \left[{\begin{array}{cc}{\frac{{-}{2}}{5}}&{\frac{3}{5}}\\{\frac{3}{5}}&{\frac{{-}{2}}{5}}\end{array}}\right]\left[{\begin{array}{cc}{2}&{3}\\{3}&{2}\end{array}}\right]\left[{\begin{array}{c}{x}\\{y}\end{array}}\right] = \left[{\begin{array}{cc}{\frac{{-}{2}}{5}}&{\frac{3}{5}}\\{\frac{3}{5}}&{\frac{{-}{2}}{5}}\end{array}}\right]\left[{\begin{array}{c}{51}\\{49}\end{array}}\right] \] \[ \Longrightarrow\; \left[{\begin{array}{c}{x}\\{y}\end{array}}\right] = \left[{\begin{array}{cc}{\frac{{-}{2}}{5}}&{\frac{3}{5}}\\{\frac{3}{5}}&{\frac{{-}{2}}{5}}\end{array}}\right]\left[{\begin{array}{c}{51}\\{49}\end{array}}\right] = \left[{\begin{array}{c}{9}\\{11}\end{array}}\right] \] Which is the same answer as before, x = 9 and y = 11. This is a very powerful method for large systems of equations. Next time I will solve a system of 4 equations with 4 unknowns. For those of you who have done this manually, you will appreciate the ease matrix algebra provides.
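For readers who want to check this by machine, here is a minimal sketch (plain Python, no libraries) that forms the inverse of a 2×2 matrix from its determinant, exactly as done above, and applies it to b:

```python
# Solve the 2x2 system A x = b from the post by forming A^{-1} explicitly.
# For a 2x2 matrix, A^{-1} = (1/det) * [[d, -b], [-c, a]]; here det = 2*2 - 3*3 = -5.
A = [[2, 3], [3, 2]]
b = [51, 49]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]   # [[-2/5, 3/5], [3/5, -2/5]]

x = [A_inv[0][0] * b[0] + A_inv[0][1] * b[1],
     A_inv[1][0] * b[0] + A_inv[1][1] * b[1]]
print(x)  # approximately [9, 11]
```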
I have 8 country stock indexes and 1 world stock index. I do not actually have time series data, but I'm given the following: $\mu$, the vector of expected future returns for all 8 country indexes and the world index (9 indexes), and $\Omega$, the variance-covariance matrix of all 9 indexes. I'm forming a MV efficient and a Michaud resampling portfolio over the 8 country indexes - the world index is not considered an investable asset class. I want to compare the two portfolios by looking at the systematic risk and unsystematic risk of both portfolios w.r.t. the world market index. So we have the two weight vectors produced by the two methodologies: $_1w$ (MV) and $_2w$ (REF, Resampled Efficient Frontier). We can calculate the betas of both portfolios by going $_j\beta_p = \sum_{i=1}^8 (_jw_i )\frac{\sigma_{i,world}}{\sigma^2_{world}}$ for $j = 1,2$. Being able to sum the coefficients like this follows from OLS. How do I get from here to the unsystematic and systematic risk of the portfolios? I can't recover the error term from the specification that generates the betas, so it seems I'm stuck.
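As a purely illustrative sketch of the beta formula in the question (all numbers below are made up, and the decomposition shown is the standard single-index one, not necessarily the only choice): the portfolio beta is the weighted sum of asset covariances with the world index over the world variance, and one can then split total portfolio variance into $\beta_p^2\sigma^2_{world}$ plus a residual:

```python
# Hypothetical 3-asset example (not the 8 assets of the question).
# beta_p = sum_i w_i * sigma_{i,world} / sigma^2_{world}
# systematic variance   = beta_p**2 * sigma^2_{world}
# unsystematic variance = w' Omega w - systematic variance
w = [0.5, 0.3, 0.2]                      # portfolio weights (made up)
cov_with_world = [0.018, 0.015, 0.012]   # sigma_{i,world} (made up)
var_world = 0.025                        # sigma^2_{world} (made up)
Omega = [[0.040, 0.012, 0.010],          # covariance matrix of the assets (made up)
         [0.012, 0.035, 0.011],
         [0.010, 0.011, 0.030]]

beta_p = sum(wi * c for wi, c in zip(w, cov_with_world)) / var_world
total_var = sum(w[i] * Omega[i][j] * w[j] for i in range(3) for j in range(3))
systematic = beta_p**2 * var_world
unsystematic = total_var - systematic
print(beta_p, systematic, unsystematic)
```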
Event detail Seminar | March 4 | 12:10-1 p.m. | 939 Evans Hall Nicolai Reshetikhin, UC Berkeley For finite dimensional representations $V_1, \dots , V_m$ of a simple finite dimensional Lie algebra $\mathfrak g$, consider the tensor product $W=\otimes _{i=1}^m V_i^{\otimes N_i}$. The first result, which will be presented in the talk, is the asymptotics of the multiplicity of an irreducible representation $V_\lambda $ with highest weight $\lambda$ in this tensor product when $N_i=\tau _i/\epsilon , \lambda =\xi /\epsilon $ and $\epsilon \to 0$. Then we will discuss the asymptotic distribution of irreducible components with respect to the character probability measure $Prob(\lambda )=\frac {m_\lambda \chi _{V_\lambda }(e^t)}{\chi _W(e^t)}$. Here $\chi _V(e^t)$ is the character of the representation $V$ evaluated on $e^t$, where $t$ is an element of the Cartan subalgebra of the split real form of the Lie algebra $\mathfrak g$. This is joint work with O. Postnova.
I want to know if there is a way to find, for example, $\ln(2)$ without using a calculator? Thanks

And let's not forget this method (read off of the Ln scale). $$\log 2 = 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\ldots$$ In the general case $$\log \frac{1+x}{1-x} = 2\left(x+\frac{x^3}{3}+\frac{x^5}{5}+\frac{x^7}{7}+\ldots\right)$$

How precise do you need the calculation to be? As a quick and dirty approximation, we know that $2^3 = 8$ and $e^2 \approx 2.7^2 = 7.29$, and so $\ln(2)$ should be just over $\frac{2}{3} \approx 0.67$. Continuing to match powers, we find $2^{10} = 1024$, and $e^7 \approx (2.7)^7 = (3 - 0.3)^7 = 3^7 -7(3)^6(.3) + 21(3)^5(.3)^2 - 35(3)^4(.3)^3 \dots$ $= 3^7 (1 - .7 + .21 - .035 \dots)$ $\approx 2187(.475) = 1038.825$. Therefore, $e^7 \approx 2^{10}$ and so $\ln(2)$ should be just under $0.7$.

The operations that are relatively easy to compute by hand are addition, multiplication, and their inverses, subtraction and division. With these operations we can compute all rational functions, e.g. $\frac{2x^2-1}{x^3+x-1}$. We know that $$\ln(x)=\sum_{k=1}^{\infty}(-1)^{k+1}\frac{(x-1)^k}{k}$$ for values of $x$ close to $1$. So, if we take partial sums of this series we get approximations to the logarithm that only require multiplications, sums and subtractions. Notice that we only need to be able to compute values of the logarithm for numbers close to $1$, since using $\ln(e^kx)=k+\ln(x)$ allows us to reduce to this case.

$$\log2=\frac{2}{3}\left(1+\frac{1}{27}+\frac{1}{405}+\frac{1}{5103}+\frac{1}{59049}+\frac{1}{649539}+...\right)$$ The $k$-th denominator is $(2k+1)9^k$.
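The last series above is $2\operatorname{artanh}(1/3)$, and it converges quickly — roughly a digit per term. A short check (my own, using `math.log` only for comparison):

```python
import math

# log 2 = (2/3) * sum_{k>=0} 1 / ((2k+1) * 9^k), i.e. 2*artanh(1/3).
total = 0.0
for k in range(12):
    total += 1.0 / ((2 * k + 1) * 9**k)
ln2 = (2.0 / 3.0) * total
print(ln2, math.log(2))  # agree to better than 1e-11 after 12 terms
```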
Gourdon and Sebah discuss the efficiency of this formula in http://plouffe.fr/simon/articles/log2.pdf (page 11). A "little more effort" is required to compute $\log(2)$ using this formula than to compute $\pi$ using Machin's relation.

We have the CORDIC method, which can be quite effective for by-hand computation as it requires additions/subtractions only (and one multiply by a small integer). There are two limitations though: it is better performed in base $2$, so a preliminary change of base is needed for the input argument (you can do it in base $10$ as well but it takes about $3$ times more operations); and you need a small table of constants. It is based on the identity $\log(ab)=\log(a)+\log(b)$. You first normalize the binary number as $x=z\cdot2^e$, with $1\le z<10_b$. You have $\log(x)=\log(z)+e\cdot\log(2)$. Then $$\log(z)=\log(0.11_bz)-\log(0.11_b)\\ \log(z)=\log(0.111_bz)-\log(0.111_b)\\ \log(z)=\log(0.1111_bz)-\log(0.1111_b)\\ \cdots$$ You will use these equalities as follows. Initialize an accumulator $l\leftarrow0$, and

if $0.11_bz>1$ (i.e. $z>1.01010101_b\cdots$) let $z\leftarrow 0.11_bz$, $l\leftarrow l-\log(0.11_b)$;

if $0.111_bz>1$ (i.e. $z>1.00100100_b\cdots$) let $z\leftarrow 0.111_bz$, $l\leftarrow l-\log(0.111_b)$;

if $0.1111_bz>1$ (i.e. $z>1.00010001_b\cdots$) let $z\leftarrow 0.1111_bz$, $l\leftarrow l-\log(0.1111_b)$;

$\cdots$

The multiplies are actually performed as shifts and subtractions (f.i. $0.111_bz=z-0.001_bz$). This way, we progressively reduce $z$ to bring it closer and closer to $1$, while $l$ gets closer and closer to the logarithm of the initial $z$. On every step we gain one bit of accuracy. The table of constants ($\log(10_b)=-\log(0.1_b),-\log(0.11_b),-\log(0.111_b),\cdots$ up to the desired number of significant bits) is computed in the decimal base, so that the answer is readily available as such.
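The procedure just described can be sketched in floating point (my own illustration — a genuine by-hand run would use shift-and-subtract multiplies and the precomputed decimal table that follows):

```python
import math

# Shift-and-add ("CORDIC-like") logarithm: repeatedly multiply z by
# factors f_k = 1 - 2**-k (i.e. 0.11_b, 0.111_b, ...) while z*f_k >= 1,
# accumulating -log(f_k) each time.  math.log here stands in for the
# table of constants a by-hand computation would have precomputed.
def cordic_log(x, nbits=45):
    e = math.floor(math.log2(x))     # normalize x = z * 2**e, 1 <= z < 2
    z = x / 2.0**e
    acc = e * math.log(2)
    for k in range(2, nbits + 1):
        f = 1.0 - 2.0**-k
        while z * f >= 1.0:
            z *= f
            acc -= math.log(f)       # table lookup in the by-hand version
    return acc

print(cordic_log(2.0), math.log(2.0))
print(cordic_log(10.0), math.log(10.0))
```

Each pass roughly halves the distance from $z$ to $1$, matching the "one bit of accuracy per step" remark.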
$$\begin{align}z&\to-\log(z)\\ 0.1000000000000000000000000000000_b&\to 0.6931471806_d\\ 0.1100000000000000000000000000000_b&\to 0.2876820725_d\\ 0.1110000000000000000000000000000_b&\to 0.1335313926_d\\ 0.1111000000000000000000000000000_b&\to 0.0645385211_d\\ 0.1111100000000000000000000000000_b&\to 0.0317486983_d\\ 0.1111110000000000000000000000000_b&\to 0.0157483570_d\\ 0.1111111000000000000000000000000_b&\to 0.0078431775_d\\ 0.1111111100000000000000000000000_b&\to 0.0039138993_d\\ 0.1111111110000000000000000000000_b&\to 0.0019550348_d\\ 0.1111111111000000000000000000000_b&\to 0.0009770396_d\\ 0.1111111111100000000000000000000_b&\to 0.0004884005_d\\ 0.1111111111110000000000000000000_b&\to 0.0002441704_d\\ 0.1111111111111000000000000000000_b&\to 0.0001220778_d\\ 0.1111111111111100000000000000000_b&\to 0.0000610370_d\\ 0.1111111111111110000000000000000_b&\to 0.0000305180_d\\ 0.1111111111111111000000000000000_b&\to 0.0000152589_d\\ 0.1111111111111111100000000000000_b&\to 0.0000076294_d\\ 0.1111111111111111110000000000000_b&\to 0.0000038147_d\\ 0.1111111111111111111000000000000_b&\to 0.0000019074_d\\ 0.1111111111111111111100000000000_b&\to 0.0000009537_d\\ 0.1111111111111111111110000000000_b&\to 0.0000004768_d\\ 0.1111111111111111111111000000000_b&\to 0.0000002384_d\\ 0.1111111111111111111111100000000_b&\to 0.0000001192_d\\ 0.1111111111111111111111110000000_b&\to 0.0000000596_d\\ 0.1111111111111111111111111000000_b&\to 0.0000000298_d\\ 0.1111111111111111111111111100000_b&\to 0.0000000149_d\\ 0.1111111111111111111111111110000_b&\to 0.0000000075_d\\ 0.1111111111111111111111111111000_b&\to 0.0000000037_d\\ 0.1111111111111111111111111111100_b&\to 0.0000000019_d\\ 0.1111111111111111111111111111110_b&\to 0.0000000009_d\\ 0.1111111111111111111111111111111_b&\to 0.0000000005_d\\ \end{align}$$ One can use the fact that$$\log x=\lim_{n\to\infty}n\left(1-\frac{1}{\sqrt[n]{x}}\right)$$For $\log2$ a good approximation 
is$$1048576\left(1-\frac{1}{\sqrt[1048576]{2}}\right)$$where$$\sqrt[1048576]{x}$$can be computed by pressing twenty times the SQRT key on a pocket calculator, since $1048576=2^{20}$ (or computing it by hand, with much patience and time to spend). What I get doing those computations is $0.6931469565952$, while a real computer gives $0.69314718055994530941$, so we have five exact decimal digits. Of course bigger numbers won't do, since the $2^{20}$-th root of it will be too near $1$ and the necessary digits would have already been lost. (Note: $\log$ is the natural logarithm; I refuse to denote it in any other way. ;-))

$$\log (x)=\sum _{n=1}^{\infty } \frac{\left(\frac{x-1}{x}\right)^n}{n}$$ when $x>1$

What you can use is the Taylor expansion of $\ln (1+x)$: $$\ln (1+x) = \sum (-1)^{j+1}{x^j\over j}$$ which converges for $-1<x\le1$. It would be tempting to insert $x=1$ into it, but that would be a poor choice since the convergence for $x=1$ is painfully slow. Instead you use the fact that $\ln 2 = -\ln 1/2$ and insert $x=-1/2$ instead: $$\ln (1-{1\over 2}) = \sum (-1)^{j+1}{1\over j2^j} = -\sum {1\over j2^j}$$ So $$\ln 2 = \sum {1\over j2^j}$$ This is similar to how the calculator does it, but there are probably a few more tricks used. First, it probably uses the base two logarithm and has a stored value of $\lg_2 e$ to be able to produce the natural logarithm. The reason for this is to be able to handle logarithms of values outside the convergence region (and generally we want to use the series for as narrow a region as possible). We generally can write any number in the form $x2^p$ (in fact the numbers are already represented in that form) with $x$ being near $1$, and then $\lg_2(x2^p) = p + \lg_2(x)$ (a similar trick is done on all these kinds of functions). The second trick is to approximate $\ln(1+x)$ on the interval $[1/\sqrt2, \sqrt2]$ even better than the Taylor expansion; the trick is to find a polynomial that approximates it as uniformly well as possible.
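The "twenty SQRT presses" trick is easy to replay in code (my own re-run of the computation above):

```python
import math

# ln(x) ~ n*(1 - x**(-1/n)) with n = 2**20; the n-th root of 2 is
# obtained by taking the square root twenty times.
x = 2.0
root = x
for _ in range(20):          # pressing the square-root key 20 times
    root = math.sqrt(root)   # now root = 2 ** (1/2**20)
n = 2**20
approx = n * (1.0 - 1.0 / root)
print(approx, math.log(2))   # about five correct decimals, as the post observes
```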
The Maclaurin expansion has the property that it will yield a good approximation fast for values near zero, at the expense of values further away. For the generic case one uses a polynomial that yields a good enough approximation equally fast over the whole interval. We can represent the logarithm of positive rational numbers as follows. First, consider the following null conditionally convergent series (cancelled harmonic series): $$0=(1-1)+\left(\frac{1}{2}-\frac{1}{2}\right)+\left(\frac{1}{3}-\frac{1}{3}\right)+\left(\frac{1}{4}-\frac{1}{4}\right)+\left(\frac{1}{5}-\frac{1}{5}\right)+...$$ Note that we are computing $0=\log(1)=\log\left(\frac{1}{1}\right)$ by adding consecutive terms with 1 positive fraction and 1 negative fraction each, taken from the inverses of non-zero integers. This observation may sound trivial now, but it is interesting for what comes next. We can rearrange the terms of this series to compute $\log(2)$ by taking two positive fractions and one negative for each term. $$\log\left(2\right)=\left(1+\frac{1}{2}-1\right)+\left(\frac{1}{3}+\frac{1}{4}-\frac{1}{2}\right)+\left(\frac{1}{5}+\frac{1}{6}-\frac{1}{3}\right)+\left(\frac{1}{7}+\frac{1}{8}-\frac{1}{4}\right)+...$$ This can be easily seen to be the Mercator series in disguise, so we have discovered nothing new yet. But there is more. Similarly, we have $$\log\left(3\right)=\left(1+\frac{1}{2}+\frac{1}{3}-1\right)+\left(\frac{1}{4}+\frac{1}{5}+\frac{1}{6}-\frac{1}{2}\right)+\left(\frac{1}{7}+\frac{1}{8}+\frac{1}{9}-\frac{1}{3}\right)+\left(\frac{1}{10}+\frac{1}{11}+\frac{1}{12}-\frac{1}{4}\right)+...$$ This pattern holds for all positive integers, so the next step is applying the property that $\log(p/q)=\log(p)-\log(q)$ to these representations. This leads to $\log(p/q)$ by adding $p$ positive fractions and $q$ negative fractions at each step.
For example, we have $$\log\left(\frac{3}{2}\right)=\left(1+\frac{1}{2}+\frac{1}{3}-1-\frac{1}{2}\right)+\left(\frac{1}{4}+\frac{1}{5}+\frac{1}{6}-\frac{1}{3}-\frac{1}{4}\right)+\left(\frac{1}{7}+\frac{1}{8}+\frac{1}{9}-\frac{1}{5}-\frac{1}{6}\right)+...$$ as illustrated in http://oeis.org/A166871.
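The block structure of the rearrangement translates directly into a loop (my own sketch; note the series is only conditionally convergent, so the terms must be summed in exactly this block order):

```python
import math

# log(p/q) as a series whose k-th block has p positive fractions
# 1/(p(k-1)+1), ..., 1/(pk) and q negative fractions 1/(q(k-1)+1), ..., 1/(qk).
def log_ratio(p, q, blocks=100000):
    total = 0.0
    for k in range(1, blocks + 1):
        total += sum(1.0 / i for i in range(p * (k - 1) + 1, p * k + 1))
        total -= sum(1.0 / i for i in range(q * (k - 1) + 1, q * k + 1))
    return total

print(log_ratio(2, 1), math.log(2))      # the Mercator series in disguise
print(log_ratio(3, 2), math.log(1.5))    # the log(3/2) example above
```

Convergence is slow (the error after $N$ blocks is of order $1/N$), so this is a curiosity rather than a practical method.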
Let me describe the standard way to do this kind of computation. The key claim is that to obtain this kind of formula, one can pretend that the vector bundle is a sum of line bundles (even if this is not true in general). This claim is called the splitting principle and can be justified, but I will not do it here (unless explicitly asked). Let me just show how it can be used in practice. We have our rank two vector bundle $E$. Let us pretend that $E=L_1 \oplus L_2$ where $L_1$ and $L_2$ are line bundles. Then $c_1(E)=c_1(L_1)+c_1(L_2)$ and $c_2(E)=c_1(L_1)c_1(L_2)$ are the elementary symmetric functions in $c_1(L_1)$ and $c_1(L_2)$. We have $Sym^n(E)=L_1^n \oplus (L_1^{n-1} \otimes L_2) \oplus \dots \oplus L_2^n$, hence $$c(Sym^n(E))=\prod_{i=0}^n c(L_1^{n-i} \otimes L_2^i)=\prod_{i=0}^n (1+(n-i)c_1(L_1)+ic_1(L_2)).$$ This expression is a symmetric polynomial in the variables $c_1(L_1)$ and $c_1(L_2)$ and so can be written as a polynomial in the elementary symmetric polynomials in these variables, i.e. as a polynomial in $c_1(E)$ and $c_2(E)$.
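As a sanity check (my own computation, not part of the answer above), carrying this out for $n = 2$ gives:

```latex
% Worked case n = 2, writing a = c_1(L_1), b = c_1(L_2),
% so that c_1(E) = a + b and c_2(E) = ab:
\begin{align*}
c(Sym^2(E)) &= (1+2a)(1+a+b)(1+2b) \\
  &= \bigl(1 + 2(a+b) + 4ab\bigr)\bigl(1 + (a+b)\bigr) \\
  &= 1 + 3(a+b) + 2(a+b)^2 + 4ab + 4ab(a+b) \\
  &= 1 + 3c_1(E) + \bigl(2c_1(E)^2 + 4c_2(E)\bigr) + 4c_1(E)c_2(E).
\end{align*}
```

Reading off the graded pieces: $c_1(Sym^2 E) = 3c_1(E)$, $c_2(Sym^2 E) = 2c_1(E)^2 + 4c_2(E)$, and $c_3(Sym^2 E) = 4c_1(E)c_2(E)$.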
$$ \frac{3}{1-\sin^6x-\cos^6x}=(\tan x + \cot x)^2$$ Need help with an identity I got for my high school homework. Can't seem to find a way to prove it. Please help with the easiest way to do it. Thanks!

The RHS reads $$\left(\frac sc+\frac cs\right)^2=\frac{(s^2+c^2)^2}{c^2s^2}=\frac{1}{c^2s^2},$$ which hints that you should rework the denominator of the LHS. The terms $s^6+c^6$ can appear from the development of $$1^3=(s^2+c^2)^3=s^6+3s^4c^2+3s^2c^4+c^6=s^6+c^6+3s^2c^2.$$ The rest is easy.
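For completeness (the answer leaves this as an exercise), the hint can be finished as:

```latex
% From 1 = (s^2+c^2)^3 = s^6 + c^6 + 3s^2c^2:
\begin{align*}
1 - \sin^6 x - \cos^6 x &= 3\sin^2 x\cos^2 x,\\
\frac{3}{1-\sin^6 x-\cos^6 x} &= \frac{1}{\sin^2 x\cos^2 x}
  = \frac{(\sin^2 x+\cos^2 x)^2}{\sin^2 x\cos^2 x}
  = (\tan x+\cot x)^2.
\end{align*}
```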
Why are so many algebraists nowadays interested in cluster algebras? (This is a rewording of one half of the closed question Cluster algebras and teichmuller theory.) MathOverflow is a question and answer site for professional mathematicians. It only takes a minute to sign up.Sign up to join this community One reason is that cluster algebras have motivated many recent developments in the representation theory of associative algebras. There is a lot one can say about this, so I will try to just give an overview of some of the key ideas, and suggest further reading. I recommend Keller's survey article (https://arxiv.org/abs/0807.1960) as a good source for a more detailed discussion. A simple case of a cluster algebra is the cluster algebra without frozen variables defined by a skew-symmetric matrix. A skew-symmetric matrix is essentially the same data (modulo some technicalities) as a quiver (directed graph) without loops or $2$-cycles, e.g. $$\begin{pmatrix}0&-2\\2&0\end{pmatrix}\ \longleftrightarrow\ 1\Rightarrow 2$$ where the '$\Rightarrow$' represents a pair of arrows. The matrix here is the skew-symmetrisation of the adjacency matrix of the quiver, the assumptions on loops and $2$-cycles ensuring that there is no cancellation when computing this matrix, so that the process is invertible. (For simplicity, to avoid algebras breaking up into direct products, I consider only the pairs in the above correspondence such that the quiver is connected; this is most important in the classification statements below.) Such quivers, when they have no oriented cycles, also define associative algebras; they correspond to the finite dimensional basic hereditary algebras over a field $k$ via $Q\leftrightarrow kQ$ (I take $k$ to be algebraically closed, just in case). 
The algebra $kQ$ has basis given by the paths of $Q$ (including the length $0$ paths at each vertex) with multiplication given by concatenation of paths when the result is a path and $0$ otherwise, extended via linearity. A classical result of Gabriel says that the algebra $kQ$ has finitely many isomorphism classes of indecomposable modules if and only if $Q$ is an orientation of one of the simply laced Dynkin graphs, i.e. those of type $\mathsf{A}_n$, $\mathsf{D}_n$ or $\mathsf{E}_{6,7,8}$. Suggestively, Fomin–Zelevinsky prove in their second paper on cluster algebras that the cluster algebras (without frozen variables, defined by a skew-symmetric matrix) with finitely many cluster variables are precisely those from matrices corresponding to these quivers. It turns out that, whenever $Q$ is an acyclic quiver, there is in fact a bijection between most of the cluster variables (precisely, those not appearing in the initial seed) of the cluster algebra $\mathcal{A}_Q$ given by $Q$, and the indecomposable representations of $kQ$. To make this connection stronger, one can introduce the 'cluster category' $\mathcal{C}_Q$, defined by Buan, Marsh, Reineke, Reiten and Todorov (https://arxiv.org/abs/math/0402054), which is a kind of extension of the category of $kQ$-modules by adding in some extra indecomposable objects corresponding to the initial cluster variables of $\mathcal{A}_Q$, so that each cluster variable $x$ of $\mathcal{A}_Q$ now corresponds to an indecomposable object $M_x$ of $\mathcal{C}_Q$. Various properties of the cluster variables may now be translated into properties of these objects. For example, two cluster variables $x$ and $y$ are compatible (appear in the same cluster) if and only if the corresponding objects have the homological property $\operatorname{Ext}^1_{\mathcal{C}_Q}(M_x,M_y)=0$. 
The seeds of $\mathcal{A}_Q$ correspond to basic cluster-tilting objects of $\mathcal{C}_Q$, which are direct sums $T$ of some of the $M_x$s such that $\operatorname{Ext}^1_{\mathcal{C}_Q}(T,M_x)=0$ if and only if $M_x$ appears as a summand of $T$. The quiver of the seed corresponding to such an object $T$ is the ordinary quiver of the endomorphism algebra $\operatorname{End}_{\mathcal{C}_Q}(T)$, i.e. the unique (up to isomorphism) quiver $Q_T$ such that $\operatorname{End}_{\mathcal{C}_Q}(T)\cong kQ_T/I$ for some ideal generated by linear combinations of paths of length at least $2$. The category $\mathcal{C}_Q$ is a '$2$-Calabi–Yau triangulated category', so its cluster-tilting objects have a mutation theory by work of Iyama–Yoshino (https://arxiv.org/abs/math/0607736) which turns out to correspond to mutations of seeds. There is considerably more work in this direction, including cluster categories for cluster algebras corresponding to non-acyclic quivers by Amiot (https://arxiv.org/abs/0805.1035), and for cluster algebras with frozen variables by, e.g. Geiß–Leclerc–Schröer (https://arxiv.org/abs/math/0609138), Jensen–King–Su (https://arxiv.org/abs/1309.7301), Demonet–Iyama (https://arxiv.org/abs/1503.02362) and myself (https://arxiv.org/abs/1510.06224, https://arxiv.org/abs/1702.05352). One can also develop some theory for skew-symmetrizable matrices, see e.g. Demonet (https://arxiv.org/abs/0909.1633), Labardini-Fragoso–Zelevinsky (https://arxiv.org/abs/1306.3495) and Geiß–Leclerc–Schröer again (https://arxiv.org/abs/1410.1403). While construction of these kinds of representation-theoretic models is useful for understanding cluster algebras, it also suggests lots of interesting representation theory that may not otherwise have been considered. 
This can make a precise connection difficult to pin down: I am not really aware of cluster algebras being used directly to solve open representation-theoretic problems, but they have suggested lots of new lines of representation-theoretic inquiry (e.g. the second reference to Geiß–Leclerc–Schröer above, which I think is not really about cluster algebras at all, a priori). One example of this is Adachi–Iyama–Reiten's $\tau$-tilting theory (https://arxiv.org/abs/1210.1036). I am far from an expert on this theory, but my understanding is that this is really a purely representation-theoretic theory that could have been developed entirely without cluster algebras (and, I am told, was almost discovered by Auslander and co-authors in the 80s!) but in the end it was, at least in part, thinking about cluster categories that provided the inspiration. I note that one slightly confusing part of this connection is that one quite rarely studies the cluster algebra itself as an algebra! There are some exceptions to this, such as work of Lampe (https://arxiv.org/abs/1210.1502).
In the MWE below, created following other threads from the website, I would like to know how to change the following two things: 1) How to add the buttons which link back to the question (e.g. the button "Back to problem 1.1" in the MWE below) at the end of the solution, as I have done with the button "Solution" which comes at the end of the question. 2) The hyperlinks from the solutions back to the questions seem to be a bit off, i.e. they link back somewhere lower than the actual start of the question. Thank you very much.

\documentclass[10pt,A4paper]{article}
\usepackage{answers}
\usepackage{amsthm}
\usepackage{hyperref}
\usepackage{tcolorbox}
\usepackage{ifthen}
\usepackage{tikz}
\usetikzlibrary{shadows}
\tikzstyle{buttonstyle} = [rectangle, fill = black!30, draw = black!80, drop shadow, font={\sffamily\bfseries}, text=white]
\newcommand*{\button}[1]{\tikz[baseline=(text.base)]{\node[buttonstyle] (text) {#1};}}
\theoremstyle{definition}
\newtheorem{problem}{%
  \hypertarget{soln:\theproblem}{} }[section]
\Newassociation{soln}{mySoln}{Solutions}
\renewenvironment{mySoln}[1]
  {\bigskip\noindent\phantomsection{\bfseries \hypertarget{problem:#1}{}{\bfseries Solution to problem #1}\hfill \hyperlink{soln:#1}{\button{Back to problem #1}}\\}\quad}
\newcommand{\marksol}{\vspace{0.2cm}\hyperlink{problem:\theproblem}{\button{Solution}}}
\newcommand{\bp}{\begin{problem}}
\newcommand{\enp}{\end{problem}}
\newcommand{\bs}{\marksol \begin{soln}}

\begin{document}
\newpage
\section{Assigned problems}
\Opensolutionfile{Solutions}

\bp Let $a$ and $b$ be positive real numbers. Prove that
\[\frac{a^2}{b}+\frac{b^2}{a}\geq a+b.\]
\bs We have
\[\frac{a^2}{b}+\frac{b^2}{a}-a-b=\frac{a^3+b^3-a^2b-ab^2}{ab}=\frac{(a-b)(a^2-b^2)}{ab}=\frac{(a-b)^2(a+b)}{ab}\geq 0.\]
\end{soln}\enp

\bp Let $a,b,c,d$ be positive real numbers such that $a>b>c>d$ and $ad=bc$. Prove that $a+d>b+c$.
\bs Let $c=d\epsilon$, then $b=\frac{a}{\epsilon}$, where $\epsilon >1$. We need to prove
\[a+d \geq \frac{a}{\epsilon}+ d\epsilon\]
that is
\[a\cdot \frac{\epsilon-1}{\epsilon}-d(\epsilon-1)\geq 0.\]
But this is equivalent to
\[\left(\frac{a}{\epsilon}-d\right)(\epsilon-1)\geq 0,\]
which is true because $\frac{a}{\epsilon}=b>d$ and $\epsilon>1$.
\end{soln}\enp

\Closesolutionfile{Solutions}
\eject
\section{Solutions}
\input{Solutions.tex}
\end{document}
As pointed out by Peter K., it is important to distinguish between Laplace and Fourier transforms. The first few transform pairs in your question are Fourier transform pairs, whereas the last pair is a correspondence of the unilateral Laplace transform: $$F(s)=\int_{0}^{\infty}f(t)e^{-st}dt$$ In the last transform pair in your question the $\mathcal{F}$ symbol is wrong because it would imply that it is a general Fourier transform pair. To be precise, the (Laplace transform) correspondence is actually $$e^{at}u(t)\Longleftrightarrow \frac{1}{s-a}\tag{1}$$ because with the unilateral Laplace transform we only consider causal time functions, which satisfy $f(t)=0$ for $t<0$. Note that this is not necessarily the case with Fourier transforms. Let's now consider the Fourier transform of the (causal) time function in (1): $$F(j\omega)=\int_{-\infty}^{\infty}e^{at}u(t)e^{-j\omega t}dt=\int_{0}^{\infty}e^{-(j\omega - a)t}dt=\frac{1}{j\omega-a},\quad \textrm{for Re}\{a\}<0\tag{2}$$ Comparing (1) and (2) we see that, for $Re\{a\}<0$, the two transforms are indeed the same for $s=j\omega$. For $Re\{a\}>0$ the Fourier transform does not exist because the region of convergence of the Laplace transform $F(s)$ does not contain the imaginary axis. Let us finally consider the time function $f(t)=e^{j\omega_0 t}$. If we were to multiply it with the step function $u(t)$, its unilateral Laplace transform exists according to equation (1) with $a=j\omega_0$. However, if we consider the function for $-\infty<t<\infty$, then its (bilateral) Laplace transform does not exist, whereas its Fourier transform does exist. It is given by a delta impulse in the frequency domain as shown in your table. How can we now make sense of the relation between the Fourier and the (unilateral) Laplace transform? 
We must consider three cases: The region of convergence of the Laplace transform $F(s)$ contains the $j\omega$ axis: then the Fourier transform is simply the Laplace transform evaluated at $s=j\omega$. The region of convergence of the Laplace transform $F(s)$ does not contain the $j\omega$ axis: then the Fourier transform does not exist. The region of convergence is $Re\{s\}>0$ but there are singularities on the $j\omega$ axis: both transforms exist but they have different forms. The Fourier transform has additional delta impulses. Consider the function $f(t)=e^{j\omega_0 t}u(t)$. From (1), its Laplace transform is given by $$F(s)=\frac{1}{s-j\omega_0}$$However, due to the singularity on the $j\omega$ axis, its Fourier transform is $$F(j\omega)=\pi\delta(\omega-\omega_0)+\frac{1}{j\omega-j\omega_0}$$ It might seem that the Laplace transform is more general than the Fourier transform (when looking at the second point above), but this is actually not the case. In system theory, there are many important functions which are not causal, e.g. the impulse responses of ideal band-limiting (brick-wall) filters. For these functions the Laplace transform does not exist, but their Fourier transform exists. The same is of course true for sinusoidal functions defined for $-\infty<t<\infty$.
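Case 1 above is easy to spot-check numerically (my own sketch, with arbitrary values $a=-1$, $\omega=2$): the Fourier integral of $e^{at}u(t)$, computed as a crude Riemann sum, should match $F(s)=1/(s-a)$ at $s=j\omega$ when the ROC contains the $j\omega$ axis:

```python
import cmath

# f(t) = e^{at} u(t) with Re{a} < 0: Fourier transform should equal 1/(s - a) at s = j*omega.
a = -1.0
omega = 2.0
s = 1j * omega

# midpoint Riemann sum for the integral from 0 to T (the integrand decays like e^{-t})
T, N = 40.0, 200000
dt = T / N
integral = sum(cmath.exp((a - 1j * omega) * (k + 0.5) * dt) for k in range(N)) * dt

exact = 1.0 / (s - a)   # here 1/(1 + 2j) = 0.2 - 0.4j
print(integral, exact)
```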
One thing to keep in mind is that IR and UV divergences appear in different kinematical regimes: UV divergences are basically due to the fact that in loop integrals there are not sufficient propagators to make the integral fall off at infinity. E.g. a bubble integral $\int d^4l \frac{1}{l^2(l-p)^2}$ will be logarithmically divergent. If you do, for instance, a Taylor expansion of this expression for large loop momentum, this becomes obvious. IR divergences, however, live in a completely different regime: they appear either when two particles become collinear, $p_1\sim p_2$, or because some particles become soft, $p_i\sim0$. Or put a little more condensed:

UV: loop momentum becomes large
IR: external momenta become collinear/soft.

This is one way to see why these two kinds of divergence are not connected. Nima and company probably meant just this but in fancier terms.

This post imported from StackExchange Physics at 2014-04-15 16:45 (UCT), posted by SE-user A friendly helper
The climb rate depends on the excess power which is available after drag has been subtracted from net thrust. If the airplane stays at the same polar point while climbing, it needs to accelerate in order to compensate for the decrease in air density. Therefore, besides drag, this acceleration work also needs to be subtracted before the remaining thrust can be used for climbing.

First let's clarify terms:

x$_g$, y$_g$, z$_g$: Earth-fixed coordinate system
x$_f$, y$_f$, z$_f$: Airplane-fixed coordinate system
x$_k$, y$_k$, z$_k$: Kinetic coordinate system, where x is the direction of movement
L: Lift
D: Drag
T: Thrust
m: mass
$\alpha$: Angle of attack (between the x-axes of the airplane-fixed and kinetic coordinate systems)
$\gamma$: Flight path angle (between the x-axes of the earth-fixed and kinetic coordinate systems)
$\sigma$: Thrust angle relative to the airplane-fixed coordinate system
$v_{\infty}$: Airspeed

The polar point should be the one for optimum climb speed. There is also one for optimum climb angle, but this simplification is justified. It also helps to make the math easier, since propeller aircraft climb best at the polar point where minimum power is required to maintain flight. This is at $$c_L = \sqrt{3\cdot c_{D0}\cdot AR\cdot\pi\cdot\epsilon}$$ with

$c_L$: Lift coefficient
$c_{D0}$: Zero-lift drag coefficient
$AR$: Wing aspect ratio
$\epsilon$: Wing efficiency factor

The zero-lift drag coefficient of propeller aircraft is around 0.025 to 0.04, with the high value for fixed-gear aircraft and the lower one for those with retractable gear. It increases slightly with altitude due to the decrease of the Reynolds number from the drop in temperature. Here you need to pick a value which is appropriate for each specific aircraft. Staying at the same polar point also means that weight will influence only the speed at which the aircraft climbs best, not the lift coefficient.
The speed $v$ will change with the square root of the weight difference, because $$v = \sqrt{\frac{m\cdot g}{\frac{\rho}{2}\cdot S_{ref}\cdot c_L}}$$ with $S_{ref}$ being the reference area of the aircraft and $\rho$ the air density.

Next comes the correction term $C$ for acceleration. It depends on the local speed of sound, the gas constant for humid air $R_h$ and the temperature gradient (lapse rate $\Gamma$) of the atmosphere. This answer explains in detail how it is calculated, and I repeat here only the result for standard atmospheric conditions: $$C = 1 - 0.13335\cdot Ma^2 + \frac{(1+0.2\cdot Ma^2)^{3.5}-1}{(1+0.2\cdot Ma^2)^{2.5}}$$ with $Ma$ being the ratio between flight speed and local speed of sound.

Now your climb speed $v_z$ becomes $$v_z = \frac{v}{C}\cdot \sin\gamma = \frac{v}{C}\cdot\frac{T\cdot \cos(\sigma)-D}{m\cdot g} = \frac{P\cdot\eta_{prop}\cdot \cos(\sigma) - D\cdot v}{C\cdot m\cdot g}$$ with $\eta_{prop}$ the propeller efficiency and $P$ the engine brake power at the given altitude and throttle setting.

This leaves a bunch of unknown variables in order to correctly calculate the climb rate: engine power, aircraft zero-lift drag coefficient, and propeller efficiency. Therefore, it will be best to look up the possible climb speeds at several altitudes and power settings from each POH and to interpolate between those values. Or you settle for an approximation and use rule-of-thumb values for the unknown parameters:

for $\epsilon$ assume 0.8;
for $\sigma$ assume zero;
for $c_{D0}$ assume 0.026 at low and 0.03 at high altitude for retracted gear, and 0.035 at low and 0.04 at high altitude for fixed gear;
for $D$ use $\left(c_{D0} + \frac{c_L^2}{AR\cdot\pi\cdot\epsilon} \right) \cdot\frac{\rho\cdot v^2\cdot S_{ref}}{2}$;
for $\eta_{prop}$ use 0.75 for a fixed-pitch and 0.8 for a constant speed prop;
for normally aspirated engines, reduce power proportionally with density.
For turbocharged engines assume constant power up to their critical altitude and reduce power in proportion to density above that. Let the users of your program set the throttle setting themselves.

Where you have performance charts available, compare your results with published figures and tweak the variables until you get a good fit. For example, look at the published optimum climb speed and adjust $c_{D0}$ until your result, taken from the optimum lift coefficient, agrees. And so on. This should give you very useable results.
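The recipe above can be sketched numerically. The Python fragment below puts the pieces together for a single low-altitude case; the mass, wing area, aspect ratio and engine power are made-up example numbers rather than data for any real aircraft, and the Mach correction $C$ is taken as 1 since the speeds involved are far below the speed of sound.

```python
import math

g = 9.81          # m/s^2
rho_air = 1.225   # sea-level air density, kg/m^3

# Rule-of-thumb values from the text; airplane numbers are illustrative only.
AR, eps = 7.4, 0.8          # aspect ratio, wing efficiency factor
c_D0 = 0.026                # zero-lift drag, retractable gear, low altitude
eta_prop = 0.8              # constant-speed propeller
m, S_ref = 1100.0, 16.2     # mass (kg) and reference area (m^2), assumed
P = 135e3                   # engine brake power, W, assumed

# Lift coefficient at the minimum-power polar point
c_L = math.sqrt(3.0 * c_D0 * AR * math.pi * eps)

# Speed follows from lift = weight at that lift coefficient
v = math.sqrt(m * g / (0.5 * rho_air * S_ref * c_L))

# Drag at this polar point
c_D = c_D0 + c_L**2 / (AR * math.pi * eps)
D = c_D * 0.5 * rho_air * v**2 * S_ref

# Low-speed case: Mach correction C ~ 1, thrust angle sigma ~ 0
C_corr = 1.0
v_z = (P * eta_prop - D * v) / (C_corr * m * g)   # climb rate, m/s
```

For these made-up inputs the best-climb speed comes out around 30 m/s and the climb rate in the single-digit m/s range, which is the right order of magnitude for a light single.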
In electrodynamics, the retarded potentials are the electromagnetic potentials for the electromagnetic field generated by time-varying electric current or charge distributions in the past. The fields propagate at the speed of light $c$, so the delay of the fields connecting cause and effect at earlier and later times is an important factor: the signal takes a finite time to propagate from a point in the charge or current distribution (the point of cause) to another point in space (where the effect is measured); see the figure below.[1]

In the Lorenz gauge

Position vectors $\mathbf{r}$ and $\mathbf{r}'$ used in the calculation.

The starting point is Maxwell's equations in the potential formulation using the Lorenz gauge:

$$\Box \varphi = \frac{\rho}{\epsilon_0}\,,\quad \Box \mathbf{A} = \mu_0 \mathbf{J}$$

where $\varphi(\mathbf{r}, t)$ is the electric potential and $\mathbf{A}(\mathbf{r}, t)$ is the magnetic potential, for an arbitrary source of charge density $\rho(\mathbf{r}, t)$ and current density $\mathbf{J}(\mathbf{r}, t)$, and $\Box$ is the d'Alembert operator. Solving these gives the retarded potentials below (all in SI units).[2]

For time-dependent fields

For time-dependent fields, the retarded potentials are:[3][4]

$$\varphi(\mathbf{r}, t) = \frac{1}{4\pi\epsilon_0} \int \frac{\rho(\mathbf{r}', t_r)}{|\mathbf{r} - \mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'$$

$$\mathbf{A}(\mathbf{r}, t) = \frac{\mu_0}{4\pi} \int \frac{\mathbf{J}(\mathbf{r}', t_r)}{|\mathbf{r} - \mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'\,.$$

where $\mathbf{r}$ is a point in space, $t$ is time,

$$t_r = t - \frac{|\mathbf{r} - \mathbf{r}'|}{c}$$

is the retarded time, and $\mathrm{d}^3\mathbf{r}'$ is the integration measure using $\mathbf{r}'$.
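The retarded scalar-potential integral above can be evaluated numerically by discretizing the source region. The sketch below (an illustration, not part of the article) sums $\rho(\mathbf{r}', t_r)\,\Delta V / |\mathbf{r}-\mathbf{r}'|$ over sample points, evaluating the density at the retarded time of each point; for a static point charge it reduces to the familiar Coulomb potential.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 299792458.0           # speed of light, m/s

def retarded_phi(r, t, samples, rho):
    """Retarded scalar potential at field point r and time t.

    samples: list of (r_prime, dV) volume elements covering the source.
    rho(r_prime, t_r): charge density evaluated at the retarded time t_r.
    """
    total = 0.0
    for rp, dV in samples:
        R = math.dist(r, rp)          # |r - r'|
        t_r = t - R / C               # retarded time for this source point
        total += rho(rp, t_r) * dV / R
    return total / (4.0 * math.pi * EPS0)

# Static 1 nC "point" charge at the origin: recovers the Coulomb potential
phi = retarded_phi((1.0, 0.0, 0.0), 0.0,
                   samples=[((0.0, 0.0, 0.0), 1.0)],
                   rho=lambda rp, t_r: 1e-9)   # about 8.99 V at 1 m
```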
From $\varphi(\mathbf{r}, t)$ and $\mathbf{A}(\mathbf{r}, t)$, the fields $\mathbf{E}(\mathbf{r}, t)$ and $\mathbf{B}(\mathbf{r}, t)$ can be calculated using the definitions of the potentials:

$$-\mathbf{E} = \nabla \varphi + \frac{\partial \mathbf{A}}{\partial t}\,,\quad \mathbf{B} = \nabla \times \mathbf{A}\,.$$

and this leads to Jefimenko's equations. The corresponding advanced potentials have an identical form, except the advanced time

$$t_a = t + \frac{|\mathbf{r} - \mathbf{r}'|}{c}$$

replaces the retarded time.

In comparison with static potentials for time-independent fields

In the case the fields are time-independent (electrostatic and magnetostatic fields), the time derivatives in the $\Box$ operators of the fields are zero, and Maxwell's equations reduce to

$$\nabla^2 \varphi = -\frac{\rho}{\epsilon_0}\,,\quad \nabla^2 \mathbf{A} = -\mu_0 \mathbf{J}\,,$$

where $\nabla^2$ is the Laplacian, which take the form of Poisson's equation in four components (one for $\varphi$ and three for $\mathbf{A}$), and the solutions are:

$$\varphi(\mathbf{r}) = \frac{1}{4\pi\epsilon_0} \int \frac{\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'$$

$$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi} \int \frac{\mathbf{J}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'\,.$$

These also follow directly from the retarded potentials.
In the Coulomb gauge

In the Coulomb gauge, Maxwell's equations are[5]

$$\nabla^2 \varphi = -\frac{\rho}{\epsilon_0}$$

$$\nabla^2 \mathbf{A} - \frac{1}{c^2}\frac{\partial^2 \mathbf{A}}{\partial t^2} = -\mu_0 \mathbf{J} + \frac{1}{c^2}\nabla\left(\frac{\partial \varphi}{\partial t}\right)\,,$$

although the solutions contrast with those above, since $\mathbf{A}$ is a retarded potential yet $\varphi$ changes instantly, given by:

$$\varphi(\mathbf{r}, t) = \frac{1}{4\pi\epsilon_0} \int \frac{\rho(\mathbf{r}', t)}{|\mathbf{r} - \mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'$$

$$\mathbf{A}(\mathbf{r}, t) = \frac{1}{4\pi\varepsilon_0}\nabla \times \int \mathrm{d}^3\mathbf{r}' \int_0^{|\mathbf{r} - \mathbf{r}'|/c} \mathrm{d}t_r\, \frac{t_r\,\mathbf{J}(\mathbf{r}', t - t_r)}{|\mathbf{r} - \mathbf{r}'|^3} \times (\mathbf{r} - \mathbf{r}')\,.$$

This presents an advantage and a disadvantage of the Coulomb gauge: $\varphi$ is easily calculable from the charge distribution $\rho$, but $\mathbf{A}$ is not so easily calculable from the current distribution $\mathbf{J}$.
However, provided we require that the potentials vanish at infinity, they can be expressed neatly in terms of the fields:

$$\varphi(\mathbf{r}, t) = \frac{1}{4\pi} \int \frac{\nabla \cdot \mathbf{E}(\mathbf{r}', t)}{|\mathbf{r} - \mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'$$

$$\mathbf{A}(\mathbf{r}, t) = \frac{1}{4\pi} \int \frac{\nabla \times \mathbf{B}(\mathbf{r}', t)}{|\mathbf{r} - \mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'$$

In linearized gravity

The retarded potential in linearized general relativity is closely analogous to the electromagnetic case. The trace-reversed tensor $\tilde{h}_{\mu\nu} = h_{\mu\nu} - \frac{1}{2}\eta_{\mu\nu}h$ plays the role of the four-vector potential, the harmonic gauge $\tilde{h}^{\mu\nu}{}_{,\mu} = 0$ replaces the electromagnetic Lorenz gauge, the field equations are $\Box \tilde{h}_{\mu\nu} = -16\pi G T_{\mu\nu}$, and the retarded-wave solution is

$$\tilde{h}_{\mu\nu}(\mathbf{r}, t) = 4G \int \frac{T_{\mu\nu}(\mathbf{r}', t_r)}{|\mathbf{r} - \mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'\,.$$[6]

Occurrence and application

A many-body theory which includes an average of retarded and advanced Liénard–Wiechert potentials is the Wheeler–Feynman absorber theory, also known as the Wheeler–Feynman time-symmetric theory.

Example

The potential of a charge moving with uniform speed on a straight line has an inversion in a point that is in the recent position. The potential is not changed in the direction of movement.[7]

References

[1] McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B.
Parker, 1994, ISBN 0-07-051400-3
[2] Garg, A., Classical Electromagnetism in a Nutshell, 2012, p. 129
[3] Electromagnetism (2nd Edition), I.S. Grant, W.R. Phillips, Manchester Physics, John Wiley & Sons, 2008, ISBN 978-0-471-92712-9
[4] Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007, ISBN 81-7758-293-3
[5] Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007, ISBN 81-7758-293-3
[6] Sean M. Carroll, "Lecture Notes on General Relativity" (arXiv:gr-qc/9712019), equations 6.20, 6.21, 6.22, 6.74
[7] Feynman, The Feynman Lectures on Physics, Lecture II-26, "Lorentz Transformations of the Fields", http://www.feynmanlectures.caltech.edu/II_26.html
I was thinking of this today as I was looking over my complex analysis notes. If you have some complex number $z$, then we can write it, using Euler's formula, as $z=a+ib=r(\cos\theta+i \sin\theta)$. Say we have the case that $z=3+4i=25(\cos\theta+i\sin\theta)$. Then $25\cos \theta=3$, and $25\sin\theta=4$. But this would mean that $$\theta=\cos^{-1}\left(\frac{3}{25}\right) =\sin^{-1}\left(\frac{4}{25}\right).$$ How can this be true if $\cos^{-1}\left(\frac{3}{25}\right)=83.107 \text{ degrees}$ and $\sin^{-1}\left(\frac{4}{25}\right)=9.206 \text{ degrees}$? Does this mean that we can only have certain values of $z$ in order to use Euler's formula?
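For what it is worth, the arithmetic can be checked numerically. A short script (not part of the original question) shows that the modulus of $3+4i$ is $\sqrt{3^2+4^2}=5$ rather than $25$, after which the cosine- and sine-based expressions return the same angle.

```python
import cmath
import math

z = 3 + 4j
r, theta = cmath.polar(z)        # modulus and argument of z

# r is sqrt(3**2 + 4**2) = 5, not 25 (25 would be |z|**2);
# with the correct modulus, both inverse-trig routes agree.
check_cos = math.acos(3 / r)     # angle recovered from the real part
check_sin = math.asin(4 / r)     # angle recovered from the imaginary part
```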
Let $K$ be an ideal of the direct product $R\times S$. Define
\[I=\{a\in R \mid (a,b)\in K \text{ for some } b\in S\}\]
and
\[J=\{b\in S \mid (a, b)\in K \text{ for some } a\in R\}.\]

We claim that $I$ and $J$ are ideals of $R$ and $S$, respectively.

Let $a, a'\in I$. Then there exist $b, b'\in S$ such that $(a, b), (a', b')\in K$. Since $K$ is an ideal we have
\[(a,b)+(a',b')=(a+a', b+b')\in K.\]
It follows that $a+a'\in I$. Also, for any $r\in R$ we have
\[(r,0)(a,b)=(ra,0)\in K\]
because $K$ is an ideal. Thus, $ra\in I$, and hence $I$ is an ideal of $R$. Similarly, $J$ is an ideal of $S$.

Next, we prove that $K=I \times J$. Let $(a,b)\in K$. Then by the definitions of $I$ and $J$ we have $a\in I$ and $b\in J$. Thus $(a,b)\in I\times J$. So we have $K\subset I\times J$.

On the other hand, consider $(a,b)\in I \times J$. Since $a\in I$, there exists $b'\in S$ such that $(a, b')\in K$. Also, since $b\in J$, there exists $a'\in R$ such that $(a', b)\in K$. As $K$ is an ideal of $R\times S$, we have
\[(1,0)(a,b')=(a,0)\in K \text{ and } (0, 1)(a',b)=(0, b)\in K.\]
It yields that
\[(a,b)=(a,0)+(0,b)\in K.\]
Hence $I\times J \subset K$. Putting these inclusions together gives $K=I\times J$ as required.

Remark. The ideals $I$ and $J$ defined in the proof can be alternatively defined as follows. Consider the natural projections
\[\pi_1: R\times S \to R \text{ and } \pi_2:R\times S \to S.\]
Define
\[I=\pi_1(K) \text{ and } J=\pi_2(K).\]
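The result can be sanity-checked on a small finite ring. The sketch below (illustrative only) takes the principal ideal of the ring Z4 × Z6 (integers mod 4 paired with integers mod 6) generated by (2, 3), forms the projections I and J, and confirms that K equals I × J.

```python
from itertools import product

def principal_ideal(gen, m, n):
    """All multiples of gen in the ring Z_m x Z_n (componentwise operations)."""
    a, b = gen
    return {((r * a) % m, (s * b) % n) for r in range(m) for s in range(n)}

K = principal_ideal((2, 3), 4, 6)
I = {a for (a, b) in K}   # pi_1(K)
J = {b for (a, b) in K}   # pi_2(K)

assert K == set(product(I, J))   # K really is I x J
```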
Understanding Neutral Density Filters

Neutral Density (ND) filters are designed to reduce transmission evenly across a portion of a specific spectrum. ND filters are typically defined by their Optical Density (OD), which describes the amount of energy blocked by the filter. A high optical density value indicates very low transmission, and low optical density indicates high transmission (Equations 1 – 2). ND filters can be stacked to achieve a custom optical density. To calculate the final system OD, simply add the OD of each filter together.

(1)$$ T \left(\text{Percent Transmission} \right)=10^{-\text{OD}} \times 100 \% $$

(2)$$ \text{OD} = -\log\left(\frac{T}{100 \% }\right) $$

Example 1: What is the transmission if OD 0.3 and OD 1.5 filters are stacked?

(3)$$ \text{OD}_{\text{Total}} = 0.3 + 1.5 =1.8 $$

(4)$$ T = 10^{-1.8} \times 100 \% = 1.58 \% $$

Example 2: How can I build a filter with 0.5% Transmission?

(5)$$ \text{OD} = -\log{ \left( \frac{0.5 \% }{100 \%} \right) } = -\log{\left( 0.005 \right)} = 2.3 $$

OD Total of 2.3 could be created by stacking OD 0.3 + OD 2.0 or OD 1.0 + OD 1.3.

Types of Neutral Density Filters

There are two types of ND filters: reflective and absorptive. Reflective ND filters consist of thin film optical coatings, typically metallic, that have been applied to a glass substrate. The coating can be optimized for specific wavelength ranges such as UV-VIS or NIR. The thin film coating primarily reflects light back toward the source. Special care should be taken to ensure the reflected light does not interfere with the system setup. Absorptive ND filters utilize a glass substrate to absorb light by a specific percentage.
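Equations 1 and 2 and the two worked examples translate into a few lines of Python:

```python
import math

def transmission_pct(od):
    """Percent transmission from optical density (Equation 1)."""
    return 10 ** (-od) * 100

def od_from_transmission_pct(t):
    """Optical density from percent transmission (Equation 2)."""
    return -math.log10(t / 100)

# Example 1: stacked OD 0.3 and OD 1.5 filters; stacked ODs simply add
total_od = 0.3 + 1.5
t = transmission_pct(total_od)             # about 1.58 %

# Example 2: target 0.5 % transmission
needed_od = od_from_transmission_pct(0.5)  # about 2.3
```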
Abbreviation: SchrCat

A Schroeder category is a category $\mathbf{C}=\langle C,\circ,\text{dom},\text{cod}\rangle$ such that every morphism is an isomorphism: $\forall x\exists y\ x\circ y=\text{dom}(x)\text{ and }y\circ x=\text{cod}(x)$

Let $\mathbf{C}$ and $\mathbf{D}$ be Schroeder categories. A morphism from $\mathbf{C}$ to $\mathbf{D}$ is a function $h:C\rightarrow D$ that is a functor: $h(x\circ y)=h(x)\circ h(y)$, $h(\text{dom}(x))=\text{dom}(h(x))$ and $h(\text{cod}(x))=\text{cod}(h(x))$.

Remark: These categories are also called groupoids.

$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &2\\ f(4)= &3\\ f(5)= &7\\ f(6)= &9\\ f(7)= &16\\ f(8)= &22\\ f(9)= &42\\ f(10)= &57\\ \end{array}$
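As a toy illustration (not from the original page), the defining axiom can be checked mechanically for the one-object category built from the group Z2, where every morphism is indeed invertible:

```python
# Morphisms: elements of Z_2; composition: addition mod 2.
# The single object's identity is 0, so dom(x) = cod(x) = 0 for every x.
C = [0, 1]
comp = lambda x, y: (x + y) % 2
dom = lambda x: 0
cod = lambda x: 0

# Every morphism is an isomorphism:
# forall x exists y with x o y = dom(x) and y o x = cod(x)
is_schroeder = all(
    any(comp(x, y) == dom(x) and comp(y, x) == cod(x) for y in C)
    for x in C
)
```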
I would like to obtain analytic solutions to the following PDE system: \begin{equation} \rho_t + D(\lambda)\,\rho_\lambda = A(\lambda) \rho, \tag{1} \end{equation} with $\rho = (\rho_0,\rho_1)^T$, where $D$ is the diagonal matrix \begin{equation} D(\lambda) = \begin{pmatrix} -(1+a)\lambda & 0 \\ 0 & p_b-\lambda\end{pmatrix}, \end{equation} and the matrix $A$ depends on $\lambda$ alone, through component functions of the form $\frac{\lambda^2 + a \lambda + b}{c \lambda + d}$. I am aware of the fact that questions of this type have been asked before on MathOverflow. However, system $(1)$ has certain features which make it, in my opinion, worthwhile to devote a new question to it. System $(1)$ is non-homogeneous (i.e. $A \neq 0$), so it cannot be interpreted as a conservation equation. The left hand side of $(1)$ is in diagonal form, while $A$ has no zero components. In other words, $D$ and $A$ are not simultaneously diagonalizable. While I am aware of the fact that 'There is no generally applicable method of characteristics for first order systems' (@Igor Khavkine's comment on this question), I would hope that the characteristic surface spanned by the two characteristics defined by the left hand side of $(1)$ could be used to solve $(1)$ by a generalisation of the method of characteristics. Since system $(1)$ does not have constant coefficients, previous questions such as this, this, and this do not apply, as far as I can see. Moreover, the nonvanishing right hand side of $(1)$ makes the situation qualitatively different from this and this question. Also, system $(1)$ cannot be reduced in a manner demonstrated in this question or this question. Any ideas would be highly appreciated!
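Lacking an analytic route, one can at least integrate $(1)$ numerically. Below is a minimal first-order upwind sketch in Python; the constants $a$, $p_b$ and the entries of $A(\lambda)$ are placeholders of the stated rational form, not the actual coefficients from the question.

```python
import numpy as np

# First-order upwind sketch for rho_t + D(lam) rho_lam = A(lam) rho.
a, p_b = 0.5, 1.0   # placeholder constants

def D_of(lam):
    # diagonal of D(lambda): the advection speed of each component
    return np.array([-(1.0 + a) * lam, p_b - lam])

def A_of(lam):
    # placeholder coupling matrix with entries of the stated rational form
    v = (lam**2 + 0.3 * lam + 0.1) / (0.5 * lam + 2.0)
    return np.array([[-v, v], [v, -v]])

def advance(rho, lam, dt):
    """One explicit upwind time step; rho has shape (2, N)."""
    dlam = lam[1] - lam[0]
    new = rho.copy()
    for j in range(1, len(lam) - 1):
        d = D_of(lam[j])
        for i in range(2):
            if d[i] >= 0:    # wave moves right: backward difference
                grad = (rho[i, j] - rho[i, j - 1]) / dlam
            else:            # wave moves left: forward difference
                grad = (rho[i, j + 1] - rho[i, j]) / dlam
            new[i, j] = rho[i, j] - dt * d[i] * grad
        new[:, j] += dt * (A_of(lam[j]) @ rho[:, j])   # reaction term
    return new

lam = np.linspace(0.1, 0.9, 41)
rho = np.exp(-((lam - 0.5) / 0.1) ** 2) * np.ones((2, 1))  # smooth initial data
for _ in range(100):
    rho = advance(rho, lam, 5e-4)
```

The time step is chosen well inside the CFL limit for this grid; since the two components are advected at different speeds while $A$ couples them pointwise, the coupling is handled in the same explicit step rather than along either family of characteristics.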
Skills to Develop

To learn how to construct a confidence interval for the difference in the proportions of two distinct populations that have a particular characteristic of interest.
To learn how to perform a test of hypotheses concerning the difference in the proportions of two distinct populations that have a particular characteristic of interest.

Suppose we wish to compare the proportions of two populations that have a specific characteristic, such as the proportion of men who are left-handed compared to the proportion of women who are left-handed. Figure \(\PageIndex{1}\) illustrates the conceptual framework of our investigation. Each population is divided into two groups, the group of elements that have the characteristic of interest (for example, being left-handed) and the group of elements that do not. We arbitrarily label one population as Population \(1\) and the other as Population \(2\), and subscript the proportion of each population that possesses the characteristic with the number \(1\) or \(2\) to tell them apart. We draw a random sample from Population \(1\) and label the sample statistic it yields with the subscript \(1\). Without reference to the first sample we draw a sample from Population \(2\) and label its sample statistic with the subscript \(2\).

Figure \(\PageIndex{1}\): Independent Sampling from Two Populations In Order to Compare Proportions

Our goal is to use the information in the samples to estimate the difference \(p_1-p_2\) in the two population proportions and to make statistically valid inferences about it.
Confidence Intervals

Since the sample proportion \(\hat{p}_1\) computed using the sample drawn from Population \(1\) is a good estimator of population proportion \(p_1\) of Population \(1\), and the sample proportion \(\hat{p}_2\) computed using the sample drawn from Population \(2\) is a good estimator of population proportion \(p_2\) of Population \(2\), a reasonable point estimate of the difference \(p_1−p_2\) is \(\hat{p}_1 -\hat{p}_2\). In order to widen this point estimate into a confidence interval we suppose that both samples are large, as described in Section 7.3 and repeated below. If so, then the following formula for a confidence interval for \(p_1−p_2\) is valid.

\(100(1−\alpha)\%\) Confidence Interval for the Difference Between Two Population Proportions

\[(\hat{p}_1-\hat{p}_2)\pm z_{\alpha /2}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}+\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}\]

The samples must be independent, and each sample must be large: each of the intervals

\[\left [ \hat{p}_1-3\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}},\ \hat{p}_1+3\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}} \right ]\]

and

\[\left [ \hat{p}_2-3\sqrt{\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}},\ \hat{p}_2+3\sqrt{\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \right ]\]

must lie wholly within the interval \([0,1]\).

Example \(\PageIndex{1}\)

The department of code enforcement of a county government issues permits to general contractors to work on residential projects. For each permit issued, the department inspects the result of the project and gives a “pass” or “fail” rating. A failed project must be re-inspected until it receives a pass rating. The department had been frustrated by the high cost of re-inspection and decided to publish the inspection records of all contractors on the web. It was hoped that public access to the records would lower the re-inspection rate. A year after the web access was made public, two samples of records were randomly selected. One sample was selected from the pool of records before the web publication and one after. The proportion of projects that passed on the first inspection was noted for each sample. The results are summarized below.

No public web access: sample size \(n_1=500\), proportion passing \(\hat{p}_1=0.67\)
Public web access: sample size \(n_2=100\), proportion passing \(\hat{p}_2=0.80\)

Construct a point estimate and a \(90\%\) confidence interval for the difference in the passing rate on first inspection between the two time periods.
Solution:

The point estimate of \(p_1−p_2\) is

\[\hat{p}_1-\hat{p}_2=0.67-0.80=-0.13\]

Because the “No public web access” population was labeled as Population \(1\) and the “Public web access” population was labeled as Population \(2\), in words this means that we estimate that the proportion of projects that passed on the first inspection increased by \(13\) percentage points after records were posted on the web.

The sample sizes are sufficiently large for constructing a confidence interval, since for sample \(1\):

\[3\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}}=3\sqrt{\frac{(0.67)(0.33)}{500}}=0.06\]

so that

\[\left [ \hat{p}_1-3\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}}, \hat{p}_1+3\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}} \right ]=[0.67-0.06,0.67+0.06]=[0.61,0.73]\subset [0,1]\]

and for sample \(2\):

\[3\sqrt{\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}=3\sqrt{\frac{(0.8)(0.2)}{100}}=0.12\]

so that

\[\left [ \hat{p}_2-3\sqrt{\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}, \hat{p}_2+3\sqrt{\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \right ]=[0.8-0.12,0.8+0.12]=[0.68,0.92]\subset [0,1]\]

To apply the formula for the confidence interval, we first observe that the \(90\%\) confidence level means that \(\alpha =1-0.90=0.10\), so that \(z_{\alpha /2}=z_{0.05}\). From Figure 7.1.6 we read directly that \(z_{0.05}=1.645\). Thus the desired confidence interval is

\[\begin{align} (\hat{p}_1-\hat{p}_2)&\pm z_{\alpha/2} \sqrt{ \dfrac{ \hat{p}_1(1-\hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \\ &= -0.13 \pm 1.645 \sqrt{ \dfrac{(0.67)(0.33)}{500}+\dfrac{(0.8)(0.2)}{100}} \\ &= -0.13 \pm 0.07 \end{align}\]

The \(90\%\) confidence interval is \([-0.20,-0.06]\). We are \(90\%\) confident that the difference in the population proportions lies in the interval \([-0.20,-0.06]\), in the sense that in repeated sampling \(90\%\) of all intervals constructed from the sample data in this manner will contain \(p_1−p_2\). Taking into account the labeling of the two populations, this means that we are \(90\%\) confident that the proportion of projects that pass on the first inspection is between \(6\) and \(20\) percentage points higher after public access to the records than before.
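The interval arithmetic in this example is easy to reproduce in a few lines of Python, using the sample values from the example:

```python
import math

# Sample data: 1 = no public web access, 2 = public web access
p1, n1 = 0.67, 500
p2, n2 = 0.80, 100
z = 1.645                      # z_{0.05} for a 90% confidence level

point = p1 - p2                # point estimate of p1 - p2, -0.13
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = point - z * se, point + z * se   # 90% CI, about [-0.20, -0.06]
```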
Hypothesis Testing

In hypothesis tests concerning the relative sizes of the proportions \(p_1\) and \(p_2\) of two populations that possess a particular characteristic, the null and alternative hypotheses will always be expressed in terms of the difference of the two population proportions. Hence the null hypothesis is always written

\[H_0: p_1-p_2=D_0\]

The three forms of the alternative hypothesis, with the terminology for each case, are:

\(H_a : p_1−p_2 < D_0\) (left-tailed)
\(H_a : p_1−p_2 > D_0\) (right-tailed)
\(H_a : p_1−p_2 \neq D_0\) (two-tailed)

As long as the samples are independent and both are large, the following formula for the standardized test statistic is valid, and it has the standard normal distribution.

Standardized Test Statistic for Hypothesis Tests Concerning the Difference Between Two Population Proportions

\[Z=\frac{(\hat{p}_1-\hat{p}_2)-D_0}{\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}+\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}}\]

The test statistic has the standard normal distribution. The samples must be independent, and each sample must be large: each of the intervals

\[\left [ \hat{p}_1-3\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}},\ \hat{p}_1+3\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}} \right ] \text{ and } \left [ \hat{p}_2-3\sqrt{\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}},\ \hat{p}_2+3\sqrt{\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \right ]\]

must lie wholly within the interval \([0,1]\).

Example \(\PageIndex{2}\)

Using the data of Example \(\PageIndex{1}\), test whether there is sufficient evidence to conclude that public web access to the inspection records has increased the proportion of projects that passed on the first inspection by more than \(5\) percentage points. Use the critical value approach at the \(10\%\) level of significance.

Solution:

Step 1. Taking into account the labeling of the populations, an increase in passing rate at the first inspection by more than \(5\) percentage points after public access on the web may be expressed as \(p_2>p_1+0.05\), which by algebra is the same as \(p_1-p_2<-0.05\). This is the alternative hypothesis. Since the null hypothesis is always expressed as an equality, with the same number on the right as is in the alternative hypothesis, the test is

\[H_0: p_1-p_2=-0.05\\ \text{vs.}\\ H_a: p_1-p_2<-0.05\; \; @\; \; \alpha =0.10\]

Step 2.
Since the test is with respect to a difference in population proportions, the test statistic is

\[Z=\frac{(\hat{p}_1-\hat{p}_2)-D_0}{\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}+\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}\]

Step 3. Inserting the values given in Example \(\PageIndex{1}\) and the value \(D_0=-0.05\) into the formula for the test statistic gives

\[Z=\frac{(\hat{p}_1-\hat{p}_2)-D_0}{\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}+\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}}=\frac{(-0.13)-(-0.05)}{\sqrt{\frac{(0.67)(0.33)}{500}+\frac{(0.8)(0.2)}{100}}}=-1.770\]

Step 4. Since the symbol in \(H_a\) is “\(<\)”, this is a left-tailed test, so there is a single critical value, \(z_\alpha =-z_{0.10}\). From the last row in Figure 7.1.6, \(z_{0.10}=1.282\), so \(-z_{0.10}=-1.282\). The rejection region is \((-\infty ,-1.282]\).

Step 5. As shown in Figure \(\PageIndex{2}\), the test statistic falls in the rejection region. The decision is to reject \(H_0\). In the context of the problem our conclusion is: The data provide sufficient evidence, at the \(10\%\) level of significance, to conclude that the rate of passing on the first inspection has increased by more than \(5\) percentage points since records were publicly posted on the web.

Figure \(\PageIndex{2}\): Rejection Region and Test Statistic for Example \(\PageIndex{2}\)

Example \(\PageIndex{3}\)

Perform the test of Example \(\PageIndex{2}\) using the \(p\)-value approach.

Solution:

The first three steps are identical to those in Example \(\PageIndex{2}\).

Step 4. Because the test is left-tailed, the observed significance or \(p\)-value of the test is just the area of the left tail of the standard normal distribution that is cut off by the test statistic \(Z=-1.770\). From Figure 7.1.5 the area of the left tail determined by \(-1.77\) is \(0.0384\). The \(p\)-value is \(0.0384\).

Step 5.
Since the \(p\)-value \(0.0384\) is less than \(\alpha =0.10\), the decision is to reject the null hypothesis: The data provide sufficient evidence, at the \(10\%\) level of significance, to conclude that the rate of passing on the first inspection has increased by more than \(5\) percentage points since records were publicly posted on the web.

Finally, a common misuse of the formulas given in this section must be mentioned. Suppose a large pre-election survey of potential voters is conducted. Each person surveyed is asked to express a preference between, say, Candidate \(A\) and Candidate \(B\). (Perhaps “no preference” or “other” are also choices, but that is not important.) In such a survey, estimators \(\hat{p}_A\) and \(\hat{p}_B\) of \(p_A\) and \(p_B\) can be calculated. It is important to realize, however, that these two estimators were not calculated from two independent samples. While \(\hat{p}_A−\hat{p}_B\) may be a reasonable estimator of \(p_A−p_B\), the formulas for confidence intervals and for the standardized test statistic given in this section are not valid for data obtained in this manner.

Key Takeaway

A confidence interval for the difference in two population proportions is computed using a formula in the same fashion as was done for a single population mean.
The same five-step procedure used to test hypotheses concerning a single population proportion is used to test hypotheses concerning the difference between two population proportions. The only difference is in the formula for the standardized test statistic.
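The test statistic and the \(p\)-value from Examples 2 and 3 can be reproduced with only the standard library; the standard normal CDF is expressed here via math.erf:

```python
import math

p1, n1 = 0.67, 500   # no public web access
p2, n2 = 0.80, 100   # public web access
D0 = -0.05           # hypothesized difference p1 - p2

se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
Z = ((p1 - p2) - D0) / se      # standardized test statistic, about -1.77

# Left-tailed p-value: area under the standard normal curve left of Z
p_value = 0.5 * (1 + math.erf(Z / math.sqrt(2)))   # about 0.038
```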
ISSN: 1556-1801, eISSN: 1556-181X

Networks & Heterogeneous Media, December 2014, Volume 9, Issue 4. Special issue on the mathematics of concrete.

Abstract: Although concrete is a simple man-made material with an initially-controlled composition (for instance, all ingredients are known beforehand, the involved chemical mechanisms are well studied, and the mechanical strength of test samples is measured accurately), forecasting its behaviour for large times under variable external (boundary) conditions is not properly understood. The main reason is that the simplicity of the material is only apparent. The combination of the heterogeneity of the material with the occurrence of a number of multiscale phase transitions, either driven by aggressive chemicals (typically ions, as in corrosion situations), or by extreme heating, or by freezing/thawing of the ice lenses within the microstructure, together with the inherent non-locality of the mechanical damage, leads to mathematically challenging nonlinear coupled systems of partial differential equations (PDEs).

Abstract: Failure of quasi-brittle materials such as concrete needs a proper description of strain softening due to progressive micro-cracking, and the introduction of an internal length in the constitutive model in order to achieve non-zero energy dissipation. This paper reviews the main results obtained with the non-local damage model, which has been among the precursors of such models. In most cases up to now, the internal length has been considered as a constant. There is today a consensus that this should not be the case, as such models possess severe shortcomings, for example incorrect averaging near the boundaries of the solid considered and non-local transmission across non-convex boundaries. An interaction-based model in which the weight function is constructed from the analysis of interactions has been proposed.
It avoids empirical descriptions of the evolution of the internal length. This model is also recalled and further documented. Additional results dealing with spalling failure are discussed. Finally, it is pointed out that this model provides an asymptotic description of complete failure, which is consistent with fracture mechanics. Abstract: We study a two-scale homogenization problem describing the linearized poro-elastic behavior of a periodic two-component porous material subjected to a slightly compressible viscous fluid flow and a first-order chemical reaction. One material component consists of disconnected parts embedded in the other component which is supposed to be connected. It is shown that a memory effect known from the purely mechanic problem gets inherited by the reaction component of the model. Abstract: We study spring-block systems which are equivalent to the P1-finite element methods for the linear elliptic partial differential equation of second order and for the equations of linear elasticity. Each derived spring-block system is consistent with the original partial differential equation, since it is discretized by P1-FEM. Symmetry and positive definiteness of the scalar and tensor-valued spring constants are studied in two dimensions. Under the acuteness condition of the triangular mesh, positive definiteness of the scalar spring constant is obtained. In case of homogeneous linear elasticity, we show the symmetry of the tensor-valued spring constant in the two dimensional case. For isotropic elastic materials, we give a necessary and sufficient condition for the positive definiteness of the tensor-valued spring constant. Consequently, if Poisson's ratio of the elastic material is small enough, like concrete, we can construct a consistent spring-block system with positive definite tensor-valued spring constant.
Abstract: The subject of the present paper is the derivation and asymptotic analysis of a mathematical model for the formation of a mushy region during sulphation of calcium carbonate. The model is derived by averaging, with the use of the multiple scales method, applied on microscopic moving-boundary problems. The latter problems describe the transformation of calcium carbonate into gypsum on the microscopic scale. The derived macroscopic model is solved numerically with the use of a finite element method. The results of some simulations and a relevant discussion are also presented. Abstract: In this paper we deal with a one-dimensional free boundary problem, which is a mathematical model for an adsorption phenomena appearing in concrete carbonation process. This model was proposed in line of previous studies of three dimensional concrete carbonation process. The main result in this paper is concerned with the existence and uniqueness of a time-local solution to the free boundary problem. This result will be obtained by means of the abstract theory of nonlinear evolution equations and Banach's fixed point theorem, and especially, the maximum principle applied to our problem will play a very important role to obtain the uniform estimate to approximate solutions. Abstract: In this paper we consider a three-component reaction-diffusion system with a fast precipitation and dissolution reaction term. We investigate its singular limit as the reaction rate tends to infinity. The limit problem is described by a combination of a Stefan problem and a linear heat equation. The rate of convergence with respect to the reaction rate is established in a specific case. Abstract: When dealing with concrete materials it is always a big issue how to deal with the moisture transport.
Here, we consider a mathematical model for moisture transport, which is given as a system consisting of the diffusion equation for moisture and of the ordinary differential equation which describes a hysteresis operator. In [3] we already proved the existence of a time-global solution of an initial-boundary value problem for this system; however, uniqueness was obtained only for one-dimensional domains. The main purpose of this paper is to establish the uniqueness of a solution of our problem in three-dimensional domains under the assumption of smooth boundary and initial data. Abstract: We study the homogenization of a reaction-diffusion-convection system posed in an $\varepsilon$-periodic $\delta$-thin layer made of a two-component (solid-air) composite material. The microscopic system includes heat flow, diffusion and convection coupled with a nonlinear surface chemical reaction. We treat two distinct asymptotic scenarios: (1) For a fixed width $\delta>0$ of the thin layer, we homogenize the presence of the microstructures (the classical periodic homogenization limit $\varepsilon\to 0$); (2) In the homogenized problem, we pass to $\delta\to 0$ (the vanishing limit of the layer's width). In this way, we are preparing the stage for the simultaneous homogenization ($\varepsilon\to 0$) and dimension reduction limit ($\delta\to 0$) with $\delta=\delta(\varepsilon)$. We recover the reduced macroscopic equations from [25] with precise formulas for the effective transport and reaction coefficients. We complement the analytical results with a few simulations of a case study in smoldering combustion. The chosen multiscale scenario is relevant for a large variety of practical applications ranging from the forecast of the response to fire of refractory concrete, the microstructure design of resistance-to-heat ceramic-based materials for engines, to the smoldering combustion of thin porous samples under microgravity conditions.
Abstract: We study the solvability and homogenization of a thermal-diffusion reaction problem posed in a periodically perforated domain. The system describes the motion of populations of hot colloidal particles interacting together via Smoluchowski production terms. The upscaled system, obtained via two-scale convergence techniques, allows the investigation of deposition effects in porous materials in the presence of thermal gradients.
KKT conditions from Wikipedia: We consider the following nonlinear optimization problem: $$ \text{Minimize }\; f(x) $$ $$ \text{subject to: }\ g_i(x) \le 0 , \quad h_j(x) = 0 $$ The numbers of inequality and equality constraints are denoted $m$ and $l$, respectively. Suppose that the objective function $f : \mathbb{R}^n \rightarrow \mathbb{R}$ and the constraint functions $g_i : \mathbb{R}^n \rightarrow \mathbb{R}$ and $h_j : \mathbb{R}^n \rightarrow \mathbb{R}$ are continuously differentiable at a point $x^*$. If $x^*$ is a local minimum that satisfies some regularity conditions, then there exist constants $\mu_i\ (i = 1,\ldots,m)$ and $\lambda_j\ (j = 1,\ldots,l)$, called KKT multipliers, such that Stationarity $$ \nabla f(x^*) + \sum_{i=1}^m \mu_i \nabla g_i(x^*) + \sum_{j=1}^l \lambda_j \nabla h_j(x^*) = 0, $$ Primal feasibility $$ g_i(x^*) \le 0, \mbox{ for all } i = 1, \ldots, m $$ $$ h_j(x^*) = 0, \mbox{ for all } j = 1, \ldots, l $$ Dual feasibility $$ \mu_i \ge 0, \mbox{ for all } i = 1, \ldots, m $$ Complementary slackness $$ \mu_i g_i (x^*) = 0, \mbox{ for all } i = 1,\ldots,m. $$ I was wondering: How will the KKT conditions change if the inequality constraints $g_i(x) \le 0$ are replaced with strict inequalities, i.e. $g_i(x) < 0$? If the cost function $f$ already puts some implicit condition on $x$ so that it can be well-defined, will the implicit condition be considered as an explicit constraint when writing the KKT conditions? For example, $f(x)=x- \ln(x)$ requires $x>0$. Will $x>0$ be considered as a constraint when writing the KKT conditions? Thanks and regards!
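To make the four conditions concrete, here is a small numerical sketch (not from the question) that checks them at the solution of a toy problem: minimize $(x-2)^2$ subject to $x - 1 \le 0$. The optimum $x^* = 1$ and multiplier $\mu = 2$ are worked out by hand.

```python
# KKT check for the toy problem:
#   minimize f(x) = (x - 2)^2   subject to  g(x) = x - 1 <= 0
# The constrained minimum is x* = 1; stationarity then forces mu = 2.

def f_grad(x):
    return 2.0 * (x - 2.0)

def g(x):
    return x - 1.0

def g_grad(x):
    return 1.0

x_star, mu = 1.0, 2.0

stationarity = f_grad(x_star) + mu * g_grad(x_star)  # should vanish
primal_feasible = g(x_star) <= 0
dual_feasible = mu >= 0
slackness = mu * g(x_star)                           # should vanish

assert abs(stationarity) < 1e-12
assert primal_feasible and dual_feasible
assert abs(slackness) < 1e-12
```

Note how the two complementary-slackness cases show up: here the constraint is active ($g(x^*)=0$), so $\mu$ may be positive; had the unconstrained minimum satisfied $g < 0$ strictly, slackness would instead force $\mu = 0$.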
In the general equation of the compressibility factor $Z$, we define $Z$ as $$Z=\frac{pV}{nRT}$$ Here, what is $p$? Is it $p_\text{real}$ or $p_\text{ideal}$? Also, what is $V$? $V_\text{real}$ or $V_\text{container}$? Both quantities are of the real gas. Note that for ideal gas behaviour, the compressibility factor $$Z=\frac{V_\text{real}}{V_\text{ideal}}=\frac{pV_\text{real}}{nRT}=1$$ for any combination of $p, V, T$. But real gases, in contrast to the ideal gas, have nonzero volume of molecules and there are intermolecular interactions. The volume of molecules increases $Z$, as it makes the pressure higher, because the collision frequency is higher. The cohesive forces between molecules decrease $Z$, as molecular attraction, like hydrogen bonds or dipole interactions, decreases the effective number of molecules and therefore the pressure. This is reflected in the van der Waals equation - probably the simplest equation of state for real gases, valid for not too high pressure: $$\begin{align} \left(p + \frac{an^2}{V^2}\right)\left(V - nb\right)&=nRT \\ \left(p + \frac{a}{V_\mathrm m^2}\right)\left(V_\mathrm m - b\right)&=RT \end{align}$$ $$\begin{align} Z &= \frac{p\cdot V_\text{real}}{nRT} \\ Z&= \frac{p\cdot V_\text{real}}{\left(p + \frac{an^2}{V_\text{real}^2}\right)(V_\text{real} - nb)} \\ Z&= \frac{1}{\left(1 + \frac{an^2}{p \cdot V_\text{real}^2}\right)\left(1 - \frac{nb}{V_\text{real}}\right)} \\ Z&= \frac{1}{\left(1 + \frac{a}{pV_{\mathrm m,\text{real}}^2}\right)\left(1 - \frac{b}{V_{\mathrm m,\text{real}}}\right)} \end{align}$$ For $|x|\ll 1$, $1/(1+x)\approx 1-x$. For behaviour not too far from ideal, we can apply the above approximation. $$\begin{align} Z&=\left(1 - \frac{an^2}{pV_{\mathrm{real}}^2}\right)\left(1 + \frac{bn}{V_{\mathrm{real}}}\right) \\ Z&=\left(1 - \frac{a}{pV_{\mathrm m,\text{real}}^2}\right)\left(1 + \frac{b}{V_{\mathrm m,\text{real}}}\right) \\ \end{align}$$ We can therefore also afford to neglect minor terms.
$$\begin{align} Z&=1 - \frac{an^2}{pV_{\mathrm{real}}^2}+ \frac{bn}{V_{\mathrm{real}}}\\ Z&=1 - \frac{a}{pV_{\mathrm m,\text{real}}^2}+ \frac{b}{V_{\mathrm m,\text{real}}}\\ \end{align}$$ To address the clarified scenario, where the same amounts of the ideal and the real gas are at the same temperature and pressure: $$V_\text{ideal gas}=\frac{nRT}{p}$$ $$V_\text{real gas}=Z \frac{nRT}{p}$$ Iteration, with $V_\text{ideal gas}$ as the first approximation, is probably the easiest way to get the result. The other option is to solve the cubic equation directly, which we probably do not want to do. $$\begin{align} V_\text{real}&=Z(p, V_\text{real})\frac{nRT}{p}\\ V_\text{real}&=\frac{1}{\left(1 + \frac{an^2}{p \cdot V_\text{real}^2}\right)\left(1 - \frac{nb}{V_\text{real}}\right)} \frac{nRT}{p} \\ V_\text{real}&=\left( 1 - \frac{an^2}{pV_{\mathrm{real}}^2}+ \frac{bn}{V_{\mathrm{real}}}\right) \frac{nRT}{p} \\ \end{align}$$ The van der Waals equation can also be expressed in terms of reduced properties: $$\left( P_\mathrm r + \frac{3}{V_{\mathrm r}^2}\right) \left( V_{\mathrm r}-\frac{1}{3}\right) = \frac{8}{3}T_\mathrm r$$ This yields a critical compressibility factor of $3/8$. The values of pressure, temperature and volume are divided by the respective critical values of the given gas. The compressibility factor for a gas is defined as the ratio of the volume of the real gas to the volume of the ideal gas. Thus $$ Z = \frac{V_\text{real}}{V_\text{ideal}} $$ Using the ideal gas equation $$ PV_\text{ideal} = nRT $$ $$ V_\text{ideal} = \frac{nRT}{P} $$ So $$ Z = \frac{V_\text{real}}{\frac{nRT}{P} } $$ $$ Z = \frac{PV_\text{real}}{nRT} $$ Hence, except for $V_\text{real}$, all other quantities are those of the ideal gas.
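The fixed-point iteration described above can be sketched in a few lines of Python. The van der Waals constants below are the commonly tabulated values for CO2, and the state point ($1~\mathrm{MPa}$, $300~\mathrm{K}$) is chosen arbitrarily for illustration:

```python
# Fixed-point iteration V <- Z(V) * nRT/p for the van der Waals equation,
# starting from the ideal-gas volume.
# a, b: commonly tabulated van der Waals constants for CO2 (assumed values).
R = 8.314          # J / (mol K)
a = 0.3640         # Pa m^6 / mol^2
b = 4.267e-5       # m^3 / mol
n, T, p = 1.0, 300.0, 1.0e6   # mol, K, Pa

V = n * R * T / p  # first approximation: the ideal-gas volume
for _ in range(50):
    # Exact (non-expanded) van der Waals Z at the current volume estimate
    Z = 1.0 / ((1.0 + a * n**2 / (p * V**2)) * (1.0 - n * b / V))
    V_new = Z * n * R * T / p
    if abs(V_new - V) < 1e-12:
        break
    V = V_new
# At this state the attraction term dominates, so Z comes out slightly below 1.
```

The loop converges in a handful of steps because the correction terms are small at moderate pressure; near the critical point the cubic form would be the safer route.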
The news has been so unrelentingly bad these past few weeks that I’m taking momentary refuge in good old numerology. I happened to re-read this blog post by John Baez about the free modular lattice on 3 generators. This is a nice bit of pure math that features rather prominently the numbers 3, 8, 24 and 28. The numerological part is that I noticed the same numbers popping up in a problem that I had studied for other reasons, so I figured it would be fun to write about, even if my 28 isn’t exactly equal to Baez’s 28, so to speak. First, the 3. Consider the Pauli matrices: $$\sigma_x = \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right),\qquad \sigma_y = \left(\begin{array}{cc} 0 & -i \\ i & 0 \end{array}\right),\qquad \sigma_z = \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right).$$ Note that of these three matrices, only $\sigma_y$ is antisymmetric, and also note that we have $$\sigma_z \sigma_x = -\sigma_x\sigma_z = i\sigma_y.$$ This much is familiar, though that minus sign gets around. For example, it is the fuel that makes the GHZ thought-experiment go, because it means that $$\sigma_x \otimes \sigma_x \otimes \sigma_x = -(\sigma_x \otimes \sigma_z \otimes \sigma_z)(\sigma_z \otimes \sigma_x \otimes \sigma_z)(\sigma_z \otimes \sigma_z \otimes \sigma_x).$$ And this leads us to where the 8 comes into the story. Let’s consider the finite-dimensional Hilbert space made by composing three qubits. This state space is eight-dimensional, and we build the three-qubit Pauli group by taking tensor products of the Pauli matrices, considering the $2 \times 2$ identity matrix to be the zeroth Pauli operator. There are 64 matrices in the three-qubit Pauli group, and we can label them by six bits. 
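These sign manipulations are easy to check by brute force; here is a small NumPy sketch (my own, not from the post) verifying both the single-qubit relation and the GHZ identity:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    """Tensor product of three 2x2 matrices."""
    return np.kron(np.kron(a, b), c)

# sigma_z sigma_x = -sigma_x sigma_z = i sigma_y
assert np.allclose(Z @ X, 1j * Y)
assert np.allclose(X @ Z, -1j * Y)

# The GHZ identity: XXX = -(XZZ)(ZXZ)(ZZX)
lhs = kron3(X, X, X)
rhs = -(kron3(X, Z, Z) @ kron3(Z, X, Z) @ kron3(Z, Z, X))
assert np.allclose(lhs, rhs)
```

The minus sign survives the tensor products because exactly two anticommutations happen in each pairwise factor, which is the whole engine of the GHZ argument.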
The notation $$\left(\begin{array}{ccc} m_1 & m_3 & m_5 \\ m_2 & m_4 & m_6\end{array}\right)$$ means to take the tensor product $$(-i)^{m_1m_2} \sigma_x^{m_1} \sigma_z^{m_2} \otimes (-i)^{m_3m_4} \sigma_x^{m_3} \sigma_z^{m_4} \otimes (-i)^{m_5m_6} \sigma_x^{m_5} \sigma_z^{m_6}.$$ Now, we ask: Of these 64 matrices, how many are symmetric and how many are antisymmetric? We can only get antisymmetry from $\sigma_y$, and (speaking heuristically) if we include too much antisymmetry, it will cancel out. More carefully put: We need an odd number of factors of $\sigma_y$ in the tensor product to have the result be an antisymmetric matrix. Otherwise, it will come out symmetric. Consider the case where the first factor in the triple tensor product is $\sigma_y$. Then we have $(4-1)^2 = 9$ possibilities for the other two slots. The same holds true if we put the $\sigma_y$ in the second or the third position. Finally, $\sigma_y \otimes \sigma_y \otimes \sigma_y$ is antisymmetric, meaning that we have $$9 \cdot 3 + 1 = 28$$ antisymmetric matrices in the three-qubit Pauli group. In the notation established above, they are the elements for which $$m_1 m_2 + m_3 m_4 + m_5 m_6 = 1 \pmod 2.$$ Puzzle: This has a secret geometrical meaning in terms of the Fano plane. What is it? Hint: $28 = 7 \cdot 4 = 7 \cdot (7 - 3)$. We have a 3 (the nontrivial elements of the single-qubit Pauli group), an 8 (the dimension of the three-qubit Hilbert space), and a 28 (the number of antisymmetric three-qubit Pauli operators). Does this story have a 24 as well? Sneakily, yes—because the same combinatorial notation that we used to enumerate the 28 antisymmetric Pauli operators also enumerates the 28 bitangents to a quartic curve. This is a lovely piece of nineteenth-century geometry, in the genre of the 27 lines on a cubic surface. It connects to Galois theory, as this paper explains: D. Plaumann, B. Sturmfels and C.
Vinzant, “Quartic curves and their bitangents,” Journal of Symbolic Computation 46 (2011), 712–733. I didn’t think that 24 entered into this numerology until I read a little more deeply about quartic curves, and I learned that a generic plane quartic has 24 flex points. The story of the free modular lattice on 3 generators is a story about how 3 things together build up an interesting collection of 28 things that live in an 8-dimensional space. I find it rather cute that an 8-dimensional space also yields an interesting collection of 28 things built up from 3 things in this other way. [Cross-posted from the n-Category Café.]
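As a postscript, the count of 28 antisymmetric three-qubit Pauli operators can be confirmed by brute force over all 64 tensor products; a quick NumPy sketch:

```python
import numpy as np
from itertools import product

# Single-qubit Paulis, with the identity as the zeroth operator
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

count = 0
for a, b, c in product([I, X, Y, Z], repeat=3):
    M = np.kron(np.kron(a, b), c)
    if np.allclose(M.T, -M):  # antisymmetric: M^T = -M
        count += 1

print(count)  # 28
```

Since only $\sigma_y$ is antisymmetric and transposition distributes over tensor products, the loop is just counting triples with an odd number of $\sigma_y$ factors: $3 \cdot 9 + 1 = 28$.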
My question concerns Section 6.1 of Hori's 'Linear Models of Supersymmetric D-branes' (http://arxiv.org/abs/hep-th/0012179). Firstly, some background. The quantum field theory in question is a 2d ${\cal N}=(2,2)$ Landau-Ginzburg (LG) model on a worldsheet with boundaries (e.g. the infinite strip, or a disk). Boundary conditions for the fields ought to be chosen at the boundaries. The possible boundary conditions can only preserve a subset of the supersymmetries at the boundaries; in this case only B-type supersymmetry (see Section 2.2.2 for the definition) is chosen to be preserved, and the corresponding boundary conditions are called B-branes. Now, on to my question. The B-brane studied in Section 6.1 is a D0-brane, and Hori argues that the D0-brane must be located at a critical point ($\partial_i W=0$) of the superpotential. Unfortunately, I cannot grasp his argument completely. He presents the conserved supercharge, \begin{equation}Q={1\over 2\pi}\int d x^1\left\{g_{i\bar{\jmath}}(\overline{\psi}_-^{\bar{\jmath}}+\overline{\psi}_+^{\bar{\jmath}})\partial_0\phi^i+g_{i\bar{\jmath}}(\overline{\psi}_-^{\bar{\jmath}}-\overline{\psi}_+^{\bar{\jmath}})\partial_1\phi^i+(\psi_-^i-\psi_+^i)\partial_iW\right\}.\end{equation} and from there says that 'Since the boundary point, say at $x^1 = \pi$, is locked at that point, we see that the supersymmetry is indeed broken for any configuration. Thus, we will not consider such a D-brane. In other words, D0-branes must be located at one of the critical points of $W$.' I suspect that his argument has something to do with the SUSY algebra $\{Q,Q^\dagger\}=H$, where $H$ is the Hamiltonian, and when the Hamiltonian has non-zero vacuum expectation value, there is spontaneous supersymmetry breaking. But what exactly does he mean by 'locked', and why does it imply breaking of supersymmetry?
Aren't all the other fields also 'locked' at the boundary due to their boundary conditions?

This post imported from StackExchange Physics at 2016-09-28 07:45 (UTC), posted by SE-user Mtheorist.
ASU Electronic Theses and Dissertations This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media. In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog. Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu. One dimensional (1D) and quasi-one
dimensional quantum wires have been a subject of both theoretical and experimental interest since 1990s and before. Phenomena such as the "0.7 structure" in the conductance leave many open questions. In this dissertation, I study the properties and the internal electron states of semiconductor quantum wires with the path integral Monte Carlo (PIMC) method. PIMC is a tool for simulating many-body quantum systems at finite temperature. Its ability to calculate thermodynamic properties and various correlation functions makes it an ideal tool in bridging experiments with theories. A general study of the features interpreted by the … Contributors Liu, Jianheng, Shumway, John B, Schmidt, Kevin E, et al. Created Date 2012 This work presents analysis and results for the NPDGamma experiment, measuring the spin-correlated photon directional asymmetry in the $\vec{n}p\rightarrow d\gamma$ radiative capture of polarized, cold neutrons on a parahydrogen target. The parity-violating (PV) component of this asymmetry $A_{\gamma,PV}$ is unambiguously related to the $\Delta I = 1$ component of the hadronic weak interaction due to pion exchange. Measurements in the second phase of NPDGamma were taken at the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source (SNS) from late 2012 to early 2014, and then again in the first half of 2016 for an unprecedented level of statistics in order … Contributors Blyth, David Cooper, Alarcon, Ricardo O, Ritchie, Barry G, et al. Created Date 2017 Monte Carlo methods often used in nuclear physics, such as auxiliary field diffusion Monte Carlo and Green's function Monte Carlo, have typically relied on phenomenological local real-space potentials containing as few derivatives as possible, such as the Argonne-Urbana family of interactions, to make sampling simple and efficient. 
Basis set methods such as no-core shell model or coupled-cluster techniques typically use softer non-local potentials because of their more rapid convergence with basis set size. These non-local potentials are typically defined in momentum space and are often based on effective field theory. Comparisons of the results of the two types of methods … Contributors Lynn, Joel Eric, Schmidt, Kevin E, Alarcón, Ricardo, et al. Created Date 2013 In this dissertation two kinds of strongly interacting fermionic systems were studied: cold atomic gases and nucleon systems. In the first part I report T=0 diffusion Monte Carlo results for the ground-state and vortex excitation of unpolarized spin-1/2 fermions in a two-dimensional disk. I investigate how vortex core structure properties behave over the BEC-BCS crossover. The vortex excitation energy, density profiles, and vortex core properties related to the current are calculated. A density suppression at the vortex core on the BCS side of the crossover and a depleted core on the BEC limit is found. Size-effect dependencies in the disk … Contributors Madeira, Lucas, Schmidt, Kevin E, Alarcon, Ricardo, et al. Created Date 2018 Sample delivery is an essential component in biological imaging using serial diffraction from X-ray Free Electron Lasers (XFEL) and synchrotrons. Recent developments have made possible the near-atomic resolution structure determination of several important proteins, including one G protein-coupled receptor (GPCR) drug target, whose structure could not easily have been determined otherwise (Appendix A). In this thesis I describe new sample delivery developments that are paramount to advancing this field beyond what has been accomplished to date. Soft Lithography was used to implement sample conservation in the Gas Dynamic Virtual Nozzle (GDVN). A PDMS/glass composite microfluidic injector was created and given … Contributors Nelson, Garrett, Spence, John C, Weierstall, Uwe J, et al. 
Created Date 2015 Spin-orbit interactions are important in determining nuclear structure. They lead to a shift in the energy levels in the nuclear shell model, which could explain the sequence of magic numbers in nuclei. Also in nucleon-nucleon scattering, the large nucleon polarization observed perpendicular to the plane of scattering needs to be explained by adding the spin-orbit interactions in the potential. Their effects change the equation of state and other properties of nuclear matter. Therefore, the simulation of spin-orbit interactions is necessary in nuclear matter. The auxiliary field diffusion Monte Carlo is an effective and accurate method for calculating the ground state … Contributors Zhang, Jie, Schmidt, Kevin E, Alarcon, Ricardo, et al. Created Date 2014 The structure of glass has been the subject of many studies, however some details remained to be resolved. With the advancement of microscopic imaging techniques and the successful synthesis of two-dimensional materials, images of two-dimensional glasses (bilayers of silica) are now available, confirming that this glass structure closely follows the continuous random network model. These images provide complete in-plane structural information such as ring correlations, and intermediate range order and with computer refinement contain indirect information such as angular distributions, and tilting. This dissertation reports the first work that integrates the actual atomic coordinates obtained from such images with structural … Contributors Sadjadi, Seyed Mahdi, Thorpe, Michael F, Beckstein, Oliver, et al. Created Date 2018
ISSN: 1937-5093 eISSN: 1937-5077 Kinetic & Related Models, September 2012, Volume 5, Issue 3. Issue dedicated to Michel Chipot on the occasion of his 60th birthday. Abstract: We present a Fourier transform formula of quadratic-form type for the collision operator with a Maxwellian kernel under the momentum transfer condition. As an application, we extend the work of Toscani and Villani on the uniform stability of the Cauchy problem for the associated Boltzmann equation to any physically relevant Maxwellian molecules in the long-range interactions with a minimal requirement for the initial data. Abstract: We develop a rigorous formalism for the description of the kinetic evolution of infinitely many hard spheres. On the basis of the kinetic cluster expansions of cumulants of groups of operators of finitely many hard spheres, which are the generating operators of a nonperturbative solution of the Cauchy problem of the BBGKY hierarchy, the nonlinear kinetic Enskog equation is derived. It is established that for initial states which are specified in terms of one-particle distribution functions, the description of the evolution by the Cauchy problem of the BBGKY hierarchy and by the Cauchy problem of the generalized Enskog kinetic equation, together with a sequence of explicitly defined functionals of a solution of the stated kinetic equation, are equivalent. For the initial-value problem of the generalized Enskog equation the existence theorem is proved in the space of integrable functions. Abstract: We discuss optimal control problems for the Fokker-Planck equation arising in radiotherapy treatment planning. We prove existence and uniqueness of an optimal boundary control for a general tracking-type cost functional in three spatial dimensions. Under additional regularity assumptions we prove existence of a continuous necessary first-order optimality system.
In the one-dimensional case we analyse a numerical discretization of the Fokker-Planck equation. We prove that the resulting discrete optimality system is a suitable discretization of the continuous first-order system. Abstract: In this paper, we establish two regularity criteria for the 3D MHD equations in terms of partial derivatives of the velocity field or the pressure. It is proved that if $\partial_3 u \in L^\beta(0,T; L^\alpha(\mathbb{R}^3)),~\mbox{with}~ \frac{2}{\beta}+\frac{3}{\alpha}\leq\frac{3(\alpha+2)}{4\alpha},~\alpha>2$, or $\nabla_h P \in L^\beta(0,T; L^{\alpha}(\mathbb{R}^3)),~\mbox{with}~\frac{2}{\beta}+\frac{3}{\alpha}< 3,~\alpha>\frac{9}{7},~\beta\geq 1$, then the weak solution $(u,b)$ is regular on $[0, T]$. Abstract: The aim of this paper is to derive the quantum hydrodynamic system associated with the most general class of nonlinear Schrödinger equations accounting for Fokker-Planck type diffusion of the probability density, called the Doebner-Goldin class. This 'Doebner-Goldin hydrodynamic system' is shown to be reduced in most cases to a simpler one of quantum Euler type by means of the introduction of a nonlinear gauge transformation that changes the fluid mean velocity into a new effective velocity corrected by an osmotic contribution. Finally, we also discuss some particular situations of special interest and compare the structure of the resulting fluid systems with that of the viscous quantum hydrodynamic and the quantum Navier-Stokes equations stemming from maximization of the quantum entropy for Wigner-BGK models. Abstract: In this paper we present a physically relevant hydrodynamic model for a bipolar semiconductor device considering Ohmic conductor boundary conditions and a non-flat doping profile. For such an Euler-Poisson system, we prove, by means of a technical energy method, that the solutions are unique, exist globally and asymptotically converge to the corresponding stationary solutions.
An exponential decay rate is also derived. Moreover we allow that the two pressure functions can be different. Abstract: The purpose of this paper is to extend the result concerning the existence and the uniqueness of infinite energy solutions, given by Cannone-Karch, of the Cauchy problem for the spatially homogeneous Boltzmann equation of Maxwellian molecules without Grad's angular cutoff assumption in the mild singularity case, to the strong singularity case. This extension follows from a simple observation of the symmetry on the unit sphere for the Bobylev formula which is the Fourier transform of the Boltzmann collision term. Abstract: In this paper, we investigate large amplitude solutions to a system of conservation laws which is transformed, by a change of variable, from the well-known Keller-Segel model describing cell (bacteria) movement toward the concentration gradient of the chemical that is consumed by the cells. For the Cauchy problem and initial-boundary value problem, the global unique solvability is proved based on the energy method. In particular, our main purpose is to investigate the convergence rates as the diffusion parameter $\varepsilon$ goes to zero. It is shown that the convergence rates in $L^\infty$-norm are of the order $O\left(\varepsilon\right)$ and $O(\varepsilon^{1/2})$ corresponding to the Cauchy problem and the initial-boundary value problem respectively. Abstract: In this paper we study the large-time behavior of perturbative classical solutions to the hard and soft potential Boltzmann equation without the angular cut-off assumption in the whole space $\mathbb{R}^n_x$ with $n \geq 3$. We use the existence theory of global in time nearby Maxwellian solutions from [12,11]. It has been a longstanding open problem to determine the large time decay rates for the soft potential Boltzmann equation in the whole space, with or without the angular cut-off assumption [26,1].
For perturbative initial data, we prove that solutions converge to the global Maxwellian with the optimal large-time decay rate of $O(t^{-\frac{N}{2}+\frac{N}{2r}})$ in the $L^2_v(L^r_x)$-norm for any $2\leq r\leq \infty$. Abstract: We are concerned with the long-time behavior of global strong solutions to the non-isentropic compressible Navier-Stokes-Poisson system in $\mathbb{R}^{3}$, where the electric field is governed by the self-consistent Poisson equation. When the regular initial perturbations belong to $H^{4}(\mathbb{R}^{3})\cap \dot{B}_{1,\infty}^{-s}(\mathbb{R}^{3})$ with $s\in [0,1]$, we show that the density and momentum of the system converge to their equilibrium state at the optimal $L^2$-rates $(1+t)^{-\frac{3}{4}-\frac{s}{2}}$ and $(1+t)^{-\frac{1}{4}-\frac{s}{2}}$ respectively, and the decay rate is still $(1+t)^{-\frac{3}{4}}$ for temperature, which is proved to be not optimal. Abstract: We consider the time-dependent 1D Schrödinger equation on the half-axis with variable coefficients becoming constant for large $x$. We study a numerical method to solve it that is two-level and symmetric in time (i.e. the Crank-Nicolson scheme) with finite elements of any order in space. The method is coupled to an approximate transparent boundary condition (TBC). We prove uniform-in-time stability with respect to initial data and a free term in two norms, under suitable conditions on an operator in the approximate TBC. We also consider the corresponding method on an infinite mesh on the half-axis. We derive explicitly the discrete TBC allowing us to restrict the latter method to a finite mesh. The operator in the discrete TBC is a discrete convolution in time; in turn its kernel is a multiple discrete convolution. The stability conditions are justified for it. The accomplished computations confirm that high order finite elements coupled to the discrete TBC are effective even in the case of highly oscillating solutions and discontinuous potentials.
So far I have avoided division of fractions. This is not hard; in fact, it's almost as easy as multiplying, but I would like to show why the method works. First consider 6 ÷ 2. This problem is asking the question, "How many 2's fit into 6?". The answer is 3 sets of 2's make up 6. If the problem was 6 ÷ 3, the question would be "How many 3's fit into 6?". The answer here is there are 2 sets of 3's that make up 6. Of course, the answer is not always an integer. Sometimes there are leftover numbers. For example, how many 2's can make up 7, in other words, 7 ÷ 2? The answer is there are 3 sets of 2's that can fit into 7, but there will be 1 left over, as sets of 2's do not exactly make up 7. Now remember that fractions indicate division as well, so the above problems are equivalent to \[ \frac{6}{2}, \qquad \frac{6}{3}, \qquad \frac{7}{2} \] So what would \[ 1 \div \frac{1}{2} = \frac{1}{\frac{1}{2}} \] mean? It means the same as before: how many one-halves fit into 1? Now the answer is 2 because there are two halves in a whole. What about \[ 2 \div \frac{1}{2} = \frac{2}{\frac{1}{2}} \]? Can you see that the answer is 4? Because if there are two things split into halves, then there will be 4 halves. Let's now look at \[ 3 \div \frac{3}{4} = \frac{3}{\frac{3}{4}} \] Looks like there are 4 three-quarters that fit into 3, and that is correct. Let's look at one more. What about \[ \frac{3}{4} \div \frac{1}{4} = \frac{\frac{3}{4}}{\frac{1}{4}} \]? Can you see that there are 3 one-quarters that fit into three-quarters? All of the above answers can be obtained by multiplying and remembering that integers like 6 can also be shown as a fraction: \[ \frac{6}{1} \] as 6 divided by 1 is still 6.
So now let's look at the previous problems. You can convert a fraction division problem by inverting the fraction in the denominator and then multiplying: \[ 1 \div \frac{1}{2} = \frac{1}{\frac{1}{2}} = \frac{1}{1} \times \frac{2}{1} = \frac{2}{1} = 2 \] The inverted fraction is called a reciprocal. A simple way to remember how to do fraction division is the phrase: invert and multiply. Now let's do the other ones: \[ 3 \div \frac{3}{4} = \frac{3}{\frac{3}{4}} = \frac{3}{1} \times \frac{4}{3} = \frac{4}{1} = 4 \] And finally \[ \frac{3}{4} \div \frac{1}{4} = \frac{\frac{3}{4}}{\frac{1}{4}} = \frac{3}{4} \times \frac{4}{1} = \frac{3}{1} = 3 \] I will do more examples in my next post, including division with mixed numbers.
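All of the worked examples above can be double-checked with Python's exact `fractions` module; a short sketch:

```python
from fractions import Fraction

# "Invert and multiply": a/b ÷ c/d equals a/b × d/c.
assert Fraction(1, 1) / Fraction(1, 2) == 2
assert Fraction(2, 1) / Fraction(1, 2) == 4
assert Fraction(3, 1) / Fraction(3, 4) == 4
assert Fraction(3, 4) / Fraction(1, 4) == 3

# The reciprocal rule written out explicitly: dividing by b is the
# same as multiplying by b flipped upside down.
a, b = Fraction(3, 4), Fraction(1, 4)
assert a / b == a * Fraction(b.denominator, b.numerator)
```

`Fraction` keeps everything as exact rationals, so these checks are true equalities rather than floating-point approximations.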
First look at time-dependent CP violation using early Belle II data BELLE2-CONF-PROC-2019-007 Stefano Lacaprara 24 June 2019 Abstract: Time-dependent CP-violation phenomena are a powerful tool to precisely measure fundamental parameters of the Standard Model and search for New Physics. The Belle II experiment at the SuperKEKB energy-asymmetric $\Pep\Pem$ collider is a substantial upgrade of the B factory facility at the Japanese KEK laboratory. The design luminosity of the machine is $8\times{10}^{35}~cm^{-2}s^{-1}$ and the Belle II experiment aims to record $50~ab^{-1}$ of data, a factor of 50 more than its predecessor. From February to July 2018, the machine completed a commissioning run, achieving a peak luminosity of $5.5\times10^{33}~cm^{-2}s^{-1}$, and Belle II recorded a data sample of about $0.5~\mathrm{fb}^{-1}$. Main operation of SuperKEKB started in March 2019. This early data set is used to establish the performance of the detector in terms of reconstruction efficiency of final states of interest for the measurement of time-dependent CP violation, such as $\PJpsi\PKst$, $\Petaprime \PKs$, and $\phi K_s$. A first assessment of the B flavor tagging capabilities of the experiment will be given, along with estimates of the Belle II sensitivity to the CKM angles $\phi_1/\beta$ and $\phi_2/\alpha$ and to potential New Physics contributions in penguin-amplitude-dominated decays and in $b\to s\gamma$ transitions. In this talk we will present estimates of the sensitivity to $\phi_1$ in the golden channels $\Pqb\to\Pqc\APqc\Pqs$ and in the penguin-dominated modes $\PBz\to\Petaprime\PKz,\quad\Pphi\PKz,\quad \PKz\Pgpz(\gamma)$. A study of the time-dependent analysis of $\PBz\to\Pgpz\Pgpz$, relevant for the measurement of $\phi_2$ and feasible only in the clean environment of an $\Pep\Pem$ collider, will also be given. Keyword(s): TDCPV ; Belle II ; Phase 3
I want to get a better grasp of what a rigorous formal proof is. So I was hoping to find proofs of interesting results using natural deduction or a Hilbert system or similar. The "interesting result" could be anything from the infinitude of primes to Lagrange's theorem. "Interesting result" is not $x \wedge y \implies y \wedge x$ or similar. In particular I'm interested in how mathematical objects (like primes or groups in the examples) fit into the proofs. Nobody uses Hilbert systems to actually write down fully formal proofs of anything interesting. The explosion in size that results from repeated use of the Deduction Theorem makes them completely unfeasible for practical work. Using natural deduction instead avoids this problem, but still one needs to have some sort of facility for using abbreviations for defined notions; otherwise the properties you want to prove will themselves become completely impenetrable spaghetti balls of primitive notions. This means that systems of the kind usually presented by logic texts are not really well suited to doing actual work, but only for theoretical investigations into the limits of what can be proved. For actual work you need a system with native support for defined notions, something like formalized metatheorems, and so forth. What you'll want is probably to download an actual proof assistant software with an existing library of basic concepts, and look at how the proofs in that library look -- for example Isabelle/HOL (which was all the rage a decade or two ago when I had a connection with the area, but may be obsoleted by something else these days, for all I know). Be aware that there's something of a learning curve if you start out with just knowledge of abstract textbook-style proof systems, but the basic concepts ought to be recognizable. I will present proofs of two interesting formulas (both proofs are hopefully interesting enough).
I will not use any specific calculi, but the reasoning I will show is precisely how you would prove it in a Hilbert-style calculus or in natural deduction. An interesting proof is the one of Peirce's law $((A\to B)\to A)\to A$: Assume (1) $(A\to B)\to A$ and further assume (2) $\neg A$. From (2) you obtain $A\to B$, thus from (1) you get $A$. It means that (1) and $\neg A$ imply $A$, a contradiction. You can conclude that (1) implies $A$, which is Peirce's law. Another interesting proof, this time in first-order classical logic, is for the drinking guru formula (better known as the drinker paradox) $\exists x (D(x)\to \forall y D(y))$. Here $D$ is a unary predicate symbol. Let's say that $D(x)$ means $x$ is drinking. Then the formula stands for: there is someone such that whenever he is drinking, everyone is drinking. Proof: We will use the following instance of the law of excluded middle: $\forall x D(x)\vee\neg\forall x D(x)$. It is now enough to show that the desired formula is a consequence of each disjunct - this is a proof by cases argument - which is formalised by the Hilbert-style axiom $(A\to C)\to ((B\to C)\to (A\vee B\to C))$. Assume $\forall x D(x)$; then one easily gets the following chain: $\forall y D(y)$, $D(x)\to \forall y D(y)$ (using weakening), $\exists x (D(x)\to \forall y D(y))$ (existential quantification). Now assume $\neg \forall x D(x)$; from this we prove $\exists x (\neg D(x))$ (interderivability of quantifiers). Next, from $\neg D(x)$ you can prove $D(x)\to \forall y D(y)$ (this is true because from a false premise everything follows). Therefore from $\exists x (\neg D(x))$ you conclude the desired $\exists x (D(x)\to \forall y D(y))$.
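The case split on $A$ in the Peirce-law argument can also be written as a short formal proof; a sketch in Lean 4 (assuming the classical `by_cases` tactic, as provided by Mathlib):

```lean
-- Peirce's law ((A → B) → A) → A, proved classically by cases on A
theorem peirce (A B : Prop) : ((A → B) → A) → A := by
  intro h
  by_cases ha : A
  · exact ha                        -- if A holds, we are done
  · exact h (fun a => absurd a ha)  -- if ¬A, then A → B vacuously, so h yields A
```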
Sobolev spaces of order 2 are known to form a Hilbert space. Consider such a Sobolev space of order-2 functions $f:\mathbb{R}\rightarrow \mathbb{R}$. What is an example of a basis of such a Sobolev space? As mentioned above, the question is not well-posed in this form since the answer depends not only on the precise space you are considering but also on the norm you are using. Perhaps the following comment might nevertheless be useful. If your space can be identified with (or is defined as) the domain of definition of an unbounded, self-adjoint operator (as many of the useful ones are) and if the latter has discrete spectrum, then its eigenfunctions (suitably normed) form an ONB for the Sobolev space with the corresponding norm. Simple examples are the Laplace operator on the circle or the standard one-dimensional Schrödinger operator on the line. More sophisticated examples are provided by the Laplacian on a compact Riemannian manifold or general Schrödinger operators under suitable conditions on the potential function. My guess for (a nice!) basis: use the Hermite functions and the Gram-Schmidt process. It may be rather lengthy, but the procedure and ingredients are as follows. 1) $h_n' = \sqrt{n/2}h_{n-1} + \sqrt{(n+1)/2}h_{n+1}, n \geq 0, h_{-1}=0$ 2) $h_n$ are what is called the 'physicists polynomials', denoted by $\psi_n$ there. 3) The Sobolev product would be $(f,g)_S=(f,g)_0 + (f',g')_0$. It could also be the product containing the Fourier transform, but this seems to me more lengthy ($Fh_n$ is nice, due to the Wiener theorem, and equals $(-i)^nh_n$, but the factor $(1+|\xi|^2)^r$ in the product in $H^r$ also disturbs the smoothness of the computation). 4) The space ($\Omega$ for PDE researchers) would be $\mathbb{R}$.
5) An advantage of the Hermite functions is due to their easy dimensional properties $h_{a_1...a_n}(x^1,...,x^n)=h_{a_1}(x^1)\cdots h_{a_n}(x^n)$, where $a=(a_1,...,a_n) \in N_0^n$. One should compute the first three and then see the pattern... The derivative property of the Hermite functions enables one to speak about the neighbouring basis terms only. But this must be done properly. [Formally by an induction, after "guessing".] Note that this gives you an orthogonal basis only; one should then normalize (in the Sobolev norm!). It is quite strange - I could not find these results in the literature (not even as exercises for students). (Chebyshev polynomials should be orthonormal for $H^1([-1,1])$ but changing the domain to $\mathbb R$ changes the polynomials substantially, even if we take the correct domain diffeomorphic to $\mathbb R$, i.e., $(-1,1)$.)
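A numerical sketch of why Gram-Schmidt is genuinely needed here (the grid size and tolerances are my choices): the Hermite functions are $L^2$-orthonormal, but the $H^1$ Gram matrix $(\psi_m,\psi_n)_S$ picks up nonzero entries at $|m-n|=2$ via the derivative term, e.g. $(\psi_0,\psi_2)_S = -\sqrt{1/2}$.

```python
from math import exp, pi, sqrt

# uniform grid on [-10, 10] for quadrature; the Hermite-function tails
# are negligible there
N = 4000
xs = [-10 + 20 * i / N for i in range(N + 1)]
dx = 20 / N

def psi(n):
    """Hermite function psi_n on the grid, via the three-term recurrence
    psi_{k+1} = sqrt(2/(k+1)) * x * psi_k - sqrt(k/(k+1)) * psi_{k-1}."""
    p0 = [pi ** -0.25 * exp(-x * x / 2) for x in xs]
    if n == 0:
        return p0
    p1 = [sqrt(2.0) * x * v for x, v in zip(xs, p0)]
    for k in range(1, n):
        p0, p1 = p1, [sqrt(2.0 / (k + 1)) * x * b - sqrt(k / (k + 1.0)) * a
                      for x, a, b in zip(xs, p0, p1)]
    return p1

def deriv(f):
    # central differences (one-sided at the endpoints)
    return [(f[min(i + 1, N)] - f[max(i - 1, 0)])
            / ((min(i + 1, N) - max(i - 1, 0)) * dx) for i in range(N + 1)]

def integrate(f):
    return dx * (sum(f) - 0.5 * (f[0] + f[-1]))  # trapezoid rule

def h1(f, g):
    """Sobolev product (f,g)_S = (f,g)_0 + (f',g')_0."""
    df, dg = deriv(f), deriv(g)
    return (integrate([a * b for a, b in zip(f, g)])
            + integrate([a * b for a, b in zip(df, dg)]))

# L^2-orthogonal pairs need not be H^1-orthogonal:
assert abs(h1(psi(0), psi(0)) - 1.5) < 1e-2          # = 1 + ||psi_0'||^2
assert abs(h1(psi(0), psi(2)) + sqrt(0.5)) < 1e-2    # nonzero off-diagonal entry
```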
A slight infinite extension of this: Show that two bounded sequences have convergent subsequences with the same index sequence. Let $S$ be a compact metric space. Suppose, for each $m\in\mathbb{N}$, $\{x(m,n)\}_{n\geq 1}$ is a sequence in $S$. The question is: can we always get a strictly increasing sequence of natural numbers $n_k, k=1,2,\cdots$ such that for each $m$, the subsequence $\{x(m,n_k)\}_{k\geq 1}$ is convergent as $k\to \infty$? I want the same $n_k$ to work for every $m$. I worked like this: compactness implies sequential compactness in metric spaces. Therefore, for fixed $m$, the sequence $\{x(m,n)\}_{n\geq 1}$ has a convergent subsequence. But the subsequence indices would differ for each $m$. I am not able to move further inductively. Any help would be appreciated.
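For finitely many sequences over a finite alphabet, the pigeonhole refinement step behind the usual diagonal argument can be sketched in code (the particular 0-1 sequences are made-up examples, and "convergent" here just means eventually constant). For infinitely many $m$ one cannot iterate forever; that is where the diagonal choice comes in: take $n_k$ to be the $k$-th index surviving the $k$-th refinement.

```python
# Pigeonhole refinement: restrict the index set so one sequence becomes constant,
# then repeat for the next sequence; each refinement keeps a sub-index-set.
def refine(indices, seq):
    """Keep the indices where seq takes its most frequent value there."""
    vals = [seq(n) for n in indices]
    target = max(set(vals), key=vals.count)
    return [n for n in indices if seq(n) == target]

# three {0,1}-valued sequences (assumed examples)
seqs = [lambda n: n % 2, lambda n: (n // 2) % 2, lambda n: (n // 3) % 2]

indices = list(range(1, 1000))
for s in seqs:
    indices = refine(indices, s)

# along the common surviving index set, every sequence is constant
assert indices
assert all(len({s(n) for n in indices}) == 1 for s in seqs)
```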
I was having trouble proving by induction with this problem. $$\sum_{i=1}^n \frac{3}{4^i} < 1$$ for all $n \geq 2$. I went to see my professor and he said to try proving this inequality: $$\sum_{i=1}^n \frac{3}{4^i} < 1 - 1/4^n $$ Where did he get the $$1-(1/4^n)$$ from? How would I prove this? And is it still proving the same inequality? The "improved" inequality is wrong as stated; it should be $\le$ (or even $=$) instead of $<$. You can hardly use induction with the original inequality. If you only have $s_n<1$, you cannot conclude that $s_{n+1}<1$ because you always have $s_{n+1}>s_n$. In other words, you need that $s_n$ is sufficiently smaller than $1$ (and need to show that $s_{n+1}$ is not just smaller, but sufficiently smaller than $1$). You might get the $1-1/4^n$ from looking at the first few sums ($\frac34$, $\frac{15}{16}$, $\frac{63}{64}$) and smelling the pattern. As it turns out, the stricter inequality (or even equality) is much easier to prove. Proof by induction is straightforward. Since $1-1/4^n<1$ you also obtain the originally desired result.
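The pattern the answer describes can be checked exactly with rational arithmetic; a small sketch:

```python
from fractions import Fraction

# exact check of the pattern 3/4 + 3/16 + ... + 3/4^n = 1 - 1/4^n
for n in range(1, 25):
    s = sum(Fraction(3, 4 ** i) for i in range(1, n + 1))
    assert s == 1 - Fraction(1, 4 ** n)  # equality holds, hence s < 1
```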
Hi there! I'd like to calculate the work done by the gravitational force. I know the work is defined by the integration of a 1-form: [tex]L=\int_\gamma \omega[/tex] where [tex]\omega=F_xdx+F_ydy+F_zdz[/tex] This works fine in cartesian coordinates and I know how to integrate it, but what if I want to use spherical coordinates? Then I'd have: [tex]\omega=F_rdr+F_{\theta}d{\theta}+F_{\phi}{d\phi}=F_rdr[/tex] Suppose [tex]\gamma[/tex] is a curve defined in spherical coordinates (i.e. [tex]\vec\gamma=R(t)\hat r+\Theta(t)\hat\theta+\Phi(t)\hat\phi[/tex]), how do I integrate the 1-form along [tex]\gamma[/tex]?
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Your realisation is correct and something chemistry teachers try to hammer into their students’ heads time and time again (and yet, the point is still often lost): Catalysts will never change the thermodynamics of a reaction. They only ease the path of the reaction. Forward and backward reactions will be accelerated equally. So what is the benefit of a catalyst? There are multiple ones. Speed Take for example the Haber-Bosch process to synthesise ammonia from nitrogen and hydrogen. $$\ce{N2 + 3 H2 <=> 2 NH3}\tag{1}$$$$\Delta_\mathrm{r} H^0_\mathrm{298~K} = -45.8~\mathrm{\frac{kJ}{mol}}$$ This reaction is exothermic and thus should, theoretically or thermodynamically, proceed spontaneously, e.g. if you mixed nitrogen and hydrogen in the appropriate ratio and added a spark. It does not, however. Significant activation energy is required to cleave the $\ce{N#N}$ triple bond. Typical methods to add activation energy include heating. In the Haber-Bosch process, the mixture is heated to $400$ to $500~\mathrm{^\circ C}$ to supply the required activation energy. However, since the reaction is exothermic, heating will favour the reactant side. Increasing the pressure improves the entropic term of the Gibbs free energy equation, which is why pressures of $15$ to $25~\mathrm{MPa}$ are used. Catalysts, based on iron with different promotors, are used to accelerate the reaction. By using catalysts, one can lower the temperature required in a trade-off between speed of reaction and favouring the product side of the equilibrium. With the conditions and catalysts used, one achieves a yield of $\approx 15~\%$ of ammonia within a reasonable timeframe. Not employing a catalyst would give much lower yields at much longer timeframes — economically much less feasible.
Direct reaction path not accessible This is mainly true for many transition-metal catalysed carbon-carbon bond formation reactions, but is also true for some inorganic processes like the disproportionation of hydrogen peroxide as in equation $(2)$. $$\ce{2 H2O2 -> 2 H2O + O2}\tag{2}$$ Hydrogen peroxide is a reactive chemical that cannot be stored forever, but the direct disproportionation path is not typically what degrades it. However, you can add $\ce{MnO2}$ to it. Upon addition, oxygen gas vigorously bubbles out of the solution. In this case, there was a kinetic barrier impeding the direct transformation due to reactants and products having different multiplicities (oxygen gas’ ground state is a triplet, all others are singlets). The $\mathrm{d^3}$ ion manganese(IV) is a radical itself that can partake in different radical reactions, allowing the diradical oxygen to be liberated. Selectivity This is exceptionally true for transition-metal catalysed organic carbon-carbon bond formation reactions. Note first that the action of a catalyst is frequently depicted as a catalytic cycle: A reactant reacts with the catalyst to some intermediate species, this rearranges or reacts with other reactants/additives/solvents in a set of specific steps until finally the products are liberated and the catalytic species is regenerated. Many such reactions require organic halides as one of the reacting species. And the first step is typically an oxidative addition as shown in equation $(3)$, where $\ce{X}$ is a halide ($\ce{Cl, Br, I}$). $$\ce{R-X + Pd^0 -> R-Pd^{+II}-X}\tag{3}$$ Palladium typically prefers oxidatively adding to bromides or iodides and tends to leave chlorides alone. I myself have performed a reaction with near-quantitative yield in which a reactant contained both a $\ce{C-Br}$ and a $\ce{C-Cl}$ bond — selectively, only the $\ce{C-Br}$ bond took part in the palladium(0) catalysed Sonogashira reaction.
Although I did not try it myself, I am pretty sure that switching to a nickel(0) catalyst species would shift the reaction in favour of reacting with the carbon-chlorine bond rather than the carbon-bromine one. Mildness This is basically a reiteration of the first point albeit with different intentions. Many a time in organic synthesis, one has a rather sensitive reactant that would degrade or undergo side-reactions if subjected to standard reaction conditions, such as high pH-value or elevated temperatures. As an example, consider a transesterification as shown in equation $(4)$. $$\ce{R-COO-Et + Me-OH <=> R-COO-Me + EtOH}\tag{4}$$ This reaction is, of course, an equilibrium and by using methanol as the solvent we can shift it to the product side. For the reaction to happen, one would need a base strong enough to deprotonate methanol, giving the methanolate anion, which can then attack the ester functionality. However, methanolate being a strong (and nucleophilic) base itself can introduce undesired side-reactions, including epimerisation of the α-carbon. One can catalyse this reaction by using $\ce{Bu2SnO}$, which will activate the carbonyl group, making it more susceptible to a nucleophilic attack. The reaction speed is the same but the conditions are milder (no additional base required) and the number of side-reactions is thus strongly limited. In particular, I noticed no epimerisation of the α-carbon in the tin(IV) catalysed method.
$\lim_{x \rightarrow 27} \frac{x^{1/3} - 3}{x - 27}$. Hello, I am stuck on this one. I am sure there is a simple step but I am not seeing it. Thanks in advance for the help. $$(x-27)=(x^{\frac{1}{3}}-3)(x^{\frac{2}{3}}+3x^{\frac{1}{3}}+9)$$ Hint: This is equivalent to $$\lim_{x\to3}\frac{x-3}{x^3-27}$$ A simpler ratio. Factor from here. Hint. For any function differentiable at $a$, one has$$\lim_{x \to a}\frac{f(x)-f(a)}{x-a}=f'(a).$$ Then just apply it to $f(x)=x^{1/3}-3,\, a=27$.
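A quick numerical sanity check of the derivative hint (the sample offsets are arbitrary): the difference quotient should approach $f'(27) = \tfrac{1}{3}\cdot 27^{-2/3} = \tfrac{1}{27}$.

```python
# difference quotient of f(x) = x**(1/3) at x = 27; its limit is
# f'(27) = (1/3) * 27**(-2/3) = 1/27
def q(x):
    return (x ** (1 / 3) - 3) / (x - 27)

for h in (1e-2, 1e-4, 1e-6):
    assert abs(q(27 + h) - 1 / 27) < 1e-3
    assert abs(q(27 - h) - 1 / 27) < 1e-3
```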
In my last post, you saw another maths shortcut – exponents. But I used examples where the exponents were positive integers. It turns out that mathematically, any number, be it negative, fractional, or irrational, can be used as an exponent. Let’s expand our knowledge here by looking at exponents that are not positive integers. In my last post, I presented the rule that\[ \frac{{x}^{a}}{{x}^{b}}\hspace{0.33em}{=}\hspace{0.33em}{x}^{{a}{-}{b}} \] so that \[ \frac{{x}^{3}}{{x}^{2}}\hspace{0.33em}{=}\hspace{0.33em}{x}^{{3}{-}{2}}\hspace{0.33em}{=}\hspace{0.33em}{x}^{1}\hspace{0.33em}{=}\hspace{0.33em}{x} \]. What if you have \[ \frac{{x}^{3}}{{x}^{3}} \]? Well, the rule says that\[ \frac{{x}^{3}}{{x}^{3}}\hspace{0.33em}{=}\hspace{0.33em}{x}^{{3}{-}{3}}\hspace{0.33em}{=}\hspace{0.33em}{x}^{0} \] But we know that the same number divided by itself is 1. So it makes sense, and it makes maths consistent, if we define anything raised to the “0” power as 1. That is, \[ {x}^{0}\hspace{0.33em}{=}\hspace{0.33em}{1} \] for any number x. Now let’s go further. What if we have \[ \frac{{x}^{2}}{{x}^{3}} \]? According to the rule, this is \[ {x}^{{2}{-}{3}}\hspace{0.33em}{=}\hspace{0.33em}{x}^{{-}{1}} \]. What does a negative exponent mean? Well let’s do the same problem without using the rule: \[ \frac{{x}^{2}}{{x}^{3}}\hspace{0.33em}{=}\hspace{0.33em}\frac{\rlap{/}{x}\hspace{0.33em}\times\hspace{0.33em}\rlap{/}{x}}{\rlap{/}{x}\hspace{0.33em}\times\hspace{0.33em}\rlap{/}{x}\hspace{0.33em}\times\hspace{0.33em}{x}}\hspace{0.33em}{=}\hspace{0.33em}\frac{1}{{x}^{1}} \] So it appears that a number raised to a negative power is the same as 1 over that number raised to the corresponding positive power.
This is true for any exponent:\[ {x}^{{-}{a}}\hspace{0.33em}{=}\hspace{0.33em}\frac{1}{{x}^{a}} \] This means that factors in a fraction can be moved at will from the numerator to the denominator or vice versa, by just changing the sign of the exponents:\[ \begin{array}{l} {\frac{7}{{x}^{{-}{2}}}\hspace{0.33em}{=}\hspace{0.33em}{7}{x}^{2}}\\ {{3}{y}^{{-}{3}}\hspace{0.33em}{=}\hspace{0.33em}\frac{3}{{y}^{3}}}\\ {\frac{4{xy}^{5}}{{wz}^{{-}{6}}}\hspace{0.33em}{=}\hspace{0.33em}\frac{4{xy}^{5}{z}^{6}}{w}} \end{array} \] However, be careful. You can only do this with factors, that is, things that are multiplied together. It does not work with fractions where things are added or subtracted:\[ \frac{x}{{y}\hspace{0.33em}{+}\hspace{0.33em}{z}^{{-}{3}}}\hspace{0.33em}\ne\hspace{0.33em}\frac{{x}\hspace{0.33em}{+}\hspace{0.33em}{z}^{3}}{y} \] By the way, the symbol ≠ means “does not equal”.
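These exponent rules can be verified with exact rational arithmetic; a small sketch (the particular numbers are arbitrary):

```python
from fractions import Fraction

x = Fraction(5)
assert x ** 0 == 1                   # anything nonzero to the 0 power is 1
assert x ** -2 == Fraction(1, 25)    # x^(-2) = 1 / x^2
# moving a factor across the fraction bar just flips the sign of its exponent:
assert Fraction(7) / x ** -2 == 7 * x ** 2
```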
Note that we always have $H \subset N_G(H)$. Hence our goal is to find an element in $N_G(H)$ that does not belong to $H$. Since $G$ is a nilpotent group, it has a lower central series\[ G=G^{0} \triangleright G^{1} \triangleright \cdots \triangleright G^{n}=\{e\},\]where $G=G^{0}$ and $G^{i}$ is defined by\[G^i=[G^{i-1},G]=\langle [x,y]=xyx^{-1}y^{-1} \mid x \in G^{i-1}, y \in G \rangle\]successively, and $e$ is the identity element of $G$. Since $H$ is a proper subgroup of $G$, there is an index $k$ such that\[G^{k+1} \subset H \text{ but } G^{k} \nsubseteq H.\] Take any $x\in G^{k} \setminus H$. We claim that $x \in N_G(H)$. For any $y\in H$, it follows from the definition of $G^{k+1}$ that\[ [x,y] \in G^{k+1} \subset H.\]Hence $xyx^{-1}y^{-1}\in H$. Since $y\in H$, we see that $xyx^{-1}\in H$. As this is true for any $y\in H$, we conclude that $x\in N_G(H)$. The claim is proved. Since $x$ does not belong to $H$, we conclude that $H \subsetneq N_G(H)$. Normalizer and Centralizer of a Subgroup of Order 2: Let $H$ be a subgroup of order $2$. Let $N_G(H)$ be the normalizer of $H$ in $G$ and $C_G(H)$ be the centralizer of $H$ in $G$. (a) Show that $N_G(H)=C_G(H)$. (b) If $H$ is a normal subgroup of $G$, then show that $H$ is a subgroup of the center $Z(G)$ of […] Centralizer, Normalizer, and Center of the Dihedral Group $D_{8}$: Let $D_8$ be the dihedral group of order $8$. Using the generators and relations, we have\[D_{8}=\langle r,s \mid r^4=s^2=1, sr=r^{-1}s\rangle.\] (a) Let $A$ be the subgroup of $D_8$ generated by $r$, that is, $A=\{1,r,r^2,r^3\}$. Prove that the centralizer […] Infinite Cyclic Groups Do Not Have Composition Series: Let $G$ be an infinite cyclic group.
Then show that $G$ does not have a composition series. Proof. Let $G=\langle a \rangle$ and suppose that $G$ has a composition series\[G=G_0\rhd G_1 \rhd \cdots \rhd G_{m-1} \rhd G_m=\{e\},\]where $e$ is the identity element of […] Special Linear Group is a Normal Subgroup of General Linear Group: Let $G=\GL(n, \R)$ be the general linear group of degree $n$, that is, the group of all $n\times n$ invertible matrices. Consider the subset of $G$ defined by\[\SL(n, \R)=\{X\in \GL(n,\R) \mid \det(X)=1\}.\]Prove that $\SL(n, \R)$ is a subgroup of $G$. Furthermore, prove that […]
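The "normalizers grow" statement proved above can be sanity-checked computationally in a small nilpotent group; here is a sketch in Python for $D_8$ (the encoding of elements as pairs $(i,j) \leftrightarrow r^i s^j$ is my own choice):

```python
from itertools import combinations

# D8 = <r, s | r^4 = s^2 = e, s r = r^{-1} s>; the pair (i, j) stands for r^i s^j
def mul(x, y):
    (i, j), (k, l) = x, y
    # uses the relation s^j r^k = r^{(-1)^j k} s^j
    return ((i + (k if j == 0 else -k)) % 4, (j + l) % 2)

G = [(i, j) for i in range(4) for j in range(2)]
e = (0, 0)
inv = {x: next(y for y in G if mul(x, y) == e) for x in G}

def is_subgroup(S):
    return e in S and all(mul(a, inv[b]) in S for a in S for b in S)

def normalizer(S):
    return {g for g in G if {mul(mul(g, h), inv[g]) for h in S} == S}

subgroups = [set(c) for r in range(1, 9)
             for c in combinations(G, r) if is_subgroup(set(c))]

assert len(subgroups) == 10  # D8 has 10 subgroups in total
# every PROPER subgroup is properly contained in its normalizer
assert all(S < normalizer(S) for S in subgroups if len(S) < 8)
```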
I have some trouble understanding the benefits of Bayesian networks. Am I correct that the key benefit of the network is that one does not need to use the chain rule of probability in order to calculate joint distributions? So, using the chain rule: $$ P(A_1, \dots, A_n) = \prod_{i=1}^n P(A_i \mid \cap_{j=1}^{i-1} A_j) $$ leads to the same result as the following (assuming the nodes are structured by a Bayesian network)? $$ P(A_1, \dots, A_n) = \prod_{i=1}^n P(A_i \mid \text{parents}(A_i)) $$
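For a concrete check, here is a minimal sketch (the chain network A → B → C and all CPT numbers are made-up assumptions): the parent-based factorization defines a valid joint distribution, and it agrees with the full chain rule because the network structure says $P(C \mid A, B) = P(C \mid B)$.

```python
from itertools import product

# tiny chain network A -> B -> C with binary variables;
# all CPT numbers below are made-up for illustration
pA = {0: 0.6, 1: 0.4}
pB_given_A = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # key (b, a)
pC_given_B = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.5, (1, 1): 0.5}  # key (c, b)

def joint(a, b, c):
    # factorization over parents: P(A) * P(B|A) * P(C|B)
    return pA[a] * pB_given_A[(b, a)] * pC_given_B[(c, b)]

# the factorization defines a genuine distribution ...
total = sum(joint(a, b, c) for a, b, c in product((0, 1), repeat=3))
assert abs(total - 1) < 1e-12

# ... and it agrees with the chain rule, because the conditional of C
# given the full history (A, B) collapses to the conditional given its parent B
```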
Here's some piece of the bigger picture. Maass forms and holomorphic modular forms are both automorphic representations for $GL(2)$ over the rationals. An automorphic representation is a typically huge representation $\pi$ of an adele group (in this case $GL(2,\mathbf{A})$, with $\mathbf{A}$ the adeles of $\mathbf{Q}$). Because the adeles is the product of the finite adeles and the infinite adeles, this representation $\pi$ is a product of a finite part $\pi_f$ and an infinite part $\pi_\infty$. The infinite part is a representation of $GL(2,\mathbf{R})$ (loosely speaking -- there are technicalities but they would only cloud the water here). The representation theory of $GL(2,\mathbf{R})$, in this context, is completely understood. The representations basically fall into four categories, which I'll name (up to twist): 1) finite-dimensional representations (these never show up in the representations attached to cusp forms). 2) Discrete series representations $D_k$, $k\geq2$ (these are the modular forms of weight 2 or more). 3) The limit of discrete series representation $D_1$ (these are the weight 1 forms). 4) The principal series representations (these are the Maass forms). Now what does Langlands conjecture? He makes a conjecture which does not care which case you're in! He conjectures the existence of a "Galois representation" attached to $\pi$, and this is a "Galois representation" in a very loose sense: it is a continuous 2-dimensional complex representation of the conjectural "Langlands group", attached to $\pi$. Note that there should be a map from the Langlands group to the Galois group, and in the case of Maass forms and weight 1 forms Langlands' representation should factor through the Galois group. For modular forms of weight 2 or more Langlands' conjecture has not been proved and in some sense it is almost not meaningful to try to prove it because no-one can define the group. 
In particular Deligne did not prove Langlands' conjecture, he proved something else. So Clozel came along in 1990 and tried to put Deligne's work into some context and he came up with the following: he formulated the notion of what it meant for $\pi_\infty$ to be algebraic (in fact there are two notions of algebraic, which differ by a twist in this context, so let me write "$L$-algebraic" to make it clear which one I'm talking about) and conjectured that if $\pi$ were $L$-algebraic then there should be an $\ell$-adic Galois representation $\rho_\pi$ attached to $\pi$. Maass forms with eigenvalue $1/4$, and holomorphic eigenforms, are $L$-algebraic, and the $\ell$-adic Galois representation attached to the Maass forms/weight 1 forms is just the one you obtain by fixing an isomorphism $\mathbf{C}=\overline{\mathbf{Q}}_\ell$. I should say that Clozel worked with $GL(n)$ not $GL(2)$ and also worked over an arbitrary number field base. Whether or not the image of $\rho_\pi$ is finite is something which is conjecturally determined by $\pi_\infty$: you can read it off from the infinitesimal character of $\pi_\infty$ and also from the local Weil group representation attached to $\pi_\infty$ by the local Langlands conjectures, which are all theorems (of Langlands) for real reductive groups. Put within this context your question becomes purely local: one has to figure out what Clozel's recipe gives in each case to get a handle on what your question is asking. You're asking about principal series representations. If you work out Clozel's recipe in these cases you find that if $\lambda\not=1/4$ then $\pi_\infty$ is not $L$-algebraic (and so we don't even expect a representation of the Galois group, we just expect a representation of the conjectural Langlands group), and if $\lambda=1/4$ then, up to twist, we expect the image to be always finite, because, well, that's what the calculation gives us. I learnt this by just doing all these calculations myself. 
I wrote them up in brief notes at http://www2.imperial.ac.uk/~buzzard/maths/research/notes/automorphic_forms_for_gl2_over_Q.pdf and http://www2.imperial.ac.uk/~buzzard/maths/research/notes/local_langlands_for_gl2R.pdf (both available from http://www2.imperial.ac.uk/~buzzard/maths/research/notes/index.html ). So why is there this asymmetry? Well actually this asymmetry is not surprising because it is predicted on the Galois side as well. If you look at an irreducible mod $p$ ordinary Galois representation which is odd then its universal ordinary deformation is often known to be isomorphic to a Hecke algebra of the type defined by Hida (so in particular we get lots of interesting $\ell$-adic Galois representations with infinite image). In particular its Krull dimension should be 2 (and this was already known to Mazur in the 80s). But the calculations for these Krull dimensions involve local terms, and the local term at infinity depends on whether the representation is odd or even. If you consider deformations of an even Galois representation then the calculations come out differently and the Krull dimension comes out one smaller. In particular one only expects to see finite image lifts, plus twists of such lifts by powers of the cyclotomic character. So in summary you see differences on both sides -- the automorphic side and the Galois side -- and they match up perfectly! You don't expect $\ell$-adic representations to show up in the Maass form story and yet things are completely consistent anyway. Toby Gee and I recently tried to figure out the complete conjectural picture about how automorphic representations and Galois representations were related. Our conclusions are at http://www2.imperial.ac.uk/~buzzard/maths/research/papers/bgagsvn.pdf . But for $GL(n)$ this was all due to Clozel over 20 years ago (who would have known all those calculations that I linked to earlier; these are all standard).
Electromagnetic Waves: Displacement Current The magnetic field due to a current-carrying conductor with conduction current $i_c$ is determined using Ampère's circuital law: \[ \oint \overrightarrow{B} \cdot \overrightarrow{dl} = \mu_{0} i_{c} \] The conduction current $i_c$ is produced by the time-varying field, $i_c \propto \frac{dB}{dt}$, and $i_c = \frac{dq}{dt}$. The rate of change of electric flux produces a current called the DISPLACEMENT CURRENT, $i_d$. The flux due to the electric field is \[ \phi_E = EA\cos\theta = \overline{E} \cdot \overline{A} = \int \overline{E} \cdot \overline{ds} \] so that \[ i_{d} = \varepsilon_{0} \frac{d \phi_{E}}{dt} = \varepsilon_{0} A \frac{dE}{dt} \] When a variable electric field is applied to the gap, the displacement current is $i_d = A \varepsilon_0 \frac{dE}{dt}$. When a variable potential difference is applied to the plates of a condenser $C$, then \[ i_{d} = C \frac{dv}{dt} \] and \[ i_{c} = i_{d} \Rightarrow i_{c} = \frac{V}{X_{c}} = V \omega C \] where $i_c$ is the conduction current and $i_d$ the displacement current. The generalised Ampère's circuital law is \[ \oint \overrightarrow{B} \cdot \overrightarrow{dl} = \mu_{0} \left(i_{c} + i_{d}\right) = \mu_{0} i_{c} + \mu_{0} \varepsilon_{0} \frac{d \phi_{E}}{dt} \] Inside the gap, at a distance $r$ from the axis ($r < R$), \[ B = \frac{\mu_{0}}{2 \pi} i_{d} \frac{r}{R^{2}} \] and the magnetic field at a distance $R$ from the axis is \[ B = \frac{\mu_{0}}{2 \pi} \frac{i_{d}}{R} \] This is the maximum value. GAUSS LAW FOR ELECTRICITY: \[ \oint \overrightarrow{E} \cdot \overrightarrow{dA} = q_{net}/ \varepsilon_{0} \] GAUSS LAW FOR MAGNETISM: \[ \oint \overrightarrow{B} \cdot \overrightarrow{dA} = 0 \] FARADAY'S LAW: \[ \oint \overrightarrow{E} \cdot \overrightarrow{dl} = - \frac{d \phi_{B}}{dt} \] AMPERE-MAXWELL LAW: \[ \oint \overrightarrow{B} \cdot \overrightarrow{dl} = \mu_{0}i_{c} + \mu_{0} \varepsilon_{0} \frac{d \phi_{E}}{dt} = \mu_{0} \left(i_{c} + i_{d}\right) \] The force acting on a charge $q$ moving in electric and magnetic fields, which exist simultaneously in an EM wave, is \[ \overline{F} = q \left[ \overline{E} + \overline{V} \times \overline{B}\right] \] This force is the Lorentz force. 1. According to Ampère's circuital law, the magnetic field $B$ is related to a steady current $I$ by \( \oint_{c} \overrightarrow{B}\cdot \overrightarrow{dl} = \mu_{0}I \) 2. The total current is the sum of the conduction current and the displacement current. The generalised law is \( \oint \overrightarrow{B}\cdot \overrightarrow{dl} = \mu_{0}i_{c} + \mu_{0} \varepsilon_{0}\frac{d \phi_{E}}{dt} \) 3. \( \oint \overrightarrow{E}\cdot \overrightarrow{dA} = Q/\varepsilon_{0} \) (Gauss's law for electricity) 4. \( \oint \overrightarrow{B}\cdot \overrightarrow{dA} = 0 \) (Gauss's law for magnetism) 5. \( \oint \overrightarrow{E}\cdot \overrightarrow{dl} = \frac{-d \phi_{B}}{dt} \) (Faraday's law)
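As a small numerical sketch (the plate area and current are assumed values), the relation $i_d = \varepsilon_0 A\, \frac{dE}{dt}$ in a charging parallel-plate capacitor reproduces the conduction current feeding the plates, which is why the generalised Ampère law is consistent across the gap:

```python
# parallel-plate capacitor: E = q / (eps0 * A), so
# i_d = eps0 * A * dE/dt = dq/dt = the conduction current i_c
eps0 = 8.854e-12   # permittivity of free space, F/m
A = 1e-2           # plate area in m^2 (assumed value)
i_c = 2e-3         # conduction current charging the plates, A (assumed value)

dE_dt = i_c / (eps0 * A)     # from E = q / (eps0 * A)
i_d = eps0 * A * dE_dt       # displacement current in the gap

assert abs(i_d - i_c) < 1e-12
```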
Let $\Gamma$ be a finite abelian group, and let $P$ be the polytope in $\mathbb{R}^\Gamma$ defined to be the set of points $x$ satisfying the following inequalities: $$\begin{array}{cl} \sum_{g\in G} x_g \le |G| & \forall G \le \Gamma \\ x_g \ge 0 & \forall g \in \Gamma \end{array}$$ where $G \le \Gamma$ means $G$ is a subgroup of $\Gamma$. Is $P$ integral? If so, can we characterize its vertices?

My question originally arose with $\Gamma = \mathbb{F}_2^n$, where some small examples ($n = 2,3$) suggest that the answer is "yes" and "maybe, but it's not simple". I also tried the cyclic groups on 9 and 10 elements, as well as $\mathbb{F}_3^2$, where again the polytope is integral. The polytope is not integral when $\Gamma$ is any of $S_3$, $D_4$, and $D_5$, so abelianness is apparently essential.

I should mention that if you write the first set of inequalities as $Ax \ge b$, then $A$ is not necessarily totally unimodular (which would imply the polytope is integral). When $\Gamma = \mathbb{F}_2^3$, you can choose three linearly independent $g$ and take the three $G$'s spanned by each pair of the selected elements $g$. The resulting submatrix is $$\begin{bmatrix}0&1&1\\1&0&1\\1&1&0\end{bmatrix}$$ up to permutation, and so has determinant $\pm 2$.

It's easy (if tedious) to characterize the vertices for prime-order groups and observe that they're integral. I'm pretty sure this can be extended to cyclic groups of prime-power order. I'm not sure what happens when taking products. This system is very reminiscent of those defining polymatroids, but rather than a submodular set function, the constraints are given by a "subgroup function" that I suspect is 'submodular' once that's been defined the right way. Still, the techniques for showing certain polymatroids are integral might work here, too, but I don't see how.
Also, Fourier analysis may be relevant: when $\Gamma = \mathbb{F}_2^n$, it seems that the vertices maximizing $\sum_g x_g$ are exactly the point with $x_g = 1$ for all $g$, together with the points with $x_g = 1 - \chi_S(g)$, where $\chi_S$ is the $S$-th Fourier character (following standard notation from the analysis of boolean functions) and $S$ is nonempty. (When $S$ is empty, the corresponding point is $x_g = 0$, which is also a vertex.)
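To make the non-unimodularity claim above concrete, here is a small sketch (helper names are mine) that builds the cited $3\times 3$ submatrix for $\Gamma = \mathbb{F}_2^3$ and checks its determinant:

```python
def span(gens):
    # Subgroup of F_2^3 generated by the given tuples (addition mod 2).
    S = {(0, 0, 0)}
    changed = True
    while changed:
        new = {tuple((a + b) % 2 for a, b in zip(s, g))
               for s in S for g in list(gens) + list(S)}
        changed = not new <= S
        S |= new
    return S

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
# The three subgroups spanned by each pair of the independent generators:
subgroups = [span([e2, e3]), span([e1, e3]), span([e1, e2])]
# Rows: the three subgroup constraints; columns: coordinates x_{e1}, x_{e2}, x_{e3}.
M = [[1 if g in G else 0 for g in (e1, e2, e3)] for G in subgroups]
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
print(M)    # [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(det)  # 2, so this submatrix rules out total unimodularity
```
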
Design and Performance of the ICON EUV Spectrograph

Abstract We present the design, implementation, and on-ground performance measurements of the Ionospheric Connection Explorer EUV spectrometer, ICON EUV, a wide-field (\(17^{\circ}\times 12^{\circ}\)) extreme ultraviolet (EUV) imaging spectrograph designed to observe the lower ionosphere at tangent altitudes between 100 and 500 km. The primary targets of the spectrometer, which has a spectral range of 54–88 nm, are the O II emission lines at 61.6 nm and 83.4 nm. Its design, using a single optical element, permits an imaging resolution perpendicular to the spectral dispersion direction with a large (\(12^{\circ}\)) acceptance parallel to the dispersion direction, while providing a slit-width-dominated spectral resolution of \(R\sim25\) at 58.4 nm. Pre-flight calibration shows that the instrument has met all of the science performance requirements.

Keywords: Extreme ultraviolet · Instrumentation · Ionosphere · Spectrograph

Acknowledgements We thank Carl Dobson for systems support, Steve Marker for keeping the vacuum facility operational, Christopher Scholz for technical support, Paul Turin and David Pankow who provided useful advice, and two anonymous referees who provided constructive critiques. ICON is supported by NASA's Explorers Program through contracts NNG12FA45C and NNG12FA42I. This project utilizes data from the NIST Atomic Spectra Database (NIST 2015). Special thanks to EUVester the pug, who made boring meetings practically tolerable.

References
M.W. Davis, G.R. Gladstone, T.K. Greathouse, D.C. Slater, M.H. Versteeg, K.B. Persson, G.S. Winters, S.C. Persyn, J.S. Eterno, in UV/Optical/IR Space Telescopes and Instruments: Innovative Technologies and Concepts V. Proc. SPIE, vol. 8146 (2011), p. 814604. doi:10.1117/12.894274
J. Edelstein, K.W. Min, W. Han, E.J. Korpela, K. Nishikida, B.Y. Welsh, C. Heiles, J. Adolfo, M. Bowen, W.M. Feuerstein, K. McKee, J.T. Lim, K. Ryu, J.H. Shinn, U.W. Nam, J.H. Park, I.S. Yuk, H. Jin, K.I. Seon, D.H. Lee, E. Sim, Astrophys. J. 644, L153 (2006). doi:10.1086/505208
T.J. Immel et al., Space Sci. Rev. (2017, this issue)
R.L. Kelly, J. Palumbo, NRL Report No. 7599 (1973)
E.J. Korpela, J. Edelstein, P. Berg, M.S. Bowen, R. Chung, M. Feuerstein, W. Han, J.S. Hull, H. Jin, D.h. Lee, K.w. Min, U.w. Nam, K. Nishikida, J.g. Rhee, K. Ryu, K. Seon, B.Y. Welsh, I. Yuk, in Future EUV/UV and Visible Space Astrophysics Missions and Instrumentation. Proc. SPIE, vol. 4854, ed. by J.C. Blades, O.H.W. Siegmund (2003), p. 665. doi:10.1117/12.459970
A. Kramida, Y. Ralchenko, J. Reader, NIST ASD Team (2015). http://physics.nist.gov/asd
A. Liard, Final Control Report, Grating 54900269H (Horiba Jobin Yvon, Longjumeau Cedex, 2015)
A.W. Stephan, J.M. Picone, S.A. Budzien, R.L. Bishop, A.B. Christensen, J.H. Hecht, J. Geophys. Res. Space Phys. 117, A01316 (2012). doi:10.1029/2011JA016897
A.W. Stephan, E.J. Korpela, M.M. Sirk, S. England, T.J. Immel, Space Sci. Rev. (2017, this issue)
D.L. Windt, Private Communication, Reflective X-Ray Optics, LLC, New York, 2015
I've been trying to read Gross' paper on Heegner points on $X_0(N)$ and I am stuck on a few details. The definition he is working with is that a Heegner point is a pair $y=(E,E')$, where $E$ and $E'$ are elliptic curves admitting an isogeny with cyclic kernel of order $N$, and where $E$ and $E'$ both have complex multiplication by the order $\mathcal{O}$ of discriminant $D$ in an imaginary quadratic field $K$. Gross goes on to explain that we may assume the lattice for $E$ is a fractional ideal $\mathfrak{a}$ and the lattice for $E'$ is $\mathfrak{b}$, such that the ideal $\mathfrak{n}=\mathfrak{a}\mathfrak{b}^{-1}$ is a proper ideal of $\mathcal{O}$ whose quotient $\mathcal{O}/\mathfrak{n}$ is cyclic of order $N$.

It is the next line that I don't understand: "Such an ideal will exist if and only if there is a primitive binary quadratic form of discriminant $D$ which properly represents $N$...". The line goes on, but this is one of the things I'm stuck on. I've tried googling some notes/papers on binary quadratic forms, but I can't find anything that helps me understand what a binary quadratic form representing $N$ has to say about an order admitting a cyclic quotient. An explanation or a good reference would be much appreciated.

The second and, I think, more important part of my confusion comes a bit later in the same section: Gross goes on to explain that if we have such an $\mathfrak{n}$, we can construct a Heegner point as follows. Let $\mathfrak{a}$ be an invertible $\mathcal{O}$-submodule of $K$ and let $[\mathfrak{a}]$ denote its class in $Pic(\mathcal{O})$. Let $\mathfrak{n}$ be a proper $\mathcal{O}$-ideal with cyclic quotient of order $N$, and put $E=\mathbf{C}/\mathfrak{a}$, $E'=\mathbf{C}/\mathfrak{a}\mathfrak{n}^{-1}$. They are related by an obvious isogeny and thus determine a Heegner point, denoted $(\mathcal{O},\mathfrak{n},[\mathfrak{a}])$.
Next, given $y=(\mathcal{O},\mathfrak{n},[\mathfrak{a}])$, we can find its image in the upper half-plane by picking an oriented basis $\langle\omega_1,\omega_2\rangle$ of $\mathfrak{a}$ such that $\mathfrak{a}\mathfrak{n}^{-1}=\langle\omega_1,\omega_2/N\rangle$. Then $y$ corresponds to the orbit of $\tau = \omega_1/\omega_2$ under $\Gamma_0(N)$. Lastly, since $\tau\in K$, it satisfies $A\tau^2+B\tau+C=0$ for some integers $A,B,C$ with $\gcd(A,B,C)=1$. What I don't understand is that Gross claims $D=B^2-4AC$, $A=NA'$ for some $A'$, and $\gcd(A',B,NC)=1$. I don't see what the $\tau$ we cooked up has to do with the discriminant of our order.

I have read a paper that defined a Heegner point to be a quadratic imaginary point in the half-plane such that $\Delta(\tau)=\Delta(N\tau)$. I can see how this would help with part of the claim above, but I don't see why, in this situation, $\Delta(\tau)=\Delta(N\tau)$. In fact, everything I'm confused about here seems to reduce to the claim that $$D=\Delta(\tau)=\Delta(N\tau),$$ where $\Delta$ denotes the discriminant. Any insight into these two questions would be very appreciated.
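If I understand the equivalence in the first question correctly, one direction is explicit: any $B$ with $B^2 \equiv D \pmod{4N}$ yields the form $[N, B, (B^2-D)/4N]$, which has discriminant $D$ and properly represents $N$ at $(x,y)=(1,0)$. A small sketch of that search (the function name is mine, not Gross's):

```python
def heegner_form(D, N):
    # Find B with B^2 ≡ D (mod 4N); then (N, B, (B^2 - D) // (4N)) is a
    # binary quadratic form of discriminant D representing N at (1, 0).
    for B in range(2 * N):
        if (B * B - D) % (4 * N) == 0:
            return (N, B, (B * B - D) // (4 * N))
    return None  # no such B: D is not a square mod 4N

A, B, C = heegner_form(-7, 2)
print((A, B, C), B * B - 4 * A * C)  # (2, 1, 1) -7
```
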
Multiple solutions for a class of $(p_1, \ldots, p_n)$-biharmonic systems

1. Department of Mathematics, University of Tennessee at Chattanooga, Chattanooga, TN 37403, United States
2. Department of Mathematics, Faculty of Sciences, Razi University, Kermanshah 67149, Iran

Keywords: $(p_{1}, \ldots, p_{n})$-biharmonic systems, critical points, three solutions, variational methods, multiplicity results.

Mathematics Subject Classification: Primary: 35J65, 47J1.

Citation: John R. Graef, Shapour Heidarkhani, Lingju Kong. Multiple solutions for a class of $(p_1, \ldots, p_n)$-biharmonic systems. Communications on Pure & Applied Analysis, 2013, 12 (3): 1393-1406. doi: 10.3934/cpaa.2013.12.1393
We define a $2\times 2$ Givens rotation matrix as: $${\bf G}(\theta) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) &\cos(\theta) \end{bmatrix}.$$ On the other hand, we define a $2\times 2$ hyperbolic rotation matrix as: $${\bf H}(y)=\begin{bmatrix} \cosh( y) & \sinh( y) \\ \sinh( y) &\cosh( y) \end{bmatrix}.$$ I don't see why we qualify the matrix ${\bf H}$ as a rotation! Suppose we take a 2-D vector $x=[-3, 1]^T$ and we transform it using ${\bf G}(\theta)$, $\theta = 0,\dots, \pi/2$, and ${\bf H}(y)$, $y = -2,\dots, 2.5$. See below for the result. For me the Givens rotation clearly rotates the initial point around the point $[0,0]^T$, but for the hyperbolic rotation we see a bending, not a rotation, at least not around a fixed point (I checked other points and it's the same behavior, with different bending angles). Am I missing something?
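For what it's worth, the sense in which ${\bf H}$ deserves the name seems to be that it preserves the Minkowski form $x_1^2 - x_2^2$, so points slide along hyperbolas, exactly as ${\bf G}$ preserves the Euclidean form $x_1^2 + x_2^2$ and slides points along circles. A quick numerical check of both invariants (function names are mine):

```python
import math

def givens(theta, v):
    # Apply G(theta) to the 2-vector v
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def hyperbolic(y, v):
    # Apply H(y) to the 2-vector v
    c, s = math.cosh(y), math.sinh(y)
    return (c * v[0] + s * v[1], s * v[0] + c * v[1])

v = (-3.0, 1.0)       # the vector from the question
g = givens(0.7, v)
h = hyperbolic(1.3, v)
# G preserves the Euclidean form x1^2 + x2^2 = 9 + 1 = 10 ...
print(round(g[0]**2 + g[1]**2, 6))   # 10.0
# ... while H preserves the Minkowski form x1^2 - x2^2 = 9 - 1 = 8
print(round(h[0]**2 - h[1]**2, 6))   # 8.0
```
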
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."

Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field.

"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment."

So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that holds 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight force of the bottle equals the buoyancy force. For the buoyancy do I compute: density of water × volume of water displaced × gravitational acceleration? So: mass of bottle × gravity = volume of water displaced × density of water × gravity?

@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$?
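The buoyancy condition in the salt-bottle message above (mass of bottle × g = volume displaced × density of water × g) can be sketched as follows. The bottle's volume is not given in the chat, so the 60 cm³ below is a purely hypothetical value, and the helper name is mine:

```python
RHO_WATER = 1.0  # density of water, g/cm^3

def salt_to_remove(bottle_mass_g, bottle_volume_cm3):
    # Neutral buoyancy: m * g = rho * V * g, so the g's cancel and the
    # target mass is rho * V; any excess salt must come out.
    target_mass = RHO_WATER * bottle_volume_cm3
    return max(0.0, bottle_mass_g - target_mass)

# If the 83 g bottle had a volume of 60 cm^3 (hypothetical):
print(salt_to_remove(83.0, 60.0))  # 23.0 grams of salt to remove
```
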
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat, commonly used in the Mathematics chat room. An altern...

You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-)

@JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth.

Anonymous: Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P

I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else, I'm not sure.

Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from experts in the fields he tried to comment on. I personally do not know much about postmodernist philosophy, so I shall not comment on it myself. I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would lean towards that idea. I do too.
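The interior potential quoted in the SHM exchange above, $U(r)=\frac{1}{2}\frac{mg}{R}r^2$, follows from the linear field inside a uniform sphere; a one-line sketch of the derivation (assuming uniform density, which is the usual idealization in this problem):

```latex
\[
  g(r) = g\,\frac{r}{R}
  \quad\Longrightarrow\quad
  U(r) = \int_0^r m\,g\,\frac{r'}{R}\,dr' = \frac{1}{2}\,\frac{mg}{R}\,r^2 ,
\]
% i.e. SHM with effective spring constant $k = mg/R$,
% taking the zero of potential at the centre.
```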