I have a basic question: if we use 1d string to replace 0d particle to gain insight of nature in string theory, and advanced to use 2d membranes, can we imagine that using $3$- or $n$-dimensional blocks/objects/branes as basic units in physics theory? Where is the end of this expansion?
Not only can there be, there have to be heavy higher-dimensional objects (for example D-branes) in string theory, as Joseph Polchinski discovered. So it is strictly speaking no longer appropriate to talk about "string theory": M-theory is now known to relate all the different string theories known before by dualities, and it contains these higher-dimensional objects (from point-like D0-branes up to space-filling D9-branes, if spacetime is 10D).
One way to see why these higher-dimensional objects have to be there is that T-duality transforms (among other things) the von Neumann boundary condition of a freely floating open string, one that is not stuck on anything, into the Dirichlet boundary condition, which means that the endpoints are fixed. So there has to be something the strings can stick to; these objects are called D-branes, and they can be higher dimensional. That's the way Lenny Susskind introduces D-branes in the last lecture of his string theory course.
D-branes can, among other things, be used to model the interactions of the Standard Model. For example, QCD can be described by 3 D-branes, one for each color.
Mesons are then strings whose two ends need not lie on the same "color" brane; quarks and anti-quarks are distinguished by the orientation of the string. Interactions take place when strings break, leaving new endpoints on a brane, and when two endpoints come together.
There are, actually. Dilaton already covered the reason through T-duality, so I will discuss the requirement of $p$-branes imposed by Ramond-Ramond potentials.
The worldsheet of a string can couple to a Neveu-Schwarz B-field: $$q\int_{}^{} {{{h^{ab}}}\frac{{\partial {X^\mu }}}{{\partial {\xi ^a}}}\frac{{\partial {X^\nu }}}{{\partial {\xi ^b}}}B_{\mu \nu }\sqrt { - \det {h_{ab}}} {{\text{d}}^2}\xi } $$
($q$ is the electric charge) The worldsheet of a string can couple to graviton field (spacetime metric): $$m\int_{}^{} {{{h^{ab}}}\frac{{\partial {X^\mu }}}{{\partial {\xi ^a}}}\frac{{\partial {X^\nu }}}{{\partial {\xi ^b}}}g_{\mu \nu }\sqrt { - \det {h_{ab}}} {{\text{d}}^2}\xi } $$
You can change the "$m$" to any form you like, in terms of the tension, the Regge slope parameter, the string length, etc.
For a dilaton field, $${q }\ell _P^2\int_{}^{} {\Phi R\sqrt { - \det {h_{\alpha \beta }}} {\text{ }}{{\text{d}}^2}\xi } $$ Ignore conformal invariance for the time being.
But what about Ramond-Ramond potentials? All is fine with the Ramond-Ramond fields, but the Ramond-Ramond potentials $C_k$ are associated with the Ramond-Ramond field $A_{k+1}$, and it is clear that they can't couple similarly to the worldsheet. They can, however, couple to a higher-dimensional worldvolume: $${q_{{\text{RR}}}}\int_{}^{} {C_{{\mu _1}...{\mu _p}}^{p + 1}\frac{{\partial {x^{{\mu _1}}}}}{{\partial {\xi ^{{a_1}}}}}...\frac{{\partial {x^{{\mu _p}}}}}{{\partial {\xi ^{{a_p}}}}}{h^{{a_0}...{a_p}}}\sqrt { - \det {h^{{a_0}...{a_p}}}} {{\text{d}}^{p + 1}}\xi } $$
This requires membranes and other higher-dimensional objects. It's interesting to note that while the 10-dimensional string theories permit all sorts of these branes, M-theory permits only 2- and 5-dimensional branes (the M2- and M5-branes).
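As a dimension-counting aside (my own schematic notation, not taken from the answer above): a $k$-form gauge potential couples naturally to a $k$-dimensional worldvolume, which is why a $(p+1)$-form Ramond-Ramond potential demands a $p$-brane to couple to:

```latex
% point particle (1d worldline) coupling to a 1-form gauge potential A
S_1 = q \int_{\mathrm{worldline}} A_\mu \,\mathrm{d}x^\mu ,
% string (2d worldsheet) coupling to the 2-form B-field
S_2 = q \int_{\mathrm{worldsheet}} \tfrac{1}{2}\, B_{\mu\nu}\,
      \mathrm{d}x^\mu \wedge \mathrm{d}x^\nu ,
% Dp-brane ((p+1)-dimensional worldvolume) and a (p+1)-form RR potential
S_{p+1} = q_{\mathrm{RR}} \int_{\mathrm{worldvolume}} C_{p+1} .
```

Each integral only makes sense when the rank of the form matches the dimension of the object it is integrated over, so the tower of RR potentials forces a tower of branes.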
|
Reference: Invariance in a class of operations related to weighted quasi-geometric means
Devillet, Jimmy (University of Luxembourg, Mathematics Research Unit); Matkowski, Janusz. E-print/working paper, first made available on ORBilu 29 Sep 2018. http://hdl.handle.net/10993/36748
Keywords: invariant functions; mean; invariant mean; reflexivity; iteration; functional equation
Abstract: Let $I\subset (0,\infty )$ be an interval that is closed with respect to multiplication. The operations $C_{f,g}\colon I^{2}\rightarrow I$ of the form
\begin{equation*}
C_{f,g}\left( x,y\right) =\left( f\circ g\right) ^{-1}\left( f\left(
x\right) \cdot g\left( y\right) \right) \text{,}
\end{equation*}
where $f,g$ are bijections of $I$, are considered. Their connections with generalized weighted quasi-geometric means are presented. It is shown that the invariance question within this class of operations leads to means of iterative type and to a problem on a composite functional equation. An application of the invariance identity to determine effectively the limit of the sequence of iterates of some generalized quasi-geometric mean-type mapping, and the form of all continuous functions which are invariant with respect to this mapping, are given. The equality of the two considered operations is also discussed.
Funding: University of Luxembourg (UL); Fonds National de la Recherche (FNR)
|
So first let me state my homework problem:
Let $X$ be a set, let $\{A_k\}$ be a sequence of subsets of $X$, let $B = \bigcup_{n=1}^{+\infty} \bigcap_{k=n}^{+\infty} A_k$, and let $C = \bigcap_{n=1}^{+\infty} \bigcup_{k=n}^{+\infty} A_k$. Show that (a) $\liminf_k\; \chi_{A_k} = \chi_B$, and (b) $\limsup_k \;\chi_{A_k} = \chi_C.$
I know that, in the context I am familiar with, that $$\liminf_{k\rightarrow +\infty}\; X_k = \bigcup_{k=1}^{+\infty} \bigcap_{n=k}^{+\infty} X_n$$ and $$\limsup_{k\rightarrow +\infty}\;X_k = \bigcap_{k=1}^{+\infty} \bigcup_{n=k}^{+\infty} X_n.$$
I also know that the characteristic (indicator) function is defined as $\chi_A(x) = \begin{cases} 1, & x \in A \\ 0, & x \notin A .\end{cases}$
So I wrote out $B$ in some of its `glory': $B= (A_1 \cap A_2 \cap A_3 \cap \cdots) \cup (A_2 \cap A_3 \cap \cdots) \cup (A_3 \cap A_4 \cap \cdots) \cup \cdots$. The first intersection is the smallest, and the intersections grow as we move to the right, so $B$ is the union of an increasing sequence of sets.
So if I replace the $X_k$'s above with $\chi$'s, I still don't see how I can get the correct answer, even though the definition of $B$ and that of $\liminf$ look basically the same, except that here we are dealing with the $\chi$'s.
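To build intuition, here is a small finite sketch I put together (my own construction: an alternating, periodic sequence of sets, so the tail unions and intersections stabilize and truncating them at a finite $N$ is harmless):

```python
X = set(range(30))

def A(k):
    # alternating sequence: even k -> multiples of 2, odd k -> multiples of 3
    m = 2 if k % 2 == 0 else 3
    return {x for x in X if x % m == 0}

N = 50  # truncation point; the tails stabilize for this periodic sequence

# B = union over n of the tail intersections, C = intersection of tail unions
B = set().union(*(set.intersection(*(A(k) for k in range(n, N)))
                  for n in range(1, N - 1)))
C = set.intersection(*(set().union(*(A(k) for k in range(n, N)))
                       for n in range(1, N - 1)))

chi = lambda S: {x: int(x in S) for x in X}

# pointwise liminf/limsup of the indicator sequence; min/max over one stretch
# of indices suffice here because chi_{A_k}(x) is periodic in k
liminf_chi = {x: min(int(x in A(k)) for k in range(1, N)) for x in X}
limsup_chi = {x: max(int(x in A(k)) for k in range(1, N)) for x in X}

print(liminf_chi == chi(B), limsup_chi == chi(C))  # True True
```

Here $B$ comes out as the multiples of 6 (membership in *all* sufficiently late $A_k$) and $C$ as the multiples of 2 or 3 (membership in *infinitely many* $A_k$), matching the pointwise liminf and limsup of the indicators.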
Any direction would be greatly appreciated. By the way, I have checked out limsup and liminf of a sequence of subsets of a set but I was somewhat confused by the topology, the meets/joins, etc.
Thanks much, Nate
|
I'm working on an introductory qm project, hope somebody has the time to help me (despite the length of this post), it will be highly appreciated.
My goal is to determine the bound states and their energies for the potential
$V_j(x) = -\frac{\hslash^2a^2}{2m}\frac{j(j+1)}{\cosh^2(ax)}$
for any positive value of $j$ (I think I'm supposed to show that $j$ has to be an integer at some point, but I don't know). By a change of variable, $ax\rightarrow y$, I have rewritten the Schrödinger equation for this potential as:
$-u''(y)-\frac{j(j+1)}{\cosh^2(y)}u(y) = Eu(y).$
I will denote the corresponding Hamiltonian $H_j$. I have shown that the ground state for $V_j(x)$ is $\psi_0(x) = A\cosh(ax)^{-j}$ (this is all correct, has been verified). Now here is where serious problems begin. I am given these ladder operators:
$a_{j+} = p + ij\tanh(y)$
$a_{j-} = p - ij\tanh(y)$
where $p = -i\frac{d}{dy}$ (no $\hbar$). I have shown that $H_j = a_{j+}a_{j-} - j^2$ and $H_{j-1} = a_{j-}a_{j+} - j^2$, and I have used this to show that if $|\psi\rangle$ is an eigenstate of $H_j$, then $a_{j-}|\psi\rangle$ is an eigenstate of $H_{j-1}$ and $a_{(j+1)+}|\psi\rangle$ is an eigenstate of $H_{j+1}$, both with the same eigenvalue (energy). Thus the purpose of the $a$-operators is to carry an eigenfunction for the potential characterized by the value $j$ (that is, of $H_j$) over to an eigenfunction for the potential characterized by $j-1$ or $j+1$ (of $H_{j-1}$ or $H_{j+1}$).
Now I should be able to find all bound states and their energies (I have been told that there is only one bound state for every (integer?) value of j, so this amounts to showing that the ground state I found earlier is the only bound state there is). I'm clueless on this last step.
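For what it's worth, the operator algebra can be checked symbolically (a sketch with sympy; the helper names are mine, and the operators are transcribed from the problem statement above, with no claim about normalization):

```python
import sympy as sp

y = sp.Symbol('y')
j = sp.Symbol('j', positive=True)
psi0 = sp.cosh(y)**(-j)  # claimed ground state, up to normalization

def a_minus(f):
    # a_{j-} = p - i*j*tanh(y), with p = -i d/dy
    return -sp.I*sp.diff(f, y) - sp.I*j*sp.tanh(y)*f

def a_plus(f):
    # a_{j+} = p + i*j*tanh(y)
    return -sp.I*sp.diff(f, y) + sp.I*j*sp.tanh(y)*f

# a_{j-} annihilates the ground state...
print(sp.simplify(a_minus(psi0)))  # 0

# ...so H_j psi0 = (a_{j+} a_{j-} - j^2) psi0 = -j^2 psi0, i.e. E_0 = -j^2
H_psi0 = sp.simplify(a_plus(a_minus(psi0)) - j**2*psi0)
print(sp.simplify(H_psi0 / psi0))  # -j**2
```

This confirms the ground-state energy $E_0 = -j^2$ (in the dimensionless units of the rewritten equation), which is the starting point for chasing the ladder down to $H_0$.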
|
RGPV First Year Engineering (Set A) (Semester 1)
Basic Electricals & Electronics Engg. May 2013
Basic Electricals & Electronics Engg.
May 2013
Total marks: --
Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
Distinguish the following
1 (a) (i) 3 phase balanced and unbalanced supply with phasor diagram
3 M
Answer any one question from Q1 & Q2
1 (a) (ii) Voltage source and current source
4 M
1 (b) (i) Obtain current I in the given R-L-C parallel circuit under resonant condition. Justify your answer.
3 M
1 (b) (ii) Obtain the resultant voltage when two sources of emf, \[ e_1 = 100 \sin \omega t \quad\text{and}\quad e_2=100 \sin \left( \omega t - \dfrac {\pi}{6} \right ), \] are connected in series. If the resultant voltage is applied to a circuit of impedance (8 + j3) Ω, calculate the active power supplied to the impedance.
4 M
2 (a) Find the average and RMS value of the following waveforms. Also calculate form factor and peak factor of the same.
7 M
2 (b) Explain Thevenin's & superposition theorems, giving an application example for each.
7 M
Answer any one question from Q3 & Q4
3 (a) Explain basic principle of operation of a transformer. Draw an equivalent circuit of single phase transformer.
7 M
3 (b) A single phase transformer rated 570 watt has an efficiency of 95 percent when working at full load and half full load, both at unity PF. Calculate its efficiency at 75 percent of full load.
7 M
Specify the following w.r.t. transformer.
4 (a) (i) All day efficiency
3 M
4 (a) (ii) Losses in the transformer
4 M
4 (b) How will you determine the transformer losses in the laboratory?
7 M
Answer any one question from Q5 & Q6
5 (a) Draw Torque-slip characteristics of three phase induction motor and explain its stable and unstable region of operation
7 M
Answer the following w.r.t. induction motor
5 (b) (i) What is the frequency of rotor currents of an induction motor?
3 M
5 (b) (ii) Why is an induction motor called asynchronous?
2 M
5 (b) (iii) What do you mean by space phase difference?
2 M
6 (a) State the types of dc motors. Discuss constructional details of any type of dc motor.
7 M
6 (b) A shunt generator delivers 50 kW at 250 V and 400 r.p.m. The armature and field resistances are 0.027 Ω and 50 Ω respectively. Calculate the speed of the machine running as a shunt motor and taking 50 kW input at 250 V.
7 M
Answer the following (Any 04):
7 (a) What are logic gates? List the different types of logic gates.
4 M
7 (b) What is an EX-NOR Gate? Write its truth table.
3 M
7 (c) Verify that the following operations are commutative and associative.
i) AND ii) OR iii) EX-OR
4 M
7 (d) Implement the following logic expressions with logic gates:
Y=ABC + AB + BC
Y=ABC (D + EF)
3 M
7 (e) Design a full adder circuit using NAND gates.
4 M
7 (f) State and explain De Morgan's theorem.
3 M
7 (g) How will you convert a decimal number into octal?
4 M
Answer the following (Any 04)
8 (a) Draw Input/Output characteristic of a transistor in CE configuration.
4 M
8 (b) Discuss DC biasing of BJT
3 M
8 (c) How can a BJT be used as:
i) Switch ii) Inverter
4 M
8 (d) Discuss the V-I characteristic of a P-N diode.
3 M
8 (e) Which transistor configuration (CC, CB or CE) is suitable for an amplifier, and why?
4 M
8 (f) Differentiate between intrinsic & extrinsic semiconductors.
3 M
More question papers from Basic Electricals & Electronics Engg.
|
Let $X$ be a Hausdorff space that is locally compact at $x \in X$. Show that for each open nbd $U$ of $x$ there exists an open nbd $V$ of $x$ such that $\overline{V}$ is compact and $\overline{V} \subset U$.
My work:
Since $X$ is Hausdorff and locally compact, $X$ is regular. Let $U$ be an open nbd of $x$. By assumption $X$ is locally compact, so there exists some open nbd $W$ of $x$ such that $\overline{W}$ is compact. Now consider the open set $W \cap U$; this is non-empty since $x$ lies in the intersection. By regularity, find an open set $V$ such that:
$x\in V \subset \overline{V} \subset W \cap U$
Then in particular $\overline{V} \subset U$. But also $\overline{V} \subset W \subset \overline{W}$. Since $\overline{W}$ is compact then $\overline{V}$ is a closed subset of a compact set, hence compact.
Is the above OK? Thank you.
|
Let $n$ be a nonnegative integer, and let $B$ be the $n \times n$-matrix (over the rational numbers) whose $\left(i, j\right)$-th entry is $\dbinom{n+1}{2j-i}$ for all $i, j \in \left\{ 1, 2, \ldots, n \right\}$.
For example, if $n = 5$, then \begin{equation} B = \left(\begin{array}{rrrrr} 6 & 20 & 6 & 0 & 0 \\ 1 & 15 & 15 & 1 & 0 \\ 0 & 6 & 20 & 6 & 0 \\ 0 & 1 & 15 & 15 & 1 \\ 0 & 0 & 6 & 20 & 6 \end{array}\right) . \end{equation}
Question 1. Prove that the eigenvalues of $B$ are $2^1, 2^2, \ldots, 2^n$. (I know how to do this -- I'll write up the answer soon -- but there might be other approaches too.)
Question 2. Find a left eigenvector for each of these eigenvalues. What I know is that the row vector $v$ whose $i$-th entry is $\left(-1\right)^{i-1} \dbinom{n-1}{i-1}$ (for $i \in \left\{1,2,\ldots,n\right\}$) is a left eigenvector for the eigenvalue $2^1$ (that is, $v B = 2 v$). But the other left eigenvectors are a mystery to me.
Question 3. Find a right eigenvector for each of these eigenvalues. For example, it appears to me that the column vector $w$ whose $i$-th entry is $\left(-1\right)^{i-1} / \dbinom{n-1}{i-1}$ (for $i \in \left\{1,2,\ldots,n\right\}$) is a right eigenvector for the eigenvalue $2^1$ (that is, $B w = 2 w$). This (if correct) boils down to the identity \begin{equation} \sum_{k=1}^n \left(-1\right)^{k-1} \left(k-1\right)! \left(n-k\right)! \dbinom{n+1}{2k-i} = 2 \left(-1\right)^{i-1} \left(i-1\right)! \left(n-i\right)! \end{equation} for all $i \in \left\{1,2,\ldots,n\right\}$. Note that the entries of $w$ are the reciprocals of the corresponding entries of $v$! Needless to say, this pattern doesn't persist, but maybe there are subtler patterns.
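The claims in Questions 1-3 are easy to sanity-check numerically for small $n$ (a quick sketch; `binom_matrix` is a helper name I made up):

```python
import math
import numpy as np

def binom_matrix(n):
    """The n-by-n matrix B with (i,j) entry C(n+1, 2j-i)."""
    def entry(i, j):
        k = 2*j - i
        return math.comb(n + 1, k) if k >= 0 else 0  # comb rejects negative k
    return np.array([[entry(i, j) for j in range(1, n + 1)]
                     for i in range(1, n + 1)], dtype=float)

n = 5
B = binom_matrix(n)

eigs = np.sort(np.linalg.eigvals(B).real)
print(eigs)  # approximately 2, 4, 8, 16, 32

# left eigenvector v_i = (-1)^(i-1) C(n-1, i-1) for the eigenvalue 2
v = np.array([(-1)**i * math.comb(n - 1, i) for i in range(n)], dtype=float)
print(np.allclose(v @ B, 2 * v))  # True

# conjectured right eigenvector w_i = (-1)^(i-1) / C(n-1, i-1)
w = np.array([(-1)**i / math.comb(n - 1, i) for i in range(n)], dtype=float)
print(np.allclose(B @ w, 2 * w))  # True
```

For $n=5$ this reproduces the displayed matrix and confirms both the eigenvalue list and the two eigenvectors for $2^1$; of course this is only numerical evidence, not a proof.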
I am going to put up an answer to Question 1 soon, as a stepping stone for the proof of https://math.stackexchange.com/questions/2886392 , but this shouldn't keep you from adding your ideas or answers.
|
Here is a closely related pair of examples from operator theory, von Neumann's inequality and the theory of unitary dilations of contractions on Hilbert space, where things work for 1 or 2 variables but not for 3 or more.
In one variable, von Neumann's inequality says that if $T$ is an operator on a (complex) Hilbert space $H$ with $\|T\|\leq1$ and $p$ is in $\mathbb{C}[z]$, then $\|p(T)\|\leq\sup\{|p(z)|:|z|=1\}$. Szőkefalvi-Nagy's dilation theorem says that (with the same assumptions on $T$) there is a unitary operator $U$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T^n=PU^n|_H$ for each positive integer $n$.
These results extend to two commuting variables, as Ando proved in 1963. If $T_1$ and $T_2$ are commuting contractions on $H$, Ando's theorem says that there are commuting unitary operators $U_1$ and $U_2$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T_1^{n_1}T_2^{n_2}=PU_1^{n_1}U_2^{n_2}|_H$ for each pair of nonnegative integers $n_1$ and $n_2$. This extension of Sz.-Nagy's theorem has the extension of von Neumann's inequality as a corollary: If $T_1$ and $T_2$ are commuting contractions on a Hilbert space and $p$ is in $\mathbb{C}[z_1,z_2]$, then $\|p(T_1,T_2)\|\leq\sup\{|p(z_1,z_2)|:|z_1|=|z_2|=1\}$.
Things aren't so nice in 3 (or more) variables. Parrott showed in 1970 that 3 or more commuting contractions need not have commuting unitary dilations. Even worse, the analogues of von Neumann's inequality don't hold for $n$-tuples of commuting contractions when $n\geq3$. Some have considered the problem of quantifying how badly the inequalities can fail. Let $K_n$ denote the infimum of the set of those positive constants $K$ such that if $T_1,\ldots,T_n$ are commuting contractions and $p$ is in $\mathbb{C}[z_1,\ldots,z_n]$, then $\|p(T_1,\ldots,T_n)\|\leq K\cdot\sup\{|p(z_1,\ldots,z_n)|:|z_1|=\cdots=|z_n|=1\}$. So von Neumann's inequality says that $K_1=1$, and Ando's Theorem yields $K_2=1$. It is known in general that $K_n\geq\frac{\sqrt{n}}{11}$. When $n>2$, it is not known whether $K_n\lt\infty$.
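The one-variable inequality is straightforward to illustrate numerically (a sketch of my own, with a random contraction and an arbitrary polynomial; the sup over the circle is approximated by sampling):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
T = A / np.linalg.norm(A, 2)   # spectral norm 1, so T is a contraction
coeffs = [1.0, -2.0, 0.5]      # p(z) = 1 - 2z + 0.5 z^2, chosen arbitrarily

def p_of_matrix(M):
    # evaluate p at a matrix argument (k = 0 term gives the identity)
    return sum(c * np.linalg.matrix_power(M, k) for k, c in enumerate(coeffs))

lhs = np.linalg.norm(p_of_matrix(T), 2)
zs = np.exp(1j * np.linspace(0, 2*np.pi, 2000, endpoint=False))
rhs = max(abs(sum(c * z**k for k, c in enumerate(coeffs))) for z in zs)
print(lhs <= rhs + 1e-9)  # True, as von Neumann's inequality demands
```

No finite experiment of this kind can exhibit the failure for three or more commuting contractions, of course; the known counterexamples are carefully constructed, not random.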
See Paulsen's book (2002) for more. On page 69 he writes:
The fact that von Neumann’s inequality holds for two commuting contractions
but not three or more is still the source of many surprising results and
intriguing questions. Many deep results about analytic functions come
from this dichotomy. For example, Agler [used] Ando’s theorem to deduce an
analogue of the classical Nevanlinna–Pick interpolation formula
for analytic functions on the bidisk. Because of the failure of a von
Neumann inequality for three or more commuting contractions, the analogous
formula for the tridisk is known to be false, and the problem of finding the
correct analogue of the Nevanlinna–Pick formula for polydisks
in three or more variables remains open.
|
The Annals of Mathematical Statistics, Volume 28, Number 3 (1957), 773-778.
On the Power of Optimum Tolerance Regions when Sampling from Normal Distributions
Abstract
In [1], optimum $\beta$-expectation tolerance regions were found by reducing the problem to that of solving an equivalent hypothesis testing problem. The regions produced when sampling from a $k$-variate normal distribution were found to be of similar $\beta$-expectation and optimum in the sense of minimax and most stringency. It is the purpose of this paper to discuss the "Power" or "Merit" of such regions, when sampling from the $k$-variate normal distribution. Let $X = (X_1, \cdots, X_n)$ be a random sample point in $n$ dimensions, where each $X_i$, is an independent observation, distributed by $N(\mu, \sigma^2)$. It is often desirable to estimate on the basis of such a sample point a region which contains a given fraction $\beta$ of the parent distribution. We usually seek to estimate the center 100$\beta$% of the parent distribution and/or the 100$\beta$% left-hand tail of the parent distribution.
Article information: Ann. Math. Statist., Volume 28, Number 3 (1957), 773-778. First available in Project Euclid: 27 April 2007. Permanent link: https://projecteuclid.org/euclid.aoms/1177706890. doi:10.1214/aoms/1177706890. MR90946. Zbl 0078.33401.
Guttman, Irwin. On the Power of Optimum Tolerance Regions when Sampling from Normal Distributions. Ann. Math. Statist. 28 (1957), no. 3, 773--778. doi:10.1214/aoms/1177706890. https://projecteuclid.org/euclid.aoms/1177706890
|
We have been exploring vectors and vector operations in three-dimensional space, and we have developed equations to describe lines, planes, and spheres. In this section, we use our knowledge of planes and spheres, which are examples of three-dimensional figures called surfaces, to explore a variety of other surfaces that can be graphed in a three-dimensional coordinate system.

Identifying Cylinders
The first surface we’ll examine is the cylinder. Although most people immediately think of a hollow pipe or a soda straw when they hear the word cylinder, here we use the broad mathematical meaning of the term. As we have seen, cylindrical surfaces don’t have to be circular. A rectangular heating duct is a cylinder, as is a rolled-up yoga mat, the cross-section of which is a spiral shape.
In the two-dimensional coordinate plane, the equation \( x^2+y^2=9\) describes a circle centered at the origin with radius \( 3\). In three-dimensional space, this same equation represents a surface. Imagine copies of a circle stacked on top of each other centered on the \(z\)-axis (Figure \(\PageIndex{1}\)), forming a hollow tube. We can then construct a cylinder from the set of lines parallel to the \(z\)-axis passing through the circle \( x^2+y^2=9\) in the \(xy\)-plane, as shown in the figure. In this way, any curve in one of the coordinate planes can be extended to become a surface.
Definition: cylinders and rulings
A set of lines parallel to a given line passing through a given curve is known as a cylindrical surface, or cylinder. The parallel lines are called rulings.
From this definition, we can see that we still have a cylinder in three-dimensional space, even if the curve is not a circle. Any curve can form a cylinder, and the rulings that compose the cylinder may be parallel to any given line (Figure \(\PageIndex{2}\)).
Example \( \PageIndex{1}\): Graphing Cylindrical Surfaces
Sketch the graphs of the following cylindrical surfaces.
a. \( x^2+z^2=25\)
b. \( z=2x^2−y\)
c. \( y=\sin x\)

Solution
a. The variable \( y\) can take on any value without limit. Therefore, the lines ruling this surface are parallel to the \(y\)-axis. The intersection of this surface with the \(xz\)-plane forms a circle centered at the origin with radius \( 5\) (see Figure \(\PageIndex{3}\)).
b. In this case, the equation contains all three variables —\( x,y,\) and \( z\)— so none of the variables can vary arbitrarily. The easiest way to visualize this surface is to use a computer graphing utility (Figure \(\PageIndex{4}\)).
c. In this equation, the variable \( z\) can take on any value without limit. Therefore, the lines composing this surface are parallel to the \(z\)-axis. The intersection of this surface with the \(xy\)-plane outlines the curve \( y=\sin x\) (Figure \(\PageIndex{5}\)).
Exercise \( \PageIndex{1}\):
Sketch or use a graphing tool to view the graph of the cylindrical surface defined by equation \( z=y^2\).
Hint
The variable \( x\) can take on any value without limit.
Answer
When sketching surfaces, we have seen that it is useful to sketch the intersection of the surface with a plane parallel to one of the coordinate planes. These curves are called traces. We can see them in the plot of the cylinder in Figure \(\PageIndex{6}\).
Definition: traces
The traces of a surface are the cross-sections created when the surface intersects a plane parallel to one of the coordinate planes.
Traces are useful in sketching cylindrical surfaces. For a cylinder in three dimensions, though, only one set of traces is useful. Notice, in Figure \(\PageIndex{6}\), that the trace of the graph of \( z=\sin x\) in the \(xz\)-plane is useful in constructing the graph. The trace in the \(xy\)-plane, though, is just a series of parallel lines, and the trace in the \(yz\)-plane is simply one line.
Cylindrical surfaces are formed by a set of parallel lines. Not all surfaces in three dimensions are constructed so simply, however. We now explore more complex surfaces, and traces are an important tool in this investigation.
Quadric Surfaces
We have learned about surfaces in three dimensions described by first-order equations; these are planes. Some other common types of surfaces can be described by second-order equations. We can view these surfaces as three-dimensional extensions of the conic sections we discussed earlier: the ellipse, the parabola, and the hyperbola. We call these graphs quadric surfaces.
Definition: Quadric surfaces and conic sections
Quadric surfaces are the graphs of equations that can be expressed in the form
\[Ax^2+By^2+Cz^2+Dxy+Exz+Fyz+Gx+Hy+Jz+K=0.\]
When a quadric surface intersects a coordinate plane, the trace is a conic section.
An ellipsoid is a surface described by an equation of the form \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}+\dfrac{z^2}{c^2}=1.\) Set \( x=0\) to see the trace of the ellipsoid in the \(yz\)-plane. To see the traces in the \(xy\)- and \(xz\)-planes, set \( z=0\) and \( y=0\), respectively. Notice that, if \( a=b\), the trace in the \(xy\)-plane is a circle. Similarly, if \( a=c\), the trace in the \(xz\)-plane is a circle and, if \( b=c\), then the trace in the \(yz\)-plane is a circle. A sphere, then, is an ellipsoid with \( a=b=c.\)
Example \( \PageIndex{2}\): Sketching an Ellipsoid
Sketch the ellipsoid
\[ \dfrac{x^2}{2^2}+\dfrac{y^2}{3^2}+\dfrac{z^2}{5^2}=1.\]
Solution
Start by sketching the traces. To find the trace in the \(xy\)-plane, set \( z=0: \dfrac{x^2}{2^2}+\dfrac{y^2}{3^2}=1\) (Figure \(\PageIndex{7}\)). To find the other traces, first set \( y=0\) and then set \( x=0.\)
Now that we know what traces of this solid look like, we can sketch the surface in three dimensions (Figure \(\PageIndex{8}\)).
The trace of an ellipsoid is an ellipse in each of the coordinate planes. However, this does not have to be the case for all quadric surfaces. Many quadric surfaces have traces that are different kinds of conic sections, and this is usually indicated by the name of the surface. For example, if a surface can be described by an equation of the form
\[ \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=\dfrac{z}{c}\]
then we call that surface an elliptic paraboloid. The trace in the \(xy\)-plane is an ellipse, but the traces in the \(xz\)-plane and \(yz\)-plane are parabolas (Figure \(\PageIndex{9}\)). Other elliptic paraboloids can have other orientations simply by interchanging the variables to give us a different variable in the linear term of the equation: \( \dfrac{x^2}{a^2}+\dfrac{z^2}{c^2}=\dfrac{y}{b}\) or \( \dfrac{y^2}{b^2}+\dfrac{z^2}{c^2}=\dfrac{x}{a}\).
Example \( \PageIndex{3}\): Identifying Traces of Quadric Surfaces
Describe the traces of the elliptic paraboloid \( x^2+\dfrac{y^2}{2^2}=\dfrac{z}{5}\).
Solution
To find the trace in the \(xy\)-plane, set \( z=0: x^2+\dfrac{y^2}{2^2}=0.\) The trace in the plane \( z=0\) is simply one point, the origin. Since a single point does not tell us what the shape is, we can move up the \(z\)-axis to an arbitrary plane to find the shape of other traces of the figure.
The trace in plane \( z=5\) is the graph of equation \( x^2+\dfrac{y^2}{2^2}=1\), which is an ellipse. In the \(xz\)-plane, the equation becomes \( z=5x^2\). The trace is a parabola in this plane and in any plane with the equation \( y=b\).
In planes parallel to the \(yz\)-plane, the traces are also parabolas, as we can see in Figure \(\PageIndex{10}\).
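These trace computations can also be reproduced symbolically (a small sketch using sympy, not part of the text's method):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# the elliptic paraboloid from the example: x^2 + y^2/2^2 = z/5
paraboloid = sp.Eq(x**2 + y**2/4, z/5)

# trace in the plane z = 5: substituting gives an ellipse
print(paraboloid.subs(z, 5))               # Eq(x**2 + y**2/4, 1)

# trace in the xz-plane (y = 0): solving for z exposes the parabola
print(sp.solve(paraboloid.subs(y, 0), z))  # [5*x**2]
```

Substituting a value for one variable and simplifying is exactly the trace-finding procedure described above, just mechanized.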
Exercise \( \PageIndex{2}\):
A hyperboloid of one sheet is any surface that can be described with an equation of the form \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}−\dfrac{z^2}{c^2}=1\). Describe the traces of the hyperboloid of one sheet given by equation \( \dfrac{x^2}{3^2}+\dfrac{y^2}{2^2}−\dfrac{z^2}{5^2}=1.\)
Hint
To find the traces in the coordinate planes, set each variable to zero individually.
Answer
The traces parallel to the \(xy\)-plane are ellipses and the traces parallel to the \(xz\)- and \(yz\)-planes are hyperbolas. Specifically, the trace in the \(xy\)-plane is ellipse \( \dfrac{x^2}{3^2}+\dfrac{y^2}{2^2}=1,\) the trace in the \(xz\)-plane is hyperbola \( \dfrac{x^2}{3^2}−\dfrac{z^2}{5^2}=1,\) and the trace in the \(yz\)-plane is hyperbola \( \dfrac{y^2}{2^2}−\dfrac{z^2}{5^2}=1\) (see the following figure).
Hyperboloids of one sheet have some fascinating properties. For example, they can be constructed using straight lines, such as in the sculpture in Figure \(\PageIndex{1a}\). In fact, cooling towers for nuclear power plants are often constructed in the shape of a hyperboloid. The builders are able to use straight steel beams in the construction, which makes the towers very strong while using relatively little material (Figure \(\PageIndex{1b}\)).
Example \( \PageIndex{4}\): Chapter Opener: Finding the Focus of a Parabolic Reflector
Energy hitting the surface of a parabolic reflector is concentrated at the focal point of the reflector (Figure \(\PageIndex{12}\)). If the surface of a parabolic reflector is described by equation \( \dfrac{x^2}{100}+\dfrac{y^2}{100}=\dfrac{z}{4},\) where is the focal point of the reflector?
Solution
Since \( z\) is the first-power variable, the axis of the reflector corresponds to the \(z\)-axis. The coefficients of \( x^2\) and \( y^2\) are equal, so the cross-section of the paraboloid perpendicular to the \(z\)-axis is a circle. We can consider a trace in the \(xz\)-plane or the \(yz\)-plane; the result is the same. Setting \( y=0\), the trace is a parabola opening up along the \(z\)-axis, with standard equation \( x^2=4pz\), where \( p\) is the focal length of the parabola. In this case, this equation becomes \( x^2=100⋅\dfrac{z}{4}=4pz\), or \( 25=4p\). So \( p\) is \( 6.25\) m, which tells us that the focus of the paraboloid is \( 6.25\) m up the axis from the vertex. Because the vertex of this surface is the origin, the focal point is \( (0,0,6.25).\)
Seventeen standard quadric surfaces can be derived from the general equation
\[Ax^2+By^2+Cz^2+Dxy+Exz+Fyz+Gx+Hy+Jz+K=0.\]
The following figures summarize the most important ones.
Example \( \PageIndex{5}\): Identifying Equations of Quadric Surfaces
Identify the surfaces represented by the given equations.
\( 16x^2+9y^2+16z^2=144\) \( 9x^2−18x+4y^2+16y−36z+25=0\) Solution
a. The \( x,y,\) and \( z\) terms are all squared, and are all positive, so this is probably an ellipsoid. However, let’s put the equation into the standard form for an ellipsoid just to be sure. We have
\[ 16x^2+9y^2+16z^2=144. \nonumber\]
Dividing through by 144 gives
\[ \dfrac{x^2}{9}+\dfrac{y^2}{16}+\dfrac{z^2}{9}=1. \nonumber\]
So, this is, in fact, an ellipsoid, centered at the origin.
b. We first notice that the \( z\) term is raised only to the first power, so this is either an elliptic paraboloid or a hyperbolic paraboloid. We also note there are \( x\) terms and \( y\) terms that are not squared, so this quadric surface is not centered at the origin. We need to complete the square to put this equation in one of the standard forms. We have
\[ \begin{align*} 9x^2−18x+4y^2+16y−36z+25&=0 \\[4pt] 9x^2−18x+4y^2+16y+25 &=36z \\[4pt] 9(x^2−2x)+4(y^2+4y)+25 &=36z \\[4pt] 9(x^2−2x+1−1)+4(y^2+4y+4−4)+25 &=36z \\[4pt] 9(x−1)^2−9+4(y+2)^2−16+25 &=36z \\[4pt] 9(x−1)^2+4(y+2)^2 &=36z \\[4pt] \dfrac{(x−1)^2}{4}+\dfrac{(y+2)^2}{9} &=z. \end{align*}\]
This is an elliptic paraboloid centered at \( (1,−2,0).\)
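The completing-the-square step can be verified symbolically (a quick sketch; note that the \(y\)-shift is \(+2\), so the completed form is \( \dfrac{(x−1)^2}{4}+\dfrac{(y+2)^2}{9}=z\) and the center is \( (1,−2,0)\)):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
original = 9*x**2 - 18*x + 4*y**2 + 16*y - 36*z + 25
# the standard form multiplied back out by 36
completed = 36*((x - 1)**2/4 + (y + 2)**2/9 - z)

print(sp.expand(original - completed))  # 0, so the two forms agree
```

Expanding the difference to zero confirms that no terms were lost while completing the square.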
Exercise \( \PageIndex{3}\)
Identify the surface represented by equation \( 9x^2+y^2−z^2+2z−10=0.\)
Hint
Look at the signs and powers of the \( x,y\), and \( z\) terms.
Answer
Hyperboloid of one sheet, centered at \( (0,0,1)\).
Key Concepts

- A set of lines parallel to a given line passing through a given curve is called a cylinder, or a cylindrical surface. The parallel lines are called rulings.
- The intersection of a three-dimensional surface and a plane is called a trace. To find the trace in the \(xy\)-, \(yz\)-, or \(xz\)-planes, set \( z=0, x=0,\) or \( y=0,\) respectively.
- Quadric surfaces are three-dimensional surfaces with traces composed of conic sections. Every quadric surface can be expressed with an equation of the form \[Ax^2+By^2+Cz^2+Dxy+Exz+Fyz+Gx+Hy+Jz+K=0. \nonumber\]
- To sketch the graph of a quadric surface, start by sketching the traces to understand the framework of the surface.
- Important quadric surfaces are summarized in Figures \(\PageIndex{13}\) and \(\PageIndex{14}\).

Glossary

cylinder: a set of lines parallel to a given line passing through a given curve
ellipsoid: a three-dimensional surface described by an equation of the form \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}+\dfrac{z^2}{c^2}=1\); all traces of this surface are ellipses
elliptic cone: a three-dimensional surface described by an equation of the form \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}−\dfrac{z^2}{c^2}=0\); traces of this surface include ellipses and intersecting lines
elliptic paraboloid: a three-dimensional surface described by an equation of the form \( z=\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}\); traces of this surface include ellipses and parabolas
hyperboloid of one sheet: a three-dimensional surface described by an equation of the form \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}−\dfrac{z^2}{c^2}=1;\) traces of this surface include ellipses and hyperbolas
hyperboloid of two sheets: a three-dimensional surface described by an equation of the form \( \dfrac{z^2}{c^2}−\dfrac{x^2}{a^2}−\dfrac{y^2}{b^2}=1\); traces of this surface include ellipses and hyperbolas
quadric surfaces: surfaces in three dimensions having the property that the traces of the surface are conic sections (ellipses, hyperbolas, and parabolas)
rulings: parallel lines that make up a cylindrical surface
trace: the intersection of a three-dimensional surface with a coordinate plane

Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
|
Good day all,
I'm new to probability theory and am currently working on a problem and was looking for some feedback on my work.
The question is this: One of two components is selected at random and tested. Component 1 is faulty with probability 1/5 and Component 2 is faulty with probability 1/10. What is the probability that Component 2 was the one selected given that the selected component is faulty?
So firstly, I tried working out the probability of selecting a faulty component.
Let $F$ be the event that the selected component is faulty. Then let $A$ be the event that Component 1 is faulty and let $B$ be the event that Component 2 is faulty.
$$P(F)=P(A\cup B)$$ $$=P(A)+P(B)-P(A\cap B)$$ $$=\frac{1}{2}\cdot\frac{1}{5}+\frac{1}{2}\cdot\frac{1}{10}-0$$ $$=\frac{3}{20}$$
Firstly, I've no idea whether that is correct or not, but if it is, would I then proceed to calculate the conditional probability like this:
$$P(B|F)=P(B\cap F)/P(F)$$ $$=\frac{1}{20}\cdot\frac{20}{3}$$ $$=\frac{1}{3}$$
I feel like I've really missed something. I'm not sure exactly how to calculate $P(B\cap F)$ but my intuition says that $P(B\cap F)=P(B)$ in this instance. I'm also unsure of whether my working even makes sense. Any helpful feedback would be appreciated. Thanks in advance for your patience and time!
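One sanity check I did try: simulating the selection process directly and counting. This is just a sketch (the function name, seed, and trial count are my own choices), but it should land near the true conditional probability:

```python
import random

def simulate(trials, seed=0):
    """Estimate P(Component 2 selected | selected component is faulty)."""
    rng = random.Random(seed)
    faulty = 0        # trials where the selected component turned out faulty
    faulty_and_2 = 0  # ... and the selection was Component 2
    for _ in range(trials):
        comp = rng.choice((1, 2))               # fair 50/50 selection
        p_fault = 1 / 5 if comp == 1 else 1 / 10
        if rng.random() < p_fault:
            faulty += 1
            if comp == 2:
                faulty_and_2 += 1
    return faulty_and_2 / faulty

print(simulate(1_000_000))  # close to 1/3
```

The estimate hovering around 1/3 at least agrees with my calculation above.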
|
The Annals of Statistics Ann. Statist. Volume 19, Number 4 (1991), 1813-1831. Central Limit Theorems for $L_p$ Distances of Kernel Estimators of Densities Under Random Censorship Abstract
A sequence of independent nonnegative random variables with common distribution function $F$ is censored on the right by another sequence of independent identically distributed random variables. These two sequences are also assumed to be independent. We estimate the density function $f$ of $F$ by a sequence of kernel estimators $f_n(t) = (\int^\infty_{-\infty}K((t - x)/h(n))d\hat{F}_n(x))/h(n),$ where $h(n)$ is a sequence of numbers, $K$ is a kernel density function and $\hat{F}_n$ is the product-limit estimator of $F.$ We prove central limit theorems for $\int^T_0|f_n(t) - f(t)|^p d\mu(t), 1 \leq p < \infty, 0 < T \leq \infty,$ where $\mu$ is a measure on the Borel sets of the real line. The result is tested in Monte Carlo trials and applied for goodness of fit.
Article information Source Ann. Statist., Volume 19, Number 4 (1991), 1813-1831. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176348372 Digital Object Identifier doi:10.1214/aos/1176348372 Mathematical Reviews number (MathSciNet) MR1135150 Zentralblatt MATH identifier 0747.60022 JSTOR links.jstor.org Citation
Csörgő, Miklós; Gombay, Edit; Horváth, Lajos. Central Limit Theorems for $L_p$ Distances of Kernel Estimators of Densities Under Random Censorship. Ann. Statist. 19 (1991), no. 4, 1813--1831. doi:10.1214/aos/1176348372. https://projecteuclid.org/euclid.aos/1176348372
|
Yes, there are many ways to produce a sequence of numbers that are more evenly distributed than random uniforms. In fact, there is a whole field dedicated to this question; it is the backbone of quasi-Monte Carlo (QMC). Below is a brief tour of the absolute basics.
Measuring uniformity
There are many ways to do this, but the most common way has a strong, intuitive, geometric flavor. Suppose we are concerned with generating $n$ points $x_1,x_2,\ldots,x_n$ in $[0,1]^d$ for some positive integer $d$. Define$$\newcommand{\I}{\mathbf 1}D_n := \sup_{R \in \mathcal R}\,\left|\frac{1}{n}\sum_{i=1}^n \I_{(x_i \in R)} - \mathrm{vol}(R)\right| \>,$$where $R$ is a rectangle $[a_1, b_1] \times \cdots \times [a_d, b_d]$ in $[0,1]^d$ such that $0 \leq a_i \leq b_i \leq 1$ and $\mathcal R$ is the set of all such rectangles. The first term inside the modulus is the "observed" proportion of points inside $R$ and the second term is the volume of $R$, $\mathrm{vol}(R) = \prod_i (b_i - a_i)$.
The quantity $D_n$ is often called the discrepancy or extreme discrepancy of the set of points $(x_i)$. Intuitively, we find the "worst" rectangle $R$ where the proportion of points deviates the most from what we would expect under perfect uniformity.
This is unwieldy in practice and difficult to compute. For the most part, people prefer to work with the star discrepancy, $$D_n^\star = \sup_{R \in \mathcal A} \,\left|\frac{1}{n}\sum_{i=1}^n \I_{(x_i \in R)} - \mathrm{vol}(R)\right| \>.$$The only difference is the set $\mathcal A$ over which the supremum is taken. It is the set of anchored rectangles (at the origin), i.e., where $a_1 = a_2 = \cdots = a_d = 0$.
Lemma: $D_n^\star \leq D_n \leq 2^d D_n^\star$ for all $n$, $d$. Proof. The left hand bound is obvious since $\mathcal A \subset \mathcal R$. The right-hand bound follows because every $R \in \mathcal R$ can be composed via unions, intersections and complements of no more than $2^d$ anchored rectangles (i.e., in $\mathcal A$).
Thus, we see that $D_n$ and $D_n^\star$ are equivalent in the sense that if one is small as $n$ grows, the other will be too. Here is a (cartoon) picture showing candidate rectangles for each discrepancy.
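In one dimension the star discrepancy even admits a simple closed form over the sorted points, which makes the definition concrete. A sketch (the midpoint example below attains the optimal value $1/(2n)$; this formula is the standard 1-D special case, not something needed for the general theory):

```python
def star_discrepancy_1d(points):
    """Exact 1-D star discrepancy via the classic closed form
    D_n* = max_i max(i/n - x_(i), x_(i) - (i-1)/n) over sorted points."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# Equally spaced "midpoint" points achieve the minimal value 1/(2n):
print(star_discrepancy_1d([(2 * i + 1) / 10 for i in range(5)]))  # 0.1
```

For $d > 1$ no such closed form is available, which is exactly why the bounds and constructions below matter.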
Examples of "good" sequences
Sequences with verifiably low star discrepancy $D_n^\star$ are often called, unsurprisingly, low discrepancy sequences.
van der Corput. This is perhaps the simplest example. For $d=1$, the van der Corput sequences are formed by expanding the integer $i$ in binary and then "reflecting the digits" around the decimal point. More formally, this is done with the radical inverse function in base $b$,$$\newcommand{\rinv}{\phi}\rinv_b(i) = \sum_{k=0}^\infty a_k b^{-k-1} \>,$$where $i = \sum_{k=0}^\infty a_k b^k$ and $a_k$ are the digits in the base $b$ expansion of $i$. This function forms the basis for many other sequences as well. For example, $41$ in binary is $101001$ and so $a_0 = 1$, $a_1 = 0$, $a_2 = 0$, $a_3 = 1$, $a_4 = 0$ and $a_5 = 1$. Hence, the 41st point in the van der Corput sequence is $x_{41} = \rinv_2(41) = 0.100101\,\text{(base 2)} = 37/64$.
Note that because the least significant bit of $i$ oscillates between $0$ and $1$, the points $x_i$ for odd $i$ are in $[1/2,1)$, whereas the points $x_i$ for even $i$ are in $(0,1/2)$.
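The radical inverse is only a few lines of code; a sketch in Python (base 2 by default, reproducing the worked example above):

```python
def radical_inverse(i, b=2):
    """Reflect the base-b digits of i about the radix point: phi_b(i)."""
    result, f = 0.0, 1.0 / b
    while i > 0:
        i, digit = divmod(i, b)  # peel off the least significant digit
        result += digit * f
        f /= b
    return result

print(radical_inverse(41))  # 37/64 = 0.578125, matching the example above
```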
Halton sequences. Among the most popular of classical low-discrepancy sequences, these are extensions of the van der Corput sequence to multiple dimensions. Let $p_j$ be the $j$th smallest prime. Then, the $i$th point $x_i$ of the $d$-dimensional Halton sequence is$$x_i = (\rinv_{p_1}(i), \rinv_{p_2}(i),\ldots,\rinv_{p_d}(i)) \>.$$For low $d$ these work quite well, but have problems in higher dimensions.
Halton sequences satisfy $D_n^\star = O(n^{-1} (\log n)^d)$. They are also nice because they are extensible, in that the construction of the points does not depend on an a priori choice of the length of the sequence $n$.
Hammersley sequences. This is a very simple modification of the Halton sequence. We instead use$$x_i = (i/n, \rinv_{p_1}(i), \rinv_{p_2}(i),\ldots,\rinv_{p_{d-1}}(i)) \>.$$Perhaps surprisingly, the advantage is that they have better star discrepancy $D_n^\star = O(n^{-1}(\log n)^{d-1})$.
Here is an example of the Halton and Hammersley sequences in two dimensions.
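A minimal sketch of both constructions in 2-D (bases 2 and 3, the first two primes, for Halton; starting indices are a convention and treated as an assumption here):

```python
def radical_inverse(i, b):
    """phi_b(i): reflect the base-b digits of i about the radix point."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        i, d = divmod(i, b)
        x += d * f
        f /= b
    return x

def halton_2d(n):
    # Bases 2 and 3; extensible: n never enters the construction of a point.
    return [(radical_inverse(i, 2), radical_inverse(i, 3)) for i in range(1, n + 1)]

def hammersley_2d(n):
    # First coordinate i/n; the point set depends on n, so it is NOT extensible.
    return [(i / n, radical_inverse(i, 2)) for i in range(n)]
```

The trade-off described above is visible in the code: Hammersley buys its better discrepancy by hard-wiring $n$ into the first coordinate.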
Faure-permuted Halton sequences. A special set of permutations (fixed as a function of $i$) can be applied to the digit expansion $a_k$ for each $i$ when producing the Halton sequence. This helps remedy (to some degree) the problems alluded to in higher dimensions. Each of the permutations has the interesting property of keeping $0$ and $b-1$ as fixed points.
Lattice rules. Let $\beta_1, \ldots, \beta_{d-1}$ be integers. Take$$x_i = (i/n, \{i \beta_1 / n\}, \ldots, \{i \beta_{d-1}/n\}) \>,$$where $\{y\}$ denotes the fractional part of $y$. Judicious choice of the $\beta$ values yields good uniformity properties. Poor choices can lead to bad sequences. They are also not extensible. Here are two examples.
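A sketch of the construction (the Fibonacci pair $n = 55$, $\beta = 34$ is a commonly cited good 2-D choice; treat it as an illustrative assumption rather than a recommendation):

```python
def lattice_points(n, betas):
    """Rank-1 lattice rule: x_i = (i/n, {i*b_1/n}, ..., {i*b_{d-1}/n}).
    Integer modulus keeps the fractional parts exact."""
    return [tuple([i / n] + [(i * b % n) / n for b in betas])
            for i in range(n)]

pts = lattice_points(55, [34])  # 2-D "Fibonacci" lattice
```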
$(t,m,s)$ nets. $(t,m,s)$ nets in base $b$ are sets of $b^m$ points such that every elementary interval of volume $b^{t-m}$ in $[0,1]^s$ contains exactly $b^t$ points. This is a strong form of uniformity. Small $t$ is your friend, in this case. Sobol' and Faure sequences are examples of $(t,m,s)$ nets. These lend themselves nicely to randomization via scrambling. Random scrambling (done right) of a $(t,m,s)$ net yields another $(t,m,s)$ net. The MinT project keeps a collection of such sequences.
Simple randomization: Cranley-Patterson rotations. Let $x_i \in [0,1]^d$ be a sequence of points. Let $U \sim \mathcal U([0,1]^d)$. Then the points $\hat x_i = \{x_i + U\}$, with the addition and fractional part taken componentwise, are uniformly distributed in $[0,1]^d$.
Here is an example with the blue dots being the original points and the red dots being the rotated ones with lines connecting them (and shown wrapped around, where appropriate).
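A componentwise sketch of such a rotation (helper name and example seed are my own; one uniform shift per coordinate, shared by all points):

```python
import random

def cranley_patterson(points, seed=0):
    """Shift every point by the same random vector U, modulo 1 componentwise.
    Relative positions (mod 1) are preserved, so the low-discrepancy
    structure survives the randomization."""
    rng = random.Random(seed)
    d = len(points[0])
    U = [rng.random() for _ in range(d)]
    return [tuple((x + u) % 1.0 for x, u in zip(p, U)) for p in points]
```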
Completely uniformly distributed sequences. This is an even stronger notion of uniformity that sometimes comes into play. Let $(u_i)$ be a sequence of points in $[0,1]$ and now form overlapping blocks of size $s$ to get the sequence $(x_i)$. So, if $s = 3$, we take $x_1 = (u_1,u_2,u_3)$ then $x_2 = (u_2,u_3,u_4)$, etc. If, for every $s \geq 1$, $D_n^\star(x_1,\ldots,x_n) \to 0$, then $(u_i)$ is said to be completely uniformly distributed. In other words, the sequence yields a set of points of any dimension that have desirable $D_n^\star$ properties.
As an example, the van der Corput sequence is not completely uniformly distributed since for $s = 2$, the points $x_{2i}$ are in the square $(0,1/2) \times [1/2,1)$ and the points $x_{2i-1}$ are in $[1/2,1) \times (0,1/2)$. Hence there are no points in the square $(0,1/2) \times (0,1/2)$ which implies that for $s=2$, $D_n^\star \geq 1/4$ for all $n$.
Standard references
The Niederreiter (1992) monograph and the Fang and Wang (1994) text are places to go for further exploration.
|
If $\hat{T}(\Delta x) = e^{-\frac{i}{\hbar}\hat{p}\Delta x}$ is the spatial translation operator, then there exists a function $f$ from $\mathbb{R}$ to the ket space $V$ such that $\hat{T}(\Delta x) f(x) = f(x+\Delta x)$. Namely, the function that sends $x$ to the position eigenstate $|x\rangle$.
Similarly if $\hat{U}(\Delta t) = e^{-\frac{i}{\hbar}\hat{H}\Delta t}$ is the time evolution operator (for time-independent Hamiltonians), then there exists a function $f$ from $\mathbb{R}$ to $V$ such that $\hat{U}(\Delta t) f(t) = f(t+\Delta t)$. Namely, the function that sends $t$ to $|\psi(t)\rangle$, the state of a particle at time $t$.
And similarly if $\hat{R}_z(\Delta\theta) = e^{-\frac{i}{\hbar}{\hat{L}_z\Delta \theta}}$ is the orbital rotation operator, then there exists a function $f$ from $\mathbb{R}$ to $V$ such that $\hat{R}_z(\Delta \theta)f(\theta) = f(\theta+\Delta\theta)$. Namely the function which sends $\theta$ to $|r,\theta,\phi\rangle$, an eigenstate of the position operator $\hat{\theta}$ in spherical coordinates.
But my question is, if $\hat{R}_z(\Delta\theta) = e^{-\frac{i}{\hbar}\hat{J}_z\Delta \theta}$ is the intrinsic rotation operator, then does there exist a function $f$ from $\mathbb{R}$ to $V$ such that $\hat{R}_z(\Delta \theta)f(\theta) = f(\theta+\Delta\theta)$? By intrinsic rotation operator I mean the rotation operation related to spin angular momentum.
I suspect the answer is no, because there is no operator corresponding to $\theta$, the parameter of the intrinsic rotation operator, since in quantum mechanics it doesn't really make sense to think of spin as a particle rotating about its own axis. But then again, there is no time operator in non-relativistic quantum mechanics, and yet the time evolution operator satisfies the property.
In any case, assuming that the answer to my question is no, I'd like a formal proof that there cannot exist such an $f$.
EDIT: I posted a follow-up question here.
|
I concur with Aretino's answer; I just wanted to dig in to the details a bit, in the hopes it illustrates some of the options and approaches we can utilise here.
Length of the curve $$\begin{cases} x(t) = r t \cos(t)\\y(t) = r t \sin(t)\end{cases}\tag{1}\label{1}$$from $t_0$ to $t_1$ is$$s( t_0 ,\, t_1 ) = \int_{t_0}^{t_1} ds(t) \, dt$$where$$ds(t) = \sqrt{\left( \frac{d \, x(t)}{d \, t} \right)^2 + \left ( \frac{ d \, y(t) }{d \, t} \right)^2} $$i.e.$$\begin{array}{rl} s( t_0 ,\, t_1 ) = & \frac{r \, t_1}{2} \sqrt{ t_1^2 + 1 } - \frac{r \, t_0}{2} \sqrt{ t_0^2 + 1 } \; - \\\; & \frac{r}{2} \log_e\left(t_0 + \sqrt{t_0^2 + 1}\right) - \frac{r}{2} \log_e\left(\sqrt{t_1^2 + 1} - t_1\right)\end{array}\tag{2}\label{2}$$
In practice, we'd like to know $t_1 = f( d, t_0 )$, i.e. the position $t_1$ on the curve that is distance $d$ from $t_0$ along the curve, fulfilling $s( t_0 , t_1 ) = d$. Unfortunately, there are no algebraic solutions for function $f(d, t_0)$.
Numerically, we can roughly approximate the length by $s'(t_0 ,\, t_1) = (t_1^2 -t_0^2) \, r/2$. It is a bit off when $t_0$ is very small (near the center of the spiral), but gets better as $t_0$ and/or $t_1$ increase. (Since the spiral is tightest near the center, $t_0 = 0$, I suspect that humans tend to not perceive that error, which means this approximation should be okay for visual purposes.)
If we need a result to within a specific precision, we can solve $s(t_0 ,\, t_1) = d$ for $t_1$ numerically with a binary search. (Each bisection step halves the search interval, so roughly $N$ iterations, i.e. $N$ evaluations of $s(t_0 , t_1)$, yield $N$ bits (binary digits) of precision; it is quite efficient, too.)
For approximately equally spaced (as measured along the curve) points, we can use $$\tau_n = \sqrt{\frac{2 \, d \, n}{r}}, \qquad 0 \le n \in \mathbb{Z}$$Then,$$s'( \tau_n ,\, \tau_{n+1} ) = d \tag{3}\label{3}$$which means that$$\begin{cases}x_n = x( \tau_n ) = r \sqrt{\frac{2 \, d \, n}{r}} \cos\left(\sqrt{\frac{2 d \, n}{r}} \right) \\y_n = y( \tau_n ) = r \sqrt{\frac{2 \, d \, n}{r}} \sin\left(\sqrt{\frac{2 d \, n}{r}} \right) \end{cases}$$gives us points $(x_n , y_n)$ that are spaced roughly $d$ apart, measuring along the curve.
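As a quick numerical check of the above (a sketch; the values of $r$ and $d$ are arbitrary placeholders, and the antiderivative is eq. \eqref{2} rewritten with $\operatorname{asinh}(t) = \ln(t + \sqrt{t^2+1})$):

```python
import math

def arc_length(r, t0, t1):
    """Exact arc length of (r t cos t, r t sin t) over [t0, t1]."""
    antiderivative = lambda t: 0.5 * r * (t * math.sqrt(t * t + 1.0) + math.asinh(t))
    return antiderivative(t1) - antiderivative(t0)

# Spacing check for tau_n = sqrt(2 d n / r): consecutive arc gaps approach d.
r, d = 1.0, 0.5
tau = [math.sqrt(2.0 * d * n / r) for n in range(201)]
gaps = [arc_length(r, tau[n], tau[n + 1]) for n in range(200)]
# The first gaps overshoot d noticeably (near the center), while the later
# gaps are within a fraction of a percent, as predicted.
```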
If we substitute $k = 2 d / r$, Aretino's answer directly follows.
|
Ex 11.1.1 Compute \(\lim_{x\to\infty} x^{1/x}\). (answer) Ex 11.1.2 Use the squeeze theorem to show that \(\lim_{n\to\infty} {n!\over n^n}=0\). Ex 11.1.3 Determine whether \(\{\sqrt{n+47}-\sqrt{n}\}_{n=0}^{\infty}\) converges or diverges. If it converges, compute the limit. (answer) Ex 11.1.4 Determine whether \(\left\{{n^2+1\over (n+1)^2}\right\}_{n=0}^{\infty}\) converges or diverges. If it converges, compute the limit. (answer) Ex 11.1.5 Determine whether \(\left\{{n+47\over\sqrt{n^2+3n}}\right\}_{n=1}^{\infty}\) converges or diverges. If it converges, compute the limit. (answer) Ex 11.1.6 Determine whether \(\left\{{2^n\over n!}\right\}_{n=0}^{\infty}\) converges or diverges. (answer) Ex 11.2.1 Explain why \(\sum_{n=1}^\infty {n^2\over 2n^2+1}\) diverges. (answer) Ex 11.2.2 Explain why \(\sum_{n=1}^\infty {5\over 2^{1/n}+14}\) diverges. (answer) Ex 11.2.3 Explain why \(\sum_{n=1}^\infty {3\over n}\) diverges. (answer) Ex 11.2.4 Compute \(\sum_{n=0}^\infty {4\over (-3)^n}- {3\over 3^n}\). (answer) Ex 11.2.5 Compute \(\sum_{n=0}^\infty {3\over 2^n}+ {4\over 5^n}\). (answer) Ex 11.2.6 Compute \(\sum_{n=0}^\infty {4^{n+1}\over 5^n}\). (answer) Ex 11.2.7 Compute \(\sum_{n=0}^\infty {3^{n+1}\over 7^{n+1}}\). (answer) Ex 11.2.8 Compute \(\sum_{n=1}^\infty \left({3\over 5}\right)^n\). (answer) Ex 11.2.9 Compute \(\sum_{n=1}^\infty {3^n\over 5^{n+1}}\). (answer)
Determine whether each series converges or diverges.
Ex 11.3.1 \(\sum_{n=1}^\infty {1\over n^{\pi/4}}\) (answer) Ex 11.3.2 \(\sum_{n=1}^\infty {n\over n^2+1}\) (answer) Ex 11.3.3 \(\sum_{n=1}^\infty {\ln n\over n^2}\) (answer) Ex 11.3.4 \(\sum_{n=1}^\infty {1\over n^2+1}\) (answer) Ex 11.3.5 \(\sum_{n=1}^\infty {1\over e^n}\) (answer) Ex 11.3.6 \(\sum_{n=1}^\infty {n\over e^n}\) (answer) Ex 11.3.7 \(\sum_{n=2}^\infty {1\over n\ln n}\) (answer) Ex 11.3.8 \(\sum_{n=2}^\infty {1\over n(\ln n)^2}\) (answer) Ex 11.3.9 Find an \(N\) so that \(\sum_{n=1}^\infty {1\over n^4}\) is between \(\sum_{n=1}^N {1\over n^4}\) and \(\sum_{n=1}^N {1\over n^4} + 0.005\). (answer) Ex 11.3.10 Find an \(N\) so that \(\sum_{n=0}^\infty {1\over e^n}\) is between \(\sum_{n=0}^N {1\over e^n}\) and \(\sum_{n=0}^N {1\over e^n} + 10^{-4}\). (answer) Ex 11.3.11 Find an \(N\) so that \(\sum_{n=1}^\infty {\ln n\over n^2}\) is between \(\sum_{n=1}^N {\ln n\over n^2}\) and \(\sum_{n=1}^N {\ln n\over n^2} + 0.005\). (answer) Ex 11.3.12 Find an \(N\) so that \(\sum_{n=2}^\infty {1\over n(\ln n)^2}\) is between \(\sum_{n=2}^N {1\over n(\ln n)^2}\) and \(\sum_{n=2}^N {1\over n(\ln n)^2} + 0.005\). (answer)
Determine whether the following series converge or diverge.
Ex 11.4.1 \(\sum_{n=1}^\infty {(-1)^{n-1}\over 2n+5}\) (answer) Ex 11.4.2 \(\sum_{n=4}^\infty {(-1)^{n-1}\over \sqrt{n-3}}\) (answer) Ex 11.4.3 \(\sum_{n=1}^\infty (-1)^{n-1}{n\over 3n-2}\) (answer) Ex 11.4.4 \(\sum_{n=1}^\infty (-1)^{n-1}{\ln n\over n}\) (answer) Ex 11.4.5 Approximate \(\sum_{n=1}^\infty (-1)^{n-1}{1\over n^3}\) to two decimal places. (answer) Ex 11.4.6 Approximate \(\sum_{n=1}^\infty (-1)^{n-1}{1\over n^4}\) to two decimal places. (answer)
Determine whether the series converge or diverge.
Ex 11.5.1 \(\sum_{n=1}^\infty {1\over 2n^2+3n+5} \) (answer) Ex 11.5.2 \(\sum_{n=2}^\infty {1\over 2n^2+3n-5} \) (answer) Ex 11.5.3 \(\sum_{n=1}^\infty {1\over 2n^2-3n-5} \) (answer) Ex 11.5.4 \(\sum_{n=1}^\infty {3n+4\over 2n^2+3n+5} \) (answer) Ex 11.5.5 \(\sum_{n=1}^\infty {3n^2+4\over 2n^2+3n+5} \) (answer) Ex 11.5.6 \(\sum_{n=1}^\infty {\ln n\over n}\) (answer) Ex 11.5.7 \(\sum_{n=1}^\infty {\ln n\over n^3}\) (answer) Ex 11.5.8 \(\sum_{n=2}^\infty {1\over \ln n}\) (answer) Ex 11.5.9 \(\sum_{n=1}^\infty {3^n\over 2^n+5^n}\) (answer) Ex 11.5.10 \(\sum_{n=1}^\infty {3^n\over 2^n+3^n}\) (answer)
Determine whether each series converges absolutely, converges conditionally, or diverges.
Ex 11.6.1 \(\sum_{n=1}^\infty (-1)^{n-1}{1\over 2n^2+3n+5}\) (answer) Ex 11.6.2 \(\sum_{n=1}^\infty (-1)^{n-1}{3n^2+4\over 2n^2+3n+5}\) (answer) Ex 11.6.3 \(\sum_{n=1}^\infty (-1)^{n-1}{\ln n\over n}\) (answer) Ex 11.6.4 \(\sum_{n=1}^\infty (-1)^{n-1} {\ln n\over n^3}\) (answer) Ex 11.6.5 \(\sum_{n=2}^\infty (-1)^n{1\over \ln n}\) (answer) Ex 11.6.6 \(\sum_{n=0}^\infty (-1)^{n} {3^n\over 2^n+5^n}\) (answer) Ex 11.6.7 \(\sum_{n=0}^\infty (-1)^{n} {3^n\over 2^n+3^n}\) (answer) Ex 11.6.8 \(\sum_{n=1}^\infty (-1)^{n-1} {\arctan n\over n}\) (answer) Ex 11.7.1 Compute \(\lim_{n\to\infty} |a_{n+1}/a_n|\) for the series \(\sum 1/n^2\). Ex 11.7.2 Compute \(\lim_{n\to\infty} |a_{n+1}/a_n|\) for the series \(\sum 1/n\). Ex 11.7.3 Compute \(\lim_{n\to\infty} |a_n|^{1/n}\) for the series \(\sum 1/n^2\). Ex 11.7.4 Compute \(\lim_{n\to\infty} |a_n|^{1/n}\) for the series \(\sum 1/n\).
Determine whether the series converge.
Ex 11.7.5 \(\sum_{n=0}^\infty (-1)^{n}{3^n\over 5^n}\) (answer) Ex 11.7.6 \(\sum_{n=1}^\infty {n!\over n^n}\) (answer) Ex 11.7.7 \(\sum_{n=1}^\infty {n^5\over n^n}\) (answer) Ex 11.7.8 \(\sum_{n=1}^\infty {(n!)^2\over n^n}\) (answer) Ex 11.7.9 Prove theorem 11.7.3, the root test. Ex 11.8.1 \(\sum_{n=0}^\infty n x^n\) (answer) Ex 11.8.2 \(\sum_{n=0}^\infty {x^n\over n!}\) (answer) Ex 11.8.3 \(\sum_{n=1}^\infty {n!\over n^n}x^n\) (answer) Ex 11.8.4 \(\sum_{n=1}^\infty {n!\over n^n}(x-2)^n\) (answer) Ex 11.8.5 \(\sum_{n=1}^\infty {(n!)^2\over n^n}(x-2)^n\) (answer) Ex 11.8.6 \(\sum_{n=1}^\infty {(x+5)^n\over n(n+1)}\) (answer) Ex 11.9.1 Find a series representation for \(\ln 2\). (answer) Ex 11.9.2 Find a power series representation for \(1/(1-x)^2\). (answer) Ex 11.9.3 Find a power series representation for \( 2/(1-x)^3\). (answer) Ex 11.9.4 Find a power series representation for \( 1/(1-x)^3\). What is the radius of convergence? (answer) Ex 11.9.5 Find a power series representation for \(\int\ln(1-x)\,dx\). (answer).
For each function, find the Maclaurin series or Taylor series centered at $a$, and the radius of convergence.
Ex 11.10.1 \(\cos x\) (answer) Ex 11.10.2 \( e^x\) (answer) Ex 11.10.3 \(1/x\), \(a=5\) (answer) Ex 11.10.4 \(\ln x\), \(a=1\) (answer) Ex 11.10.5 \(\ln x\), \(a=2\) (answer) Ex 11.10.6 \( 1/x^2\), \(a=1\) (answer) Ex 11.10.7 \(1/\sqrt{1-x}\) (answer) Ex 11.10.8 Find the first four terms of the Maclaurin series for \(\tan x\) (up to and including the \( x^3\) term). (answer) Ex 11.10.9 Use a combination of Maclaurin series and algebraic manipulation to find a series centered at zero for \( x\cos (x^2)\). (answer) Ex 11.10.10 Use a combination of Maclaurin series and algebraic manipulation to find a series centered at zero for \( xe^{-x}\). (answer) Ex 11.11.1 Find a polynomial approximation for \(\cos x\) on \([0,\pi]\), accurate to \( \pm 10^{-3}\) (answer) Ex 11.11.2 How many terms of the series for \(\ln x\) centered at 1 are required so that the guaranteed error on \([1/2,3/2]\) is at most \( 10^{-3}\)? What if the interval is instead \([1,3/2]\)? (answer) Ex 11.11.3 Find the first three nonzero terms in the Taylor series for \(\tan x\) on \([-\pi/4,\pi/4]\), and compute the guaranteed error term as given by Taylor's theorem. (You may want to use Sage or a similar aid.) (answer) Ex 11.11.4 Show that \(\cos x\) is equal to its Taylor series for all \(x\) by showing that the limit of the error term is zero as N approaches infinity. Ex 11.11.5 Show that \(e^x\) is equal to its Taylor series for all \(x\) by showing that the limit of the error term is zero as \(N\) approaches infinity.
|
Suppose $(X^m, g)$ is a closed Riemannian manifold of dimension $m$ with the following properties,
There is a constant $\kappa$ such that $\kappa r^m \leq Vol(B(x, r)) \leq \kappa^{-1} r^m$ for every $r \in (0,1)$.
For every $C^1$ function $f$, we have $(\int_X |f|^{\frac{2m}{m-2}})^{\frac{m-2}{m}} \leq C_S (\int_X |f|^2 + |\nabla f|^2)$.
For every $C^1$ function $f$, we have $\int_X |f|^2 \leq C_P (\int_X |\nabla f|^2 + (\int_X f)^2)$.
Can we say that there is a positive constant $I$ depending only on $m, \kappa, C_S, C_P$ such that $\frac{Area(\partial \Omega)^m}{Vol(\Omega)^{m-1}} \geq I$ for every domain $\Omega$ satisfying $Vol(\Omega) \leq \frac{1}{2} Vol(X)$?
|
I need to find a tangent plane to the surface $z = \frac{y^2 - 1}{x}$ that passes through $A(0,1,0)$ and $B(1,3,4)$.
Normal vector of the plane I am looking for:
$$\vec{n} = \Big[\frac{\partial z}{\partial x}, \frac{\partial z}{\partial y}, -1 \Big]$$
$$\vec{n} = \Big[\frac{1 - y^2}{x^2}, \frac{2y}{x}, -1 \Big]$$
Plane equation:
$$\pi: Ax + By + Cz + D = 0$$
Until now I was always given only "passing through point $P(1,2,3)$" in the task, so only one point, so I could easily find $D$ and get the final result. But here I am given two distinct points.
Not sure how to proceed.
$$\vec{n_{A}} = [0, 0 \text{#note, division by 0 here, legal?#}, -1] \quad \Rightarrow \pi_{A}: -1z+D = 0 $$
$$D = 0$$
$$\pi_{A}: -z = 0$$
And now tangent plane at point B.
$$\vec{n_{B}} = [-8, 6, -1]\quad \Rightarrow \pi_{B}: -8x + 6y -z = -D$$
$$-8 + 18 - 4 = -D$$
$$D = -6$$
$$\pi_{B}: -8x + 6y - z - 6 = 0$$
My results would probably be correct for two separate tasks, two separate planes, two separate points. But here I need one specific plane, such that it passes through both points.
Not sure what values I should plug into the normal vector $\vec{n}$. I understand that if the plane passes through $A$ and $B$ then the normal vector $\vec{n}$ is perpendicular to $\vec{AB}$. Also, the line that contains points $A$ and $B$ is contained within that plane.
Just not sure how to plug in that knowledge here. Struggling mostly with finding the normal vector. And then finding $D$.
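The best I could do so far is a numerical check of the tangency conditions: a tangent plane at parameter point $(a,b)$ has normal $\vec n = \big[\frac{1-b^2}{a^2}, \frac{2b}{a}, -1\big]$ and passes through $P = (a, b, \frac{b^2-1}{a})$, so it contains $A$ and $B$ iff $\vec n \cdot (A-P) = 0$ and $\vec n \cdot (B-P) = 0$. This sketch (the helper name and the trial point are my own choices) evaluates those two residuals:

```python
def residuals(a, b):
    """Residuals of the two conditions 'plane tangent at (a, b) contains A'
    and '... contains B' for the surface z = (y^2 - 1)/x."""
    A, B = (0.0, 1.0, 0.0), (1.0, 3.0, 4.0)
    P = (a, b, (b * b - 1.0) / a)
    n = ((1.0 - b * b) / (a * a), 2.0 * b / a, -1.0)
    rA = sum(ni * (qi - pi) for ni, qi, pi in zip(n, A, P))
    rB = sum(ni * (qi - pi) for ni, qi, pi in zip(n, B, P))
    return rA, rB

# Trying the point (a, b) = (1, 1) makes both residuals vanish:
print(residuals(1.0, 1.0))  # (0.0, 0.0)
```

So numerically it looks like two equations in the two unknowns $(a, b)$ is the right setup, even if I still don't see the clean algebraic route.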
|
I was pretty confident that things are simple, but unfortunately I must have missed something. We can always change between the bases for Dirac spinors using a unitary transformation, because
$$ \partial_\mu \bar\Psi \gamma_\mu \Psi=\partial_\mu \bar\Psi \underbrace{U^\dagger U }_{=1}\gamma_\mu \underbrace{U^\dagger U }_{=1}\Psi= \partial_\mu\underbrace{ \bar\Psi U^\dagger}_{= \bar \Psi' } \underbrace{U \gamma_\mu U^\dagger}_{\gamma_\mu'} \underbrace{U \Psi }_{\Psi'}$$
The action of the projection operator $P_L = \frac{1-\gamma_5}{2}$ is obvious in the Weyl basis:
$$ \begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix} \begin{pmatrix} \Psi_L \\ \Psi_R \end{pmatrix} = \begin{pmatrix} \Psi_L \\ 0 \end{pmatrix}. $$
In a different basis:
$$ P_L^{{\mathrm{Weyl}}} \Psi^{{\mathrm{Weyl}}} = \Psi^{{\mathrm{Weyl}}}_{{\mathrm{left}}} \Rightarrow P_L' \Psi' = \frac{1-\gamma'_5}{2}\underbrace{ \Psi'}_{U\Psi^{{\mathrm{Weyl}}}}= \frac{1-U i \gamma_0 U^\dagger U \gamma_1 U^\dagger U\gamma_2U^\dagger U \gamma_3U^\dagger }{2}U\Psi^{{\mathrm{Weyl}}}= \frac{1-U i \gamma_0 \gamma_1 \gamma_2 \gamma_3 }{2}\Psi^{{\mathrm{Weyl}}} $$
and I was hoping that in the end $U P_L^{{\mathrm{Weyl}}} \Psi^{{\mathrm{Weyl}}}= \Psi'_{{\mathrm{left}}}$, which would show that the projection operator works in different bases, too. Unfortunately, I can't factor out $U$.
Any ideas about what I missed would be great!
|
Defining parameters
Level: \( N \) = \( 4000 = 2^{5} \cdot 5^{3} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 4000.ci (of order \(50\) and degree \(20\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 1000 \)
Character field: \(\Q(\zeta_{50})\)
Newforms: \( 0 \)
Sturm bound: \(600\)
Trace bound: \(0\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{1}(4000, [\chi])\).
                    Total   New   Old
Modular forms         160    40   120
Cusp forms              0     0     0
Eisenstein series     160    40   120
The following table gives the dimensions of subspaces with specified projective image type.
            \(D_n\)   \(A_4\)   \(S_4\)   \(A_5\)
Dimension      0         0         0         0
|
To answer this question it is advantageous to treat a molecule as a graph and use the well-known adjacency matrix from graph theory. Here is the Wikipedia definition:
For a simple graph with vertex set $V$, the adjacency matrix is a square $|V| × |V|$ matrix $\mathbb{A}$ such that its element $\mathbb{A}_{ij}$ is one when there is an edge from vertex $i$ to vertex $j$, and zero when there is no edge.
Hückel
The Hückel-Hamiltonian $H$ can be written as:
$$ H = \alpha \mathbf{1} + \beta \mathbb{A}$$
Where $\alpha$ is the ionization energy, $\beta$ the overlap between adjacent $p_z$ orbitals, $\mathbf{1}$ the unit matrix, and $\mathbb{A}$ the adjacency matrix.
If you look at the eigenvalue equation, $$H \psi = \lambda \psi$$ it has a solution if and only if $\psi$ is an eigenvector of the adjacency matrix. The eigenvalue $\lambda$ of $H$ is given by the eigenvalue $\tilde{\lambda}$ of the adjacency matrix using the following equation: $$(\alpha \mathbf{1} + \beta \mathbb{A}) \psi = \alpha \psi + \beta \mathbb{A} \psi = (\alpha + \beta \tilde{\lambda}) \psi$$
What we see is that the $\alpha$ is just a constant offset for the energy. All the relevant information about level spacing, degeneracy etc. is contained in the $\tilde{\lambda}$ with a scaling factor $\beta$.
To put this another way: all relevant properties of the Hückel Hamiltonian are encoded in the adjacency matrix of a graph/molecule.
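Since the adjacency eigenvalues of a cycle graph $C_n$ are known in closed form, $\tilde\lambda_k = 2\cos(2\pi k/n)$, the Hückel spectrum of a cyclic polyene can be sketched in a few lines with no diagonalization at all (the numerical values of $\alpha$ and $\beta$ below are placeholders, not fitted parameters):

```python
import math

def huckel_cycle_energies(n, alpha, beta):
    """Hückel energies E_k = alpha + beta * lambda_k for a cyclic polyene,
    using the closed-form adjacency eigenvalues 2*cos(2*pi*k/n) of C_n."""
    return sorted(alpha + beta * 2.0 * math.cos(2.0 * math.pi * k / n)
                  for k in range(n))

# Benzene (n = 6), in units of |beta| with alpha = 0 and beta = -1:
# the familiar pattern alpha+2beta, alpha+beta (x2), alpha-beta (x2), alpha-2beta.
print(huckel_cycle_energies(6, 0.0, -1.0))
```

Note how $\alpha$ only shifts and $\beta$ only scales the list, exactly as argued above.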
Particle in a box (Free electron network model)
There is a nice paper by Ruedenberg and Scherr (1953) about this topic here. The main idea is that your 1D particle-in-a-box solutions on each edge/bond have to be constrained at shared vertices/atoms. These constraints are "derived by intuition" in the cited paper.¹ You want the whole wavefunction, which is piecewise composed of the wavefunctions on each edge, to be continuous. Similar to Kirchhoff's law for electric circuits, you want to assert that the probability current that flows into a vertex/atom also flows out. For this reason, the constraints are called Kirchhoff conditions.
If $V$ is the set of all vertices/atoms, $\psi^e$ labels the wavefunction on the edge/bond $e$, and $E_v$ contains all edges/bonds incident to a vertex/atom $v$, then you can express your constraints as:

Continuity: $$\forall v \in V: \forall e_1, e_2 \in E_v: \psi^{e_1}(v) = \psi^{e_2}(v)$$ Flux preservation: $$\forall v \in V: \sum_{e \in E_v} (\psi^e)' (v) = 0$$
Without going into the details of the derivation: for cyclic graphs/molecules your eigenvalues $\lambda$ are given, again, by the eigenvalues $\tilde{\lambda}$ of the adjacency matrix $\mathbb{A}$. The free scaling factor is then the edge/bond length $L$ (up to some fixed numerical constants and factors of $h$).

Conclusion
For cyclic molecules, it can be proven that the essential properties of the spectrum are given just by the adjacency matrix $\mathbb{A}$ of the molecule in both cases.
The parameter $\alpha$ in the Hückel formalism introduces a constant offset, which can usually be ignored for chemistry.
This means that both methods have only one free parameter that scales the spectrum of the adjacency matrix. In the case of Hückel it is the overlap between adjacent $p_z$ orbitals, in the case of the free electron network model it is the bond length $L$.
¹ Note that you can rigorously derive general constraints by demanding only self-adjoint/Hermitian realisations of the Hamiltonian on the graph. This can be found e.g. here. But we are chemists and not mathematicians, so let's stick to the Kirchhoff conditions.
|
I started thermodynamics mostly through independent study and basically built up my own definitions of terms that appeared to fit with what was going on. They seemed to work, but my question is whether or not this is how they are actually supposed to be viewed.
Equipartition theorem. 'It is possible to show that at equilibrium molecules share an equal amount of energy between multiple independent coordinates or degrees of freedom which fully describe their states, so long as the energy term is quadratic for each coordinate.' The simplest example is an ideal gas, whose energy is $\frac{1}{2}mv_x^2+\frac{1}{2}mv_y^2 + \frac{1}{2}mv_z^2$ by Pythagoras' theorem. The $x,y,z$ terms are statistically the same because rotating the coordinates doesn't change the Pythagorean distance.
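One numerical check I did of this statement (just a sketch; the values of $kT$, $m$ and the sample size are arbitrary placeholders, and I am assuming each velocity component is Maxwell-Boltzmann distributed, $v_i \sim \mathcal N(0, \sqrt{kT/m})$):

```python
import math
import random

def mean_energies(kT=1.0, m=1.0, n=200_000, seed=1):
    """Sample velocity components and return the mean kinetic energy
    carried by each of the three translational degrees of freedom."""
    rng = random.Random(seed)
    sigma = math.sqrt(kT / m)
    comps = [[0.5 * m * rng.gauss(0.0, sigma) ** 2 for _ in range(n)]
             for _ in range(3)]  # x, y, z components
    return [sum(c) / n for c in comps]

print(mean_energies())  # each entry close to 0.5 * kT
```

All three averages land near $\frac{1}{2}kT$, consistent with equipartition as I stated it.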
'Temperature is the rate of energy transfer away from a point due to particular collisions of molecules (not net transfer).' To assign a number to this value, we calculate the work done in a given collision on a molecule. Collisions occur in one degree of freedom, and the velocity of approach is statistically the same as the velocity of separation. So, for instance, on average for an ideal gas
$kT = 2 \times \frac{1}{2}mv_x^2$.
Where $k$ is the Boltzmann constant. $\frac{1}{2}mv_x^2$ is roughly $E/a$, where $E$ is the energy of a particle and $a$ is the number of degrees of freedom. I have been able to apply calculus to this to deduce lots of useful facts, such as that the specific heat capacity of a monatomic ideal gas is $\frac{3}{2M_r}kN_A$, at equilibrium $PV = NkT$, and if little heat energy is transferred between an equilibrium gas and its surroundings, $T V^{\frac{2}{a}} = \text{constant}$.
'Entropy is the average number of degrees of freedom that contain energy, for a given molecule in a system.' It is

$$s = \frac{a_{\text{mean}}}{n}$$

where $n$ is the number of moles. This can change with temperature. For example, increasing the temperature of oxygen gas will 'free up' a greater proportion of molecules, causing $a_{\text{mean}}$ to increase. $s$ of the universe is always increasing: much as water spreads through an ice cube tray, energy spreads through existing empty coordinates.
'Enthalpy change is the amount of energy transferred (per mole) from the surroundings to the system, measured at constant temperature and pressure'. When negative, it goes into freeing up the coordinates of surrounding molecules if the 'universe' remains at roughly the same temperature. If enthalpy change is zero and the entropy change of the system is positive, all the energy released in the reaction frees up coordinates within the system. We can therefore say $\Delta S_{surroundings} = - \frac{\Delta H}{T}$
'Gibbs free energy (per mole) is proportional (-ve) to the amount of energy which has gone into freeing new coordinates in the 'universe' before and after an event at constant temperature and pressure'. Dimensionally it can be seen as $Ts - T(s_{system} + s_{surroundings}) = E_i - E_f$ where $E_i$ and $E_f$ are initial and final heat energies of the world. This leads to $-T(\Delta s_{system} + \Delta s_{surroundings}) = G < 0$ so
$\Delta H-T\Delta s_{system} = G < 0$
for any feasible reaction. I am not sure about this, because it suggests that energy has entered the closed system.
|
Second Principle of Finite Induction
Theorem
Let $S \subseteq \Z$ and let $n_0 \in \Z$ be given.
Suppose that:
$(1): \quad n_0 \in S$
$(2): \quad \forall n \ge n_0: \paren {\forall k: n_0 \le k \le n \implies k \in S} \implies n + 1 \in S$
Then:
$\forall n \ge n_0: n \in S$
The second principle of finite induction is usually stated and demonstrated for $n_0$ being either $0$ or $1$.
This is often dependent upon whether the analysis of the fundamentals of mathematical logic is zero-based or one-based.
Zero-based:
Suppose that:
$(1): \quad 0 \in S$
$(2): \quad \forall n \in \N: \paren {\forall k: 0 \le k \le n \implies k \in S} \implies n + 1 \in S$
Then:
$S = \N$
One-based:
Suppose that:
$(1): \quad 1 \in S$
$(2): \quad \forall n \in \N_{>0}: \paren {\forall k: 1 \le k \le n \implies k \in S} \implies n + 1 \in S$
Then:
$S = \N_{>0}$
Proof
Define $T$ as:
$T = \set {n \in \Z : \forall k: n_0 \le k \le n: k \in S}$
Since $n \le n$, it follows that $T \subseteq S$.
Therefore, it will suffice to show that:
$\forall n \ge n_0: n \in T$
Firstly, we have that $n_0 \in T$ if and only if the following condition holds:
$\forall k: n_0 \le k \le n_0 \implies k \in S$
Since $n_0 \in S$, it thus follows that $n_0 \in T$.
Now suppose that $n \in T$; that is: $\forall k: n_0 \le k \le n \implies k \in S$
By $(2)$, this implies:
$n + 1 \in S$
Thus, we have:
$\forall k: n_0 \le k \le n + 1 \implies k \in S$
Therefore, $n + 1 \in T$.
Hence, by the Principle of Finite Induction: $\forall n \ge n_0: n \in T$
as desired.
$\blacksquare$
The assumption that $\forall k: n_0 \le k \le n: k \in S$ for some $n \in \Z$ is the
induction hypothesis.
The step which shows that $n + 1 \in S$ follows from the assumption that $k \in S$ for all values of $k$ between $n_0$ and $n$ is called the
induction step.
Some sources refer to the
Second Principle of Finite Induction as the Second Principle of Mathematical Induction, and gloss over the differences between the two proof techniques if they discuss them both at all.
Hence the word
finite may well not appear in the various published expositions of this technique.
Both terms are used on $\mathsf{Pr} \infty \mathsf{fWiki}$.
Some sources call it the Principle of Strong (Finite) Induction. Results about Proofs by Induction can be found here.
Let $\map P n$ be a propositional function depending on $n \in \N$.
Let it be established that:
$\text{(a)}: \quad \map P 1$ is true
$\text{(b)}: \quad$ If all of $\map P 1, \map P 2, \ldots, \map P k$ are true, then $\map P {k + 1}$ is true.
$\mathbf 1:$ Prove $\map P 1$. Set $k \gets 1$ and prove $\map P 1$ according to $\text{(a)}$.
$\mathbf 2:$ Assume $\map P 1, \map P 2, \ldots, \map P k$ are true.
$\mathbf 3:$ Prove $\map P {k + 1}$.
$\mathbf 4:$ Increase $k$. Set $k \gets k + 1$ and go to step $\mathbf 2$.
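The proof method in steps $\mathbf 1$ through $\mathbf 4$ has a natural programming analogue (an illustrative sketch, not part of the source): a recursive function may assume its own correctness on all strictly smaller inputs, which is exactly the strong induction hypothesis. For example, the classical strong-induction proof that every integer $n \ge 2$ is a product of primes becomes:

```python
def prime_factorization(n):
    """Return the prime factorization of n >= 2 as a list of primes.

    Strong-induction structure: to handle n we assume the function is
    already correct on every m with 2 <= m < n (the recursive calls).
    """
    assert n >= 2
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            # n = d * (n // d) with both factors < n, so the
            # "induction hypothesis" applies to each of them.
            return prime_factorization(d) + prime_factorization(n // d)
    return [n]  # no divisor up to sqrt(n): n is itself prime

print(prime_factorization(360))  # [2, 2, 2, 3, 3, 5]
print(prime_factorization(97))   # [97]
```

The base case (primes) corresponds to step $\mathbf 1$, and each recursive call is an application of the hypothesis of step $\mathbf 2$.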
|
I didn't feel MO was the best place to ask this question, so apologies for this, but when I asked it at https://math.stackexchange.com/questions/2297837/why-is-this-cubic-polynomial-generic-for-cyclic-field-extensions, I didn't get enough information. I would really like to understand this example, so I will try to streamline the question and ask it here.
I am trying to understand the circumstances in which a finite cover of $\mathbb{P}^1_K$ of Galois group $G$ can be twisted to contain a field extension $L/K$ of group $G$. This construction comes from p1 of Serre's Topics in Galois Theory.
Suppose we have a field $K$, which I would like to think of as $\mathbb{Q}$, and take the curve $Y = \mathbb{P}^1_K$ and a finite subgroup $G \subset \mathrm{Aut}(Y)$, where $G\cong \mathbb{Z}/3\mathbb{Z}$. Now treat $Y$ as a finite branched cover of $\mathbb{P}^1$ via the quotient $Y \to Y/G$.
If $L/K$ is a Galois extension also with group $G$, then we get a map $\phi:G_K \to G \to \mathrm{Aut}(Y)$, which we can view as a 1-cocycle $\Big($because with trivial action $H^1(G_K,\mathrm{Aut}(Y)) = \mathrm{Hom}(G_K,\mathrm{Aut}(Y))$, right?$\Big)$.
(1) Why is the extension $L/K$ given by a rational point on $\mathbb{P}^1_K/G$ if and only if the twist of $Y$ by this cocycle has a rational point not invariant by G?
(2) How explicitly can I understand the twist of $Y$ by a cocycle? Can I get equations for it?
Thanks.
[Edit:] If I can't get an explanation, I will be content with a reference or two that can help me along. According to Serre the fact (1) is "a general property of Galois twists".
|
I am trying to simulate the phase separation of a binary mixture. If the free energy F is known as a function of the concentration $c$, the dynamical equation is:
$ \frac{\partial c(x,t)}{\partial t}=\frac{d^2}{dx^2} \frac{\delta F[c]}{\delta c} $
For the Flory-Huggins free energy we have:
$ \frac{\delta F[c]}{\delta c}=\log\left(\frac{c}{1-c}\right) + \chi c^2 + \gamma (c')^2 $
The first term is the entropy, the second is the attraction between particles ($\chi<0$), and the third acts like a surface tension.
I use forward-time, centered-space differencing. Even for very small $dt$ I get numerical instabilities. I first thought the term $\gamma(c')^2$ was responsible, but the instabilities remain even without it.
Here is my Matlab code for no-flux boundary conditions, clear and simplified as much as I can. I am aware the boundary condition implementation may not be correct but I don't think the problem comes from this.
What should I do?
function phaseSep ()
clear all; clf;
N = 101;      % number of grid points in x
dx = 1/(N - 1);
x = 0:dx:1;   % vector of x values
T = 1e3;      % number of time steps
dt = 1e-8;

% Second derivative
function y = mylaplace(f, i)
    y = f(i+1) - 2*f(i) + f(i-1);
end

function y = dfdc(C)
    p = -0.01; g = 0.01;
    for i = 2:N-1
        y(i) = log(C(i)/(1-C(i))) + p*C(i) + g*mylaplace(C,i)/dx^2;
    end
    % No-flux boundary conditions
    y(1) = y(2); y(N) = y(N-1);
end

% Initial concentration
for i = 1:N
    C(i) = 0.2 + 0.1*tanh(10*(x(i)-0.5));
end

% Plot initial concentration
hold;
plot(x, C);

% iterate
for t = 1:T
    Z = dfdc(C);
    for i = 2:N-1
        C(i) = C(i) + (mylaplace(Z,i)/dx^2)*dt;
    end
    % No-flux boundary conditions
    C(1) = C(2); C(N) = C(N-1);
end

% Plot final concentration
plot(x, C);
end
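As a hedged aside (not from the original post): the gradient-penalty term makes the dynamics effectively fourth order in space, and for the linear model problem $u_t = -k\,u_{xxxx}$ an explicit forward-time, centered-space scheme is only stable for roughly $dt \le dx^4/(8k)$, so halving $dx$ forces $dt$ down by a factor of 16. A small Python sketch with parameters chosen to mirror the grid above ($N = 101$, an assumed $k$ playing the role of $\gamma$):

```python
import numpy as np

# Explicit FTCS for the stiff linear model problem u_t = -k * u_xxxx,
# built as two nested second differences. Stability needs roughly
# dt <= dx^4 / (8k); with N = 101 (dx = 0.01) and k = 0.01 that is ~1.25e-7.
def run(dt, steps=2000, N=101, k=0.01):
    dx = 1.0 / (N - 1)
    x = np.linspace(0, 1, N)
    rng = np.random.default_rng(0)
    # smooth profile plus tiny noise to seed all Fourier modes
    u = 0.1 * np.sin(2 * np.pi * x) + 1e-6 * rng.standard_normal(N)
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
        bih = np.zeros_like(u)
        bih[1:-1] = (lap[2:] - 2*lap[1:-1] + lap[:-2]) / dx**2
        u = u - k * dt * bih
    return np.max(np.abs(u))

print(run(dt=1e-7) < 1.0)   # below the bound: solution stays bounded
print(run(dt=1e-6) < 1e3)   # above the bound: blows up, so this is False
```

The same scaling governs the full nonlinear scheme, which is why shrinking $dt$ alone may not look like it helps until it crosses the $dx^4$ threshold.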
|
Here is an example from Bhargav Bhatt's talk "Using DAG" at MSRI last week. Needless to say, any mistakes are mine.
Theorem. Let $X$ be a coherent (quasi-compact and quasi-separated) scheme, let $A$ be a ring complete with respect to an ideal $I\subseteq A$. Then$$ X(A) \to \varprojlim_n X(A/I^{n+1}) $$is bijective.
Before going into the proof, let us consider the case $X$ is affine. Then $$X(A) = {\rm Hom}(\Gamma(X, \mathcal{O}_X), A) = \varprojlim_n {\rm Hom}(\Gamma(X, \mathcal{O}_X), A/I^{n+1}) = \varprojlim_n X(A/I^{n+1}) . $$
The idea for the general (coherent) case is to replace $\Gamma(X, \mathcal{O}_X)$ with ${\rm Perf}(X)$, the category of perfect complexes on $X$.
Slogan. Affine schemes have "enough functions". Coherent schemes have "enough vector bundles (perfect complexes)".
The second idea may be due to Thomason.
More precisely, we have:
Proposition. Let $X$ and $Y$ be schemes.
(a) If $X$ is affine, then $$ {\rm Hom}(Y, X) \to {\rm Hom}(\Gamma(X, \mathcal{O}_X), \Gamma(Y, \mathcal{O}_Y)) $$is bijective.
(b) If $X$ is coherent, then$$ {\rm Hom}(Y, X) \to {\rm Hom}({\rm Perf}(X), {\rm Perf}(Y)) $$is an equivalence.
We must specify what (b) means (here is where DAG enters the picture). We consider ${\rm Perf}(X)$ as the symmetric monoidal $\infty$-category of perfect complexes on $X$ (complexes locally quasi-isomorphic to a bounded complex of locally free sheaves of finite rank). The ${\rm Hom}$ on the right means the $\infty$-groupoid (space) of exact $\otimes$-functors. So in particular (b) implies that this space is discrete. The map in (b) sends $f$ to $f^*$, the pull-back functor.
"Proof" of Theorem. We repeat the proof of the affine case, replacing rings with categories of perfect complexes:$$X(A) = {\rm Hom}({\rm Perf}(X), {\rm Perf}(A)) = \varprojlim_n {\rm Hom}({\rm Perf}(X), {\rm Perf}(A/I^{n+1})) = \varprojlim_n X(A/I^{n+1}) . $$Unlike in the affine case, the middle equality needs some justification, which I am not ready to give.
End remarks.
(1) I think Bhargav mentioned that an idea due to Gabber allows one to get rid of the assumption that $X$ is coherent in the Theorem.
(2) He also said that the above proof is the only one he is aware of.
(3) Reference for the above (thanks to the user crystalline): Bhargav Bhatt, Algebraization and Tannaka duality, arxiv.org/abs/1404.7483.
|
To my understanding, a mixed state is composed of various states with their corresponding probabilities, but what is the actual difference between maximally mixed states and maximally entangled states?
Suppose we have two Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$. A quantum state on $\mathcal{H}_A$ is a normalized, positive trace-class operator $\rho\in\mathcal{S}_1(\mathcal{H}_A)$. If $\mathcal{H}_A$ is finite dimensional (i.e. $\mathbb{C}^n$), then a quantum state is just a positive semi-definite matrix with unit trace on this Hilbert space. Let's stick to finite dimensions for simplicity.
Let's now consider the idea of a pure state: A pure state is a rank-one state, i.e. a rank-one projection, or a matrix that can be written as $|\psi\rangle\langle \psi|\equiv\psi\psi^{\dagger}$ for some $\psi\in\mathcal{H}_A$ (the first being the Dirac notation, the second the usual mathematical matrix notation - since I don't know which of the two you are more familiar with, let me use both). A mixed state is now a convex combination of pure states and, by virtue of the spectral theorem, any state is a convex combination of pure states. Hence, a mixed state can be written as
$$ \rho=\sum_i \lambda_i |\psi_i\rangle \langle \psi_i|$$for some $\lambda_i\geq 0$, $\sum_i \lambda_i=1$. In a sense, the $\lambda_i$ are a probability distribution and the state $\rho$ is a "mixture" of $|\psi\rangle\langle\psi|$ with weights $\lambda_i$. If we assume that the $\psi_i$ form an orthonormal basis, then a
maximally mixed state is a state where the $\lambda_i$ are the uniform probability distribution, i.e. $\lambda_i=\frac{1}{n}$ if $n$ is the dimension of the state. In this sense, the state is maximally mixed, because it is a mixture where all states occur with the same probability. In our finite dimensional example, this is the same as saying that $\rho$ is proportional to the identity matrix.
Note that a maximally mixed state is defined for all Hilbert spaces! In order to consider
maximally entangled states, we need to have a bipartition of the Hilbert space, i.e. we now consider states $\rho\in\mathcal{S}_1(\mathcal{H}_A\otimes \mathcal{H}_B)$. Let's suppose $\mathcal{H}_A=\mathcal{H}_B$ and finite dimensional. In this case, we can consider entangled states. A state is called separable if it can be written as a mixture
$$ \rho =\sum_i \lambda_i \rho^{(1)}_i\otimes \rho^{(2)}_i $$i.e. it is a mixture of product states, with $\rho^{(1)}_i$ a state on the space $\mathcal{H}_A$ and $\rho^{(2)}_i$ a state on the space $\mathcal{H}_B$. All states that are not separable are called
entangled. If we consider $\mathcal{H}_A=\mathcal{H}_B=\mathbb{C}^2$ and denote the standard basis by $|0\rangle,|1\rangle$, an entangled state is given by
$$ \rho= \frac{1}{2}(|01\rangle+|10\rangle)(\langle 01|+\langle 10|)$$ You can try writing it as a separable state and you will see that it's not possible. Note that this state is pure, but entangled states do not need to be pure!
It turns out that for bipartite systems (if you consider three or more systems, this is no longer true), you can define an order on
pure entangled states: There are states that are more entangled than others and then there are states that have the maximum amount of possible entanglement (like the example I wrote down above). I won't describe how this is done (it's too much here), but it turns out that there is an easy characterization of a maximally entangled state, which connects maximally entangled and maximally mixed states:
A pure bipartite state is maximally entangled, if the reduced density matrix on either system is maximally mixed.
The reduced density matrix is what is left if you take the partial trace over one of the subsystems. In our example above:
$$ \rho_A = tr_B(\rho)= tr_B(\frac{1}{2}(|01\rangle\langle 01|+|10\rangle\langle 01|+|01\rangle\langle 10|+|10\rangle\langle 10|))=\frac{1}{2}(|0\rangle\langle 0|+|1\rangle\langle 1|) $$
and the last part is exactly proportional to the identity, i.e. the reduced state is maximally mixed. You can do the same with $tr_A$ and see that the state $\rho$ is therefore maximally entangled.
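This characterization is easy to check numerically. A small illustrative sketch (using NumPy; the state and basis ordering are the ones from the example above): build $\rho$ for the Bell state, take the partial trace over $B$, and confirm the reduced state is $\frac{1}{2}\mathbb{1}$.

```python
import numpy as np

# Build the (pure) maximally entangled state |psi> = (|01> + |10>)/sqrt(2)
# on C^2 (x) C^2, trace out subsystem B, and confirm the reduced state is
# the maximally mixed state (1/2) * identity.
psi = np.zeros(4)
psi[1] = psi[2] = 1 / np.sqrt(2)     # basis ordering: |00>, |01>, |10>, |11>
rho = np.outer(psi, psi)             # rank-one density matrix |psi><psi|

# Partial trace over B: view rho as rho[i, j, k, l] with (i, k) indexing A
# and (j, l) indexing B, then sum the diagonal in the B indices.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.allclose(rho_A, 0.5 * np.eye(2)))   # True: reduced state maximally mixed
print(np.trace(rho @ rho))                   # ~1.0: rho itself is pure
```

The purity $\mathrm{tr}(\rho^2) = 1$ confirms the global state is pure even though its reduced state is maximally mixed - which is precisely the signature of maximal entanglement.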
See the following examples:
$\rho_1 = \frac{1}{2}(|00\rangle + |11\rangle)(\langle 00| + \langle 11|)$ is a maximally entangled state.
$\rho_2 = \frac{1}{2} (|0\rangle \langle 0| + |1\rangle \langle 1|)$ is a maximally mixed state.
The difference is not really about the word "maximally". Your question can be reduced to:
What's the difference between "entangled" and "mixed"?
Simply speaking: "entangled" describes a relationship between two systems (or subsystems), while "mixed" is a property of a single system.
When the state space for a system can be expressed as a tensor product of the state spaces of individual components of the system, an entangled state is one that can't be expressed as a tensor product of states of those individual components. Thus an entangled state is a particular type of (pure, i.e. non-mixed) state.
A mixed state, by contrast, is a probability distribution over pure states. This makes perfectly good sense whether or not the state space decomposes as a tensor product.
So an entangled state is a mixed state only in the degenerate sense that the probability distribution is concentrated on a single point. Of course you can also have a non-trivial mixture of entangled states.
|
I'll start with Earth
Earth is hurtling through space at a speed of approximately $29.78 \text{ km/s}$. If the sun were to disappear, the Earth would move in a straight line until the sun reappears. Since there are $259{,}200$ seconds in three days, that gives Earth the time to travel $29.78 \text{ km/s} \times 259{,}200 \text{ s} = 7{,}718{,}976 \text{ km}$. That's quite a distance.
Since the distance between the Earth and the sun varies between $147,098,290 km$ and $152,098,232 km$, I'll average that down to about 150 million kilometers for calculations.
Using Pythagoras, we can get the distance from the sun when it comes back after 3 days: $\sqrt{150{,}000{,}000^{2} + 7{,}700{,}000^{2}} \approx 150{,}197{,}500 \text{ km}$. This puts us about $200{,}000 \text{ km}$ out of orbit, peanuts compared to the difference between the Earth's aphelion and its perihelion, which is about 5 million kilometers.
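The straight-line travel and the Pythagorean distance can be reproduced in a few lines (values as assumed above):

```python
import math

# Reproduce the two estimates above with the values used in the text.
v = 29.78e3               # Earth's orbital speed, m/s
t = 3 * 24 * 3600         # three days in seconds (259,200 s)
d_sun = 150e9             # rounded mean Earth-sun distance, m

travel = v * t                        # straight-line distance covered
new_dist = math.hypot(d_sun, travel)  # Pythagoras on the right triangle

print(travel / 1e3)                   # ~7.72 million km
print((new_dist - d_sun) / 1e3)       # ~198,000 km further out
```

Using the unrounded three-day displacement gives an offset of roughly 200,000 km, still tiny next to the 5-million-km aphelion-perihelion spread.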
What about the influence of other planets?
Good point, Jupiter is huge and can get reasonably close to Earth [citation needed]. We'll assume a worst-case scenario and place Jupiter at a distance of 600,000,000 km from Earth. Jupiter is significantly slower than Earth, but in the span of three days this is not going to make a huge difference considering the distance between them.
You can calculate the acceleration of a body under the gravitational influence of another body as $G\frac{m}{r^{2}}$, where $G$ is the gravitational constant, $m$ is the mass of the attracting body (Jupiter in our case) and $r$ is the distance between the two bodies. Filling this in gives us: $6.673\times10^{-11}\times\frac{1.8986\times10^{27}}{(600{,}000{,}000{,}000)^{2}} = 3.51926606\times10^{-7} \text{ m/s}^{2}$, which means that the Earth will accelerate towards Jupiter at a rate of $3.51926606\times10^{-7}$ m/s every second. After 3 days we will have traveled $\frac{3.51926606\times10^{-7}\times259{,}200^{2}}{2} \approx 11{,}822 \text{ m}$ towards Jupiter, not even $12 \text{ km}$!
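The acceleration and the three-day drift can be reproduced directly (constants as assumed in the text):

```python
# Reproduce the Jupiter numbers above: a = G m / r^2, then drift = a t^2 / 2.
G = 6.673e-11        # gravitational constant, m^3 kg^-1 s^-2
m_jup = 1.8986e27    # mass of Jupiter, kg
r = 6.0e11           # assumed worst-case distance to Jupiter, m
t = 259200           # three days, s

a = G * m_jup / r**2      # acceleration toward Jupiter
drift = 0.5 * a * t**2    # displacement under constant acceleration

print(a)       # ~3.519e-7 m/s^2
print(drift)   # ~11822 m, i.e. not even 12 km
```

The same two lines with Mars's mass and an assumed 50-million-km separation give the ~56 km figure quoted below.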
Mars is closer though.
I see your point, but assuming Mars is as close as 50 million km, we get a shift towards Mars of about $56 \text{ km}$. Not really significant.
How will other planets fare?
Well, Mercury will be worst off. If there's no significant change there, there won't be a significant change anywhere. Traveling at about $47.362 \text{ km/s}$, it could cover a distance of more than 12 million km in 3 days. Taking into account its smaller orbit, this would take it about 1.2 million km out of orbit. Not bad, but still not much compared to the variance in its orbit, which is almost 14 million km.
Conclusion:
If Fernir eats the sun, there are more important things to worry about than where the planets will be in 3 days, when Fernir needs to go to the bathroom.
Edit:
But wait, the Earth is now going too fast for its distance from the sun
You're right. And it's slightly turned away from the sun too. And I must admit, I underestimated the effect of this. As some intelligent people in the comments pointed out, this would change the eccentricity of the Earth's orbit from 0.016 to 0.06. Using this calculator we can then figure out that Earth's orbit will now vary between 141 million km and 159 million km.
The difference has nearly quadrupled! In the grand scheme of things our orbit will still be relatively similar, though this might be enough to seriously influence weather patterns.
Another possible effect.
Since gravity cannot travel faster than the speed of light, the effect of the sun disappearing can only propagate at the speed of light. Gravity needs about 4 seconds to traverse the diameter of the sun, so the pull will drop from 100% to 0 over the course of about 4 seconds. Additionally, there will be about a 0.04 second lag between the part of the Earth facing the sun and the most distant part. The acceleration due to the sun's gravity is $\frac{6.67\times10^{-11}\times1.9891\times10^{30}}{(1.496\times10^{11})^{2}} = 5.928151\times10^{-3} \text{ m/s}^{2}$. Dropping from this value to 0 over the course of 4 seconds, with a maximum lag of 0.04 seconds, doesn't seem bad enough to cause anything major, but maybe it is enough to cause some earthquakes? I'll leave that to the geologists to decide.
|
Answer
$$d = 0.958 \space g/mL$$
Work Step by Step
$$V = 2.18 \space L \times \frac{1000 \space mL}{1 \space L} = 2180 \space mL$$ $$d = \frac{m}{V} = \frac{2088 \space g}{2180 \space mL} \approx 0.958 \space g/mL $$
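The same arithmetic in a short script (values from the problem):

```python
# Density of a 2088 g sample occupying 2.18 L, expressed in g/mL.
mass_g = 2088.0
volume_mL = 2.18 * 1000      # 1 L = 1000 mL
density = mass_g / volume_mL
print(round(density, 3))     # 0.958
```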
|
"""Author: John VolkDate: 10/10/2016"""from __future__ import print_functionfrom sympy.parsing.sympy_parser import (parse_expr, standard_transformations, implicit_multiplication,\ implicit_application)import numpy as npimport sympyimport re
Python, regex, and SymPy to automate custom text conversions to LaTeX¶ This post includes examples on how to:
Convert a text equation that is in a bad format for Python and SymPy
Convert a normal Python mathematical expression into a suitable form for SymPy's LaTeX printer
Use sympy to produce LaTeX output
Create functions and data structures to make the process reusable and efficient to fit your needs
Lets start with the following string that we assign to the variable text that represents a mathematical model but in poor printing form:¶
text = """Ln(Y) = a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)+ a5 dtime + a6 dtime^2"""text
'\nLn(Y) = a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)\n+ a5 dtime + a6 dtime^2'
However, we want this expression to look like:¶
$ \log{\left (Y \right )} = a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi dtime \right )} + a_{4} \cos{\left (2 \pi dtime \right )} + a_{5} dtime + a_{6} dtime^{2} $
Observe the following differences between text and valid LaTeX:¶
Some variables and functions are concatenated, i.e.: LnQ, correct latex would be \log{Q}
Functions are not in proper latex form (e.g. Sin = \sin, Ln = \log, ...)
Missing subscripts: a0 = a_0
Newline characters need to be removed
Some symbols need to be replaced: dtime = t
Python's symbolic math package SymPy can automate some of the transformations that we need, and SymPy has built-in LaTeX printing capabilities.¶
If you are not familiar with SymPy you should take some time to familiarize yourself with it; it takes some time to get used to its syntax. Check out the well done documentation for Sympy here.
First we need to convert the string (text) into valid SymPy input¶
Valid sympy input includes valid python math expressions with added recognition of math operations. For example the following expression can be parsed by SymPy without error:
exp = "(x + 4) * (x + sin(x**3) + log(x + 5*x) + 3*x - sqrt(y))" sympy.expand(exp)
4*x**2 - x*sqrt(y) + x*log(x) + x*sin(x**3) + x*log(6) + 16*x - 4*sqrt(y) + 4*log(x) + 4*sin(x**3) + 4*log(6)
print(sympy.latex(sympy.expand(exp)))
4 x^{2} - x \sqrt{y} + x \log{\left (x \right )} + x \sin{\left (x^{3} \right )} + x \log{\left (6 \right )} + 16 x - 4 \sqrt{y} + 4 \log{\left (x \right )} + 4 \sin{\left (x^{3} \right )} + 4 \log{\left (6 \right )}
Now back to our original text that we want to convert, we need to make some simple adjustments to make the string a valid SymPy expression¶
You have several options here; in this case I choose to use regular expressions (regex) to do basic string pattern substitutions. You will likely need to modify these operations or create alternative regexes to prepare your text. If you do not know regex you can probably get by using basic Python string methods.
## Note, I removed the LHS and the equal sign from the equation - SymPy requires special syntax for equations
## further explanation below
text = """
a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)
+ a5 dtime + a6 dtime^2"""

## Make a dictionary to map our strings to standard python math or symbols as needed
symbol_map = {
    '^': '**',
    'Ln': 'log ',
    'Sin': 'sin ',
    'Cos': 'cos ',
    'dtime': 't'
}

## use the dictionary to compile a regex on the keys
## escape regex characters because ^ is one of the keys (^ is a regex special character)
to_symbols = re.compile('|'.join(re.escape(key) for key in symbol_map.keys()))

## run through the text looking for keys (regex) and replacing them with the values from the dict
text = to_symbols.sub(lambda x: symbol_map[x.group()], text)
text
'\na0 + a1 log Q + a2 log Q**2 + a3 sin (2 pi t) + a4 cos (2 pi t)\n+ a5 t + a6 t**2'
## remove new line characters from the text
text = re.sub('\n', ' ', text)
text
' a0 + a1 log Q + a2 log Q**2 + a3 sin (2 pi t) + a4 cos (2 pi t) + a5 t + a6 t**2'
## regex to replace coefficients a0, a1, ... with their equivalents with subscripts e.g. a0 = a_0
text = re.sub(r"\s+a(\d)", r"a_\1", text)
text
'a_0 +a_1 log Q +a_2 log Q**2 +a_3 sin (2 pi t) +a_4 cos (2 pi t) +a_5 t +a_6 t**2'
At this point text is almost ready for LaTeX...¶
The remaining issues are sufficiently difficult string manipulations that SymPy's parser is perfect for the remaining conversions:¶
Instead of trying to figure out how to place asterisks everywhere that multiplication is implied and parentheses where functions are implied (e.g. log Q**2 should be log(Q**2)), we can use SymPy's parser, which is quite powerful.
We use implicit multiplication (self-explanatory) and implicit application for function applications that are missing parentheses; both of these are transformations provided by the SymPy parser. Remember the parser will still follow mathematical order of operations (PEMDAS) when doing implicit application. The parser can handle additional cases as well, such as function exponentiation. Check the handy examples at the documentation link above.
## get the transformations we need (imported above) and place into a tuple that is required for the parser
transformations = standard_transformations + (implicit_multiplication, implicit_application, )

## parse the text by applying implicit multiplication and implicit (math function) application
expr = parse_expr(text, transformations=transformations)
expr
a_0 + a_1*log(Q) + a_2*log(Q**2) + a_3*sin(2*pi*t) + a_4*cos(2*pi*t) + a_5*t + a_6*t**2
print(sympy.latex(expr))
a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2}
SymPly amazing!!¶
$a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2} $
## global variables for the function
symbol_map = {
    '^': '**',
    'Ln': 'log ',
    'Sin': 'sin ',
    'Cos': 'cos ',
    'dtime': 't'
}
transformations = standard_transformations + (implicit_multiplication, implicit_application, )

## the function
def translate(bad_text):
    """My custom string-to-LaTeX-ready SymPy expression translation function

    Arguments:
        bad_text (str): text that is in some bad format that requires string
            manipulation, including custom string modifications to math
            functions, symbols, and operators defined by the global symbol_map
            dictionary (for substitutions) and the regexs compiled herein.
            More advanced manipulations provided by SymPy are defined by the
            global variable `transformations`, an input to the SymPy parser

    Returns:
        expr (sympy expression): A SymPy expression created by the SymPy
            expression parser after first doing custom string modifications
            to math functions, symbols, and operators
    """
    to_symbols = re.compile('|'.join(re.escape(key) for key in symbol_map.keys()))
    bad_text = to_symbols.sub(lambda x: symbol_map[x.group()], bad_text)
    bad_text = re.sub('\n', '', bad_text)
    text = re.sub(r"\s+a(\d)", r"a_\1", bad_text)
    expr = parse_expr(text, transformations=transformations)
    return expr
## very handy, now we just have to convert to TeX and print
print(sympy.latex(translate(text)))
a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2}
What about the original text? It was an equation with a left-hand side:¶
Parse both the LHS and RHS separately and combine with SymPy's Equation method
text = """Ln(Y) = a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)+ a5 dtime + a6 dtime^2"""# split on the equal signt1 = text.split('=')[0] t2 = text.split('=')[1]
## Use sympy.Eq(LHS, RHS)
LHS = translate(t1)
RHS = translate(t2)
print(sympy.latex(sympy.Eq(LHS, RHS)))
\log{\left (Y \right )} = a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2}
$\log{\left (Y \right )} = a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2} $
SymPly fantastic!!!¶
## extract SymPy symbols from both sides of eqn
LHS_symbols = [str(x) for x in LHS.atoms(sympy.symbol.Symbol)]
RHS_symbols = [str(x) for x in RHS.atoms(sympy.symbol.Symbol)]
LHS_symbols
['Y']
RHS_symbols
['a_0', 'Q', 'a_5', 'a_6', 'a_2', 'a_3', 'a_1', 'a_4', 't']
## remove Q and t from the RHS list because we do not want to plug values in for them
RHS_symbols.pop(RHS_symbols.index('Q'))
RHS_symbols.pop(RHS_symbols.index('t'));
## create a dictionary assigning each symbol to random variables
plug_in_dict = {k: np.random.randint(10) for k in RHS_symbols}
print(plug_in_dict)
{'a_6': 4, 'a_5': 7, 'a_4': 5, 'a_3': 0, 'a_2': 1, 'a_1': 0, 'a_0': 6}
## now plug in our values and let sympy simplify! Note, the variables we changed only appear on the RHS
RHS.subs(plug_in_dict)
4*t**2 + 7*t + log(Q**2) + 5*cos(2*pi*t) + 6
print(sympy.latex(sympy.Eq(LHS, RHS.subs(plug_in_dict))))
\log{\left (Y \right )} = 4 t^{2} + 7 t + \log{\left (Q^{2} \right )} + 5 \cos{\left (2 \pi t \right )} + 6
$\log{\left (Y \right )} = 4 t^{2} + 7 t + \log{\left (Q^{2} \right )} + 5 \cos{\left (2 \pi t \right )} + 6 $
Remarks¶
I hope this was useful to anyone trying to use Python to batch-process strings into mathematical expressions and LaTeX. In my case I needed to process many of these types of strings that were output from a computer code that fits regression models to input data. As you can see, if you work with mathematical expressions of any kind and already know basic Python, SymPy is undoubtedly useful. If you liked this or have experimented with your own implementations of Python, regex, and/or SymPy to do cool and useful things, please share in the comments below.
|
The time reversal operator $T$ is an antiunitary operator, and I saw $T^\dagger$ in many places
(for example when someone performs a "time reversal" $THT^\dagger$), but I wonder if there is a well-defined adjoint for an antilinear operator. Suppose we have an antilinear operator $A$ such that $$ A(c_1|\psi_1\rangle+c_2|\psi_2\rangle)=c_1^*A|\psi_1\rangle+c_2^*A|\psi_2\rangle $$ for any two kets $|\psi_1\rangle,|\psi_2\rangle$ and any two complex numbers $c_1, c_2$. And below is my reason for questioning the existence of $A^\dagger$: Let's calculate $\langle \phi|cA^\dagger|\psi\rangle$. On the one hand, obviously $$ \langle \phi|cA^\dagger|\psi\rangle=c\langle \phi|A^\dagger|\psi\rangle. $$ But on the other hand, $$ \langle \phi|cA^\dagger|\psi\rangle =\langle \psi|Ac^*|\phi\rangle^*=\langle \psi|cA|\phi\rangle^*=c^*\langle \psi|A|\phi\rangle^*=c^*\langle \phi|A^\dagger|\psi\rangle, $$ from which we deduce that $c\langle \phi|A^\dagger|\psi\rangle=c^*\langle \phi|A^\dagger|\psi\rangle$, almost always false, and thus a contradiction! So where did I go wrong, if indeed $A^\dagger$ exists?
I) First of all, one should never use the Dirac bra-ket notation (in its ultimate version where an operator acts to the right on kets and to the left on bras) to consider the definition of adjointness, since the notation was designed to make the adjointness property look like a mathematical triviality, which it is not. See also this Phys.SE post.
II) OP's question(v1) about the existence of the adjoint of an antilinear operator is an interesting mathematical question, which is rarely treated in textbooks, because they usually start by assuming that operators are $\mathbb{C}$-linear.
III) Let us next recall the mathematical definition of the adjoint of a linear operator. Let there be a Hilbert space $H$ over a field $\mathbb{F}$, which in principle could be either real or complex numbers, $\mathbb{F}=\mathbb{R}$ or $\mathbb{F}=\mathbb{C}$. Of course in quantum mechanics, $\mathbb{F}=\mathbb{C}$. In the complex case, we will use the standard physicist's convention that the inner product/sesquilinear form $\langle \cdot | \cdot \rangle$ is conjugate $\mathbb{C}$-linear in the first entry, and $\mathbb{C}$-linear in the second entry.
Recall Riesz' representation theorem: For each continuous $\mathbb{F}$-linear functional $f: H \to \mathbb{F}$ there exists a unique vector $u\in H$ such that $$\tag{1} f(\cdot)~=~\langle u | \cdot \rangle.$$
Let $A:H\to H$ be a continuous$^1$ $\mathbb{F}$-linear operator. Let $v\in H$ be a vector. Consider the continuous $\mathbb{F}$-linear functional
$$\tag{2} f(\cdot)~=~\langle v | A(\cdot) \rangle.$$
The value $A^{\dagger}v\in H$ of the adjoint operator $A^{\dagger}$ at the vector $v\in H$ is by definition the unique vector $u\in H$, guaranteed by Riesz' representation theorem, such that $$\tag{3} f(\cdot)~=~\langle u | \cdot \rangle.$$
In other words, $$\tag{4} \langle A^{\dagger}v | w \rangle~=~\langle u | w \rangle~=~f(w)=\langle v | Aw \rangle. $$
It is straightforward to check that the adjoint operator $A^{\dagger}:H\to H$ defined this way becomes an $\mathbb{F}$-linear operator as well.
IV) Finally, let us return to OP's question and consider the definition of the adjoint of an antilinear operator. The definition will rely on the complex version of Riesz' representation theorem. Let $H$ be given a complex Hilbert space, and let $A:H\to H$ be an antilinear continuous operator. In this case, the above equations (2) and (4) should be replaced with
$$\tag{2'} f(\cdot)~=~\overline{\langle v | A(\cdot) \rangle},$$
and
$$\tag{4'} \langle A^{\dagger}v | w \rangle~=~\langle u | w \rangle~=~f(w)=\overline{\langle v | Aw \rangle}, $$
respectively. Note that $f$ is a $\mathbb{C}$-linear functional.
It is straightforward to check that the adjoint operator $A^{\dagger}:H\to H$ defined this way becomes an antilinear operator as well.
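The defining relation (4') is easy to check numerically. Here is a minimal sketch (my illustration, not part of the derivation above) using NumPy for the antilinear operator $A(w) = M\bar{w}$ on $\mathbb{C}^n$, whose adjoint works out to $A^{\dagger}(w) = M^{T}\bar{w}$; note that `np.vdot` conjugates its first argument, matching the physicist's convention used above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random complex matrix defining the antilinear operator A(w) = M @ conj(w).
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def A(w):
    """Antilinear operator: A(c*w) = conj(c) * A(w)."""
    return M @ np.conj(w)

def A_dagger(w):
    """Its adjoint, also antilinear: A^dagger(w) = M^T @ conj(w)."""
    return M.T @ np.conj(w)

v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Check (4'): <A^dagger v | w> = conj(<v | A w>).
# np.vdot(a, b) = sum(conj(a) * b), i.e. antilinear in the first slot.
lhs = np.vdot(A_dagger(v), w)
rhs = np.conj(np.vdot(v, A(w)))
assert np.allclose(lhs, rhs)
```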
--
|
We present the first observation of exclusive $e^+e^-$ production in hadron-hadron collisions, using $p\bar{p}$ collision data at \mbox{$\sqrt{s}=1.96$ TeV} taken by the Run II Collider Detector at Fermilab, and corresponding to an integrated luminosity of \mbox{532 pb$^{-1}$}. We require the absence of any particle signatures in the detector except for an electron and a positron candidate, each with transverse energy {$E_T>5$ GeV} and pseudorapidity {$|\eta|<2$}. With these criteria, 16 events are observed compared to a background expectation of {$1.9\pm0.3$} events. These events are consistent in cross section and properties with the QED process \mbox{$p\bar{p} \to p + e^+e^- + \bar{p}$} through two-photon exchange. The measured cross section is \mbox{$1.6^{+0.5}_{-0.3}\mathrm{(stat)}\pm0.3\mathrm{(syst)}$ pb}. This agrees with the theoretical prediction of {$1.71 \pm 0.01$ pb}.
We report the first measurement of the cross section for Z boson pair production at a hadron collider. This result is based on a data sample corresponding to 1.9 fb-1 of integrated luminosity from ppbar collisions at sqrt{s} = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. In the llll channel, we observe three ZZ candidates with an expected background of 0.096^{+0.092}_{-0.063} events. In the llnunu channel, we use a leading-order calculation of the relative ZZ and WW event probabilities to discriminate between signal and background. In the combination of llll and llnunu channels, we observe an excess of events with a probability of $5.1\times 10^{-6}$ to be due to the expected background. This corresponds to a significance of 4.4 standard deviations. The measured cross section is sigma(ppbar -> ZZ) = 1.4^{+0.7}_{-0.6} (stat.+syst.) pb, consistent with the standard model expectation.
We have measured the differential cross section for the inclusive production of psi(2S) mesons decaying to mu^{+} mu^{-} that were produced in prompt or B-decay processes from ppbar collisions at 1.96 TeV. These measurements have been made using a data set from an integrated luminosity of 1.1 fb^{-1} collected by the CDF II detector at Fermilab. For events with transverse momentum p_{T} (psi(2S)) > 2 GeV/c and rapidity |y(psi(2S))| < 0.6 we measure the integrated inclusive cross section sigma(ppbar -> psi(2S)X) Br(psi(2S) -> mu^{+} mu^{-}) to be 3.29 +- 0.04(stat.) +- 0.32(syst.) nb.
Azimuthal decorrelations between the two central jets with the largest transverse momenta are sensitive to the dynamics of events with multiple jets. We present a measurement of the normalized differential cross section based on the full dataset (L=36/pb) acquired by the ATLAS detector during the 2010 sqrt(s)=7 TeV proton-proton run of the LHC. The measured distributions include jets with transverse momenta up to 1.3 TeV, probing perturbative QCD in a high energy regime.
This letter describes the observation of the light-by-light scattering process, $\gamma\gamma\rightarrow\gamma\gamma$, in Pb+Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV. The analysis is conducted using a data sample corresponding to an integrated luminosity of 1.73 nb$^{-1}$, collected in November 2018 by the ATLAS experiment at the LHC. Light-by-light scattering candidates are selected in events with two photons produced exclusively, each with transverse energy $E_{\textrm{T}}^{\gamma} > 3$ GeV and pseudorapidity $|\eta_{\gamma}| < 2.37$, diphoton invariant mass above 6 GeV, and small diphoton transverse momentum and acoplanarity. After applying all selection criteria, 59 candidate events are observed for a background expectation of 12 $\pm$ 3 events. The observed excess of events over the expected background has a significance of 8.2 standard deviations. The measured fiducial cross section is 78 $\pm$ 13 (stat.) $\pm$ 7 (syst.) $\pm$ 3 (lumi.) nb.
Inclusive production of $\mathrm{D^{*\pm}}$ mesons in two-photon collisions was measured by the L3 experiment at LEP. The data were collected at a centre-of-mass energy $\sqrt{s} = 189$ GeV with an integrated luminosity of $176.4 \mathrm{pb^{-1}}$. Differential cross sections of the process $\mathrm{e^+e^- \to D^{*\pm} X}$ are determined as functions of the transverse momentum and pseudorapidity of the $\mathrm{D^{*\pm}}$ mesons in the kinematic region 1 GeV $< p_{T}^{\mathrm{D^*}} < 5 $ GeV and $\mathrm{|\eta^{D^*}|} < 1.4$. The cross section integrated over this phase space domain is measured to be $132 \pm 22(stat.) \pm 26(syst.)$ pb. The differential cross sections are compared with next-to-leading order perturbative QCD calculations.
Single and multi-photon events with missing energy are analysed using data collected with the L3 detector at LEP at a centre-of-mass energy of 189 GeV, for a total of 176 pb^{-1} of integrated luminosity. The cross section of the process e+e- -> nu nu gamma (gamma) is measured and the number of light neutrino flavours is determined to be N_\nu = 3.011 +/- 0.077 including lower energy data. Upper limits on cross sections of supersymmetric processes are set and interpretations in supersymmetric models provide improved limits on the masses of the lightest neutralino and the gravitino. Graviton-photon production in low scale gravity models with extra dimensions is searched for and limits on the energy scale of the model are set exceeding 1 TeV for two extra dimensions.
Using 1.8 fb-1 of pp collisions at a center-of-mass energy of 7 TeV recorded by the ATLAS detector at the Large Hadron Collider, we present measurements of the production cross sections of Upsilon(1S,2S,3S) mesons. Upsilon mesons are reconstructed using the di-muon decay mode. Total production cross sections for p_T<70 GeV and in the rapidity interval |y_Upsilon|<2.25 are measured to be 8.01+-0.02+-0.36+-0.31 nb, 2.05+-0.01+-0.12+-0.08 nb, 0.92+-0.01+-0.07+-0.04 nb respectively, with uncertainties separated into statistical, systematic, and luminosity measurement effects. In addition, differential cross section times di-muon branching fractions for Upsilon(1S), Upsilon(2S), and Upsilon(3S) as a function of Upsilon transverse momentum p_T and rapidity are presented. These cross sections are obtained assuming unpolarized production. If the production polarization is fully transverse or longitudinal with no azimuthal dependence in the helicity frame the cross section may vary by approximately +-20%. If a non-trivial azimuthal dependence is considered, integrated cross sections may be significantly enhanced by a factor of two or more. We compare our results to several theoretical models of Upsilon meson production, finding that none provide an accurate description of our data over the full range of Upsilon transverse momenta accessible with this dataset.
We study charged particle production (pT>0.5 GeV/c, |η|<0.8) in proton-antiproton collisions at total center-of-mass energies √s = 300 GeV, 900 GeV, and 1.96 TeV. We use the direction of the charged particle with the largest transverse momentum in each event to define three regions of η-ϕ space: “toward”, “away”, and “transverse.” The average number and the average scalar pT sum of charged particles in the transverse region are sensitive to the modeling of the “underlying event.” The transverse region is divided into a MAX and MIN transverse region, which helps separate the “hard component” (initial and final-state radiation) from the “beam-beam remnant” and multiple parton interaction components of the scattering. The center-of-mass energy dependence of the various components of the event is studied in detail. The data presented here can be used to constrain and improve QCD Monte Carlo models, resulting in more precise predictions at the LHC energies of 13 and 14 TeV.
A study of WZ production in proton-proton collisions at sqrt(s) = 7 TeV is presented using data corresponding to an integrated luminosity of 4.6 fb^-1 collected with the ATLAS detector at the Large Hadron Collider in 2011. In total, 317 candidates, with a background expectation of 68+/-10 events, are observed in double-leptonic decay final states with electrons, muons and missing transverse momentum. The total cross-section is determined to be sigma_WZ(tot) = 19.0+1.4/-1.3(stat.)+/-0.9(syst.)+/-0.4(lumi.) pb, consistent with the Standard Model expectation of 17.6+1.1/-1.0 pb. Limits on anomalous triple gauge boson couplings are derived using the transverse momentum spectrum of Z bosons in the selected events. The cross section is also presented as a function of Z boson transverse momentum and diboson invariant mass.
The pair production of Z bosons is studied using the data collected by the L3 detector at LEP in 1998 in e+e- collisions at a centre-of-mass energy of 189 GeV. All the visible final states are considered and the cross section of this process is measured to be 0.74 +0.15 -0.14 (stat.) +/- 0.04 (syst.) pb. Final states containing b quarks are enhanced by a dedicated selection and their production cross section is found to be 0.18 +0.09 -0.07 (stat.) +/- 0.02 (syst.) pb. Both results are in agreement with the Standard Model predictions. Limits on anomalous couplings between neutral gauge bosons are derived from these measurements.
A search for highly ionising, penetrating particles with electric charges from |q| = 2e to 6e is performed using the ATLAS detector at the CERN Large Hadron Collider. Proton-proton collision data taken at $\sqrt{s}$=7 TeV during the 2011 running period, corresponding to an integrated luminosity of 4.4 fb$^{-1}$, are analysed. No signal candidates are observed, and 95% confidence level cross-section upper limits are interpreted as mass-exclusion lower limits for a simplified Drell--Yan production model. In this model, masses are excluded from 50 GeV up to 430, 480, 490, 470 and 420 GeV for charges 2e, 3e, 4e, 5e and 6e, respectively.
The results of a search for pair production of the scalar partners of bottom quarks in 2.05 fb^-1 of pp collisions at sqrt{s} = 7 TeV using the ATLAS experiment are reported. Scalar bottoms are searched for in events with large missing transverse momentum and two jets in the final state, where both jets are identified as originating from a b-quark. In an R-parity conserving minimal supersymmetric scenario, assuming that the scalar bottom decays exclusively into a bottom quark and a neutralino, 95% confidence-level upper limits are obtained in the tilde{b}_1 - tilde{chi}^0_1 mass plane such that for neutralino masses below 60 GeV scalar bottom masses up to 390 GeV are excluded.
|
Also it was stated there that Maxwell's equations are invariant under Lorentz transformations but not under Galilean transformations?
Please provide me with some explanation regarding this.
I'll point out the more detailed differences below, but a nice rule of thumb to follow for these is that since the Galilean transformation gets its name from a man who lived several centuries ago, the physics formulation for them is more basic than the Lorentz transformation, which is a more modern interpretation of physics. That way you can remember that the Galilean transformation is a crude approximation of the motion of particles, while Lorentz transformations are more exact.

Galilean Transformation
This is what most people's intuitive understanding of a particle in motion would be. Say we are inside a train that is moving. Now, if we start running across the train in the same direction, we would expect that to a viewer outside of the train, the speed of you running inside the train is the train's speed plus the speed at which you are running.
However, this is not necessarily true. For centuries this was assumed to be a simple rule that followed from the laws of physics, but more recently it was proven not to be the case.
Below are the Galilean transformation equations, for motion in the x direction. We find that our position in another frame (the non-moving frame) is always given by our original position plus the velocity times time (the velocity of our moving frame, i.e., the train).
Notice that time in the first frame equals time in the second frame! This is not necessarily true! This is precisely why nobody really questioned this transformation for 3 centuries or so. I mean, who would have guessed that a guy inside a train measures time more slowly than a guy outside the train?
$x' = x - vt$
$y' = y$
$z' = z$
$t' = t$
Lorentz transformation
This gets a bit more complicated. Using the same example, it is proven that the speed that an outside observer watches you move is actually not your running speed plus the train's speed. This has to do with being in a
moving reference frame when you, inside the moving train, measure your speed. To you, inside the train, you are the only thing doing the moving. You pretend the train is stationary, and the rest of the world is moving by you. Now an outside observer thinks that he/she is stationary, and that you are inside a moving reference frame. This brings up the principle of relativity.
Let's increase our speeds a lot to demonstrate the Lorentz transformation. Say the train is moving at .75c (.75 the speed of light) and then inside the train, you move at .5c. This would mean (using Galilean transformations) that an outside observer sees you moving at 1.25c! This is impossible, since Einstein tells us we can never move faster than the speed of light. Lorentz transformations take care of this paradox. To keep it simple, there is a scale factor that limits us from ever reaching a speed greater than the speed of light.
The key assumption that makes this possible is that
time is not the same in all reference frames. This allows for length contraction, which allows for all of physics to still work while maintaining that all particles cannot exceed the speed of light. Below are the equations
$\beta = \frac{v}{c}$
$\gamma = \frac{1}{\sqrt{1 - \beta^2}}$
$t' = \gamma \left(t - \frac{vx}{c^2}\right)$
$x' = \gamma(x - vt)$
$y' = y$
$z' = z$
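As a sanity check on the train example above, the velocity-addition rule implied by these equations, $u' = (u + v)/(1 + uv/c^2)$, can be evaluated directly (a small illustrative sketch, with speeds in units of $c$):

```python
def add_velocities(u, v):
    """Relativistic velocity addition, speeds given as fractions of c."""
    return (u + v) / (1 + u * v)

# Galilean guess: 0.75c + 0.5c = 1.25c (impossible).
# The Lorentz result stays below c:
print(add_velocities(0.75, 0.5))  # → 0.909090... (just under c)
```

No matter how close the two speeds get to c, the result never exceeds the speed of light.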
Please look at "On The Galilean and Lorentz Transformations" published by American Open Advanced Physics Journal, Vol. 1, No. 2, November 2013, PP: 11 - 32, Available online at http://rekpub.com/Journals.php. In this paper, a detailed distinction between the two transformations is given. But the answer to this question is completely different from Answer 1.
|
When light is red shifted from distant galaxies, the photons have lost energy. When dark energy pushes objects apart, those objects have gained energy from a larger gravitational potential. Is the amount of energy that dark energy applies to push objects apart equal to the amount of energy lost because light from distant galaxies is red shifted?
Is the amount of energy that dark energy applies to push objects apart
equal to the amount of energy lost because light from distant galaxies is red shifted?
No, dark energy existed prior to the expansion and the shift towards blue or red is based on the direction of movement of the emitter relative to the observer, so the amount isn't equal.
Sources:
Wikipedia: Dark Energy - "Change in expansion over time":
"Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period
preceding cosmic acceleration."
Wikipedia: "Integrated Sachs–Wolfe effect":
"The integrated Sachs–Wolfe (ISW) effect is also caused by gravitational redshift, but it occurs between the surface of last scattering and the Earth, so it is not part of the primordial CMB. It occurs when the Universe is dominated in its energy density by something other than matter. If the Universe is dominated by matter, then large-scale gravitational potential energy wells and hills do not evolve significantly. If the Universe is dominated by radiation, or by dark energy, though, those potentials do evolve,
subtly changing the energy of photons passing through them.
There are two contributions to the ISW effect. The "early-time" ISW occurs immediately after the (non-integrated) Sachs–Wolfe effect produces the primordial CMB, as photons course through density fluctuations while there is still enough radiation around to affect the Universe's expansion. Although it is physically the same as the late-time ISW, for observational purposes it is usually lumped in with the primordial CMB, since the
matter fluctuations that cause it are in practice undetectable."
Wikipedia: "Theories of Dark Energy":
The Equation of State (EoS) of Dark Energy for 4 common models by Redshift:
This question was closed on our Physics.SE site: "Could some Red and Blue shifts be the result of light passing through “dark matter”? [closed]", while this: "Redshifted Photon Energy" was not.
See also:
"New Aspects of Photon Propagation in Expanding Universes" (Oct 6 2016), by H.-J. Fahr and M. Heyl.
Wikipedia: "Friedmann equations":
"The Friedmann equations are a set of equations in physical cosmology that govern the expansion of space in homogeneous and isotropic models of the universe within the context of general relativity.".
Wikipedia: "Cosmic Expansion History":
Since the densities of various species scale as different powers of $a$, e.g. $a^{-3}$ for matter etc., the Friedmann equation can be conveniently rewritten in terms of the various density parameters as
$$H(a)\equiv {\frac {\dot {a}}{a}}=H_{0}{\sqrt {(\Omega _{c}+\Omega _{b})a^{-3}+\Omega _{\text{rad}}a^{-4}+\Omega _{k}a^{-2}+\Omega _{DE}a^{-3(1+w)}}}$$
where $w$ is the equation of state of dark energy, and assuming negligible neutrino mass (significant neutrino mass requires a more complex equation). The various $\Omega$ parameters add up to 1 by construction.
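For illustration, $H(a)$ can be evaluated directly from this formula. The sketch below uses roughly Planck-like density parameters as placeholder values (they are not part of the quoted text), with $\Omega_{DE}$ fixed by the constraint that the $\Omega$'s sum to 1:

```python
import math

def hubble(a, H0=67.7, Om=0.31, Orad=9e-5, Ok=0.0, w=-1.0):
    """H(a) in km/s/Mpc from the Friedmann equation above.

    Density parameters are illustrative (roughly Planck-like);
    Ode is fixed by requiring the Omegas to sum to 1.
    """
    Ode = 1.0 - Om - Orad - Ok
    return H0 * math.sqrt(Om * a**-3 + Orad * a**-4
                          + Ok * a**-2 + Ode * a**(-3 * (1 + w)))

print(hubble(1.0))   # today (a = 1): recovers H0
print(hubble(0.5))   # at half the present scale factor, H was larger
```

By construction `hubble(1.0)` returns `H0`, since the density parameters sum to one.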
As you can see, the contribution to the shift (either way) is very small compared to the amount of dark energy present. Dark matter particles interact with each other and with other particles only through gravity and possibly the weak force.
|
The spacing in the following output looks off to me. In particular, the integral symbol has not grown to accommodate the height of the integrand, the spacing in the fraction seems large, and the enclosing brackets do not rise high enough in the matrix. The issue remains whether
mathtools is loaded or not, but since I use it, I thought that a solution (if one exists) should work with the package.
I'm wondering if this is by design, or whether I've loaded the packages correctly/something is peculiar with my setup. If it is by design, is there a parameter that can be altered to change the spacing globally?
\documentclass{article}
\usepackage{mathtools}
\usepackage{luatextra}
\usepackage{unicode-math}
\setmathfont{TG Pagella Math}
\begin{document}
\[\int_{x=0}^{\infty}\frac{1}{x}\,\mathrm dx\]
\[\begin{bmatrix*}1&1&1\\1&1&1\\1&1&1\end{bmatrix*}\]
\end{document}
|
(Edited as suggested)
I have the following code involving connected nodes:
\documentclass{article}
\usepackage{tikz}
\tikzstyle{every picture}+=[remember picture]
\begin{document}
\begin{equation}
  P(t) = \tikz[baseline]{\node[fill=blue!50, anchor=base] (t1) {$ \epsilon_{0}\chi^{(1)}E(t) $};}
       + \tikz[baseline]{\node[fill=red!50, anchor=base] (t2) {$ \tikz[baseline]{\node[fill=green!25, anchor=base] (t21) {$ \epsilon_{0}\chi^{(2)}E^{2}(t) $};} + \tikz[baseline]{\node[fill=yellow!25, anchor=base] (t22) {$ \epsilon_{0}\chi^{(3)}E^{3}(t) $};} + \cdots $};}
\end{equation}
\begin{itemize}
  \item \tikz[baseline]{\node[anchor=base] (n1) {Linear};}
  \item \tikz[baseline]{\node[anchor=base] (n2) {Nonlinear};}
  \begin{itemize}
    \item \tikz[baseline]{\node[anchor=base] (n21) {2. order};}
    \item \tikz[baseline]{\node[anchor=base] (n22) {3. order};}
  \end{itemize}
\end{itemize}
\begin{tikzpicture}[overlay]
  \path[blue, ->, line width=1pt] (n1.north east) edge[out=45, in=-90] (t1.south);
  \path[red, ->, line width=1pt] (n2.north east) edge[out=45, in=-90] (t2.south);
  \path[red, ->, dashed, line width=0.75pt] (n21.east) edge[out=0, in=-90] (t21.south);
  \path[red, ->, dashed, line width=0.75pt] (n22.east) edge[out=0, in=-90] (t22.south);
\end{tikzpicture}
\end{document}
This code gives me the following output, where there is some offset in the x coordinate inside of node t2:
If I comment out the creation of this node (t2), the dashed arrows now point at the correct locations:
What I want is the first figure (red solid line and red fill) with dashed arrows pointing at positions as shown in the second figure. Is there a way how to do this?
Thanks a lot for any suggestions
|
Basically 2 strings, $a>b$, which go into the first box and do division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, otherwise inputs $b,r$ into the division box..
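That box construction is just the Euclidean algorithm; a minimal sketch of the loop it describes:

```python
def gcd(a, b):
    """Euclidean algorithm: divide, check r == 0, else feed (b, r) back in."""
    while b != 0:
        a, b = b, a % b   # a = b*q + r with r < b; the box outputs (b, r)
    return a

print(gcd(252, 105))  # → 21
```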
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of the row?
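As a numerical aside (my sketch, not a proof), one can implement the cofactor expansion along an arbitrary row and observe that every row gives the same value:

```python
def det(A, row=0):
    """Determinant by cofactor expansion along the given row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete `row` and column j.
        minor = [r[:j] + r[j+1:] for k, r in enumerate(A) if k != row]
        total += (-1) ** (row + j) * A[row][j] * det(minor)
    return total

A = [[2, 0, 1],
     [1, 3, 2],
     [1, 1, 4]]
print([det(A, row=r) for r in range(3)])  # → [18, 18, 18]
```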
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
|
I’m working on a number theory proof that has been giving me some trouble for a while. I will explain the problem and the attempts I’ve made.
Let $x\in \mathbb{R}$ and $d \in \mathbb{Z}$, where both $x, d > 0$ (i.e. positive values). Prove that the number of positive integers, say $k$, that are $\leq x$ and divisible by $d$ is $[\frac{x}{d}]$ (where $[x]$ denotes the greatest integer function).
So I’ve decided to try using a proof by contradiction, though I don’t think I’m doing it correctly; I will list the steps I’ve taken below.
Suppose not, that is suppose that the number of integers divisible by $d$ and less than $x$ does not equal [$\frac{x}{d}$].
$k \neq$ [$\frac{x}{d}$]
This would imply that $k >$ [$\frac{x}{d}$] or $k <$ [$\frac{x}{d}$], but both of these cases lead to contradictions.

If $k >$ [$\frac{x}{d}$], then that implies [$\frac{x}{d}$] does not produce the greatest integer, because if it did, each of the integers counted by $k$ could be covered by a multiple of $d$ up to [$\frac{x}{d}$].

If $k <$ [$\frac{x}{d}$], then that implies not all values counted by $k$ are less than $x$ and divisible by $d$, but this is the definition of the values counted by $k$.

Therefore both cases are false and $k =$ [$\frac{x}{d}$].
Now I’m having a feeling this is incorrect, but I’m not sure where to go from here or whether my solution is correct. Any help would be appreciated.
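Not a substitute for a proof, but a quick brute-force check of the statement may build confidence; this sketch counts the positive integers $k \leq x$ divisible by $d$ and compares the count with $[\frac{x}{d}]$:

```python
import math

def count_multiples(x, d):
    """Count positive integers k <= x that are divisible by d."""
    return sum(1 for k in range(1, math.floor(x) + 1) if k % d == 0)

# Check the claim count == floor(x/d) for a few sample values.
for x in [10, 10.5, 37.9, 100]:
    for d in [1, 3, 7]:
        assert count_multiples(x, d) == math.floor(x / d)
print("all cases agree")
```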
|
I am interested in showing continuity/boundedness of the weak solution to the following PDE:
\begin{align*} 0 &= \mathbf{q} + \mathbf{\nabla}u && \quad x\in \Omega,\\ 0 &= \mathbf{\nabla} \cdot \mathbf{q} && \quad x\in \Omega,\\ 0 &= u && \quad x\in \partial \Omega_D,\\ g &= \mathbf{q}\cdot \mathbf{\eta} &&\quad x\in \partial\Omega_N. \end{align*}
How can I show that the norm of the weak solution to this problem is bounded by the Neumann data? In other words, how can I show there exists a $C$, dependent only on the domain, such that
$$ \| \mathbf{q}\|_{H^{\mathrm{div}}(\Omega)} \le C \| g \|_{H^{-1/2}(\partial\Omega_N)} $$
I would especially appreciate references to papers or books. If this is handled in any of the standard references (Grisvard, or Gilbarg and Trudinger) or the like, and I have missed it, could you tell me specifically where this is handled?
To the mods, I posted this question earlier today, on math.stackexchange. I have taken it down from there as I think it is more appropriate here.
|
Planck's law
Until stars were formed a few hundred million years after the Big Bang (BB), the brightness of the Universe was extremely homogeneous and given by a near-perfect blackbody Planck spectrum with a temperature of $T = T_0(1+z)$, where $T_0=2.725\,\mathrm{K}$ is the current temperature of the CMB, and $z$ is the redshift corresponding to the time $t$. That is, the brightness at wavelength $\lambda$ is: $$ B_\lambda(\lambda,T) = \frac{2hc^2}{\lambda^5} \frac{1}{e^{hc/\lambda k_\mathrm{B} T} - 1}, $$ where $h$, $c$, $k_\mathrm{B}$ are Planck's constant, the speed of light, and Boltzmann's constant, respectively.
Perceived brightness
I'm going to assume that you're human, and hence that the brightness you're interested in is the optical wavelength region, i.e. around $\lambda\sim550\,\mathrm{nm}$. The peak of a Planck spectrum shifts toward higher frequencies the higher the temperature is, and hence the ratio of optical-to-UV/X-ray/gamma radiation will decrease. But regardless, the
absolute brightness will always increase at any wavelength for larger temperatures.
At $t\simeq10\,\mathrm{s}$, what later becomes our observable Universe was already $\sim30$ light-years in radius (although the observable Universe of
that epoch was only 8000 km). The scale factor (ratio of the size at that time to the current size) was thus $a\sim7\times10^{-10}$, the corresponding redshift $z\sim1.4\times10^9$, and hence the temperature of the Universe was $T \sim 3.7\times10^9\,\mathrm{K}$.
Plugging into the equation above, and dividing by $4\pi$ to get the brightness per solid angle, I get that the brightness in the optical was $$ B_\lambda(550\,\mathrm{nm},3.7\times10^9\,\mathrm{K}) \simeq 3\times 10^{19}\,\mathrm{W}\,\mathrm{m}^{-3}\,\mathrm{sr}^{-1}. $$ So, what does this number mean? To get a feeling of how it would look, we can compare the amount of light received from a human field of view, to the light received when looking directly at the Sun. Andersen et al. (2018) did exactly this, in order to calculate the period of time that a human could see anything in the early Universe. They found that while the Universe became too dim for a human to sense any light around $t=5.7$ million years after BB, it was as bright as looking at the Sun when the Universe was around $T\simeq1600\,\mathrm{K}$, just over 1 million years after BB, and so had a brightness in the optical of around $B_\lambda(550\,\mathrm{nm},1600\,\mathrm{K}) = 1.5\times 10^7\,\mathrm{W}\,\mathrm{m}^{-3}\,\mathrm{sr}^{-1}$, or a factor of $1.7\times10^{12}$ times smaller than at $t\simeq10\,\mathrm{s}$.
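For the record, the numbers above can be reproduced directly from the Planck formula; a small sketch with SI constants (dividing by $4\pi$ as in the text):

```python
import math

h  = 6.626e-34   # Planck constant [J s]
c  = 2.998e8     # speed of light [m/s]
kB = 1.381e-23   # Boltzmann constant [J/K]

def B_lambda(lam, T):
    """Planck spectral radiance [W m^-3 sr^-1]."""
    return 2 * h * c**2 / lam**5 / math.expm1(h * c / (lam * kB * T))

# Brightness per solid angle at 550 nm, as in the text:
print(B_lambda(550e-9, 3.7e9) / (4 * math.pi))   # ~3e19, t ~ 10 s after BB
print(B_lambda(550e-9, 1600) / (4 * math.pi))    # ~1.5e7, t ~ 1 Myr after BB
```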
In other words, ten seconds after the Big Bang the Universe was a trillion times brighter than looking at the Sun.

Brightness today
Today, the spectrum of the Universe is no longer a Planck spectrum, but is given by a mixture of cosmological and astrophysical processes. In this answer about the cosmic background radiation, you can see that the brightness of the optical peak is roughly two orders of magnitude dimmer than the CMB peak brightness which, in turn, from Planck's law today has a brightness $\sim10^{24}$ times smaller than at $t\simeq10\,\mathrm{s}$. "Optical light" is here defined as a much broader region than what a human would see, very roughly ten times as broad, so the perceived brightness would be another order of magnitude lower. Hence, today the Universe is 27 orders of magnitude less bright than at the photon epoch.

Brightness and color through the history of the Universe
The figure below shows, in green, the brightness of the CMB as a function of time after the Big Bang. A secondary $y$ axis on top show the corresponding temperature of the Universe. The background color shows the color of the Universe as would be perceived by a human being, calculated by convolving the spectrum of the radiation with the response function of the human eye: The first few tens of thousands of years, the Universe is a pale sapphire blue, turning white as it reaches the temperature of the Sun ($T_\odot \simeq 5\,778\,\mathrm{K}$). At $t\sim200\,\mathrm{Myr}$, stars begin to form and the calculation of the spectrum becomes more complicated (so I've grayed it out), but today the Universe has reached a cosmic latte. Note that, as mentioned above, only between $t\sim1\,\mathrm{Myr}$ and $t\sim6\,\mathrm{Myr}$ could you actually see anything; prior to this epoch you'd go blind, and after this epoch, it'd be too dim (but you could still see the color using sunglasses/binoculars, respectively).
At $T\lesssim hc/\lambda k_\mathrm{B} \simeq 30\,000\,\mathrm{K}$ the exponential factor in the Planck law blows up so the brightness at $\lambda$ quickly decreases. The brightness is further decreased by the fact that, at $t\sim52\,\mathrm{kyr}$, the Universe transitions from being radiation-dominated to being matter-dominated and so the expansion goes from $a(t)\propto t^{1/2}$ to $a(t)\propto t^{2/3}$, i.e. faster.
At the time of recombination ($t\sim379\,\mathrm{kyr}$), the Universe is somewhat brighter than at $t\sim1\,\mathrm{Myr}$. Here, $T\sim3000\,\mathrm{K}$, so $B_\lambda(550\,\mathrm{nm},3000\,\mathrm{K}) = 3\times 10^{10}\,\mathrm{W}\,\mathrm{m}^{-3}\,\mathrm{sr}^{-1}$, or a billion times less bright than at $t\simeq10\,\mathrm{s}$.
A note on decoupling and mean free path
Before the photons decoupled from matter, they scattered frequently on free electrons, so their mean free path was small compared to the size of the (observable) Universe. It is often said that the Universe was "foggy" until decoupling, but I think people often overestimate this fogginess. The mean free path is $\ell = 1 / n_e \sigma_\mathrm{T}$, where $n_e$ is the number density of free electrons and $\sigma_\mathrm{T}\simeq6.65\times 10^{-25}\,\mathrm{cm}^2$ is the Thomson cross section of the electron. Calculating $n_e$ from the ionization state of the gas, this works out$^\dagger$ to roughly 2000 light-years just before recombination begins to kick in at a time $t\simeq200\,000\,\mathrm{yr}$ after BB, 20 light-years at $t\simeq50\,000\,\mathrm{yr}$ when matter started dominating over radiation, 16 kilometers at $t\simeq15\,\mathrm{m}$ when nucleosynthesis ended, and 20 meters at $t\simeq10\,\mathrm{s}$, when leptons and antileptons annihilated and the photon epoch began.
But this scattering is not really important for how bright the Universe was. Photons arrive at your eye all the time, and if they have scattered multiple times since their point of origin, you won't know where they were created, but you will still see them.
$^\dagger$
I wrote a Python code called timeline to calculate $\ell$ and other quantities of the Universe, available on GitHub here.
|
I have a matrix $P \in M_n(\mathbb N)$, where
$$ P = \begin{bmatrix} 0 & P_{12} & \ldots & P_{1n}\\ P_{21} & 0 & \ldots & P_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ P_{n1} & P_{n2} & \ldots & 0 \end{bmatrix}$$
with $P_{ii} = 0$ for all $i \in \{1,2,\dots,n\}$. I need to find matrices $A, B \in M_n(\mathbb N)$ that satisfy $A + B = P$ and that satisfy the following constraints
$$\forall i\in [\![ 1,n]\!] \sum_{k=1}^{n} A_{ik} = \sum_{k=1}^{n} A_{ki}$$
such that $\sum_{i=1}^{n} \sum_{k=1}^{n} A_{ik}$ is
maximized.
I need to implement an algorithm to solve this problem. On my input data I have approximately $n = 16$ and $\sum_{i=1}^{n}\sum_{k=1}^{n} P_{ik} = 60000$, so the brute-force approach is out of the question. I don't know what could be a good approximation algorithm, so my current approach is to reduce the problem to a binary integer programming problem and then apply Branch-and-Cut, but I have serious doubts about its effectiveness for this specific problem.
Finding the optimal solution in polynomial time would be perfect (not sure if it's possible), but I could settle for a good approximation algorithm. Not having a strong background in CS I'm a bit confused; help would be greatly appreciated!
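For what it's worth, the balance constraints are exactly flow-conservation constraints, so this looks like a maximum-circulation problem with capacities $P_{ik}$; the LP relaxation then has a network (totally unimodular) constraint matrix, so its optimum is integral. A sketch with scipy (function and variable names are my own, and the TU claim is my reading of the structure, not something from the question):

```python
import numpy as np
from scipy.optimize import linprog

def max_balanced_part(P):
    """Maximize sum(A) subject to 0 <= A_ij <= P_ij and, for every i,
    (row sum of A at i) == (column sum of A at i).  This is a circulation
    LP; the optimum is integral when P is integral, and B = P - A."""
    n = P.shape[0]
    idx = [(i, j) for i in range(n) for j in range(n) if i != j]
    c = -np.ones(len(idx))                  # maximize => minimize the negative
    A_eq = np.zeros((n, len(idx)))
    for col, (i, j) in enumerate(idx):
        A_eq[i, col] += 1                   # edge (i,j) adds to the row sum of i
        A_eq[j, col] -= 1                   # ...and to the column sum of j
    res = linprog(c, A_eq=A_eq, b_eq=np.zeros(n),
                  bounds=[(0, P[i, j]) for (i, j) in idx], method="highs")
    A = np.zeros_like(P)
    for col, (i, j) in enumerate(idx):
        A[i, j] = round(res.x[col])
    return A

P = np.array([[0, 3], [2, 0]])
A = max_balanced_part(P)
print(A.sum())   # 4: the balanced optimum takes 2 in each direction
```

For $n=16$ this LP has only 240 variables, so it solves essentially instantly regardless of how large $\sum P_{ik}$ is.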
|
Each equation gives information about the body's location on each axis in a Cartesian coordinate system ($A$ is some constant and $t$ is time). We know that $\sin^2(x)+\cos^2(x)=1$ (Pythagoras' theorem applied to the unit circle, which gives us the radius of the unit circle). This is the answer given for this problem:
Each velocity component is given by$$v_x=\frac{\mathrm dx}{\mathrm dt}=-\sin(t)$$$$v_y=\frac{\mathrm dy}{\mathrm dt}=\cos(t)$$$$v_z=\frac{\mathrm dz}{\mathrm dt}=A$$so the body's speed is
$$v=\sqrt{v_x^2+v_y^2+v_z^2}$$ Since $v$ is constant, $a_\mathrm t=\frac{\mathrm dv}{\mathrm dt}=0$ (I guess this is tangential acceleration while $a_\mathrm n$is normal acceleration), so radius of the curve is $$R=\frac{v^2}{a_\mathrm n}=\frac{v^2}{a}=\frac{v^2}{\sqrt{a_x^2+a_y^2+a_z^2}}$$
The accelerations are just the derivatives of each velocity component.
The answer isn't one, how is that possible? It would be one if there was no vertical movement so how does that affect unit circle?
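A quick symbolic check of the derivation (assuming, as the solution implies, the trajectory $x=\cos t$, $y=\sin t$, $z=At$) shows exactly how the vertical motion enters: the $z$ component adds $A^2$ under the square root, which is why the speed, and hence the radius, is not 1.

```python
import sympy as sp

t, A = sp.symbols('t A', positive=True)
x, y, z = sp.cos(t), sp.sin(t), A * t        # trajectory implied by the solution

v = sp.Matrix([sp.diff(f, t) for f in (x, y, z)])       # velocity components
a = sp.Matrix([sp.diff(f, t, 2) for f in (x, y, z)])    # acceleration components

speed = sp.simplify(sp.sqrt(v.dot(v)))       # sqrt(A**2 + 1): constant, but not 1
a_mag = sp.simplify(sp.sqrt(a.dot(a)))       # 1; all normal, since speed is constant
R = sp.simplify(speed**2 / a_mag)            # radius of the curve: A**2 + 1
print(speed, a_mag, R)
```

Setting $A=0$ (no vertical movement) recovers the unit circle, $R=1$.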
|
I know that the integral $\int_0^1 \frac{(x+1)^n-1}{x} dx,$ for $n \in \mathbb{Z}^+$, can be evaluated by expanding the numerator with the binomial theorem and integrating term by term. You get the nice expression $$\int_0^1 \frac{(x+1)^n-1}{x} dx = \sum_{k=1}^n \binom{n}{k} \frac{1}{k}.$$ My question is this:
Is there some other "nice" expression for $$\int_0^1 \frac{(x+1)^n-1}{x} dx,$$ when $n$ is a positive integer?
Mathematica and Wolfram Alpha both tell me that $$\int_0^1 \frac{(x+1)^n-1}{x} dx = n \text{HypergeometricPFQ}[\{1, 1, 1 - n\}, \{2, 2\}, -1],$$ but the hypergeometric function is just another way of writing $\sum \limits_{k=1}^n \binom{n}{k} \frac{1}{k}.$
I'm led to think that there might be such a nice expression because we do get one for the same integrand but slightly different bounds: $$\int_{-1}^0 \frac{(x+1)^n-1}{x} dx = \sum_{k=1}^{n} \binom{n}{k} \frac{(-1)^{k+1}}{k} = H_n,$$ where $H_n$ is the $n$th harmonic number. On the other hand, alternating binomial sums often evaluate to simpler expressions than the corresponding sums that don't alternate, so perhaps the existence of the simpler expression here doesn't mean much for my question.
I would accept a known special function as an answer (other than the hypergeometric one I already mention), or a simple expression in terms of well-known numbers, like the harmonic numbers.
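Not an answer, but the identity in question is easy to confirm numerically (a quick check of my own):

```python
import math
from scipy.integrate import quad

def binom_sum(n):
    # sum_{k=1}^{n} C(n, k) / k
    return sum(math.comb(n, k) / k for k in range(1, n + 1))

def integral(n):
    # integrand tends to n as x -> 0, so the integral is proper
    val, _ = quad(lambda x: ((x + 1)**n - 1) / x, 0, 1)
    return val

for n in (1, 4, 9):
    print(n, integral(n), binom_sum(n))   # the two columns agree
```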
|
MTH101 Calculus And Analytical Geometry GDB Solution & Discussion
For a function $f(x)$, a point $x_0$, and a positive number $\epsilon$, find $L = \lim_{x \to x_0} f(x)$.
Moreover find a number $\delta > 0$ such that for all $x$, $0 < |x - x_0| < \delta \Rightarrow |f(x) - L| < \epsilon$.
Note:
Please follow the following methodology to find $\delta$.
sorry, saw your message late.. the GDB has closed by now.
Assalam o alaikum Amna
sis, can you please help me with this solution?
I don't understand Math at all either
email me also and guide me please
my id is:
sagarkinaray786@gmail.com
Please Someone upload the GDB file then I will try to solve the GDB . Thanks
I am copying my solution. It is only for learning purpose so please do not just copy paste otherwise I might get Zero :)
Note: This solution might not be 100% correct >>>>
Finding $L$
\[\lim_{x \to 3} f(X) = \lim_{x \to 3} (3 - 2X) = 3 - 2\lim_{x \to 3} X = 3 - 2(3) = -3 = L\]
At the moment we have,
\[L = - 3\]
\[{X_0} = 3\]
\[\epsilon = 0.02\]
f(X) is in the interval
\[(L - \epsilon, L + \epsilon) = (-3.02, -2.98)\]
Defining $\delta$ in terms of $\epsilon$
\[\left| f(X) - L \right| < \epsilon \Rightarrow \left| (3 - 2X) - (-3) \right| < \epsilon \Rightarrow 2\left| X - 3 \right| < \epsilon \Rightarrow \left| X - 3 \right| < \epsilon/2\]
Finding $\delta$, where \[\left| X - X_0 \right| < \delta \Rightarrow \left| X - 3 \right| < \delta\] Since both of the left-hand inequalities above have the same form, we take \[\delta = \epsilon/2 = 0.02/2 = 0.01\] So the subset intervals are \[(X_0 - \delta, X_0) \cup (X_0, X_0 + \delta) = (2.99, 3) \cup (3, 3.01)\] Finally, proving that this value of $\delta$ keeps $f(X)$ within $\epsilon$ of $L$
\[\left| X - X_0 \right| < \delta \Rightarrow \left| X - 3 \right| < \epsilon/2 \Rightarrow 2\left| X - 3 \right| < \epsilon \Rightarrow \left| 6 - 2X \right| < \epsilon \Rightarrow \left| (3 - 2X) - (-3) \right| < \epsilon\]
Which is similar to
\[\left| f(X) - L \right| < \epsilon \Rightarrow \left| (3 - 2X) - (-3) \right| < \epsilon\]
Hence limit of f(x) exists as x approaches 3
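A small numerical sanity check of the epsilon-delta argument above (my addition; it assumes $f(x)=3-2x$, $L=-3$, $x_0=3$, $\epsilon=0.02$ as in the solution):

```python
import numpy as np

f = lambda x: 3 - 2 * x
L, x0, eps = -3.0, 3.0, 0.02
delta = eps / 2                       # the delta found above

# sample the punctured neighbourhood 0 < |x - x0| < delta
xs = np.linspace(x0 - delta, x0 + delta, 10001)[1:-1]
xs = xs[xs != x0]
print(np.max(np.abs(f(xs) - L)) < eps)   # True: delta keeps f within eps of L
```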
Is this correct or not, sir?
I can't understand anything; kindly upload the MathType file
MTH101 gdb solution 2015
|
Hi,
I'm having some trouble with solving this indefinite integral.
[tex] \int {\sqrt {\frac{{6\cos ^2 x + \sin x\cos (2x) + \sin x}}{{2 - \sin x}}} } dx [/tex]
I was able to lose the sin(x) and get a cos(x) out of the square root by doing this:
[tex] \int {\sqrt {\frac{{6\cos ^2 x + \sin x(2\cos^2 x -1) + \sin x}}{{2 - \sin x}}} } dx [/tex]
[tex] \int {\sqrt {\frac{{6\cos ^2 x + 2\sin x\cos^2 x -\sin x + \sin x}}{{2 - \sin x}}} } dx [/tex]
[tex] \int {\sqrt {\frac{{\cos ^2 x(6 + 2\sin x)}}{{2 - \sin x}}} } dx [/tex]
[tex] \int \cos x{\sqrt {\frac{{6 + 2\sin x}}{{2 - \sin x}}} } dx [/tex]
Then I did a substitution, [tex]y = \sin x \Leftrightarrow dy = \cos xdx[/tex] to get:
[tex] \int {\sqrt {\frac{{6 + 2y}}{{2 - y}}} } dy [/tex]
I think I can say this looks a lot better than the initial integral, but after that I used about 3 more substitutions and 2 sides of paper. I finally got something, but it wasn't all correct, I'm afraid. I was wondering if I started out wrong, or if someone sees an easy way to continue (or an easier way to start).
Naughty integral if you ask me
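The algebra above checks out numerically, for what it's worth (on an interval where $\cos x \ge 0$, so that the square root of $\cos^2 x$ really is $\cos x$):

```python
import numpy as np

x = np.linspace(-1.5, 1.5, 1001)       # stay where cos(x) > 0
orig = np.sqrt((6 * np.cos(x)**2 + np.sin(x) * np.cos(2 * x) + np.sin(x))
               / (2 - np.sin(x)))
simplified = np.cos(x) * np.sqrt((6 + 2 * np.sin(x)) / (2 - np.sin(x)))
print(np.allclose(orig, simplified))   # True
```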
|
Another way we often think about numbers is as abstract quantities that can be measured: length, area, and volume are all examples.
In a measurement model, you have to pick a
basic unit. The basic unit is a quantity — length, area, or volume — that you assign to the number one. You can then assign numbers to other quantities based on how many of your basic unit fit inside.
For now, we’ll focus on the quantity length, and we’ll work with a number line where the basic unit is already marked off.
Addition and Subtraction on the Number Line
Imagine a person — we’ll call him Zed — who can stand on the number line. We’ll say that the distance Zed walks when he takes a step is exactly one unit.
When Zed wants to add or subtract with whole numbers on the number line, he always starts at 0 and faces the positive direction (towards 1). Then what he does depends on the calculation.
If Zed wants to
add two numbers, he walks forward (to the right on the number line) however many steps are indicated by the first number (the first addend). Then he walks forward the number of steps indicated by the second number (the second addend). Where he lands is the sum of the two numbers.
Example
: 3 + 4
If Zed wants to add 3 + 4, he starts at 0 and faces towards the positive numbers. He walks forward 3 steps, then he walks forward 4 more steps.
Zed ends at the number 7, so the sum of 3 and 4 is 7. 3 + 4 = 7. (But you knew that of course! The point right now is to make sense of the
number line model.)
When Zed wants to
subtract two numbers, he walks forward (to the right on the number line) however many steps are indicated by the first number (the minuend). Then he walks backwards (to the left on the number line) the number of steps indicated by the second number (the subtrahend). Where he lands is the difference of the two numbers.
Example
: 11 – 3
If Zed wants to subtract 11 – 3, he starts at 0 and faces the positive numbers (the right side of the number line). He walks forward 11 steps on the number line, then he walks backwards 3 steps.
Zed ends at the number 8, so the difference of 11 and 3 is 8. 11 – 3 = 8. (But you knew that!)
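Zed's walking rules are easy to state as a procedure; this little sketch (mine, purely illustrative) encodes "forward for addends, backward for subtrahends":

```python
def zed(*steps):
    """Zed starts at 0 facing the positive direction.  Positive entries
    are walked forward, negative entries are walked backwards."""
    position = 0
    for s in steps:
        position += s
    return position

print(zed(3, 4))     # 7, matching the addition example
print(zed(11, -3))   # 8, matching the subtraction example
```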
Think / Pair / Share Work out each of these exercises on a number line. You can actually pace it out on a life-sized number line or draw a picture: $$4 + 5 \qquad 6 + 9 \qquad 10 - 7 \qquad 8 - 1$$ Why does it make sense to walk forward for addition and walk backwards for subtraction? In what way is this the same as “combining” for addition and “take away” for subtraction”? What happens if you do these subtraction problems on a number line? Explain your answers. $$6 - 9 \qquad 1 - 7 \qquad 4 - 11 \qquad 0 - 1$$ Could you do the subtraction problems above with the dots and boxes model? Multiplication and Division on the Number Line
Since multiplication is really repeated addition, we can adapt our addition model to become a multiplication model as well. Let’s think about 3 × 4. This means to add four to itself three times (that’s simply the definition of multiplication!):
$$3 \times 4 = 4 + 4 + 4 \ldotp$$
So to multiply on the number line, we do the process for addition several times.
To multiply two numbers, Zed starts at 0 as always, and he faces the positive direction. He walks forward the number of steps given by the second number (the second
factor). He repeats that process the number of times given by the first number (the first factor). Where he lands is the product of the two numbers.
Example
: 3 × 4
If Zed wants to multiply 3 × 4, he can think of it this way:
$$\begin{split} 3& \qquad \qquad \qquad \times \\ \downarrow & \\ \text{how many times}\; & \text{to repeat it} \end{split} \begin{split} 4& \\ \downarrow & \\ \text{how many steps}\; & \text{to take forward} \end{split}$$
Zed starts at 0, facing the positive direction. The he repeats this three times: take four steps forward.
He ends at the number 12, so the product of 3 and 4 is 12. That is, 3 × 4 = 12.
Remember our quotative model of division: One way to interpret 15 : 5 is:
How many groups of 5 fit into 15?
Thinking on the number line, we can ask it this way:
Zed takes 5 steps at a time. If Zed lands at the number 15, how many times did he take 5 steps?
To calculate a division problem on the number line, Zed starts at 0, facing the positive direction. He walks forward the number of steps given by the second number (the
divisor). He repeats that process until he lands at the first number (the dividend). The number of times he repeated the process gives the quotient of the two numbers.
Example
: 15 ÷ 5
If Zed wants to compute 15 : 5, he can think of it this way:
He starts at 0, facing the positive direction.
Zed takes 5 steps forward. He is now at 5, not 15. So he needs to repeat the process. Zed takes 5 steps forward again. He is now at 10, not 15. So he needs to repeat the process. Zed takes 5 more steps forward. He is at 15, so he stops.
Since he repeated the process three times, we see there are 3 groups of 5 in 15. So the quotient of 15 and 5 is 3. That is, 15 : 5 = 3.
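The repeated-stepping description of quotative division is itself a little algorithm; here is a sketch (my own) of Zed's procedure:

```python
def quotative_divide(dividend, divisor):
    """Count how many rounds of `divisor` forward steps land Zed
    exactly on `dividend` (whole-number quotative division)."""
    position, rounds = 0, 0
    while position < dividend:
        position += divisor    # one round: divisor steps forward
        rounds += 1
    if position != dividend:
        raise ValueError("Zed overshoots: no whole-number quotient")
    return rounds

print(quotative_divide(15, 5))   # 3
```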
Think / Pair / Share Work out each of these exercises on a number line. You can actually pace it out on a life-sized number line or draw a picture: $$2 \times 5 \qquad 7 \times 1 \qquad 10 : 2 \qquad 6 : 1$$ Can you think of a way to interpret these multiplication problems on a number line? Explain your ideas. $$4 \times 0 \qquad 0 \times 5 \qquad 3 \times (-2) \qquad 2 \times (-1)$$ What happens if you try to solve these division problems on a number line? Can you do it? Explain your ideas. $$0 : 2 \qquad 0 : 10 \qquad 3 : 0 \qquad 5 : 0$$
|
Let $g$ be a Riemannian metric on the $d$-dimensional flat space $\mathbb R^d$, and consider the usual Lagrangian $$L(x, \dot x) = \tfrac 1 2 g_{ij}(x) \dot x^i \dot x^j.$$ Let $\hat g := \sqrt g$ denote the square root of the metric $g$, implicitly defined by the formula $\hat g_{ai} \hat g_{bj} \delta^{ab} = g_{ij}$, where $\delta^{ab}$ is the identity 2-tensor. I want to introduce phase-space-type coordinates $$u_i = \hat g_{ij} \dot x^j.$$ In the coordinates $(x,u)$, the metric on $u$ is just the Euclidean metric: $\langle u, u \rangle := \delta^{ij} u_i u_j$.
Let $x = x(t)$ denote a geodesic for the metric $g$, and define $u_i(t) := \hat g_{ij} \dot x^j(t)$. These coordinates are convenient, because along the geodesic, $u(t)$ remains on the sphere of radius $|u(0)|$: $$\tfrac{d}{dt} \langle u, u \rangle = \tfrac{d}{dt} \delta^{ij} u_i u_j = \tfrac{d}{dt} g_{ij} \dot x^i \dot x^j = 0,$$ since geodesics are parametrized with constant speed. Conveniently, this means that $\langle u, \dot u \rangle \equiv 0$.
(You may wonder why don't I just use Hamiltonian phase-space coordinates $(x,p)$. In my research, I consider $g$ as a parameter, ranging over all possible Riemannian metrics on the plane $\mathbb R^2$. Hamiltonian coordinates have the nice property that for a fixed metric, the energy shells $\{ g^{ij}(x) p_i p_j = \mathrm{constant} \}$ are invariant under the geodesic flow. Unfortunately, these energy shells are not independent of the metric parameter $g$. In the coordinates $(x,u)$, on the other hand, the shells $\{ \langle u, u \rangle = \mathrm{constant} \}$ are just spheres in Euclidean space, and do not depend on $g$. In particular, it is important to me that these spherical shells are invariant under rotations in the phase space $\mathbb R^d \times \mathbb R^d$).
I want to calculate the geodesic equation in the coordinates $(x,u)$, particularly for the case that $d=2$. It is easy to see that $\dot x^j = \hat g^{ji} u_i$, where the superscripts denote the inverse of $\hat g$. When I calculate $\dot u$, though, I get a mess: $$\dot u_a = \big( \hat g_{ab,c} \hat g^{cj} \hat g^{bi} - \hat g_{ab} \Gamma_{uv}^b \hat g^{ui} \hat g^{vj} \big) u_i u_j,$$ where $\Gamma_{uv}^b$ are the Christoffel symbols for the metric $g$. I tried simplifying this expression, to no effect. There is plenty of symmetry around (e.g., $\langle u, \dot u \rangle = 0$), and I'm sure that the formula for $\dot u$ takes a much, much simpler form.
Question: Is there a simple expression in these coordinates for the evolution of $u(t)$?
Let me explain why the above expression is inadequate. For the metric $g$, let $U_g$ denote the vector field given by $U_g(x,u) = (u, \dot u)$ (where $\dot u$ is the expression above), so that solutions to the differential equation $(\dot x, \dot u) = U_g(x,u)$ are geodesics for the metric $g$. I need to calculate the (Euclidean) divergence $\operatorname{div} U$. I am pretty sure that in the end, $\operatorname{div} U$ can be expressed in some simple geometric quantities involving the metric (like the Riemannian divergence $\operatorname{div}_g$ of some vector field, scalar curvature $K_g$, etc.). For the messy $\dot u$ above, though, it is impossible for me to see what the true character of $\operatorname{div} U$ is.
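Not an answer, but a numerical check that $\langle u,u\rangle$ really is conserved along the flow, using the sample metric $g=\mathrm{diag}(1,\,1+x^2)$ on $\mathbb R^2$ (the metric choice and all names are my own):

```python
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(t, s):
    x, y, vx, vy = s
    # Christoffel symbols of g = diag(1, 1 + x^2):
    #   Gamma^x_{yy} = -x,   Gamma^y_{xy} = Gamma^y_{yx} = x / (1 + x^2)
    ax = x * vy**2
    ay = -2 * x * vx * vy / (1 + x**2)
    return [vx, vy, ax, ay]

sol = solve_ivp(geodesic_rhs, (0, 10), [0.3, 0.0, 0.4, 0.7],
                rtol=1e-10, atol=1e-12)
x, y, vx, vy = sol.y
u1, u2 = vx, np.sqrt(1 + x**2) * vy   # u_i = ghat_ij xdot^j, ghat = diag(1, sqrt(1+x^2))
norms = u1**2 + u2**2
print(norms.std() / norms.mean())     # ~0: u stays on a fixed Euclidean sphere
```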
|
The least count of the watch used for the measurement of time period is $0.01$ s
This information is just telling you to round off to the second decimal place, as you correctly did.
The sample mean is $\mu = 0.56$ and the sample standard deviation is $\sigma = 0.02$. The answer the text is referring to is
$$\frac \sigma \mu = 0.0357 = 3.57 \%$$
But I would say that this is not entirely correct. The standard error is not $\sigma$, but
$$\frac \sigma {\sqrt N}$$
Where $N$ is the number of measurements. In our case,
$$\frac \sigma {\sqrt N}=0.009$$
So the real percentage error should be
$$\frac{0.009}{0.56} = 0.0161 = 1.61 \%$$
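For example, with a hypothetical set of $N=5$ measurements consistent with the numbers above (the individual values are my invention; the question's actual data is not shown here):

```python
import numpy as np

T = np.array([0.52, 0.56, 0.57, 0.54, 0.59])   # hypothetical periods, in s
mu = T.mean()                                  # 0.556
sigma = T.std(ddof=1)                          # sample standard deviation
sem = sigma / np.sqrt(len(T))                  # standard error of the mean
print(mu, sigma, sem, 100 * sem / mu)          # percentage error from the SEM
```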
Update: a more careful discussion
As requested, I will try to explain more why we don't need to explicitly include the resolution of the instrument ($0.01/2$) in our calculation.
In my previous discussion I explained why the solution reported in your text was $3.57 \%$, but actually that reasoning is not really correct.
The sample mean of your data set is not really $\mu=0.56$, but $\mu=0.556$, as you correctly wrote. But since they (incorrectly) used the standard deviation, $0.02$, as the standard error, we have to round off the mean and write our result as
$$0.56 \pm 0.02$$
Because it would clearly be silly to write
$$0.556 \pm 0.02$$
because if we are not sure of the second decimal place, why bother writing the third?
But if the correct standard error is used, we get
$$0.556 \pm 0.009$$
You may notice a strange thing:
the number of significant digits has increased, even though our instrument had a resolution of only $0.01/2=0.005$. This is a property of the mean, and it is why we use the mean in the first place: via the mean operation, we can increase the number of significant digits and circumvent the limitations of our instrument.
Take for example the case in which we have two measurements, $2$ and $7$, each with a resolution of $0.5$. The mean is $9/2=4.5$, so we have gained one significant digit.
You can then see that with an infinite number of measurements our result becomes exact, regardless of the resolution of the instrument, because of the $\sqrt N$ term in the denominator of the standard error.
|
With regard to the age of the universe derived from the Friedmann equations, which is:
$$ t(z) = \frac{1}{H_0}\int_{z}^{\infty}\frac{1}{(1+z)\sqrt{\Omega_R(1+z)^4 + \Omega_M(1+z)^3 + \Omega_K(1+z)^2 + \Omega_L(1+z)^{3(1+w)}}} dz $$
(setting $z$ to $0$ will provide the current age of the universe)
where $H_0=$ the Hubble constant (the prefactor must be $1/H_0$, not the Hubble distance, for the result to have units of time), $z$ = Redshift, $\Omega_R =$ Radiation density, $\Omega_M =$ Matter density (incl. dark matter), $\Omega_K =$ Curvature, $\Omega_L =$ Dark energy density and $w=$ equation of state.
The above equation can be used to compute the age of the universe for almost any given density parameters, except for a purely matter-dominated scenario with $\Omega_M > 1$ (a closed universe that eventually recollapses). In such cases, the integration runs into the square root of negative numbers (at negative redshifts, i.e. scale factor $a \geq 1$ for the future fate of the universe), and we obtain a complex answer.
What is the proper equation to use to compute for scenarios where $\Omega_M >1$ and with $a=1.5$ ?
$z= (1/a)-1 = -0.33$ for $a = 1.5$
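For reference, here is how the integral evaluates numerically in an ordinary flat $\Lambda$CDM case (the parameter values $H_0=70\,\mathrm{km/s/Mpc}$, $\Omega_M=0.3$, $\Omega_\Lambda=0.7$, $w=-1$ are my assumptions, not from the question):

```python
import numpy as np
from scipy.integrate import quad

H0 = 70e3 / 3.086e22                 # 70 km/s/Mpc in s^-1
Om, Ol = 0.3, 0.7                    # flat: Omega_R = Omega_K = 0, w = -1

def E(z):
    return np.sqrt(Om * (1 + z)**3 + Ol)

age_s, _ = quad(lambda z: 1.0 / ((1 + z) * E(z) * H0), 0, np.inf)
age_Gyr = age_s / 3.156e16           # seconds per Gyr
print(age_Gyr)                       # ~13.5 Gyr
```

For $\Omega_M > 1$ (with $\Omega_K = 1 - \Omega_M < 0$) the argument of the square root indeed goes negative for sufficiently negative $z$, i.e. large enough $a$, which is exactly the breakdown described in the question.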
|
Since I just finished optimizing a lot of them in a software package, DifferentialEquations.jl, I decided to lay out a comparison of the main Order 4/5 methods. The Fehlberg method was left out because it's commonly known to be less efficient than the DP5 method.
Backstories

Dormand-Prince 4/5
The Dormand-Prince method was developed to be accurate as a 4/5 pair with local extrapolation (i.e. stepping with the order 5 result). It was designed to have a close-to-minimal principal truncation error coefficient (under the constraint of also using the minimal number of stages needed to achieve order 5). It has an order 4 interpolation which is free, but needs extra steps for an order 5 interpolation.
Cash-Karp 4/5
The Cash-Karp method was developed to satisfy different constraints, namely to deal better with non-smooth problems. The authors chose the $c_i$, the fraction of the timestep at which the $i$th stage is evaluated (i.e. $t+c_i \Delta t$ is the time at which the $i$th stage is calculated), to be as uniform as possible while still achieving order 5. The pair was also derived to have embedded 1st, 2nd, 3rd, and 4th order methods sharing this uniformity of the $c_i$. The stages are spaced so that you can find out where a stiff part starts by seeing which embedded error estimate is large. Moreover, note that the stiffer the equation, the worse a higher order method does (because it needs bounds on higher derivatives). So they developed a strategy which uses the 5 embedded methods to "quit early": i.e. if you detect stiffness, stop at stage $i<6$ to decrease the number of function calls and save time. So in the end, this "pair" was developed with a lot of other constraints in mind, and so there's no reason to expect it would be "more accurate", at least as a 4/5 pair. If you add all of this other machinery then, on (semi-)stiff problems, it will be more accurate (but in that case you may want to use a different method like a W-Rosenbrock method). This is one reason why this pair hasn't become standard over the DP5 pair, but it still can be useful (maybe it would be good for a hybrid method which switches to a stiff solver when stiffness is encountered?).
Bogacki & Shampine 4/5
To round out the answer, let's discuss the Bogacki & Shampine pair that was mentioned in the comment. The BS5 method drops the constraint of "using the least function calls" (it uses 8 instead of 6) in order to do 2 things:
Get really low principal truncation error coefficients.
Produce an order 5 interpolation with lower error coefficients.
These coefficients are so low that for many problems, at the tolerances users typically request, it behaves as though it were 6th order. Their paper shows that for cheap function calls, this can be more efficient than DP5 by about the same amount as DP5 was over RKF5 (the Fehlberg method).
You might put two-and-two together and see: wait a second, Shampine is the same person who developed the MATLAB ODE suite, this was after the BS5 pair paper was published, why doesn't MATLAB's
ode45 use the BS5 pair? One reason is that it was mostly done before the BS5 pair was released. The other reason is that the
ode45 function was developed to minimize time. While the BS5 pair is more efficient (i.e. gets lower error for the same work), the purpose of
ode45 is to have good enough error to make a good enough plot. This means that, in order to deal with the large steps, it also produces two extra interpolated solutions between every step. For the DP5 method, there is a "free" order 4 interpolation, and so this is much faster than using BS5. Since it is also "accurate enough" at moderate tolerances, this method is set as the standard because it gives a better standard user experience than BS5 when doing interactive computing (so this choice was context specific).
Tsitouras 4/5
Here's one fewer people know about. It's derived in this paper, using fewer simplifying assumptions than the DP5 method, and it tries to get a pair with lower principal truncation error coefficients. In its tests, the paper states that it achieves this. It also has a free order 4 interpolation like the DP5 method.
Numerical tests
I wrote the numerical package DifferentialEquations.jl to be a pretty comprehensive set of solvers for Julia. Along the way, I implemented over 100 Runge-Kutta methods, and hand-optimized plenty. Three of the hand-optimized integrators are the DP5, BS5, and Tsit5 methods (I did not do CK5 because, as noted in the backstory, its main use case is for problems which are somewhat stiff. I think the better way to handle those is to use DP5/BS5 and switch to stiff solvers as necessary in a manner like LSODE, but that's a story for a different time) (one way to see they are close to optimal is that these methods are faster than the Hairer
dopri5 implementations, so they are at least decent implementations). Tests between a lot of Runge-Kutta methods on nonstiff equations can be found in the benchmarks folder. I am adding more as I go along, but you can see from the linear ODE and the Three-Body problem work precision diagrams, I measure the DP5 and Tsit5 methods to have almost identical efficiency, beating out the BS5 method in the linear ODE, while it's DP5 and BS5 that are almost identical on the Three-Body problem with Tsit5 behind. From this information, at least for now, I have settled on the DP5 method as the default, matching previous recommendations. That may change with future tests (or you could add benchmarks! Feel free to contribute, or Star the repo to give this effort more support).
Conclusion
In conclusion, the Order 5 pairs go like this:
The Dormand-Prince 4/5 pair is a good go-to pair since it's well-optimized in terms of principal truncation error coefficient and has a cheap order 4 interpolation, which makes it fast for producing decent plots.
The Cash-Karp pair has more constraints on it to better handle stiff equations. However, to get the full benefit you'll want to use the full algorithm with the 5 embedded methods.
The Bogacki & Shampine Order 5 method may be the most efficient in terms of error per function call (it has a double error estimator, so on harder problems it probably does better), which allows it to take larger timesteps. However, if you're just wanting to produce a smooth plot, you then have to counteract this: use a lower tolerance (so it will take longer than DP5 but with less error) or use more interpolated steps. In the end, this means that it might not be better for interactive applications, although it might be better for some scientific computing applications.
The Tsitouras 4/5 pair was developed fairly recently (2011) to beat the DP5 pair in head-to-head comparisons. My tests don't give me a reason to believe that it's so much better than DP5 that it should now be considered the new standard method, but future tests may begin to side in its favor.
Edit
I did improve the Tsit5 implementation. It now does better than DP5 on most tests, both the DifferentialEquations.jl and the Hairer dopri implementations (though one might be surprised that the DifferentialEquations.jl implementations are actually faster, which of course helps the Tsit5 implementation). I now recommend it as the default order 4/5 method.
If you are interested in comparing two integrators solve
$$\frac{dq}{dt} = p \\ \frac{dp}{dt} = -q$$
with initial values $(q,p) = (1,0)$ and $dt = 0.1$ for both of them. Then plot the errors
$$dq = q - \cos(t) ; dp = p + \sin(t)$$
for a reasonable range of $t$ (0 to 100, one thousand time steps in all) and choose the integrator with the smaller error. This harmonic oscillator test will show you the phase and amplitude errors with very little effort.
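A sketch of this harmonic-oscillator test using scipy's integrators (RK45 is a Dormand-Prince 4/5 implementation; DOP853 is Hairer's 8th-order method, used here only as a second integrator to compare against):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    q, p = s
    return [p, -q]

t_eval = np.linspace(0, 100, 1001)
errs = {}
for method in ("RK45", "DOP853"):
    sol = solve_ivp(rhs, (0, 100), [1.0, 0.0], method=method,
                    t_eval=t_eval, max_step=0.1, rtol=1e-10, atol=1e-12)
    dq = sol.y[0] - np.cos(t_eval)           # phase/amplitude error in q
    dp = sol.y[1] + np.sin(t_eval)           # ...and in p
    errs[method] = float(np.max(np.hypot(dq, dp)))
print(errs)   # pick the integrator with the smaller error
```

Note that scipy's steppers are adaptive, so `max_step=0.1` only caps the step; for a strict fixed-step comparison as described above you would implement the steppers by hand.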
P J Channell and C Scovel. Symplectic integration of Hamiltonian systems. Journal of Computational Physics 73, 468 (1973).
|
Let $(M^{2n},\omega)$ be a symplectic manifold with an integral symplectic form $\omega$. Due to the work of M.Gromov and D.Tischler (M.Gromov "A topological technique for the construction of solutions of differential equations and inequalities", D.Tischler "Closed 2-forms and an embedding theorem for symplectic manifolds"), there exists a symplectic embedding $$ (M,\omega) \rightarrow (\mathbb{C}P^{2n+1},\omega_{FS}),$$ where $\omega_{FS}$ denote by the Fubini-Study form on the projective space. For example, Kodaira-Thurston manifold is a symplectic submanifold of $\mathbb{C}P^5$.
My questions are as follows :
Is there an example of non-Kaehler symplectic manifold $(M,\omega)$ which can be embedded into $\mathbb{C}P^n$ for some $n \leq 4$? (There is no restriction of the dimension of $M$.)
Is there an example of non-Kaehler symplectic manifold $(M,\omega)$ of dimension $2n$ which can be embedded into $\mathbb{C}P^{n+1}$? (I mean, $M$ is a submanifold of codimension 2)
I really appreciate any comments.
|
We assume that $G\in G(n,p),p=\frac{\ln n +\ln \ln n +c(n)}{n}$. Then the following fact is well known:
\begin{eqnarray} Pr [G\mbox{ has a Hamiltonian cycle}]= \begin{cases} 1 & (c(n)\rightarrow \infty) \\ 0 & (c(n)\rightarrow - \infty) \\ e^{-e^{-c}} & (c(n)\rightarrow c) \end{cases} \end{eqnarray}
I want to know results about the number of Hamiltonian cycles on random graphs.
Q1. What is the expected number of Hamiltonian cycles in $G(n,p)$?
Q2. What is the probability $Pr [G \textrm{ has a unique Hamiltonian cycle}]$ for the edge probability $p$ on $G(n,p)$?
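For Q1, linearity of expectation gives the standard first-moment answer $\mathbb E[\#\text{cycles}] = \frac{(n-1)!}{2}\,p^n$, since $K_n$ contains $(n-1)!/2$ potential Hamiltonian cycles and each survives with probability $p^n$. A brute-force Monte-Carlo check for small $n$ (a sketch of my own):

```python
import itertools, math, random

def count_ham_cycles(n, adj):
    cnt = 0
    for perm in itertools.permutations(range(1, n)):
        cycle = (0,) + perm
        if all(adj[cycle[i]][cycle[(i + 1) % n]] for i in range(n)):
            cnt += 1
    return cnt // 2                  # each undirected cycle is counted twice

n, p, trials = 5, 0.7, 20000
random.seed(0)
total = 0
for _ in range(trials):
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = random.random() < p
    total += count_ham_cycles(n, adj)

exact = math.factorial(n - 1) / 2 * p**n   # = 12 * 0.7**5, about 2.017
print(total / trials, exact)
```

Q2 is the genuinely hard part; the first moment alone says nothing about uniqueness.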
|
Physicists tend to be a bit casual about sign conventions when it seems to be obvious. So let's attempt to be completely rigorous.
The key step is getting the flight time $t$ since the range is just $v\cos\theta\, t$. We do this using the SUVAT equation:
$$ v = u + at $$
We'll use the usual conventions that up and right are positive, so $v_y$ and $v_x$ are both positive. The gravitational acceleration points downwards, so it has a negative value. However there are two ways we can treat this. Suppose we set $g = -9.81$ m/s$^2$, then our equation becomes:
$$ -v\sin\theta = v\sin\theta + gt \tag{1} $$
and rearranging for the time gives:
$$ t = \frac{-2v\sin\theta}{g} $$
If we put $g = -9.81$ m/s$^2$ we get a positive value for $t$. So far so good.
But we could also take $g$ to be the magnitude of the gravitational acceleration, so it is always a positive quantity. In that case we have to write equation (1) as:
$$ -v\sin\theta = v\sin\theta - gt \tag{2} $$
Note that we now have a minus sign on the right hand side. We put this in because we are taking $g$ to be a magnitude. Rearranging this gives:
$$ t = \frac{2v\sin\theta}{g} $$
So we get the same equation for $t$ as before, but without the minus sign.
When you're looking at this sort of system you have to know what sign convention has been used. Has $g$ been taken to be $-9.81$ m/s$^2$, or has it been taken to be $+9.81$ m/s$^2$ and the minus sign inserted in the equation instead?
If we go back to equation (1) and multiply by $v\cos\theta$ then rearrange to get the range we end up with:
$$ d = \frac{-v^2\sin2\theta}{g} $$
and obviously equation (2) gives us the same without the minus sign:
$$ d = \frac{v^2\sin2\theta}{g} $$
And there you have the answer to your question. The range equation you cite has been derived by taking $g$ to be positive and putting the minus sign in the SUVAT equation instead. So you need to put $g$ in as a positive number or you'll get the wrong answer.
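A quick numerical check that the two conventions agree (a sketch; the function names are mine):

```python
import math

def flight_time_signed(v, theta, g_signed):
    """Convention 1: g carries its own sign (g_signed = -9.81 m/s^2).
    From -v sin(theta) = v sin(theta) + g t  =>  t = -2 v sin(theta) / g."""
    return -2 * v * math.sin(theta) / g_signed

def flight_time_magnitude(v, theta, g_mag):
    """Convention 2: g is a magnitude (g_mag = +9.81 m/s^2) and the minus
    sign goes in the equation: -v sin(theta) = v sin(theta) - g t."""
    return 2 * v * math.sin(theta) / g_mag

v, theta = 20.0, math.radians(30)
t1 = flight_time_signed(v, theta, -9.81)
t2 = flight_time_magnitude(v, theta, 9.81)
d = v * math.cos(theta) * t1               # range = v cos(theta) * t
d_formula = v**2 * math.sin(2 * theta) / 9.81
print(t1, t2, d, d_formula)  # t1 == t2 and d == d_formula
```

Either convention gives the same positive flight time and the same range, as long as the sign of $g$ and the sign in the equation are kept consistent.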
|
The Schwarzschild metric describes the gravitational field of an object of mass $M$ with no electric charge and no angular momentum. The metric is
$$ {ds}^{2} = \frac{dr^2}{1 - \frac{r_\mathrm{s}}{r}} - c^2dt^2\left(1-\frac{r_\mathrm{s}}{r}\right) + r^2 \left(d\theta^2 + \sin^2\theta \ d\varphi^2\right) $$
with $r_\mathrm{s} = \frac{2GM}{c^2}$ the Schwarzschild radius, $\theta$ the polar angle (colatitude), $\varphi$ the azimuthal angle (longitude), and $ds^2$ the spacetime interval between two points in spacetime.
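For scale, the Schwarzschild radius $r_\mathrm{s} = 2GM/c^2$ is easy to evaluate numerically (a sketch; the constants are rounded approximations):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (approx.)
c = 2.998e8          # speed of light, m/s (approx.)

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2 in metres."""
    return 2 * G * mass_kg / c**2

M_sun = 1.989e30     # solar mass, kg (approx.)
print(schwarzschild_radius(M_sun))  # ~2.95e3 m, i.e. about 3 km
```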
In this equation, are the angles $\theta$, $\varphi$ defined based on the two points in space, or are they defined by a coordinate system that was defined independently of the points in spacetime? For instance, if after finding a planet that had no rotation and no electric charge we were to assign a coordinate system for the longitude and latitude of this planet, then for the Schwarzschild metric for two points in spacetime near this planet, would we use the same coordinate system we had defined previously for longitude and latitude, or would we define a new coordinate system for longitude and latitude based on these two points in spacetime?
|
Suppose that $K$ is a finite extension of $\mathbb Q$, say of degree $n$. By the primitive element theorem, $K=\mathbb Q(\alpha)$. Then $\alpha$ has $n$ conjugates and we correspondingly get $n$ embeddings of $K$ into $\mathbb C$. But I believe that these embeddings need not all have distinct images in $\mathbb C$ (as an example, $\mathbb Q(\sqrt 2)$ and $\mathbb Q(-\sqrt 2)$ are really the same field). Is there any relation between the number of embeddings with distinct images and the order of the group $Aut(K/\mathbb Q)$? On working a few examples, it seems that if $|Aut(K/\mathbb Q)|=a$ and there are $b$ embeddings with distinct images in $\mathbb C$, then $ab=n$, the degree of the extension. Is this true? If so, how can I prove it? I don't seem to be having much success. I'd appreciate some help.
For an algebraic extension $K/\mathbb{Q}$ of degree $n$ there are $n$ embeddings of $K$ into $\mathbb{C}$, see (The number of) embeddings of an algebraic extension of $\mathbb{Q}$ into $\mathbb{C}$. For example, let $K=\mathbb{Q}(\sqrt[4]{2})$. Then $a=|Aut_{\mathbb{Q}}(K)|=2$, $b=4$ and $n=4$. Hence $ab\neq n$.
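The counterexample can be checked numerically: the four embeddings of $\mathbb{Q}(\sqrt[4]{2})$ send $\sqrt[4]{2}$ to the four complex roots of $x^4-2$, but only the two real roots land back inside the real field $K$, so only two embeddings are automorphisms. A small sketch:

```python
# The four roots of x^4 - 2 are 2^(1/4) * i^k for k = 0, 1, 2, 3.
r = 2 ** 0.25
roots = [r * 1j ** k for k in range(4)]

# The embedding sigma_k sends 2^(1/4) to roots[k]. Its image lies inside
# the real field K = Q(2^(1/4)) exactly when roots[k] is real, and only
# those embeddings are automorphisms of K.
real_roots = [z for z in roots if abs(z.imag) < 1e-9]
print(len(roots), len(real_roots))  # 4 embeddings, 2 automorphisms
```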
Yes, of course there is, and this is one of the main results in fields extensions-Galois Theory.
Theorem: If $\;K/F\;$ is a field extension of degree $\;n\;$ and $\;S,\,F\subset S\subset K\;$ is the separable closure of $F$ in this extension, then there are $\;[S:F]\;$ different embeddings $\;K\to\overline{F}\;$, where $\overline F$ is any algebraically closed field containing $\;F\;$ (we can take the algebraic closure of $\;F\;$).
If we assume the extension $\;K/F\;$ is finite and separable, then the number of embeddings equals $\;n=[K:F]\;$, and all of them have image $K$ iff $\;K/F\;$ is normal, which would mean the extension $\;K/F\;$ is Galois. In this last case, any such embedding is an automorphism of $\;K/F\;$, meaning $\;\sigma K=K\;$ for any embedding $\;\sigma:K\to\overline F\;$, and $\;\sigma f=f\;,\;\;\forall\,f\in F\;$.
Maybe an example is appropriate. Let $p$ be a polynomial of degree $3$ with rational coefficients, one real root $r$, and two complex (conjugate) roots $c$ and $\overline{c}$. The splitting field $F$ of $p$ is an extension of degree $6$ and the Galois group of $p$ is the symmetric group $S_3$, the group that permutes the three roots. The Galois group consists of field automorphisms of $F$. On the other hand, the field $M$ generated by the root $r$ is a subfield of $F$ of degree $3$ and is not normal. The only elements of the Galois group that leave this field invariant (and so define automorphisms of $M$) are those that fix $r$ and permute the other two roots. Consequently $M$ has two other distinct embeddings into $F$, given by restrictions of automorphisms of $F$ that move $r$.
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Ever since day one of my Mathematical Logic course, this fact has really bothered me. I cannot wrap my head around how the empty set is a subset of every possible set. Could someone kindly explain how this is true? Any help is appreciated!
If you're comfortable with proof by contrapositive, then you may prefer to prove that for any set $A,$ if $x\notin A,$ then $x\notin\emptyset$. But of course, $x\notin\emptyset$ is trivial since $\emptyset$ has no elements at all. Hence, $x\notin A\implies x\notin\emptyset,$ so by contrapositive, $x\in\emptyset\implies x\in A,$ meaning $\emptyset\subseteq A$.
By definition, $A$ is a subset of $B$ if every element of $A$ is in $B$.
If we set $A=\emptyset$, then the above statement is vacuously true. Every element of $A$ is in fact an element of $B$ since the former has no elements.
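The vacuous-truth argument translates directly into code: Python's `all` over an empty iterable returns `True`, mirroring why the subset condition holds trivially. A small sketch (the helper name is mine):

```python
def is_subset(a, b):
    """Naive subset test straight from the definition:
    every element of a must be in b."""
    return all(x in b for x in a)

empty = set()
print(is_subset(empty, {1, 2, 3}))   # True: all() runs over nothing
print(is_subset(empty, empty))       # True: even for the empty set itself
print(is_subset({0}, {1, 2, 3}))     # False: 0 is a counterexample
```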
A remarkably simple way of trying to grasp why that is the case is as follows:
We can consider any set and throw away all its elements, then we are left with the subset { }. This means that { } is a subset of any set.
I'm practicing my set proving skills, so shoot me down if I'm wrong.
I'm going to use proof by contradiction as an alternative approach, since that approach has not been mentioned as an answer in this topic.
So we want to prove:
${ \emptyset \subseteq A }$
In other words
${\forall_x [(x \in \emptyset) \to (x \in A)]}$ def. subset
So since proof by contradiction is used, let's try to prove the opposite and hope to run into a contradiction.
${\lnot \forall_x [(x \in \emptyset) \to (x \in A)]}$
After moving the negation inside, we get
${\exists_x[(x \in \emptyset) \land \lnot(x \in A)]}$
So we want to demonstrate there exists an element of the empty set that isn't in A.
Since ${x \in \emptyset}$ is always false, we get
${\exists_x[F \land \lnot(x \in A)]}$
${\exists_x[F]}$ by the domination law
But ${\exists_x[F]}$ is false: no such element exists. This contradicts our assumption that such an element exists. Therefore we have proven ${ \emptyset \subseteq A }$.
I am quite comfortable with the other two answers but there is a softer way to answer this question. I think the following can be made formal using the Axiom of Choice (perhaps in conjunction with a function $Y\rightarrow \{0,1\}$), but if you really want to be that formal just use one of the two perfectly good arguments above.
This soft approach is as follows. Where $X$ and $Y$ are sets, a subset $X\subseteq Y$ corresponds to a choice of elements from $Y$.
For example, where $Y=\{1,2,3,4\}$, a subset $X$ is formed by answering the questions
$$1\in X?,\,2\in X?,\,3\in X?,\,4\in X?$$
For example, the subset $\{1,4\}\subset Y$ corresponds to choosing one and four but not two and three.
If we choose all of the elements of $Y$ we have the full subset and so, as it consists of a choice of elements of $Y$ --- namely all of them --- we have that
$$Y\subseteq Y.$$
Now what if we choose none of the elements of $Y$? This is still a choice of elements and so gives a subset of $Y$. It is of course the empty set, and in this sense we have$$\emptyset\subseteq Y.$$So a subset corresponds to a choice of elements from $Y$; choosing none is a choice, and therefore the empty set is always a subset.
You will need some of the more formal arguments from above to understand why $\emptyset\subseteq\emptyset$.
|
In statistics, the Breusch–Godfrey test, named after Trevor S. Breusch and Leslie G. Godfrey, [1] [2] is used to assess the validity of some of the modelling assumptions inherent in applying regression-like models to observed data series. In particular, it tests for the presence of serial dependence that has not been included in a proposed model structure and which, if present, would mean that incorrect conclusions would be drawn from other tests, or that sub-optimal estimates of model parameters would be obtained if it is not taken into account. The regression models to which the test can be applied include cases where lagged values of the dependent variables are used as independent variables in the model's representation for later observations. This type of structure is common in econometric models. A similar assessment can also be carried out with the Durbin–Watson test.
Because the test is based on the idea of Lagrange multiplier testing, it is sometimes referred to as the LM test for serial correlation. [3]
Background
The Breusch–Godfrey serial correlation LM test is a test for autocorrelation in the errors in a regression model. It makes use of the residuals from the model being considered in a regression analysis, and a test statistic is derived from these. The null hypothesis is that there is no serial correlation of any order up to $p$. [4]
The test is more general than the Durbin–Watson statistic (or Durbin's $h$ statistic), which is only valid for nonstochastic regressors and for testing the possibility of a first-order autoregressive model (e.g. AR(1)) for the regression errors. The BG test has none of these restrictions, and is statistically more powerful than Durbin's $h$ statistic.
Procedure
Consider a linear regression of any form, for example
$$Y_t = \alpha_0 + \alpha_1 X_{t,1} + \alpha_2 X_{t,2} + u_t$$
where the residuals might follow an AR($p$) autoregressive scheme, as follows:
$$u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \cdots + \rho_p u_{t-p} + \varepsilon_t.$$
The simple regression model is first fitted by ordinary least squares to obtain a set of sample residuals $\hat{u}_t$.
Breusch and Godfrey proved that, if the following auxiliary regression model is fitted
$$\hat{u}_t = \alpha_0 + \alpha_1 X_{t,1} + \alpha_2 X_{t,2} + \rho_1 \hat{u}_{t-1} + \rho_2 \hat{u}_{t-2} + \cdots + \rho_p \hat{u}_{t-p} + \varepsilon_t$$
and if the usual $R^2$ statistic is calculated for this model, then the following asymptotic approximation can be used for the distribution of the test statistic
$$n R^2 \sim \chi^2_p,$$
when the null hypothesis $H_0\colon \rho_i = 0 \text{ for all } i$ holds (that is, there is no serial correlation of any order up to $p$). Here $n$ is the number of data points available for the second regression, the one for $\hat{u}_t$:
$$n = T - p,$$
where $T$ is the number of observations in the basic series. Note that the value of $n$ depends on the number of lags of the error term ($p$).
Software
In R, this test is performed by the function bgtest, available in package lmtest. [5] [6]
In Stata, this test is performed by the command estat bgodfrey. [7] [8]
In SAS, the GODFREY option of the MODEL statement in PROC AUTOREG provides a version of this test.
In Python, the acorr_breush_godfrey function in the module statsmodels.stats.diagnostic performs the test. [9]
References
Macrodados 6.3 Help – Econometric Tools
Breusch-Godfrey test in Python: http://statsmodels.sourceforge.net/devel/generated/statsmodels.stats.diagnostic.acorr_breush_godfrey.html?highlight=autocorrelation
|
Update November 16th: Oui, the constants are constant now!
Although it's just a bunch of conventions, I have been sort of excited about the systems of units – and the SI units in particular – for more than three decades.
The most recent blog post, one from July 2017, announced plans to redefine the fundamental SI units so that some universal constants become known constants, much like \(c\) which became \[
c = 299,792,458\,{\rm m/s}
\] after a 1983 reform of the SI system. In particular, I have rooted for a reform that would turn Planck's constant \(h\) to a known constant since my childhood – but a clear blog post from April 2012 is the most explicit thing I can link to.
The dates have been determined for years... and it just happens that next week, a meeting (26th meeting of The General Conference on Weights and Measures, November 13th-16th) takes place in Versailles, France. A vote will take place over there. If the prevailing result is Oui, we will be able to memorize Planck's constant, without any error margins.
The Science Magazine's Adrian Cho has some fresh details: the kilogram will be redefined so that\[
h = 6.62607015 \times 10^{-34}\,{\rm J}\cdot {\rm s}
\] exactly. Note that the Wikipedia page on the Planck constant quotes the most accurate measurement as ...70040(81) instead of ...70150 which is chosen in the newly proposed definition of the kilogram. For some reason, the soon-to-be-permanent value of \(h\) was shifted to a slightly higher value. Note that the relative error margin of the \(h\) measurements is about \(1.1\times 10^{-8}\).
Just like the platinum-iridium cylinder will be thrown away to the dumping ground, the laboratories that measure \(h\) may also be thrown to the garbage bins, because the value of \(h\) will be perfectly known without them. Alternatively, the experimental physicists may rebrand themselves as the people who measure the mass of a platinum-iridium object thrown away to the French river – note that their result will no longer say that this platinum-iridium object was "exactly" 1 kilogram by its weight.
Fundamental physicists surely think that the reduced Planck constant, or the Dirac constant \(\hbar\), is more fundamental or more natural than \(h\), and maybe that one should have gotten a simple rational value in the SI units. But you may still use \(\hbar = h / 2\pi\) so its value will be precisely known, too.
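Since $h$ becomes exact, $\hbar = h/2\pi$ inherits a perfectly known (if irrational) value; a one-line check in Python:

```python
import math

h = 6.62607015e-34          # J*s, exact by definition after the reform
hbar = h / (2 * math.pi)    # the reduced Planck (Dirac) constant
print(hbar)                 # ~1.054571817e-34 J*s
```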
Aside from the kilogram, one ampere, one kelvin, and one mole should be redefined to peg other universal constants to known values. Those aren't quite the values \(1=c=\hbar=N_A=k_B=\epsilon_0=\dots\) used by the adult physicists – but fixed known values different from \(1\) eliminate the uncertainty just as well as values equal to \(1\).
I believe that this page, physics.info/constants/, already contains the newest (red) truly constant values of the corresponding universal constants whose error margins will be set to zero. Aside from \(c\) and \(h\) mentioned above, we might have:\[
\begin{align}
e&= 1.602176634 \times 10^{-19}\,{\rm C}\\
N_A &= 6.02214076\times 10^{23}/{\rm mol}\\
k_B &= 1.380649\times 10^{-23}\,{\rm J/K}\\
K_{cd} &= 683\,{\rm lm/W}
\end{align}
\] and the page also contains the standard gravity and the standard atmosphere (9.80665 and 101,325 with the units you know).
Next week, I would surely vote Yes for this reform – well, I consider myself a long-term proponent of this kind of a reform. Universal constants are constants appearing in the fundamental laws of Nature and play a similar role as \(\pi\) or \(e\approx 2.718\) in mathematics. If they may be made perfectly precise with a choice of units, they should be.
For the normal people, nothing will change at all. The physical constants will have more or less the same values, within error margins that are narrower than the normal people may distinguish, and the SI units will have the same meaning as before. The reform will only affect the life of the experimenters who were capable of measuring the constants with the globally competitive precision. They will have to translate their measurements to the new units and realize that it's different numbers than before that will inherit the relative error margins.
Versailles. With all this useless wealth, why wouldn't you throw a dull platinum-iridium cylinder to the trash?
The relative precision of the measured Planck constant is about 3 times worse than that of the speed of light right before its numerical value was fixed. But it's not such a big difference. We often can and need to measure distances and durations with an accuracy of \(10^{-15}\) or so – but the masses? Every quantity that has a "kilogram" in it is just fine with the \(10^{-8}\) precision. Think about milk in the supermarket.
|
Integration
Exercises on indefinite and definite integration of basic algebraic and trigonometric functions.
This is level 1. Use the ^ key to type in a power or index and use the forward slash / to type a fraction. Press the right arrow key to end the power or fraction. Click the Help tab above for more.
Each of your answers should end with +c for the constant of integration.
Instructions
Try your best to answer the questions above. Type your answers into the boxes provided leaving no spaces. As you work through the exercise regularly click the "check" button. If you have any wrong answers, do your best to do corrections but if there is anything you don't understand, please ask your teacher for help.
When you have got all of the questions correct you may want to print out this page and paste it into your exercise book. If you keep your work in an ePortfolio you could take a screen shot of your answers and paste that into your Maths file.
Mathematicians are not the people who find Maths easy; they are the people who enjoy how mystifying, puzzling and hard it is. Are you a mathematician?
© Transum Mathematics :: This activity can be found online at:
www.Transum.org/go/?Num=800
Level 1 - Indefinite integration of basic polynomials with integer coefficient solutions
Level 2 - Indefinite integration of basic polynomials with integer and fraction coefficient solutions
Level 3 - Definite integration of basic polynomials
Level 4 - Integration of expressions containing fractional indices
Level 5 - Integration of basic trigonometric, exponential and reciprocal functions
Level 6 - Integration of the composites of basic functions with the linear function ax + b
Exam Style Questions - A collection of problems in the style of GCSE or IB/A-level exam paper questions (worked solutions are available for Transum subscribers).
Differentiation - A multi-level set of exercises providing practice differentiating expressions
The video above is from the wonderful HEGARTYMATHS
Use the ^ key to type in a power or index then the right arrow or tab key to end the power.
For example: Type 3x^2 to get 3x².
Use the forward slash / to type a fraction then the right arrow or tab key to end the fraction.
For example: Type 1/2 to get ½.
Fractions should be given in their lowest terms.
Answers to definite integral questions should be given as exact fractions or to three significant figures if the decimal answer does not terminate.
The following identities may also prove useful:$$\sin^2x = \frac{1}{2} - \frac{1}{2} \cos 2x \text{ and } \cos^2x = \frac{1}{2} + \frac{1}{2} \cos 2x$$
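The two identities can be sanity-checked numerically (a quick sketch, not part of the exercise itself):

```python
import math

# Check sin^2 x = 1/2 - 1/2 cos 2x and cos^2 x = 1/2 + 1/2 cos 2x
# at a handful of sample points.
for x in [0.0, 0.5, 1.0, 2.0, math.pi / 3]:
    assert abs(math.sin(x) ** 2 - (0.5 - 0.5 * math.cos(2 * x))) < 1e-12
    assert abs(math.cos(x) ** 2 - (0.5 + 0.5 * math.cos(2 * x))) < 1e-12
print("identities hold at all sample points")
```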
Don't wait until you have finished the exercise before you click on the 'Check' button. Click it often as you work through the questions to see if you are answering them correctly. You can double-click the 'Check' button to make it float at the bottom of your screen.
|
Defining parameters
Level: \( N \) = \( 6041 = 7 \cdot 863 \)
Weight: \( k \) = \( 2 \)
Nonzero newspaces: \( 8 \)
Sturm bound: \(5958144\)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_1(6041))\).
(columns: Total, New, Old)
Modular forms: 1494708, 1431773, 62935
Cusp forms: 1484365, 1423161, 61204
Eisenstein series: 10343, 8612, 1731
Decomposition of \(S_{2}^{\mathrm{new}}(\Gamma_1(6041))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
(columns: Label, \(\chi\), Newforms with dimensions, \(\chi\) degree)
6041.2.a, \(\chi_{6041}(1, \cdot)\), 6041.2.a.a (1), 6041.2.a.b (2), 6041.2.a.c (83), 6041.2.a.d (101), 6041.2.a.e (112), 6041.2.a.f (132), degree 1
6041.2.c, \(\chi_{6041}(6040, \cdot)\), n/a, dimension 574, degree 1
6041.2.e, \(\chi_{6041}(3453, \cdot)\), n/a, dimension 1148, degree 2
6041.2.g, \(\chi_{6041}(1725, \cdot)\), n/a, dimension 1148, degree 2
6041.2.i, \(\chi_{6041}(8, \cdot)\), n/a, dimension 185760, degree 430
6041.2.k, \(\chi_{6041}(13, \cdot)\), n/a, dimension 246820, degree 430
6041.2.m, \(\chi_{6041}(2, \cdot)\), n/a, dimension 493640, degree 860
6041.2.o, \(\chi_{6041}(5, \cdot)\), n/a, dimension 493640, degree 860
"n/a" means that newforms for that character have not been added to the database yet
|
Astrophysics > High Energy Astrophysical Phenomena
Title: Optical, J, and K light curves of XTE J1118+480 = KV UMa: the mass of the black hole and the spectrum of the non-stellar component
(Submitted on 12 Sep 2019)
Abstract: Optical, J, and K photometric observations of the KV UMa black hole X-ray nova in its quiescent state obtained in 2017-2018 are presented. No significant flickering was detected within the light curves, although the average brightness of the system faded by $\approx 0.1^m$ over 350 days. Changes in the average brightness were not accompanied by an increase or decrease of the flickering. From the modelling of five light curves the inclination of the KV UMa orbit and the black hole mass were obtained: $i=74^{\circ}\pm 4^{\circ}$, $M_{BH}=(7.06\div 7.24)M_{\odot}$, depending on the adopted mass ratio. The non-stellar component of the spectrum in the range $\lambda=6400\div 22000$ Å can be fitted by a power law $F_{\lambda}\sim \lambda^{\alpha}$, $\alpha\approx -1.8$. The accretion disk orientation angle changed from one epoch to another. The model with spots on the star was inadequate. Evolutionary calculations using the "Scenario Machine" code were performed for low-mass X-ray binaries, taking into account a recently discovered anomalously rapid decrease of the orbital period. We showed that the observed decrease can be consistent with the magnetic stellar wind of the optical companion, whose magnetic field was increased during the common envelope stage. Several constraints on evolutionary scenario parameters were obtained. Submission history From: Alexey Bogomazov [view email] [v1] Thu, 12 Sep 2019 10:34:52 GMT (1934kb)
|
Standard treatments of the Buckingham Pi Theorem seem to imply that given a dimensionless function $f$ of variables $q_1, q_2, \dots, q_n$ with associated dimension matrix having rank $r$, there exists a function $\phi$ of $\nu = n-r$ variables and pi groups $\pi_1, \pi_2, \dots, \pi_\nu$ such that $$ f(q_1, q_2, \dots, q_n) = \phi(\pi_1, \pi_2, \dots, \pi_\nu). \tag{$\star$} $$ A pi group is a function of the $q_i$ that takes the form $q_1^{a_1}q_2^{a_2}\cdots q_n^{a_n}$ for some rational $a_1, a_2, \dots, a_n$. If all the $a_i$ are zero, I'll call the pi group "trivial."
A simple counterexample
Consider the case $n=1$ with $q_1 = q$ for notational simplicity, and let the function $f:\mathbb R\to \mathbb R$ be defined by $$ f(q) = \mathrm{sgn}(q) $$ where $\mathrm{sgn}$ is the sign function. This function has the property that $$ f(\lambda q) = f(q) $$ for all positive $\lambda$ -- it's scale-invariant under positive changes of scale, so no matter what dimensions one assigns to $q$, this function is dimensionless, but it cannot be written as a function of pi groups. Indeed if $q$ is dimensionful, then there will be no non-trivial pi groups to speak of, and $\mathrm{sgn}(q)$ is not a pi group.
A less reductive/contrived but essentially equivalent counterexample
Consider the formula for quadratic drag in one dimension: $$ F_d(\rho, A, v) = -\frac{1}{2}C_d\rho A v|v| = -\frac{1}{2}C_d\rho A v^2\mathrm{sgn}(v). $$ Dividing both sides by $\rho A v^2$ reveals a dimensionless quantity $$ f(\rho, A, v) = \frac{F_d(\rho,A, v)}{\rho A v^2} = -\frac{1}{2}C_d\mathrm{sgn}(v) $$ which cannot be written in terms of pi groups. There are no non-trivial pi groups that can be formed from density, area, and velocity, and $\mathrm{sgn}(v)$ is not a pi group.
Error in proof?
In addition to these counterexamples, as I was reading a proof of the Pi Theorem in Bluman and Kumei's Symmetries and Differential Equations, I came across a step in the proof (the statement "Consequently $F$ is independent of $X_n$" right after eq. 1.23) that I believe is erroneous given that we only allow for positive scale transformations when changing units, and I think this error corresponds to missing the possibility of needing both pi groups and sign functions of variables when writing dimensionless quantities in terms of other dimensionless quantities. The crux of the problem with the argument seems to me to be the assertion that if a function is positive scale invariant, namely $f(\lambda q) = f(q)$ for all positive $\lambda$, then it is constant. The sign function is a counterexample to this assertion. I think the most that can be said about a positive scale invariant function $f:\mathbb R\to\mathbb R$ is that it is constant for all negative values and constant for all positive values, for potentially different constants.
My question in a nutshell
If a dimensionless quantity $f$ is a function of non-negative quantities $q_1, q_2, \dots, q_n$, then it seems the standard statement of the theorem is correct -- e.g. the proof in Bluman and Kumei goes through without error as far as I can tell -- but if one or more variables is allowed to be negative, shouldn't equation ($\star$) above instead be written as$$ f(q_1, q_2, \dots, q_n) = \phi(\mathrm{sgn}(q_1), \mathrm{sgn}(q_2), \dots, \mathrm{sgn}(q_n), \pi_1, \pi_2, \dots, \pi_\nu)? \tag{$\star\star$}$$I believe that Bluman and Kumei's proof can be modified to prove this slightly weaker version of the theorem.
|
The answer is yes if $R$ is any atomic domain, e.g., if $R$ is a Noetherian domain.
Claim 1. Let $R$ be any integral domain. The set $S = S_R$ is saturated in the sense that if $ab \in S_R$ for some $a,b \in R$, then $a \in S_R$.
Proof. Let $x \in R$ and let $I = Ra + Rx$. Then $Ib = Rab + Rbx$ is principal because $ab \in S_R$. Since $R$ is a domain, multiplication by $b$ gives an isomorphism $I \cong Ib$ of $R$-modules, so $I$ is principal as well.
Lemma. Let $R$ be any integral domain. If $p \in S_R$ is irreducible, then $Rp$ is maximal. In particular, $p$ is prime.
Proof. Let $p$ be an irreducible element in $S_R$. Given $x \in R \setminus Rp$, there is $d \in R$ such that $Rp + Rx = Rd$. As $d$ divides both $p$ and $x$ and $p$ doesn't divide $x$, the element $d$ is a unit. Therefore $Rp + Rx = R$.
Claim 2. If $R$ is an atomic domain, then $S_R$ is multiplicatively closed.
Proof. Since $S_R$ is saturated and $R$ is atomic, it suffices to show that $pa \in S_R$ for every irreducible element $p \in S_R$ and every $a \in S_R$.
To do so, consider $x \in R$. If $x \in Rp$, write $x = px'$. As $a \in S_R$, there is $d \in R$ such that $Ra + Rx' = Rd$. Hence $Rpa + Rx = Rpd$. If $x \notin Rp$, then $Rp + Rx = R$ by the above lemma. As $a \in S_R$, we can find $d \in R$ such that $Ra + Rx = Rd$. Writing $1$ (resp. $d$) as a linear combination of $p$ and $x$ (resp. of $a$ and $x$), we observe that $d = 1 \cdot d \in Rpa + Rx$. Therefore $Rpa + Rx = Rd$, which completes the proof.
Let us denote by $R^{\times}$ the unit group of $R$. The obvious examples of sets $S_R$ are $R \setminus \{0\}$ (obtained, e.g., when $R$ is a Bézout domain) and $R^{\times}$ (obtained, e.g., for $R = \mathbb{Z}[X]$, a non-Bézout unique factorization domain).
If $R$ is atomic, the above proof shows that $S_R$ is the multiplicative submonoid of $R \setminus \{0\}$ generated by the units of $R$ and the prime elements $p$ of $R$ such that $Rp$ is a maximal ideal of $R$. Hence it is not too difficult to find rings $R$ such that $R^{\times} \subsetneq S_R \subsetneq R \setminus \{0\}$. Indeed, we have
Corollary. Let $R$ be a Dedekind domain which is not a PID, e.g., $R$ is the ring of integers of a number field with class number $> 1$. Then $S_R \subsetneq R \setminus \{0\}$.
Proof. By assumption, we can find a maximal ideal $\mathfrak{m}$ which is not principal. Let $a \in \mathfrak{m} \setminus \{0\}$. The element $a$ cannot be a product of prime elements since otherwise $\mathfrak{m}$ would be a principal ideal generated by one of them. As a result, $a \notin S_R$.
In the case $R = \mathbb{Z}[\sqrt{-5}]$, we have $S_R = R \setminus \mathfrak{m}$ where $\mathfrak{m} = R \cdot 2 + R \cdot (1 + \sqrt{-5})$. More generally, if $R$ is the ring of integers of some number field, then $S_R$ is the complement of the union of finitely many non-principal maximal ideals.
Claim 1 can be used to inspect the state of affairs regarding polynomial rings $R[X]$ with $R$ an integral domain. For those rings $R$, the set $S = S_{R[X]}$ is one of the two trivial sets in many instances. We call $f \in R[X]$ a
$u$-polynomial if $f(r)$ is a unit of $R$ for every $r \in R$.
Claim 3. Let $R$ be an integral domain which is not a field. Assume moreover that $R$ is a unique factorization domain (UFD) with infinitely many primes or that there is no non-constant $u$-polynomial over $R$.
Then $S_{R[X]} = (R[X])^{\times} \simeq R^{\times}$.
Proof. We begin with an observation. Let $a, b \in R$. As the ideal generated by $a$ and $X - b$ is principal if and only if $a \in R^{\times}$, we deduce that $a \notin S_{R[X]}$ and $X - b \notin S_{R[X]}$ for every $a \in R \setminus R^{\times}$ and every $b \in R$. Now let $f \in S_{R[X]}$ and let $a \in R \setminus \{0\}$. The ideal generated by $f$ and $a$ is a principal ideal generated by some $g \in R[X]$. Since $g$ divides $a$ and $f$, it is a constant polynomial which lies in $S_{R[X]}$ by Claim 1. Hence $g$ is a unit of $R$ by the above remark. As a result, the reduction of $f$ modulo $Ra$ is a unit of $(R/aR)[X]$ for every non-zero element $a$ of $R$. If $R$ is a UFD with infinitely many primes, then $f$ must be a constant polynomial, hence a unit.
Otherwise, let us consider the ideal generated by $f$ and $X - b$ for some $b \in R$. It is a principal ideal generated by some $h \in R[X]$ which divides both $f$ and $X - b$. As it cannot be $X - b$ multiplied by a unit of $R$ by our first remark, it is a unit of $R$. Therefore $f(b)$ is a unit of $R$ too. Since this holds for every $b \in R$, $f$ is a $u$-polynomial.
Note that non-constant $u$-polynomials over UFDs which aren't fields
do exist, see e.g., [1, Example 3.b]. Indeed, take $\mathcal{P} =\{p \text{ prime } \vert\, p \equiv 3 \text{ mod } 4 \} \subset \mathbb{Z}$ and set $$\mathbb{Z}_{\mathcal{P}} = \{ \text{ rational numbers } \frac{m}{n} \text{ with no prime factor of } n \text{ in } \mathcal{P}\}.$$ Then $x^2 + 1$ is a $u$-polynomial over the UFD $\mathbb{Z}_{\mathcal{P}}$.
In the course of Claim 3's proof, we showed
Claim 4. Let $R$ be an integral domain which is not a field. Then
every element of $S_{R[X]}$ is a $u$-polynomial over $R$.
This easily yields
Claim 5. Let $R$ be any integral domain. Then
$S_{R[X, Y]} = (R[X, Y])^{\times} \simeq R^{\times}$.
Proof. Clearly, $R[X]$ is not a field. Let $f(X, Y) \in (R[X])[Y]$ be a non-constant polynomial with coefficients in $R[X]$. Then $f(X, X^n)$ is a non-constant polynomial with coefficients in $R$ for all $n$ large enough. Therefore $f(X, Y)$ isn't a $u$-polynomial over $R[X]$. Consequently, $u$-polynomials over $R[X]$ are constant polynomials and the result follows from Claim 4.
The OP defines a seemingly narrower set $$N_R = \{ a \in R \setminus \{0\} : a \vert x \text{ or } x \vert a \text{ for every } x \in R\}$$ which satisfies $$R^{\times} \subset N_R \subset S_R \subset R \setminus \{0\}.$$
OP's first claim. If $R$ is a local domain, then $S_R = N_R$.
OP's second claim. If $R$ is any integral domain, then $N_R$ is multiplicatively closed and saturated.
From this, we infer
Corollary 1. If $R$ is an atomic domain with at least two non-associated irreducible elements, then $N_R = R^{\times}.$
Proof. As $R$ has at least two non-associated irreducible elements, $N_R$ cannot contain any irreducible element. Since $N_R$ is saturated and $R$ is atomic, $N_R$ cannot contain any non-unit element.
In particular, we have $S_R = R^{\times}$ for any regular local domain $R$ which is not a principal ideal domain. We also obtain
Remark. If $R$ is a principal ideal domain with more than one prime, then $R^{\times} = N_R \subsetneq S_R = R \setminus \{ 0 \}$.
We obtain an alternative for Noetherian local domains $R$ with Krull dimension $1$: either $R$ is a principal ideal domain and obviously $S_R = R \setminus \{ 0 \}$, or $R$ isn't and $S_R = R^{\times}$. This follows from
Corollary 2. Assume that $R$ is an atomic domain such that $S_R = N_R$, e.g., $R$ is a Noetherian local domain. Then either $R$ is a discrete valuation ring (DVR) and $S_R = R \setminus \{0\}$ or $S_R = R^{\times}$.
Proof. Assume that $S_R$ contains a non-unit element. As $R$ is atomic and $S_R$ is saturated, the set $S_R$ contains an irreducible element $p$. Since $p \in N_R$, the element $p$ divides any non-unit of $R$. Therefore $Rp$ is the unique maximal ideal of $R$. Since an irreducible element of $R$ lies necessarily in $Rp$, $p$ is the unique irreducible element of $R$, up to multiplication by a unit. Since $R$ is atomic, we deduce that $R$ is a DVR.
[1] S. H. Weintraub, "Values of polynomials over integral domains", 2014.
|
Define the problem $W$:
Input:A multi-set of numbers $S$, and a number $t$.
Question:What is the smallest subset $s \subseteq S$ so that $\sum_{x \in s} x = t$, if there is one? (If not, return
none.)
I am trying to find some polytime equivalent decision problem $D$ and provide a polytime algorithm for the non-decision problem $W$ assuming the existence of a polytime algorithm for $D$.
Here is my attempt at a related decision problem:
$\mathrm{MIN\text{-}W}$:
Input:A multi-set of numbers $S$, two numbers $t$ and $k$.
Question:Is there a subset $s \subseteq S$ so that $\sum_{x \in s} x = t$ and $|s| \leq k$?
Proof of polytime equivalence:
Assume $W \in \mathsf{P}$.
solveMIN-W(S, t, k):
    S = sort(S)
    Q = {}
    for i = 1 to k:
        Q.add(S_i)
    res = solveW(Q, t)
    if res != none and res = t: return Yes
    return No
I'm not sure about this algorithm though. Can anyone help please?
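One standard way to see the equivalence in the harder direction is self-reducibility: given a polytime decision procedure for MIN-W, the search problem W can be solved with polynomially many oracle calls. The sketch below is not your algorithm but the generic reduction; `solve_w` and `min_w_decide` are my names, and the brute-force oracle is only an exponential stand-in used for testing.

```python
from itertools import combinations

def solve_w(S, t, min_w_decide):
    # Solve the search problem W with polynomially many calls to a
    # decision oracle min_w_decide(S, t, k): "does some subset of S of
    # size <= k sum to t?"  (Hypothetical oracle -- any poly-time
    # implementation would make this whole routine poly-time.)
    n = len(S)
    if not min_w_decide(S, t, n):
        return None                        # no subset of S sums to t
    lo, hi = 0, n                          # binary search for smallest size
    while lo < hi:
        mid = (lo + hi) // 2
        if min_w_decide(S, t, mid):
            hi = mid
        else:
            lo = mid + 1
    size, chosen, rest = lo, [], list(S)
    while len(chosen) < size:              # self-reduction: commit elements
        for i, x in enumerate(rest):       # one at a time
            remaining = rest[:i] + rest[i + 1:]
            if min_w_decide(remaining, t - x, size - len(chosen) - 1):
                chosen.append(x)
                rest, t = remaining, t - x
                break
    return chosen

def brute_force_decide(S, t, k):
    # Exponential stand-in for the oracle, for testing only.
    return any(sum(c) == t
               for r in range(min(k, len(S)) + 1)
               for c in combinations(S, r))

print(solve_w([3, 5, 1, 8, 4], 9, brute_force_decide))   # → [5, 4]
```

The other direction is immediate: a polytime solver for W answers MIN-W by checking whether the smallest subset it returns has size at most $k$.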
|
I am trying to better understand Mercer's Theorem, by applying it to some specific kernels.
Background
Let $D \subset \mathbb{R}^N$ be a closed, bounded subset. We associate a function $K: D \times D \rightarrow \mathbb{R}$ with an operator (the Hilbert-Schmidt Integral Operator) $T_K: L^2(D) \rightarrow L^2(D)$ being $$ [T_K\phi](x) = \int_{D} K(x,s) \phi(s) \, ds. $$ We say that $K$ is positive semi-definite if for every set of $m$ vectors $(x_i)_{i \in \{ 1,2,\ldots,m\}} \in D$, the corresponding Gram matrix $[K(x_i, x_j)]_{ij}$ is positive semi-definite.
Mercer's Theorem states that if $K$ is symmetric, positive semi-definite, then it holds that $$ K(s,t) = \sum_{j = 1}^{\infty} \lambda_j e_j(s)e_j(t), \quad \quad ... (1) $$ where $\lambda_j$ is the $j$-th eigenvalue of $T_K$, and $e_j$ is the corresponding eigenfunction.
Question
While I understand this statement in an abstract sense, I have a hard time working through the theorem with an actual kernel. Take two simple examples:
$K_1(x,y) = x^{\intercal}y$ $K_2(x,y) = (x^{\intercal}y)^2$
To be clear, $x, y$ are vectors in $\mathbb{R}^N$. It's easily checked that $K_1, K_2$ are symmetric and positive semi-definite. So in this case, Mercer's Theorem should be applicable. But I have trouble determining what exactly $T_{K_1}$ or $T_{K_2}$ are, and furthermore what the corresponding $\lambda_j, e_j$ are so that (1) will hold.
Can someone please enlighten me? Thanks.
Edit (some clarifications):
To see that $K_1$ is positive semi-definite, fix $(x_i)_{i \in \{ 1,2,\ldots,m\}} \in D$, and denote by $X$ the matrix whose $i$-th column is $x_i$. Then $[K_1(x_i, x_j)]_{ij} = X^T X$. Thus, for any $z \in \mathbb{R}^m$, we have $$ z^T X^T X z = \| Xz\|^2 \geq 0, $$ proving positive semi-definiteness.
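As a concrete illustration of what $T_K$ and its spectrum look like, one can discretize the integral operator (the Nyström method). The sketch below uses a one-dimensional stand-in $D=[0,1]$ for $K_1$, where the answer is known in closed form; the grid size and quadrature rule are my choices, not from the question.

```python
import numpy as np

# Nystrom discretization of T_K for the rank-one kernel K(x, y) = x*y on
# D = [0, 1].  Here [T_K phi](x) = x * \int_0^1 s phi(s) ds, so T_K has a
# single nonzero eigenvalue lambda_1 = \int_0^1 s^2 ds = 1/3, with
# eigenfunction e_1(x) = sqrt(3) * x, and Mercer's expansion (1) collapses
# to K(s, t) = lambda_1 * e_1(s) * e_1(t) = s * t.
n = 2000
x = (np.arange(n) + 0.5) / n            # midpoint quadrature nodes
w = 1.0 / n                             # equal quadrature weights
K = np.outer(x, x)                      # K(x_i, x_j) = x_i * x_j
eigvals = np.linalg.eigvalsh(K * w)     # spectrum of discretized operator

print(eigvals[-1])                      # ≈ 1/3; all other eigenvalues ≈ 0
```

The same discretization applied to $K(x,y)=(xy)^2$ on $[0,1]$ would reveal a single nonzero eigenvalue $\int_0^1 s^4\,ds = 1/5$ with eigenfunction proportional to $x^2$.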
|
Learning Outcomes
Compare two fractions Compare two numbers given in different forms
In this section, we will go over techniques to compare two numbers. These numbers could be presented as fractions, decimals or percents and may not be in the same form. For example, when we look at a histogram, we can compute the fraction of the group that occurs the most frequently. We might be interested in whether that fraction is greater than 25% of the population. By the end of this section we will know how to make this comparison.
Comparing Two Fractions
Whether you like fractions or not, they come up frequently in statistics. For example, a probability is defined as the number of ways a sought after event can occur over the total number of possible outcomes. It is commonly asked to compare two such probabilities to see if they are equal, and if not, which is larger. There are two main approaches to comparing fractions.
Approach 1: Change the fractions to equivalent fractions with a common denominator and then compare the numerators
The procedure of approach 1 is to first find the common denominator and then multiply the numerator and the denominator by the same whole number to make the denominators common.
Example \(\PageIndex{1}\)
Compare: \(\frac{2}{3}\) and \(\frac{5}{7}\)
Solution
A common denominator is the product of the two: \(3\:\times7\:=\:21\). We convert:
\[\frac{2}{3}\times\frac{7}{7}=\frac{14}{21}\nonumber \]
and
\[\frac{5}{7}\times\frac{3}{3}=\frac{15}{21}\nonumber \]
Next we compare the numerators and see that \(14\:<\:15\), hence
\(\frac{2}{3}<\:\frac{5}{7}\)
Example \(\PageIndex{2}\)
In statistics, we say that two events are independent if the probability of the second occurring is equal to the probability of the second occurring given that the first occurs. The probability of rolling two dice and having the sum equal to 7 is \(\frac{6}{36}\). If you know that the first die lands on a 4, then the probability that the sum of the two dice is a 7 is \(\frac{1}{6}\). Are these events independent?
Solution
We need to compare \(\frac{6}{36}\)and \(\frac{1}{6}\). The common denominator is 36. We convert the second fraction to
\[\frac{1}{6}\times\frac{6}{6}=\frac{6}{36}\nonumber \]
Now we can see that the two fractions are equal, so the events are independent.
Approach 2: Use a calculator or computer to convert the fractions to decimals and then compare the decimals
If it is easy to build up the fractions so that we have a common denominator, then Approach 1 works well, but often the fractions are not simple, so it is easier to make use of the calculator or computer.
Example \(\PageIndex{3}\)
In computing probabilities for a uniform distribution, fractions come up. Given that the number of ounces in a medium sized drink is uniformly distributed between 15 and 26 ounces, the probability that a randomly selected medium sized drink is less than 22 ounces is \(\frac{7}{11}\). Given that the weight of a medium sized American is uniformly distributed between 155 and 212 pounds, the probability that a randomly selected medium sized American is less than 195 pounds is \(\frac{40}{57}\). Is it more likely to select a medium sized drink that is less than 22 ounces or to select a medium sized American who is less than 195 pounds?
Solution
We could get a common denominator and build the fractions, but it is much easier to just turn both fractions into decimal numbers and then compare. We have:
\[\frac{7}{11}\approx0.6364\nonumber \]
and
\[\frac{40}{57}\approx0.7018\nonumber \]
Notice that
\[0.6364\:<\:0.7018 \nonumber \]
Hence, we can conclude that it is less likely to pick a medium sized drink of 22 ounces or less than to pick a medium sized American of 195 pounds or less.
Exercise
If you guess on 10 true or false questions, the probability of getting at least 9 correct is \(\frac{11}{1024}\). If you guess on six multiple choice questions with three choices each, then the probability of getting at least five of the six correct is \(\frac{7}{729}\). Which of these is more likely?
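For checking comparisons like these by machine, Python's `fractions` module compares exactly (by cross-multiplication), sidestepping any rounding of decimals. A small sketch using the numbers from Example 3:

```python
from fractions import Fraction

# Exact comparison: Fraction compares by cross-multiplying, so no
# rounded decimals are involved.  Numbers taken from Example 3.
drink = Fraction(7, 11)      # P(medium drink is less than 22 ounces)
person = Fraction(40, 57)    # P(medium American is less than 195 pounds)

print(drink < person)        # True, since 7*57 = 399 < 440 = 40*11
print(float(drink), float(person))   # ≈ 0.6364 and ≈ 0.7018
```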
Comparing Fractions, Decimals and Percents
When you want to compare a fraction to a decimal or a percent, it is usually easiest to convert to a decimal number first, and then compare the decimal numbers.
Example \(\PageIndex{4}\)
Compare 0.52 and \(\frac{7}{13}\).
Solution
We first convert \(\frac{7}{13}\) to a decimal by dividing to get 0.5385. Now notice that
\[0.52 < 0.5385\nonumber \]
Thus
\[0.52\:<\frac{\:7}{13}\nonumber \]
Example \(\PageIndex{5}\)
When we perform a hypothesis test in statistics, we have to compare a number called the p-value to another number called the level of significance. Suppose that the p-value is calculated as 0.0641 and the level of significance is 5%. Compare these two numbers.
Solution
We first convert the level of significance, 5%, to a decimal number. Recall that to convert a percent to a decimal, we move the decimal over two places to the right. This gives us 0.05. Now we can compare the two decimals:
\[0.0641 > 0.05\nonumber \]
Therefore, the p-value is greater than the level of significance.
This is an application of comparing a decimal and a percent in statistics.
|
In this MO post, I ran into the following family of polynomials: $$f_n(x)=\sum_{m=0}^{n}\prod_{k=0}^{m-1}\frac{x^n-x^k}{x^m-x^k}.$$ In the context of the post, $x$ was a prime number, and $f_n(x)$ counted the number of subspaces of an $n$-dimensional vector space over $GF(x)$ (which I was using to determine the number of subgroups of an elementary abelian group $E_{x^n}$).
Anyway, while I was investigating asymptotic behavior of $f_n(x)$ in Mathematica, I got sidetracked and (just for fun) looked at the set of complex roots when I set $f_n(x)=0$. For $n=24$, the plot looked like this: (The real and imaginary axes are from $-1$ to $1$.)
Surprised by the unusual symmetry of the solutions, I made the same plot for a few more values of $n$. Note the clearly defined "tails" (on the left when even, top and bottom when odd) and "cusps" (both sides).
You can see that after $n=60$-ish, the "circle" of solutions started to expand into a band of solutions with a defined outline. To fully absorb the weirdness of this, I animated the solutions from $n=2$ to $n=112$. The following is the result.
Pretty weird right!? Anyhow, here are my questions:
1. First, has anybody ever seen anything at all like this before?
2. What's up with those "tails?" They seem to occur only on even $n$, and they are surely distinguishable from the rest of the solutions.
3. Look how the "enclosed" solutions rotate as $n$ increases. Why does this happen? [Explained in edits.]
4. Anybody have any idea what happens to the solution set as $n\rightarrow \infty$? Thanks to @WillSawin, we now know that all the roots are contained in an annulus that converges to the unit circle, which is fantastic. So, the final step in understanding the limit of the solution sets is figuring out what happens on the unit circle. We can see from the animation that there are many gaps, particularly around certain roots of unity; however, they do appear to be closing. The natural question is, which points on the unit circle "are roots in the limit"? In other words, what are the accumulation points of $\{z\left|z\right|^{-1}:z\in\mathbb{C}\text{ and }f_n(z)=0\}$? Is the set of accumulation points dense? @NoahSnyder's heuristic of considering these as a random family of polynomials suggests it should be, at least almost surely.
5. These are polynomials in $\mathbb{Z}[x]$. Can anybody think of a way to rewrite the formula (perhaps recursively?) for the simplified polynomial, with no denominator? If so, we could use the new formula to prove the series converges to a function on the unit disc, as well as cut computation time in half. [See edits for progress.]
6. Does anybody know a numerical method specifically for finding roots of high degree polynomials? Or any other way to efficiently compute solution sets for high $n$? [Thanks @Hooked!]
Thanks everyone. This may not turn out to be particularly mathematically profound, but it sure is
neat. EDIT: Thanks to suggestions in the comments, I cranked up the working precision to maximum and recalculated the animation. As Hurkyl and mercio suspected, the rotation was indeed a software artifact, and in fact evidently so was the thickening of the solution set. The new animation looks like this:
So, that solves one mystery: the rotation and inflation were caused by tiny roundoff errors in the computation. With the image clearer, however, I see the behavior of the cusps more clearly. Is there an explanation for the gradual accumulation of "cusps" around the roots of unity? (Especially 1.)
EDIT: Here is an animation of $Arg(f_n)$ up to $n=30$. I think we can see from this that $f_n$ should converge to some function on the unit disk as $n\rightarrow \infty$. I'd love to include higher $n$, but this was already rather computationally exhausting.
Now, I've been tinkering and I may be onto something with respect to point $5$ (i.e. seeking a better formula for $f_n(x)$). The following claims aren't proven yet, but I've checked each up to $n=100$, and they seem inductively consistent. Here denote $\displaystyle f_n(x)=\sum_{m}a_{n,m}x^m$, so that $a_{n,m}\in \mathbb{Z}$ are the coefficients in the simplified expansion of $f_n(x)$.
First, I found $\text{deg}(f_n)=\text{deg}(f_{n-1})+\lfloor \frac{n}{2} \rfloor$. The solution to this recurrence relation is $$\text{deg}(f_n)=\frac{1}{2}\left({\left\lceil\frac{1-n}{2}\right\rceil}^2 -\left\lceil\frac{1-n}{2}\right\rceil+{\left\lfloor \frac{n}{2} \right\rfloor}^2 + \left\lfloor \frac{n}{2} \right\rfloor\right)=\left\lfloor\frac{n^2}{4}\right\rfloor.$$
If $f_n(x)$ has $r$ more coefficients than $f_{n-1}(x)$, the leading $r$ coefficients are the same as the leading $r$ coefficients of $f_{n-2}(x)$, pairwise.
When $n>m$, $a_{n,m}=a_{n-1,m}+\rho(m)$, where $\rho(m)$ is the number of integer partitions of $m$. (This comes from observation, but I bet an actual proof could follow from some of the formulas here.) For $n\leq m$ the $\rho(m)$ formula first fails at $n=m=6$, and not before for some reason. There is probably a simple correction term I'm not seeing - and whatever that term is, I bet it's what's causing those cusps.
Anyhow, with this, we can almost make a recursive relation for $a_{n,m}$,$$a_{n,m}= \left\{ \begin{array}{ll} a_{n-2,m+\left\lceil\frac{n-2}{2}\right\rceil^2-\left\lceil\frac{n}{2}\right\rceil^2} & : \text{deg}(f_{n-1}) < m \leq \text{deg}(f_n)\\ a_{n-1,m}+\rho(m) & : m \leq \text{deg}(f_{n-1}) \text{ and } n > m \\ ? & : m \leq \text{deg}(f_{n-1}) \text{ and } n \leq m \end{array} \right.$$but I can't figure out the last part yet.
EDIT: Someone pointed out to me that if we write $\lim_{n\rightarrow\infty}f_n(x)=\sum_{m=0}^\infty b_{m} x^m$, then it appears that $f_n(x)=\sum_{m=0}^n b_m x^m + O(x^{n+1})$. The $b_m$ there seem to me to be relatively well approximated by the $\rho(m)$ formula, considering the correction term only applies for a finite number of recursions.
So, if we have the coefficients up to an order of $O(x^{n+1})$, we can at least prove the polynomials converge on the open unit disk, which the $Arg$ animation suggests is true. (To be precise, it looks like $f_{2n}$ and $f_{2n+1}$ may have different limit functions, but I suspect the coefficients of both sequences will come from the same recursive formula.) With this in mind, I put a bounty up for the correction term, since from that all the behavior will probably be explained.
EDIT: The limit function proposed by Gottfriend and Aleks has the formal expression $$\lim_{n\rightarrow \infty}f_n(x)=1+\prod_{m=1}^\infty \frac{1}{1-x^m}.$$I made an $Arg$ plot of $1+\prod_{m=1}^r \frac{1}{1-x^m}$ for up to $r=24$ to see if I could figure out what that ought to ultimately end up looking like, and came up with this:
Purely based off the plots, it seems not entirely unlikely that $f_n(x)$ is going to the same place this is, at least inside the unit disc. Now the question is, how do we determine the solution set at the limit? I speculate that the unit circle may become a dense combination of zeroes and singularities, with fractal-like concentric "circles of singularity" around the roots of unity... :)
|
How would I prove that $\mathbb{N}$ has no limit points? Why does $\mathbb{N}$ have no limit points?
I have tried proving with integers but clearly this is not the same.
Let us suppose $\mathbb{N}$ has a limit point, say $a$. Then every punctured neighborhood $(a-\varepsilon,a+\varepsilon)\setminus\{a\}$ with $\varepsilon>0$ must contain a point of $\mathbb{N}$. But choose $\varepsilon$ smaller than the distance from $a$ to the nearest natural number different from $a$ (for instance, $\varepsilon=\tfrac{1}{2}$ if $a\in\mathbb{N}$): then $(a-\varepsilon,a+\varepsilon)\setminus\{a\}$ contains no natural number at all, a contradiction. So $\mathbb{N}$ has no limit points.
For a point $x$ to be a limit point, we need a sequence {$x_n$} where $x_n \ne x$ in our set to approach $x$.
That does not happen for the set of natural numbers because points are at least one unit apart from each other.
|
We are still waiting for a good solution for Problem 2014-15.
For a (simple) graph \(G\), let \(o(G)\) be the number of odd-sized sets of pairwise non-adjacent vertices and let \(e(G)\) be the number of even-sized sets of pairwise non-adjacent vertices. Prove that if we can delete \(k\) vertices from \(G\) to destroy every cycle, then \[ | o(G)-e(G)|\le 2^{k}.\]
The best solution was submitted by Minjae Park (박민재, 수리과학과 2011학번). Congratulations!
Here is his solution.
An alternative solution was submitted by 김경석 (+3, 경기과학고 3학년). One incorrect solution was received (BHJ).
Let \[p(x)=x^n+x^{n-1}+a_{n-2}x^{n-2}+\cdots+a_1 x + a_0\] be a polynomial. Prove that if \(p(z)=0\) for a complex number \(z\), then \[ |z| \le 1+ \sqrt{\max (|a_0|,|a_1|,|a_2|,\ldots,|a_{n-2}|)}.\]
Let \(\theta\) be a fixed constant. Characterize all functions \(f:\mathbb{R}\to \mathbb{R}\) such that \(f''(x)\) exists for all real \(x\) and for all real \(x,y\), \[ f(y)=f(x)+(y-x)f'(x)+ \frac{(y-x)^2}{2} f''(\theta y + (1-\theta) x).\]
Prove or disprove that for all positive integers \(m\) and \(n\), \[ f(m,n)=\frac{2^{3(m+n)-\frac12} }{{\pi}} \int_0^{\pi/2} \sin^{ 2n – \frac12 }\theta \cdot \cos^{2m+\frac12}\theta \, d\theta\] is an integer.
The best solution was submitted by 김경석 (경기과학고등학교 3학년). Congratulations!
Here is his solution.
Alternative solutions were submitted by 이병학 (2013학번, +2), 박훈민 (2013학번, +2), 배형진 (공항중학교 3학년, +2). One incorrect solution was submitted (LSC).
(A typo is fixed on Saturday.)
|
Planets orbit around stars, satellites orbit around planets, even stars orbit each other. So the question is: Why don't galaxies orbit each other in general, as it's rarely observed? Is it considered that 'dark energy' is responsible for this phenomenon?
There are plenty of satellite galaxies orbiting larger galaxies. The question is how long are you willing to wait for an orbit?
The Milky Way has a mass $M$ of something like $6\times10^{11}$ solar masses, or $10^{42}\ \mathrm{kg}$. The Small Magellanic Cloud is at a distance $R$ of $2\times10^5$ light years, or $2\times10^{21}\ \mathrm{m}$. A test mass orbiting a mass $M$ at a separation $R$ will have a period of $$ P = 2\pi \sqrt{\frac{R^3}{GM}} = \text{2 billion years}. $$ Such a system could undergo at most $7$ orbits in the entire history of the universe. The universe isn't old enough for the nearest major galaxy to have completed a single orbit around us at its current separation.
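The arithmetic above can be reproduced in a few lines (using the round numbers from the text, so the answer is only good to that precision):

```python
import math

# Period of a test mass orbiting the Milky Way's mass at the Small
# Magellanic Cloud's separation, with the round numbers from the text.
G = 6.674e-11                 # m^3 kg^-1 s^-2
M = 1e42                      # kg  (~6e11 solar masses)
R = 2e21                      # m   (~2e5 light years)

P = 2 * math.pi * math.sqrt(R**3 / (G * M))   # seconds
P_gyr = P / (3.156e7 * 1e9)                   # -> billions of years

print(round(P_gyr, 1))        # → 2.2 (Gyr), i.e. ~2 billion years
```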
Even if you did wait long enough, galaxies aren't particularly good at holding their shape. If you put them in a situation where gravity is strong enough to bend their path into a closed orbit, odds are they will also be tidally torn apart by that same gravity. And we see this all the time, as for example with the Mice Galaxies:
They do! There's an entire class of galaxy, called a 'satellite galaxy', which is defined entirely based on its orbiting a larger galaxy (which would be called a 'central galaxy'). Our own Milky Way is known to have many orbiting satellite galaxies, or at least 'dwarf galaxies'. If dwarf galaxies aren't enough, the Milky Way itself is gravitationally bound to the Andromeda galaxy, and they are effectively orbiting each other. Because of the tremendous size-scales, however, the orbital period is billions of years --- in many cases, far longer than the age of the universe, so that a pair like the Milky Way--Andromeda 'local group' actually hasn't completed a single complete orbit in the history of the universe. That's why we can definitely never (even hope to) see galaxies orbit in real time.
Galaxies come in many different sizes: some of the smaller ones do rotate ("orbit") around the edge of a large galaxy. One can also visualize galaxy clusters, in which the entire cluster rotates.
|
Counting Plane Graphs with Exponential Speed-Up Abstract
We show that one can count the number of crossing-free geometric graphs on a given planar point set exponentially faster than enumerating them. More precisely, given a set $P$ of $n$ points in general position in the plane, we can compute pg($P$), the number of crossing-free graphs on $P$, in time at most \(\frac{{\rm poly}(n)}{\sqrt{8}^n} \cdot{\sf pg}(P)\). No similar statements are known for other graph classes like triangulations, spanning trees or perfect matchings.
The exponential speed-up is obtained by enumerating the set of all triangulations and then counting subgraphs in the triangulations without repetition. For a set $P$ of $n$ points with triangular convex hull we further improve the base \(\sqrt{8}\approx 2.8284\) of the exponential to 3.347. As a main ingredient for that we show that there is a constant $\alpha > 0$ such that a triangulation on $P$, drawn uniformly at random from all triangulations on $P$, contains, in expectation, at least $n/\alpha$ non-flippable edges. The best value for $\alpha$ we obtain is 37/18.
Keywords: counting crossing-free configurations, plane graphs, triangulations, constrained Delaunay triangulation, edge flips
|
To begin our study, we will look at subspaces \(U\) of \(V\) that have special properties under an operator \(T\) in \(\mathcal{L}(V,V)\).
Definition \(\PageIndex{1}\): invariant subspace
Let \(V\) be a finite-dimensional vector space over \(\mathbb{F}\) with \(\dim(V)\ge 1\), and let \(T\in \mathcal{L}(V,V)\) be an operator in \(V\). Then a subspace \(U\subset V\) is called an
invariant subspace under \(T\) if
\begin{equation*}
Tu \in U \quad \text{for all \(u\in U\).} \end{equation*} That is, \(U\) is invariant under \(T\) if the image of every vector in \(U\) under \(T\) remains within \(U\). We denote this as \(TU = \{ Tu \mid u\in U \} \subset U\).
Example \(\PageIndex{1}\)
The subspaces \(\kernel(T)\) and \(\range(T)\) are invariant subspaces under \(T\). To see this, let \(u\in\kernel(T)\). This means that \(Tu=0\). But, since \(0\in\kernel(T)\), this implies that \(Tu=0\in \kernel(T)\). Similarly, let \(u\in \range(T)\). Since \(Tv\in \range(T)\) for all \(v\in V\), we certainly also have that \(Tu \in \range(T)\).
Example \(\PageIndex{2}\)
Take the linear operator \(T:\mathbb{R}^3\to\mathbb{R}^3\) corresponding to the matrix
\begin{equation*}
\begin{bmatrix} 1&2&0\\ 1&1&0\\0&0&2 \end{bmatrix} \end{equation*}
with respect to the basis \((e_1,e_2,e_3)\). Then \(\Span(e_1,e_2)\) and \(\Span(e_3)\) are both invariant subspaces under \(T\).
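A quick numerical check of the example (a sketch; the particular test vectors are arbitrary choices of mine): a vector in \(\Span(e_1,e_2)\) has third coordinate zero, and applying \(T\) keeps it zero, while a vector in \(\Span(e_3)\) keeps its first two coordinates zero.

```python
import numpy as np

# The matrix of T from the example; Span(e1, e2) is the set of vectors
# with third coordinate 0, and Span(e3) the set with first two zero.
T = np.array([[1, 2, 0],
              [1, 1, 0],
              [0, 0, 2]])

u = np.array([3, -5, 0])      # an arbitrary vector in Span(e1, e2)
v = np.array([0, 0, 7])       # an arbitrary vector in Span(e3)

print(T @ u)                  # third coordinate stays 0
print(T @ v)                  # first two coordinates stay 0
```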
An important special case of Definition 7.1.1 involves one-dimensional invariant subspaces under an operator \(T\) in \(\mathcal{L}(V,V)\). If \(\dim(U) = 1\), then there exists a nonzero vector \(u\) in \(V\) such that
\[ U = \{ au \mid a \in \mathbb{F} \}.\]
In this case, we must have
\[ T u = \lambda u \quad ~\text{for some \(\lambda \in \mathbb{F}\)}. \]
This motivates the definitions of eigenvectors and eigenvalues of a linear operator, as given in the next section.
|
For $1 \leq i \leq n$ and $1 \leq j \leq k$:
Let $a_{ij}$ be the $j$th bit of string $a_i$, and let $b_{ij}$ be the $j$th bit of string $b_i$.
Define $x \oplus y$ as $x$ xor $y$, and define $x \parallel y$ as the concatenation of $x$ and $y$.
Then $A = \parallel_{j=1}^{k}\bigoplus_{i=1}^{n} a_{ij}$ and $B = \parallel_{j=1}^{k}\bigoplus_{i=1}^{n} b_{ij}$.
We want to know the probability that $A=B$, i.e., the probability that $\parallel_{j=1}^{k}\bigoplus_{i=1}^{n} a_{ij} = \parallel_{j=1}^{k}\bigoplus_{i=1}^{n} b_{ij}$.
Since each bit column is independent, and the bits are assigned randomly, we can compute the result for a single column and extend this to the rest of the columns.
In other words, $P(A=B) = P\left(\bigoplus_{i=1}^{n} a_{i1} = \bigoplus_{i=1}^{n} b_{i1}\right)^k$.
This is the same as asking when $(\bigoplus_{i=1}^{n} a_{i1}) \oplus (\bigoplus_{i=1}^{n} b_{i1}) = 0$, since $x \oplus x = 0$. And this is the same as asking when $(\bigoplus_{i=1}^{2n} c_{i1}) = 0$ for some randomly-assigned bitcolumn $c_{i1}$.
The probability that a sequence of random bits xors to $0$ is the same as the probability that there is an even number of $1$'s present, which occurs with probability $1/2$ overall, independent of $n$.
$$P(A=B) = \frac{1}{2^k}$$
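For small $n$ and $k$ the formula can be verified by exhaustive enumeration (a sketch, exponential in $nk$, so only for sanity checking; the function name is mine):

```python
from fractions import Fraction
from itertools import product

def exact_collision_prob(n, k):
    # Brute-force check of P(A = B): enumerate every assignment of the
    # 2*n*k random bits, XOR-fold each family of n k-bit strings, and
    # count how often the folded strings agree.
    total = matches = 0
    for bits in product([0, 1], repeat=2 * n * k):
        a, b = bits[:n * k], bits[n * k:]
        A = [0] * k
        B = [0] * k
        for i in range(n):
            for j in range(k):
                A[j] ^= a[i * k + j]
                B[j] ^= b[i * k + j]
        total += 1
        matches += (A == B)
    return Fraction(matches, total)

print(exact_collision_prob(2, 2))   # → 1/4, i.e. 1/2^k with k = 2
```

As the derivation predicts, the result is $1/2^k$ regardless of $n$.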
|
C3.8 Analytic Number Theory - Material for the year 2019-2020
Basic ideas of complex analysis. Elementary number theory. Some familiarity with Fourier series will be helpful but not essential.
16 lectures
Assessment type:
The aim of this course is to study the prime numbers using the famous Riemann $\zeta$-function. In particular, we will study the connection between the primes and the zeros of the $\zeta$-function. We will state the Riemann hypothesis, perhaps the most famous unsolved problem in mathematics, and examine its implication for the distribution of primes. We will prove the prime number theorem, which states that the number of primes less than $X$ is asymptotic to $X/\log X$.
In addition to the highlights mentioned above, students will gain experience with different types of Fourier transform and with the use of complex analysis.
Introductory material on primes. Arithmetic functions: Möbius function, Euler's $\phi$-function, the divisor function, the $\sigma$-function. Multiplicativity. Dirichlet series and Euler products. The von Mangoldt function.
The Riemann $\zeta$-function for $\Re (s) > 1$. Euler's proof of the infinitude of primes. $\zeta$ and the von Mangoldt function.
Schwartz functions on $\mathbf{R}$, $\mathbf{Z}$, $\mathbf{R}/\mathbf{Z}$ and their Fourier transforms. *Inversion formulas and uniqueness*. The Poisson summation formula. The meromorphic continuation and functional equation of the $\zeta$-function. Poles and zeros of $\zeta$ and statement of the Riemann hypothesis. Basic estimates for $\zeta$.
The classical zero-free region. Proof of the prime number theorem. Implications of the Riemann hypothesis for the distribution of primes.
Full printed notes will be provided for the course, including the non-examinable topics (marked with asterisks above). The following books are relevant to the course.
G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers (Sixth edition, OUP 2008), Chapters 16, 17, 18. H. Davenport, Multiplicative Number Theory (Third edition, Springer Graduate Texts 74), selected parts of the first half. M. du Sautoy, Music of the Primes (a popular book which could be useful background reading for the course).
|
This question is from Lang's Algebra Chapter VI Exercise Q8
Let $f(x)=x^4+ax^2+b$ be an irreducible polynomial over $\mathbb{Q}$, with roots $\pm\alpha$, $\pm\beta$ and splitting field $K$.
I have shown that the Galois Group is either $\mathbb{Z_{4}}$ or $\mathbb{Z_{2}}\times\mathbb{Z_{2}}$ or $D_{8}$
The second part of this question asks me to show the Galois group is $\mathbb{Z_{4}}$ if and only if $\frac{\alpha}{\beta}-\frac{\beta}{\alpha}\in\mathbb{Q}$
I have also shown that if the Galois group is $\mathbb{Z_{4}}$, then $\frac{\alpha}{\beta}-\frac{\beta}{\alpha}\in\mathbb{Q}$. However, I don't know how to show the opposite.
I have tried the following: since there are only four possibilities for $\sigma(\alpha)$ and four possibilities for $\sigma(\beta)$, by $\sigma(\frac{\alpha}{\beta}-\frac{\beta}{\alpha})=\frac{\alpha}{\beta}-\frac{\beta}{\alpha}$ we can only have the following three possibilities:
1) $\sigma(\alpha)=-\alpha$, $\sigma(\beta)=-\beta$
2) $\sigma(\alpha)=\beta$, $\sigma(\beta)=-\alpha$
3) $\sigma(\alpha)=-\beta$, $\sigma(\beta)=\alpha$
Clearly, the second case is what we want, but I don't know how to get rid of the case 1) and case 3).
There is a similar post here Galois group of a biquadratic quartic, but I don't know how to connect this post to my question.
Any hints or explanations are really appreciated!!
EDIT 1:
By the hints from Jyrki, I have shown that the case 1) and case 3) are actually the power of case 2), case 1) is the second power, and the case 3) is the third power.
Moreover, in case 2), the order of $\sigma$ is $4$, so that we have the cyclic group of order $4$.
Then, I found I skipped one step here. I have not shown that the extension degree is four. I think, to show this, I have to show that $\mathbb{Q}(\alpha, \beta)=\mathbb{Q}(\alpha)$ so that, since $f(x)$ is irreducible with $\alpha$ being a root, the degree is $4$. However, I have no idea how to show this right now.
Any ideas?
Thank you!
EDIT 2:
The answer from Jyrki is definitely right, the reason of this edition here is to add one more little point in the whole proof.
After all the arguments, we can only say that the Galois group has a cyclic subgroup, in our case $<\sigma_{2}>$ of order four, but we cannot conclude that the Galois group is $<\sigma_{2}>\cong \mathbb{Z_{4}}$. The reason is that the automorphism $\sigma_{2}$ was picked from the Galois group, and without knowing the degree of the splitting field over $\mathbb{Q}$, we cannot determine the size of the Galois group. In other words, if the splitting field were of degree $8$ over $\mathbb{Q}$, we could still get $<\sigma_{2}>$, but it would be a subgroup of the Galois group $\cong D_{8}$.
Thus, the degree here is important. The splitting field is $\mathbb{Q}(\alpha,\beta)$, so the degree is $4$ if and only if $\mathbb{Q}(\alpha,\beta)=\mathbb{Q}(\alpha)$. Indeed, consider the intermediate field $\mathbb{Q}(\alpha)$ between $\mathbb{Q}(\alpha,\beta)$ and $\mathbb{Q}$: the degree of $\mathbb{Q}(\alpha,\beta)$ over $\mathbb{Q}(\alpha)$ can only be $1$ or $2$ (it cannot be $3$, as the polynomial cannot have only one rational root, and it cannot be $4$, as otherwise the Galois group would have order $16$, which does not divide $24=|S_{4}|$, but the Galois group must be a subgroup of $S_{4}$ since $f(x)$ is irreducible).
Then, if the degree of $\mathbb{Q}(\alpha,\beta)$ over $\mathbb{Q}(\alpha)$ is $2$, the Galois group is $D_{8}$; if it is $1$, the group is $\mathbb{Z}_{4}$ or $\mathbb{Z}_{2}\times \mathbb{Z}_{2}$. In the current case it is $\mathbb{Z}_{4}$, but the point is that if the degree is $1$, then $\mathbb{Q}(\alpha,\beta)=\mathbb{Q}(\alpha)$ (the "only if" part is really similar).
Therefore, to prove the degree is $4$, we need to show $\mathbb{Q}(\alpha,\beta)=\mathbb{Q}(\alpha)$. To show this, we need to show that $\alpha$ generates $\beta$.
First note that, since $\pm\alpha$, $\pm\beta$ are four roots of $f(x)=x^4+ax^2+b$. We could factor $f(x)$ into $f(x)=(x-\alpha)(x+\alpha)(x-\beta)(x+\beta)$, but it will finally give us $f(x)=x^{4}-(\alpha^{2}+\beta^{2})x^{2}+\alpha^{2} \beta^{2}$. Since $f(x)\in\mathbb{Q}[x]$, $-(\alpha^{2}+\beta^{2})=-\alpha^{2}-\beta^{2}\in\mathbb{Q}$. Thus, $-\alpha^{2}-\beta^{2}=d$ for some $d\in\mathbb{Q}$. Thus, $\beta^{2}=-d-\alpha^{2}$. Then, $\frac{\alpha}{\beta}-\frac{\beta}{\alpha}=\frac{\alpha^{2}-\beta^{2}}{\alpha \beta}=\frac{2\alpha^{2}+d}{\alpha \beta}$. Since $\frac{\alpha}{\beta}-\frac{\beta}{\alpha}\in\mathbb{Q}$, then $\frac{2\alpha^{2}+d}{\alpha \beta}\in\mathbb{Q}$, and thus $\frac{2\alpha^{2}+d}{\alpha \beta}=c$ for some $c\in\mathbb{Q}$. Thus, $\frac{2\alpha^{2}+d}{c\alpha}=\beta$.
Thus, $\mathbb{Q}(\alpha,\beta)=\mathbb{Q}(\alpha)$; since $f(x)$ is irreducible of degree $4$ with $\alpha$ as a root over $\mathbb{Q}$, $\mathbb{Q}(\alpha,\beta)=\mathbb{Q}(\alpha)$ is of degree $4$ over $\mathbb{Q}$.
Now, we finish the whole proof.
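As a concrete sanity check of the criterion (a sketch; the polynomial $x^4-4x^2+2$ is an assumed example, irreducible by Eisenstein at $p=2$, whose Galois group is known to be cyclic of order $4$):

```python
import math

# f(x) = x^4 - 4x^2 + 2 has roots ±α, ±β with α = sqrt(2+√2), β = sqrt(2-√2).
alpha = math.sqrt(2 + math.sqrt(2))
beta = math.sqrt(2 - math.sqrt(2))

# Criterion above: the Galois group is Z4 iff α/β - β/α ∈ Q.  Exactly,
# α/β - β/α = (α² - β²)/(αβ) = (2√2)/√((2+√2)(2-√2)) = 2√2/√2 = 2,
# and a float check agrees:
print(alpha / beta - beta / alpha)  # ≈ 2, rational, so the group is cyclic Z4
```

The exact computation in the comment mirrors the factorization argument in the proof; the float evaluation is only a numerical confirmation, not a proof of rationality.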
|
Suppose $K/\mathbb{Q}_p$ is a finite extension with residue field $k$. Fix a uniformizer $\pi\in K$ and choose a coherent sequence $(\pi^{1/p^n})$ of $p$-power roots of $\pi$, and let $K_\infty/K$ be the extension of $K$ generated by these roots. The field $K_\infty$ is arithmetically profinite and so we can consider its field of norms, which happens to be isomorphic to the function field $k((u))$. Let $\mathcal{R}$ be a Cohen ring for $k((u))$, equipped with a lift $\varphi$ of Frobenius. Conventionally, we take this to be the $p$-adic completion of $W(k)[[u]]_{(u)}$, and $\varphi$ sends $u$ to $u^p$.
Then we have equivalences: $G_{K_\infty}$-reps $\Leftrightarrow$ $G_{k((u))}$-reps $\Leftrightarrow$ étale $\varphi$-modules over $\mathcal{R}$.
Although I've known this basic theory for a while, I realized I did not know the answers to some elementary questions, and couldn't immediately find them in the literature. It seems to me that these are toy examples of the theory and should appear somewhere!
Question 1: Let $\chi$ be the restriction of the $p$-adic cyclotomic character to $G_{K_\infty}$. Is there a nice description of the corresponding character of $G_{k((u))}$? In other words, what is the norm field extension of $k((u))$ corresponding to $K_\infty(\mu_{p^\infty})$?
Question 2: By the second equivalence, there should be a 'cyclotomic' period in $\mathcal{R}$ that transforms under $G_{k((u))}$ by the character that is the answer to Question 1. Can we write it down explicitly?
|
This was an exercise to use the approach here to estimate the sum $\sum_{p_2 \leq x} \log (p_2)^2,$ in which $p_2$ are numbers containing two prime factors (repetitions allowed). $\pi_2(x)$ is the number of $p_2$ not exceeding x.
My question is whether I have done anything illegal in adapting this method. The numbers (below) suggest it works. If someone happens to know the correct form of A(x) that would also be appreciated. (Edit: Landau, Handbuch, p.203. I may be able to fill in the error from this.)
$$\sum_{n\leq x} \pi_{2}(n)(\log(n+1)^2-\log(n)^2)$$
$$= \sum_{n\leq x}\pi_2(n)\frac{2\log n}{n} $$
$$= \sum_{n\leq x}( \frac{n\log\log n}{\log n} + O(A ))(\frac{2\log(n)}{n} +O(\frac{\log(n)}{n^2})) $$
$$= 2\log\log n + O(B) $$
Summing by parts:
$$\sum_{p_2\leq x}\log (p_{2})^2 = \sum_{n\leq x} \log(n)^2(\pi_2(n)-\pi_2(n-1))$$
$$= \log(x)^2\pi_2(x) - \sum_{n \leq x}\pi_2(n)(\log(n+1)^2- \log (n)^2) $$
$$= \log(x)^2(\frac{x\log\log x}{\log x} + O(A) ) - c\log\log x + O(B) $$
$$= x\log x \log\log x - c \log\log x + O(C) $$
So the sum of the squares of the logs of the near-primes less than x is asymptotically
$$\sum_{p_2 \leq x} \log(p_{2})^2 \sim x\log x \log\log x$$
Some numbers: $$ \begin{array}{r|c|c|c} x&(\sum_{p_2\leq x}\log(p_2)^2) / (x\log x\log\log x)\\ \hline\\ 10000 &0.867 \\ 100000 & 0.918\\ 500000&0.941 \end{array} $$
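The table can be reproduced by brute force (a sketch; trial-division factor counting, adequate for small $x$):

```python
import math

def big_omega(n):
    # Ω(n): number of prime factors of n counted with multiplicity.
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

x = 10_000
# Sum log(p2)^2 over p2 ≤ x with exactly two prime factors (with multiplicity),
# then compare against the claimed asymptotic x log x loglog x.
s = sum(math.log(n) ** 2 for n in range(2, x + 1) if big_omega(n) == 2)
ratio = s / (x * math.log(x) * math.log(math.log(x)))
print(round(ratio, 3))  # the table above reports 0.867 for x = 10000
```

The slow convergence of the ratio toward $1$ is consistent with the lower-order terms discarded in the estimate.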
|
Let $k$ be a commutative ring with $1$. Let $L$ be a $k$-Lie algebra, which is not necessarily free as a $k$-module. Let $S\left(L\right)$ denote the symmetric algebra of $L$ (over $k$), constructed as a quotient of the tensor algebra $T\left(L\right)$ of $L$. Let $U\left(L\right)$ denote the universal enveloping algebra of $L$ (over $k$), constructed as the quotient of the tensor algebra $T\left(L\right)$ modulo the two-sided ideal generated by all elements of the form $x\otimes y-y\otimes x-\left[x,y\right]$ with $x\in L$ and $y\in L$.
The canonical filtration of the tensor algebra $T\left(L\right)$ descends to a filtration of the universal enveloping algebra $U\left(L\right)$. The associated graded algebra of $U\left(L\right)$ - let us call it $GU\left(L\right)$ - is commutative (as is easily seen) and generated by the elements $\overline{\sigma\left(v\right)}\in U_1\left(L\right) / U_0\left(L\right)$ for $v\in L$ (where $\sigma$ denotes the map $L\to T\left(L\right)\to U\left(L\right)$). Thus, there exists a surjective $k$-algebra homomorphism $S\left(L\right)\to GU\left(L\right)$ which maps $v$ to $\overline{\sigma\left(v\right)}$ for every $v\in L$ (according to the universal property of the symmetric algebra).
One version of the Poincaré-Birkhoff-Witt theorem (abbreviated PBW theorem) says that
under certain conditions, this homomorphism is actually an isomorphism. The Wikipedia page says that it is so if any of the following four cases holds (here I am quoting Wikipedia):
(1) $L$ is a flat $k$-module,
(2) $L$ is torsion-free as an abelian group,
(3) $L$ is a direct sum of cyclic modules (or all its localizations at prime ideals of $k$ have this property), or
(4) $k$ is a Dedekind domain.
A reference is given to a paper which I have no access to:
P.J. Higgins,
Baer Invariants and the Birkhoff-Witt theorem, J. of Alg. 11, 469-482, (1969)
Most internet sources which prove PBW only prove it under the condition that $L$ is a free $k$-module. (Out of these proofs I consider Garrett's version most readable.) I am interested in a proof in case (2). I know that it is enough to consider the case when $k$ is a $\mathbb Q$-algebra.
The following two sources might give such a proof, if only I could understand them:
Source 1:
T. Ton-That, T.-D. Tran,
Poincaré's proof of the so-called Birkhoff-Witt theorem, Rev. Histoire Math., 5 (1999), pp. 249-284. As this is only formulated for $k$ a field, this needs some modifications, but that's not my main worry. I fail to understand this paragraph on pages 277-278:
"The first four chains are of the form
$U_1 = XH_1,\ U'_1 = H'_1Z,\ U_2 = YH_2,\ U'_2 = H'_2T$,
where each chain $H_1$, $H'_1$, $H_2$, $H'_2$ is a closed chain of degree $p - 1$; therefore by induction, each is the head of an identically zero regular sum. It follows that $U_1$, $U'_1$, $U_2$, $U'_2$ are identically zero, and therefore each of them can be considered as the head of an identically zero regular sum of degree $p$."
I don't understand the "It follows that $U_1$, $U'_1$, $U_2$, $U'_2$ are identically zero" part. This seems to be equivalent to $H_1 = H'_1 = H_2 = H'_2 = 0$, which I don't believe (the head of an identically zero regular sum isn't necessarily zero), but the authors are only using the weaker assertion that each of $U_1, U'_1, U_2, U'_2$ is the head of an identically zero regular sum of degree $p$ - which, however, is still far from being obvious to me.
Source 2:
P.M. Cohn, A remark on the Birkhoff-Witt theorem, J. London Math. Soc. 38, 197-203, (1963). This gives a rather strange argument, which doesn't really rhyme for me. Probably I don't understand it though. If anybody could write it up in modern terms I would be very thankful. (If you want to know where exactly I am stuck, it's "$1w=w\in M_n$" on page 202, but I fear that there are also some things I have not really grasped before that point.)
Is there any
accessible (I'm not at a university campus right now, and I need this rather soon) source for a proof of PBW in case (2)? UPDATE (5 JUNE 2011): (a) I do have Higgins's paper now. It is a beautiful and well-written piece of mathematics; I can hardly believe my eyes that something that well-written has been published in a journal!
This paper does not explicitly prove PBW in the cases (1) and (2), but the things it proves (combined with Lazard's theorem that flat modules are direct limits of free modules) are enough to conclude that PBW holds in the cases (1) and (2).
(b) Emanuela Petracci has a proof of PBW in case (2), and she even claims that it is Cohn's proof. This is probably just modesty, as she shows substantially more. I am going to read the proof when I have more time. (c) My question about Source 1 still stands, although I don't really need that proof now that I know better ones. (d) I have read the Deligne-Morgan proof; it is beautiful (although I would hardly call it straightforward, Theo; it is algebraic acrobatics in its purest form).
|
The simplest forcing to add a dominating function is Hechler forcing $\newcommand{\D}{\mathbb{D}}\D$. In set-theoretic circles, conditions in $\D$ are pairs $(s,f)$ where $s$ is a finite sequence of natural numbers and $\newcommand{\N}{\mathbb{N}}f:\N\to\N$, extension is defined by $(s,f) \leq_{\D} (t,g)$ if $t \supseteq s$, $g \geq f$, and $t(n) \geq f(n)$ for $|s| \leq n \lt |t|$. A $\D$-generic filter $G$ defines a function $g = \bigcup \lbrace s : (s,f) \in G\rbrace$ which eventually dominates every ground model function.
Since the statement you're trying to force is localized in the sense that you only want $g$ to dominate all total $X$-computable functions, you can get away with an index-based variant of Hechler forcing. In that case, conditions of $\D_X$ are pairs $(s,i)$ where $s$ is a (coded) finite sequence of natural numbers and $i$ is an index for a
total $X$-computable function $\varphi_i^X$, extension is defined by $(s,i) \leq_{\D_X} (t,j)$ if $(s,\varphi_i^X) \leq_{\D} (t,\varphi_j^X)$ in the sense described above. A $\D_X$-generic filter defines a function $g$ as above which eventually dominates every total $X$-computable function.
Note that we cannot expect $\D_X$ conditions to form a set since "$\varphi_i^X$ is total" is a $\Pi^0_2(X)$-complete statement. This is not a major problem since generics are constructed externally and we understand what "$\varphi_i^X$ is total" means from outside the ground model. Note that if the set of conditions exists in the ground model, then $\D_X$ is just a variation on Cohen forcing. However, in general, the ground model will have a very different perception of $\D_X$ and the generic will be quite different from a plain Cohen generic set.
To see that $\D_X$ preserves $\Sigma^0_1$-induction, first show that if some extension $(t,j) \geq_{\D_X} (s,i)$ forces a $\Sigma^0_1$-statement (which may use a fixed ground model set parameter in addition to the generic function $g$) then there is another extension $(u,i) \geq (s,i)$ that also forces the same $\Sigma^0_1$-statement. It follows from this that if $A(x)$ is a $\Sigma^0_1$ statement in the forcing language, then the set $$\lbrace x \in \N : (s,i) \nVdash \lnot A(x)\rbrace$$ is actually $\Sigma^0_1$-definable over the ground model. By $\Sigma^0_1$-induction in the ground model, this set has a minimal element $x_0$ and there is an extension $(t,j) \geq (s,i)$ (even with $j = i$) such that $$(t,j) \Vdash A(x_0) \land (\forall x \lt x_0)\lnot A(x).$$ This shows that it is dense to either force $\forall x \lnot A(x)$ or to force that there is a minimal $x$ that satisfies $A(x)$. Therefore, forcing with $\D_X$ preserves $\Sigma_1$-induction.
The use of the indexed variant $\D_X$ instead of the full second-order forcing $\D$ is very useful here since $\D$ can be quite devastating to weak subsystems of second-order arithmetic. Indeed, if the ground model satisfies arithmetic comprehension, then every $\Pi^1_1$ statement becomes $\Sigma^0_2$ in the generic extension. So forcing with $\D$ will not preserve systems weaker than $\Pi^1_1$-CA$_0$ containing ACA$_0$. The index-based variant $\D_X$ is not so devastating since it is equivalent to Cohen forcing over any model of ACA$_0$.
|
Since the total mass-energy for the neutrino presumably does not change when a neutrino changes lepton flavor, though the mass is different, what compensates for the gain or loss of mass? Does the propagation speed of the neutrino change?
There are a couple of misconceptions here.
The flavor states are not mass states. That is, the electron neutrino does not have a mass $m_{\nu_e}$ and the muon neutrino a mass of $m_{\nu_\mu}$. Rather, there are two different bases in which to examine the neutrino. So a neutrino known to be $l$ flavored is a mixture of mass states (numbered) like $$ |l> = \sum_{i=1}^3 U_{li}^* |i> $$ where $|l>$ is a flavor state (for $l = e, \mu, \tau$); $|i>$ is a mass state; and $U$ is the unitary mixing matrix.
Neutrinos interact in the flavor basis, but must propagate in the mass basis, so a neutrino mixing experiment probes the probability of detecting a neutrino in state $\beta$ given that it was created in state $\alpha$ $$ P_{\alpha,\beta} = \left| <\beta|\alpha(t)> \right|^2 = \left| <\beta|Ue^{iEt}U^*|\alpha>\right|^2 $$ which is a pretty complicated expression in the full three-flavor analysis.
Nor is it a single mass state which propagates; all three do, using the usual $e^{iEt}$ propagator, where $E = \sqrt{p^2 + m_i^2}$, which is where the mixing enters, because this reduces to oscillating trig functions.
The expression above becomes $$ P_{\alpha,\beta} = \left| \sum_i \sum_j U^*_{\alpha,i}U_{\beta,j} \sin \left(2X_{i,j} \right) <\beta|\alpha> \right|^2 $$ where $X_{i,j} = \frac{m_i^2 - m_j^2}{4E}L$, which can be reduced further because $<\beta|\alpha> = \delta_{\alpha\beta}$ and by using some trig identities, but all that is left as an exercise.
Finally, note that all the neutrinos we can interact with have energies measured in MeVs or GeVs, and all the mass states are understood to be less than 1 eV, so all neutrinos are ultra-relativistic: they move at the speed of light for nearly all practical purposes. (The exception here is the hope of comparing the arrival time of the neutrino and light wave-fronts from distant supernovae.)
If this were not the case, you would expect the probability distribution for an initially well defined neutrino pulse to differentiate by mass state as a function of time, with the leading edge being composed of the lightest state (i.e. $m_1$ [$m_3$] if the normal [inverted] hierarchy obtains), and the trailing edge of the heaviest state ($m_3$ [$m_2$]). But those states would still mix to all flavors; it's just that the mixture would be time dependent. (I've been informed of a more rigorous way to treat this part of the problem; overview at https://physics.stackexchange.com/a/21382/520.)
The reason neutrino oscillations are confusing to those students who think carefully about them is partially because of the history of how the neutrinos were discovered.
Originally, it was thought that the neutrinos were massless and so the flavor eigenstates were the only states that existed. Then the neutrinos were named electron neutrino $\nu_e$, muon neutrino $\nu_\mu$ and tau neutrino $\nu_\tau$. But these were not the mass eigenstates. We usually call the mass eigenstates $\nu_1$, $\nu_2$, $\nu_3$.
So rather than think of the situation as one involving the transmission of a single neutrino of a known (or unknown) mass, think of the situation as one involving three Feynman diagrams involving three different neutrinos $\nu_1$, $\nu_2$ and $\nu_3$. Each diagram contributes a complex number to the amplitude. By the rules of quantum mechanics, the three diagrams interfere.
Looked at this way, the mystery of neutrino oscillation becomes simply an interference that you're already familiar with. You'd have had the same type of interference if there were three possible energies of photons emitted.
For a reference to this way of looking at things, see slide 18 and following in Smirnov's presentation: http://physics.ipm.ac.ir/conferences/lhp06/notes/smirnov1.pdf
If I understand correctly, neutrinos all have a "mass-basis" which is essentially the state of the neutrino that includes its mass and probabilities for what flavor the neutrino interacts as. Neutrinos are created with a particular initial flavor which has a probability of having one of 3 non-oscillating mass-bases ("bay-sees"?), conventionally called $\nu_1$, $\nu_2$, $\nu_3$:
$\nu_1$ has about a 2/3 chance of being detected as an electron-neutrino and 1/6 each of being detected as a muon- or tau-neutrino
$\nu_2$ has about an equal chance of each (though not quite exactly equal)
$\nu_3$ has mostly an even chance between muon- and tau-neutrinos, and a tiny chance of being detected as an electron-neutrino
Saying, for example, that an electron-neutrino has a particular mass is misleading. Instead, it seems that a neutrino created as an electron-neutrino has a probability of having each mass-basis, and that mass-basis determines how the neutrino will oscillate its flavors.
So a $\nu_1$ neutrino stays a $\nu_1$ neutrino the whole way through and energy is indeed conserved. Saying that it keeps the same mass-basis all the way through, though, doesn't mean we can necessarily determine what that mass is. Its oscillation behavior affects how it interacts with things like detectors, sometimes passing right through electron-neutrino detectors because it switched to one of the other two flavors while in detection range of the detector.
Read this for a good non-mathematical explanation: http://www.quantumdiaries.org/2010/08/02/solar-neutrinos-astronaut-ice-cream-and-flavor-physics/ . Thanks to Marek for putting that link up there as a comment!
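As a numerical illustration of the oscillation formulas above, here is the standard two-flavor limit of the mixing probability (a sketch; the parameter values $\sin^2 2\theta = 1.0$ and $\Delta m^2 = 2.5\times 10^{-3}\ \mathrm{eV}^2$, roughly the atmospheric sector, are assumptions for the example):

```python
import math

def p_survival(L_km, E_GeV, sin2_2theta=1.0, dm2_eV2=2.5e-3):
    """Two-flavor survival probability P = 1 - sin^2(2θ) sin^2(1.27 Δm² L/E).

    L in km, E in GeV, Δm² in eV²; the factor 1.27 collects the unit
    conversions in the phase X_{i,j} = (m_i² - m_j²) L / (4E) above.
    """
    return 1 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

print(p_survival(0, 1))    # 1.0: no oscillation at the source
print(p_survival(495, 1))  # near 0: first oscillation minimum around L/E ≈ 495 km/GeV
```

The oscillation depends only on the squared-mass difference and $L/E$, which is why experiments constrain $\Delta m^2$ rather than the absolute masses.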
|
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, ISSN 0370-2693, 06/2015, Volume 746, pp. 178 - 185
A sample of $1.69\times 10^7$ fully reconstructed $\pi^0 \to \gamma e^+ e^-$ decay candidates collected by the NA48/2 experiment at CERN in 2003-2004 is analyzed to search for the...
Journal Article
PHYSICS LETTERS B, ISSN 0370-2693, 06/2015, Volume 746, pp. 178 - 185
Journal Article
Journal of High Energy Physics, ISSN 1126-6708, 10/2018, Volume 2018, Issue 10, pp. 1 - 23
Journal Article
Nuclear Inst. and Methods in Physics Research, A, ISSN 0168-9002, 2007, Volume 574, Issue 3, pp. 433 - 471
The beam and detector, used for the NA48 experiment, devoted to the measurement of , and for the NA48/1 experiment on rare and neutral hyperon decays, are...
Kaon beams | Magnetic spectrometer | Calorimeter | Trigger | CP violation | Detectors | SYSTEM | kaon beams | PERFORMANCE | PC FARM | READOUT | detectors | magnetic spectrometer | CRYSTAL | trigger | DECAYS | INSTRUMENTS & INSTRUMENTATION | SPECTROSCOPY | NUCLEAR SCIENCE & TECHNOLOGY | calorimeter | LIQUID-KRYPTON CALORIMETER | DRIFT CHAMBER ELECTRONICS | PROTON TAGGING DETECTOR | PHYSICS, PARTICLES & FIELDS | Physics | High Energy Physics - Experiment
Journal Article
PHYSICS LETTERS B, ISSN 0370-2693, 06/2017, Volume 769, pp. 67 - 76
The NA48/2 experiment at CERN collected a large sample of charged kaon decays to final states with multiple charged particles in 2003-2004. A new upper limit...
NEUTRAL HEAVY-LEPTONS | BEAM | LIMITS | UPPER-BOUNDS | PARTICLES | ASTRONOMY & ASTROPHYSICS | DARK-MATTER | VMSM | PHYSICS, NUCLEAR | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment
Journal Article
Journal of Instrumentation, ISSN 1748-0221, 08/2008, Volume 3, Issue 8, pp. S08004 - S08004
The Compact Muon Solenoid (CMS) detector is described. The detector operates at the Large Hadron Collider (LHC) at CERN. It was conceived to study...
Overall mechanics design | Data processing methods | Digital electronic circuits | Calorimeters | Electronic detector readout concepts | Calibration and fitting methods | Gaseous detectors | Computing | Data acquisition circuits | Detector alignment and calibration methods | Cluster finding | Front-end electronics for detector readout | Trigger concepts and systems | Large detector systems for particle and astroparticle physics | Gamma detectors | Control and monitor systems online | Detector design and construction technologies and materials | Manufacturing | Optical detector readout concepts | Data acquisition concepts | Data reduction methods | Solid state detectors | Particle tracking detectors | Online farms and online filtering | Modular electronics | Scintillation and light emission processes | Detector grounding | VLSI circuits | Instrumentation for particle accelerators and storage rings-high energy | Detector control systems | Detector cooling and thermo-stabilization | Pattern recognition | Software architectures | Voltage distributions | Scintillators | Analysis and statistical methods | Particle identification methods | Special cables | Spectrometers | Digital signal processing | Analogue electronic circuits | Pattern recognition, cluster finding, calibration and fitting methods | Instrumentation for particle accelerators and storage rings | RESISTIVE PLATE CHAMBERS | PROTON-INDUCED DAMAGE | SILICON SENSORS | PBWO4 SCINTILLATING CRYSTALS | CATHODE STRIP CHAMBERS | VACUUM PHOTOTRIODES | LEAD TUNGSTATE CRYSTALS | high energy | INSTRUMENTS & INSTRUMENTATION | FRONT-END ELECTRONICS | LEVEL GLOBAL TRIGGER | Scintillators, scintillation and light emission processes | LONG-TERM PERFORMANCE | Instrumentation and Detectors | Physics
Journal Article
PHYSICS LETTERS B, ISSN 0370-2693, 01/2019, Volume 788, pp. 552 - 561
Journal Article
European Physical Journal C, ISSN 1434-6044, 12/2010, Volume 70, Issue 3, pp. 635 - 657
We report results from the analysis of the $K^\pm \to \pi^+ \pi^- e^\pm \nu$ ($K_{e4}$) decay by the NA48/2 collaboration at the CERN SPS, based on the total statistics...
MONTE-CARLO | KE4 DECAYS | PHASE-SHIFTS | CONSTANTS | ONE-LOOP | FORM-FACTORS | LENGTHS | PI-PI SCATTERING | EQUATION | ISOSPIN BREAKING | PHYSICS, PARTICLES & FIELDS
Journal Article
Physics Letters B, ISSN 0370-2693, 01/2019, Volume 788, Issue C, pp. 552 - 561
The NA48/2 experiment at CERN reports the first observation of the decay from an exposure of charged kaon decays recorded in 2003–2004. A sample of 4919...
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
09/2018
Phys. Lett B 788 (2019) 552 The NA48/2 experiment at CERN reports the first observation of the $K^\pm \to \pi^\pm \pi^0 e^+ e^-$ decay from an exposure of $1.7...
Physics - High Energy Physics - Experiment
Journal Article
Physics Letters B, ISSN 0370-2693, 06/2017, Volume 769, Issue C, pp. 67 - 76
The NA48/2 experiment at CERN collected a large sample of charged kaon decays to final states with multiple charged particles in 2003–2004. A new upper limit...
Nuclear and High Energy Physics | High Energy Physics - Experiment; Charge Kaons; Lepton number violation | Physics | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
The European Physical Journal C, ISSN 1434-6044, 12/2009, Volume 64, Issue 4, pp. 589 - 608
We report the results from a study of the full sample of $\sim 6.031\times 10^7$ $K^\pm \to \pi^\pm \pi^0 \pi^0$ decays recorded by the NA48/2 experiment at the CERN SPS. As first observed...
Measurement Science and Instrumentation | Nuclear Physics, Heavy Ions, Hadrons | Quantum Field Theories, String Theory | Physics | Astronomy, Astrophysics and Cosmology | Elementary Particles, Quantum Field Theory | Formulations | Charge exchange | Pions | Synchrotrons | S waves | Wave scattering | Decay
Journal Article
13. Search for direct CP violating charge asymmetries in $K^\pm \to \pi^\pm \pi^+ \pi^-$ and $K^\pm \to \pi^\pm \pi^0 \pi^0$ decays
European Physical Journal C, ISSN 1434-6044, 12/2007, Volume 52, Issue 4, pp. 875 - 891
Journal Article
European Physical Journal C, ISSN 1434-6044, 04/2008, Volume 54, Issue 3, pp. 411 - 423
Journal Article
|
Degree $n$: $28$
Transitive number $t$: $34$
Group: $C_{14}\times D_7$
Parity: $1$
Primitive: No
Nilpotency class: $-1$ (not nilpotent)
Generators: (1,26,22,18,13,10,6,2,25,21,17,14,9,5)(3,8,12,16,20,23,28,4,7,11,15,19,24,27), (1,11,25,8,22,4,17,27,13,23,9,19,6,16)(2,12,26,7,21,3,18,28,14,24,10,20,5,15)
$|\operatorname{Aut}(F/K)|$: $14$
|G/N| Galois groups for stem field(s):
2: $C_2$ x 3
4: $C_2^2$
7: $C_7$
14: $D_{7}$, $C_{14}$ x 3
28: $D_{14}$, 28T2
98: $C_7 \wr C_2$
Resolvents shown for degrees $\leq 47$
Degree 2: $C_2$ x 3
Degree 4: $C_2^2$
Degree 7: None
Degree 14: $C_7 \wr C_2$
28T34 x 2
Siblings are shown with degree $\leq 47$
A number field with this Galois group has no arithmetically equivalent fields.
There are 70 conjugacy classes of elements. Data not shown.
Order: $196=2^{2} \cdot 7^{2}$
Cyclic: No
Abelian: No
Solvable: Yes
GAP id: [196, 10]
Character table: Data not available.
|
In statistics and probability theory, the standard deviation (SD) (represented by the Greek letter sigma, σ) measures the amount of variation or dispersion from the average.[1]
A low standard deviation indicates that the data points tend to be very close to the mean (also called expected value); a high standard deviation indicates that the data points are spread out over a large range of values.
$\sigma = \sqrt {\mu _2 }$
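For a concrete illustration (hypothetical data), $\sigma$ is just the square root of the second central moment $\mu_2$:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]                      # hypothetical sample
mu = statistics.fmean(data)                          # mean = 5.0
mu2 = sum((x - mu) ** 2 for x in data) / len(data)   # second central moment = 4.0
sigma = mu2 ** 0.5                                   # population SD = 2.0
# statistics.pstdev(data) computes the same quantity directly
```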
A statistical measure that represents the percentage of a fund or security's movements that can be explained by movements in a benchmark index. For fixed-income securities, the benchmark is the T-bill. For equities, the benchmark is the S&P 500.
|
Here’s a recipe for finding the coordinates of your position after $n$ steps along the spiral.
It’s simpler to number the positions on the spiral starting at $0$: position $0$ is $\langle 0,0\rangle$, the origin, position $1$ is $\langle 1,0\rangle$, position $2$ is $\langle 1,-1\rangle$, and so on. Using $R,D,L$, and $U$ to indicate steps Right, Down, Left, and Up, respectively, we see the following pattern:
$$RD\,|LLUU\,\|\,RRRDDD\,|LLLLUUUU\,\|\,RRRRRDDDDD\,|LLLLLLUUUUUU\;\|\dots\;,$$
or with exponents to denote repetition, $R^1D^1|L^2U^2\|R^3D^3|L^4U^4\|R^5D^5|L^6U^6\|\dots\;$. I’ll call each $RDLU$ group a block; the first block is the initial $RDLLUU$, and I’ve displayed the first three full blocks above.
Clearly the first $m$ single-bar groups comprise a total of $2\sum_{k=1}^mk=m(m+1)$ steps, so $k$ full blocks (each two such groups) comprise $2k(2k+1)$ steps. It’s also not hard to see that the $k$-th block (counting from $0$) is $R^{2k+1}D^{2k+1}L^{2k+2}U^{2k+2}$, so that the net effect of a block is to move you one step up and one step to the left. Since the starting position after $0$ blocks is $\langle 0,0\rangle$, the starting position after $k$ full blocks is $\langle -k,k\rangle$.
Suppose that you’ve taken $n$ steps. There is a unique even integer $2k$ such that $$2k(2k+1)<n\le(2k+2)(2k+3)\;;$$ at this point you’ve gone through $k$ blocks plus an additional $n-2k(2k+1)$ steps. After some straightforward but slightly tedious algebra we find that you’re at
$$\begin{cases}\langle n-4k^2-3k,k\rangle,&\text{if }2k(2k+1)<n\le(2k+1)^2\\\langle k+1,4k^2+5k+1-n\rangle,&\text{if }(2k+1)^2<n\le 2(k+1)(2k+1)\\\langle 4k^2+7k+3-n,-k-1\rangle,&\text{if }2(k+1)(2k+1)<n\le4(k+1)^2\\\langle -k-1,n-4k^2-9k-5\rangle,&\text{if }4(k+1)^2<n\le2(k+1)(2k+3)\;.\end{cases}$$
To find $k$ easily, let $m=\lfloor\sqrt n\rfloor$. If $m$ is odd, $k=\frac12(m-1)$. If $m$ is even and $n>m(m+1)$, then $k=\frac{m}2$; otherwise, $k=\frac{m}2-1$. (The inequality must be strict: $n=m(m+1)$ with $m$ even is exactly the end of a full block.)
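The recipe can be sanity-checked in code (a sketch; note the even-$m$ rule needs the strict inequality $n > m(m+1)$, since $n = m(m+1)$ is exactly the end of a full block) by comparing the piecewise formula against a direct simulation of the walk:

```python
from math import isqrt

def spiral_sim(n):
    """Walk the spiral step by step: group j repeats ceil(j/2) steps, cycling R, D, L, U."""
    dirs = [(1, 0), (0, -1), (-1, 0), (0, 1)]   # R, D, L, U
    x = y = taken = 0
    j = 0
    while taken < n:
        j += 1
        dx, dy = dirs[(j - 1) % 4]
        for _ in range(min((j + 1) // 2, n - taken)):
            x, y, taken = x + dx, y + dy, taken + 1
    return (x, y)

def spiral_formula(n):
    """Piecewise closed form for the position after n >= 1 steps."""
    m = isqrt(n)
    if m % 2 == 1:
        k = (m - 1) // 2
    elif n > m * (m + 1):        # strict: n == m(m+1) ends a full block
        k = m // 2
    else:
        k = m // 2 - 1
    if n <= (2 * k + 1) ** 2:
        return (n - 4 * k * k - 3 * k, k)
    if n <= 2 * (k + 1) * (2 * k + 1):
        return (k + 1, 4 * k * k + 5 * k + 1 - n)
    if n <= 4 * (k + 1) ** 2:
        return (4 * k * k + 7 * k + 3 - n, -k - 1)
    return (-k - 1, n - 4 * k * k - 9 * k - 5)
```

For instance, `spiral_formula(6)` gives `(-1, 1)`, the end of the first full block.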
|
Consider the Sturm-Liouville problem
$$y''(x) + \lambda x^2 y=0, \ y(0)=0,\ y(1)=0$$
The analytical solution is given by
$$\lambda_n=4\alpha_n^2, \ y_n(x)=\sqrt{x}J_\frac14(\alpha_nx^2)$$
where $\alpha_n$ is the $n$-th zero of the Bessel function of the first kind of order 1/4. Thus, $\lambda_1\approx30.93$ and $y_1(x) \approx \sqrt{x} \,J_\frac14(2.78x^2)$
{vals, funs} = NDEigensystem[
   {-(y''[x]/x^2),
    DirichletCondition[y[x] == 0, x == 0],
    DirichletCondition[y[x] == 0, x == 1]},
   y[x], {x, 0, 1}, 1]
yields
vals = 16.1035, which is clearly incorrect.
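As an independent check of the analytic value (a sketch using a plain shooting method rather than NDEigensystem): integrate $y'' = -\lambda x^2 y$ with $y(0)=0$, $y'(0)=1$ and bisect on $\lambda$ until $y(1)=0$; the smallest such $\lambda$ should land near $30.93$.

```python
def shoot(lam, steps=2000):
    """Return y(1) for y'' = -lam * x^2 * y, y(0) = 0, y'(0) = 1, via classic RK4."""
    h = 1.0 / steps
    x, y, v = 0.0, 0.0, 1.0
    f = lambda x, y, v: (v, -lam * x * x * y)
    for _ in range(steps):
        k1y, k1v = f(x, y, v)
        k2y, k2v = f(x + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = f(x + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return y

# y(1) changes sign across the first eigenvalue; bisect on [20, 40]
lo, hi = 20.0, 40.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
lam1 = 0.5 * (lo + hi)   # should match 4*alpha_1^2 from the Bessel solution
```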
|
I am attempting to model the steady-state behavior of a cylinder in Matlab using the finite volume method (FVM), subjected to a variety of boundary conditions. First off, I am treating the cylinder as axisymmetric, so I only determine the temperature profile in the r-Z plane. I set up a 2D grid so that the r coordinate is on the 'x' axis and the Z coordinate is on the 'y' axis. Since the cylinder is axisymmetric, the heat flux at r = 0 is 0 (insulated). I am also applying an insulated boundary condition to the bottom surface at Z = 0. The side surface has convection with a constant convection coefficient and an air temperature of 310 K. At the top of the cylinder, z = Z, there is a heat flux of 1000 W/m^2, convection from the air at 310 K, and also surface-to-ambient radiation with the surroundings at 298 K. The initial temperature of the cylinder is 298 K (initial guess value). The problem I am having is that the temperature distribution matches Comsol's results with a grid of 40x40 and a thermal conductivity of 1. When I increase the number of cells to 50x50, 60x60, etc., or if I change the thermal conductivity to something other than 1, the temperature profile still has the right shape, but all the temperatures are significantly lower than expected. My control loop in Matlab is as follows:
1. Use initial values and problem constants to solve for the initial surface temperature of the top and side.
2. Use the surface temperatures and problem geometry to determine aE, aN, aS, aW, aP, and b for every cell in the grid.
3. Use line-by-line TDMA in both directions to determine the temperature profile.
4. Calculate the error between the new temperature profile and the previous one.
5. Update the surface temperatures, aP, and b because of the nonlinear radiation term that uses previous iterate values.
6. Perform line-by-line TDMA again and check for convergence. If the error is not less than 0.001, repeat the procedure.
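The "line by line TDMA" step is the tridiagonal matrix (Thomas) algorithm. A minimal single-line solver, sketched in Python rather than Matlab, looks like this (`a`, `b`, `c` hold the sub-, main and super-diagonals, with `a[0]` and `c[-1]` unused):

```python
def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# e.g. the system [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8] has solution [1,2,3]
x = tdma([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
```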
For the boundary conditions, I used an energy balance for the top surface and the side surface. The top surface includes surface to ambient radiation and I calculated the surface temperature to be $$T_{b}=\frac{\alpha G+hT_{f}+\frac{kT_{p}}{\delta z}+\epsilon \sigma(T_{sur}^{4}+3T_{b}^{*4})}{h+4 \epsilon \sigma T_{b}^{*3}+\frac{k}{\delta z}}$$
I also do the same thing for the surface temperature on the side that is experiencing heating through convection. Since the top surface temperature depends on its previous iterate value, I use an initial guess value of 298 for the top surface. I then calculate the initial surface temperature based on that and the equation I provided for the first iteration.
The problem lies in the fact that the surface temperature depends on both the thermal conductivity and the grid spacing. When I use many cells, like 80x80, the top surface temperature for the first TDMA iteration is almost 298, which is hard to believe because it is being heated by a 1000 W/m^2 heat flux and also by the 310 K air. This causes the first iteration to have very little error, so convergence only produces a max temperature of about 325 when it should be 450. But if I use a grid of 40x40, the surface temperature for the first iteration is much higher, and convergence makes the final temperature profile look almost exactly as expected.
The same exact thing happens with the thermal conductivity: if I change the value, the initial surface temperatures change dramatically, which causes the converged values to be much lower than expected.
It seems like it all has to do with that initial surface temperature, but I am not sure how else to treat it. Any help would be amazing, as I have spent many nights until 5am trying to figure out how I could fix this. My contact email is in my profile if you would like to see my Matlab code or talk to me more about my problem.
|
$$\large \prod_{j=1}^n j \not \equiv 0 \quad \left ( \text{mod} \ \sum_{j=1}^n j \right)$$
For how many integers $n$ satisfying $1 \le n \le 180$ is the non-congruence above fulfilled?
You may use this List of Primes as a reference.
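For what it's worth, the count can be brute-forced (a sketch, assuming the product is $n!$ and the modulus is the triangular number $1+2+\dots+n = n(n+1)/2$):

```python
from math import factorial

def holds(n):
    # n! not congruent to 0 modulo 1 + 2 + ... + n = n(n+1)/2
    return factorial(n) % (n * (n + 1) // 2) != 0

count = sum(holds(n) for n in range(1, 181))
```

The linked list of primes is the natural hint here: for $n \ge 2$ the non-congruence appears to hold exactly when $n+1$ is an odd prime, since then $n+1$ divides the triangular number but cannot divide $n!$.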
|
Interesting recursive functions -
Let R={i∣∃j:f(j)=i} be the set of distinct values that... Can someone explain clearly? What is the meaning of this?
R={i∣∃j:f(j)=i} be the set of distinct values that f takes
Someone please explain why is it written "takes" ? R contains the values that f can "give" right?
@MINIPanda It is a recursive function:

    f(x) {
        if (x == 1) return x;          // map to "x"
        else if (x == 5) return y;     // map to "y" (base condition)
        if (x mod 2 == 0) f(x/2);
        else f(x+5);
    }

The function assigns a value when it terminates. As you can see, the base value can be either 1 or 5, so R = {1, 5}. PS: recursive calls don't provide a mapping; a value is mapped when the function terminates and returns something.
R={i∣∃j:f(j)=i}
Even this is ambiguous: everything boils down to f(1) and f(5). It should be mentioned that we can consider f(x)=x, because R contains all values x such that there exists at least one value in the domain which maps to this x.
@Divy Kala I have written the code above for the same. Please check it and let me know if you still have doubt.
The answer is 2. It's saying the domain and codomain are both N+ (f: N+ → N+).
can anyone explain meaning of this plz??
@Deepesh Kataria Where is f(9)?
R={i∣∃j:f(j)=i}
here the set definition means:
take j=1; then f(1) = 6, and this 6 is i
f(2) = 1, and this 1 is i
f(3) = 8, and this 8 is i
f(4) = 2, and this 2 is i
...
this way we get many values of i, which together form the set N+ = {1, 2, 3, 4, ...}
then why are you doing f(1) =f(6) =f(3) =f(8)...??
the set builder form is not asking this.
@akriti, see the definition of the function again; it's in recursive form. f(n) = f(n/2) (see the recursion), and f(1) = f(6), so in order to calculate f(1) we need to again calculate f(6). This gives f(3), so we need to find f(3), which will be f(8). Now f(8) gives f(4), then f(4) gives f(2), and then f(2) -> f(1), so in this way it repeats itself. See the function again; the catchy thing is the recursion part.
http://math.stackexchange.com/questions/2118739/finding-recursive-function-range/2118749
We will use strong induction to prove this. Suppose that $f(1) = a$ and $f(5) = b$. It is clear that $f(5n) = b$ for all $n$. We'll prove by induction that for all $n$ not divisible by $5$, $f(n) = a$.

First note that
$$f(2) = f(\tfrac{2}{2}) = f(1) = a,$$
$$f(3) = f(3+5) = f(8) = f(4) = f(2) = a,$$
$$f(4) = f(2) = a.$$

Also note that if $n = 5k+r$ with $0 \lt r \lt 5$ is not divisible by $5$, then neither is $n+5 = 5(k+1)+r$, and for even $n$ neither is $\frac{n}{2}$ (if it were divisible by $5$, so would be $n$).

Base case: $f(1)=f(2)=f(3)=f(4)=a$ (already solved above).

Inductive step: suppose $n \gt 5$ is not divisible by $5$, and $f(m) = a$ for all $m \lt n$ not divisible by $5$. If $n$ is even, $f(n) = f(n/2)$, and $n/2 \lt n$ is not divisible by $5$, so by the induction hypothesis $f(n) = a$. If $n$ is odd, $f(n) = f(n+5) = f(\frac{n+5}{2})$ since $n+5$ is even; here $\frac{n+5}{2} \lt n$ (as $n \gt 5$) and $\frac{n+5}{2}$ is not divisible by $5$ (otherwise $n+5$, and hence $n$, would be), so by the induction hypothesis $f(n) = a$.
Best solution, using mathematical induction. Thanks :-)
@Sourav Basu Yes.
$\because f(n)=f(n+5)$.
Putting $n=n-5$ [where $n>5$] will yield $f(n-5)=f(n-5+5)=f(n)$
Let $f(1) = x$. Then $f(2) = f(2/2) = f(1) = x$,
$$f(3) = f(3+5) = f(8) = f(4) = f(2) = f(1) = x,$$
$$f(5) = f(5+5) = f(10) = f(5) = y.$$
All of $N^+$ except the multiples of 5 are mapped to $x$, and the multiples of 5 are mapped to $y$, so the answer is 2.
Thanks @Prince Sindhiya, the pictorial mapping clears everything up.
choose any number let n=17, then
f(17)=f(22)=f(11)=f(16)=f(8)=f(4)=f(2)=f(1)=f(6)=f(3)=f(8)=f(4)=f(2)=f(1)=f(6)=f(3)=f(8)=f(4)=f(2)=f(1)... <this is one part>
now let n=50
f(50)=f(25)=f(30)=f(15)=f(20)=f(10)=f(5)=f(10)=f(15)=f(20)=f(10)=f(5)=f(10)=f(5)=f(10)=f(5) .....<this is other part>
So we can take any number, and it will fall into one of these two cycles; these correspond to the two values that the function f() can take.
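The back-and-forth above can be settled mechanically (a sketch; the recursion quoted in the thread sends even $n$ to $n/2$ and odd $n$ to $n+5$): every starting value drifts into one of exactly two argument cycles, so $f$ can take at most two distinct values.

```python
def step(n):
    # one unfolding of the recursion's argument
    return n // 2 if n % 2 == 0 else n + 5

def cycle_min(n):
    """Iterate the argument until a value repeats, then return the
    smallest element of that cycle as a canonical representative."""
    seen = set()
    while n not in seen:
        seen.add(n)
        n = step(n)
    start, rep, m = n, n, step(n)
    while m != start:
        rep = min(rep, m)
        m = step(m)
    return rep

reps = {cycle_min(n) for n in range(1, 1001)}
# two cycles: 1 -> 6 -> 3 -> 8 -> 4 -> 2 -> 1 and 5 -> 10 -> 5
```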
For combinatorics , can add balls and bin...
|
You only need to consider the case $\mathfrak{h}_{s}^\ast(A) \lt \infty$, but you need to be a bit careful in choosing the outer approximations, since swapping $\inf$ and $\sup$ certainly isn't allowed without some thinking. If you knew that you can always take the same set $E$ in the $\inf$ (which I will show in \eqref{eqn:ast} below) then you'd be essentially done, but it is impossible to say whether you got that far or not.
I'm going to ignore $\alpha_s$ since it only is a scalar factor that doesn't play a rôle in the argument and the formulas will already get cumbersome enough without it.
The following argument is essentially the one from Fremlin's measure theory, Volume 4II, 471D, page 102. $\DeclareMathOperator{\diam}{diam}$ $\newcommand{\hsd}[1]{\mathfrak{h}_{s,#1}^\ast}$
Set $\delta_n = 2^{-n}$ and choose $(A_{i}^n)_{i \in \mathbb{N}}$ of diameter $\leq 2^{-n}$ such that $A \subset \bigcup_{i=1}^\infty A_{i}^n$ and that
\begin{equation}\tag{$1$}\label{eqn:eq1} \sum_{i=1}^\infty (\diam{A_{i}^n})^s \leq \hsd{2^{-n}}(A) +2^{-n},\end{equation}which is possible because of the definition of $\hsd{2^{-n}}$ as infimum.
Observe that there's no reason for the $A_{i}^n$ to be $\hsd{2^{-n}}$-measurable, let alone Borel, so we would like to “blow them up” slightly, so as to get open sets still approximating the $\hsd{2^{-n}}$-measure of $A$ well. The problem is that by doing so we will lose the diameter condition which appears in the definition of $\hsd{2^{-n}}$, but this isn't a serious problem: we can simply choose a larger $n$ and work with the sets we obtain from there. Here are the gory details:
Choose $0 \lt r_{i}^n \lt 2^{-n}$ so small that$$
(\diam{(A_{i}^n)} + 2r_{i}^n)^s \leq (\diam{A_{i}^n})^s + 2^{-n-i}.
$$Now put $U_{i}^n = \{x \in X\,:\,d(x,A_{i}^n) \lt r_{i}^n\}$ and note that $U_{i}^n \supset A_{i}^n$ is an open set whose diameter satisfies\begin{equation}\tag{$2$}\label{eqn:eq2}(\diam{U_{i}^n})^s \leq (\diam{A_{i}^n})^s + 2^{-n-i}.\end{equation}Let$$
B = \bigcap_{n=1}^\infty \bigcup_{i=1}^\infty U_{i}^n
$$and observe that $B$ is a $G_{\delta}$-set (countable intersection of open sets) containing $A$.
Given any $\delta \gt 0$, we can find $N$ such that $3 \cdot 2^{-N} \leq \delta$ so that for all $n \geq N$ we have $\diam{U_{i}^n} \leq \delta$. As $B \subset \bigcup_{i = 1}^\infty U_{i}^n$ we see from \eqref{eqn:eq1} and \eqref{eqn:eq2} that for $n \geq N$$$
\hsd{\delta}(B) \leq \sum_{i=1}^\infty (\diam{U_{i}^n})^s
\leq
\sum_{i=1}^\infty [(\diam{A_{i}^n})^s + 2^{-n-i}] \leq \hsd{2^{-n}}{(A)} + 2 \cdot 2^{-n}
$$Since $\hsd{2^{-n}}(A) \leq \mathfrak{h}_{s}^\ast(A)$ we get for $n \geq N$$$
\hsd{\delta}(B) \leq
\mathfrak{h}_{s}^\ast(A) + 2^{-n}
$$and letting $n \to \infty$ this gives
\begin{equation}\tag{$\ast$}\label{eqn:ast}\hsd{\delta}(B) \leq \mathfrak{h}_{s}^\ast(A)\end{equation}
for every $\delta \gt 0$. [Note: this is stronger than your condition involving $\inf$ since we can specify the set $E = B$ and thus avoid the infimum]
Taking the $\sup$ over all $\delta$ in \eqref{eqn:ast} this yields$$
\mathfrak{h}_{s}^\ast(B) \leq \mathfrak{h}_{s}^\ast(A)
$$and since $B \supset A$ and $B$ is $\mathfrak{h}_{s}$-measurable (being Borel) we can finally conclude that$$
\mathfrak{h}_{s}(B) = \mathfrak{h}_{s}^\ast(A),
$$as desired.
|
So guys today INCHO And INAO Are over . Tomorrow is INPHO .
How were your exams? your expectations
Note by Prakhar Bindal 2 years, 8 months ago
@Aniket Sanghi @aryan goyat @Archit Agrawal @Spandan Senapati @Samarth Agarwal @neelesh vij
@AYUSH JAIN @Rajdeep Dhingra @Harsh Shrivastava @Piyush Sethia
@Vitthal Yellambalse @Ayush Choubey @Vatsalya Tandon
Best of luck for INPhO !
Best of luck for inpho to all
Best of luck for inpho guyz !! :p
Best of luck
How was INPhO? How much did you attempt?
What value did you get for N and omega in Q6? Did you solve Q5?
I don't remember exactly. I had solved it in a hurry, and my scaling factor was of the order of 10^6, meaning that on the y-axis I took 1/lambda^2 × 10^6. I remember it was about 1.8×10^29 or something like that; I don't remember the power, but 1.85 was what I had got.
Seems like the paper was quite lengthy, and the cutoffs may hover around 45-50, or maybe somewhat less; anyway, it would be between 60 and 70%.
How was inpho guys...
Did anyone solve question 5?
What is the expected cutoff in INAO?
Hey guys, is the answer for question 5 omega = 3/16 B^2 L^3 pi / MR?
https://brilliant.org/problems/inpho/?ref_id=1316764
Easy question. Still got it wrong; messed up the cos and sines XD
How did you approach the problem?
Any idea about expected cut off for INPHO ??
maybe around 45-55
Hey, do you study at Allen, Kota?
What are your INPhO 2017 scores?
INJSO marks also (I didn't take the paper, just interested)
|
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
|
Let $M=\left (\omega\mathbb{I}-A\right )\left(\omega^{*}\mathbb{I}-A^{\dagger}\right)$ be a Hermitian matrix of size $n\times n$ where $A$ is a real non symmetric matrix and $\omega=a+\mathrm{i}b$. $A^{\dagger}$ represents the conjugate transpose of $A$.
I want to compute $\det[M]^{-\frac{1}{2}}$.
I know that for a real symmetric matrix $\Sigma$ we can represent its determinant as a gaussian integral with real variables $x_i$: $$ \frac{1}{|\Sigma|^{1 / 2}}=\int \frac{1}{(2 \pi)^{n / 2}} \exp \left(-\frac{1}{2}\mathbf{x}^{T} \Sigma\mathbf{x}\right)\mathrm{d}\mathbf{x}.$$
However in my case $M$ has complex values. I was wondering if we could extend this integral representation to Hermitian matrices. Among the feedback I got, these are the candidates: \begin{equation} \det[M]^{-\frac{1}{2}}=\int \left ( \prod_{i} \frac{\mathrm{d} x_i}{\sqrt{2 \pi / i}}\right ) \exp \left\{-\frac{\mathrm{i}}{2} \sum_{i j }x_i\left (\sum_k\left(\omega \delta_{i k}-A_{i k}\right)\left(\omega^* \delta_{k j}-A_{k j}^T\right)\right ) x_j\right\}. \end{equation} \begin{equation} \det[M]^{-\frac{1}{2}}=\int\left(\prod_i \frac{d^{2} z_{i}}{\pi}\right) \exp \left\{-\sum_{i, j, k} z_{i}^{*}\left(\omega^{*} \delta_{i k}-J_{i k}^{T}\right)\left(\omega \delta_{k j}-J_{k j}\right) z_{j}\right\} \end{equation} The second one involving complex variables seems intuitively the best suited. However I do not know whether this is correct, and I could use a simpler integral then I would prefer very much so.
Why would this not work: $$ \det[M]^{-\frac{1}{2}}=\int \left ( \prod_{i} \frac{\mathrm{d} x_i}{\sqrt{2 \pi }}\right ) \exp \left\{-\frac{1}{2} \sum_{i j }x_i\left (\sum_k\left(\omega \delta_{i k}-A_{i k}\right)\left(\omega^* \delta_{k j}-A_{k j}^T\right)\right ) x_j\right\}. $$
I am very curious on what the correct way would be. Any remark or advice would be greatly appreciated!
Edit: I now consider the case where $A$ is real and does not have complex entries.
Second edit: I was told that I had to integrate over complex $z_i$ rather than real $x_i$. If this is true I would like to know why I can't use real integration.
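If the second candidate is the right one, its normalization can be probed numerically. For $n=1$, where $M$ reduces to a positive number $m$, the complex integral becomes $\frac1\pi\int e^{-m|z|^2}\,\mathrm{d}^2z = \int_0^\infty 2r\,e^{-mr^2}\,\mathrm{d}r = 1/m$, which is $\det[M]^{-1}$ rather than $\det[M]^{-1/2}$. A quick midpoint-rule check (a sketch, pure stdlib):

```python
import math

m = 2.5                 # a 1x1 positive "matrix" M = m
R, N = 20.0, 200_000    # truncation radius and number of midpoint panels
h = R / N
# midpoint rule for (1/pi) * int 2*pi*r*exp(-m r^2) dr = int 2*r*exp(-m r^2) dr
integral = h * sum(2 * ((i + 0.5) * h) * math.exp(-m * ((i + 0.5) * h) ** 2)
                   for i in range(N))
# integral should approximate 1/m = det(M)^{-1}
```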
|
There are several broadly similar inequalities on MSE, some of which take a long page to prove. Some of these questions are listed below:
For $abc=1$ prove that $\sum\limits_{cyc}\frac{a}{a^{11}+1}\leq\frac{3}{2}.$ For positive $a$, $b$, $c$ with $abc=1$, show $\sum_{cyc} \left(\frac{a}{a^7+1}\right)^7\leq \sum_{cyc}\left(\frac{a}{a^{11}+1}\right)^7$ Inequality $\frac{x}{x^{10}+1}+\frac{y}{y^{10}+1}+\frac{z}{z^{10}+1}\leq \frac{3}{2}$ Inequality: $(a^3+a+1)(b^3+b+1)(c^3+c+1) \leq 27$ If $abc=1$ so $\sum\limits_{cyc}\frac{a}{a^2+b^2+4}\leq\frac{1}{2}$ For $abc=1$ prove that $\sum\limits_\text{cyc}\frac{1}{a+3}\geq\sum\limits_\text{cyc}\frac{a}{a^2+3}$ If $abc=1$ so $\sum\limits_{cyc}\sqrt{\frac{a}{4a+2b+3}}\leq1$.
and so on. One can pose many similar questions in this way: let $f(x)$ be a continuous function (maybe with a special property); then prove that $\sum_{x\in\{a,b,c\}}f(x)\leq 3f(1)$ whenever $abc=1$. Or one can generalize this to an arbitrary number of variables: $\sum_{cyc}f(x)\leq nf(1)$ whenever $\prod_{i=1}^n x_i=1$.
My argument is based on what I read in Problem-Solving Through Problems by Loren C. Larson:
principle of insufficient reason, which can be stated briefly as follows: " Where there is no sufficient reason to distinguish, there can be no distinction."
So my question is
Must a (short and) beautiful proof always exist for such inequalities, as the OPs desire in their answers?
|
Let \(G\) be a group. A topology on \(G\) is said to be a group topology if the map \(\mu: G \times G \to G\) defined by \(\mu(g, h) = g^{-1}h\) is continuous with respect to this topology, where \(G \times G\) is equipped with the product topology. A group equipped with a group topology is called a topological group. When we have two topologies \(T_1, T_2\) on a set \(S\), we write \(T_1 \leq T_2\) if \(T_2\) is finer than \(T_1\), which gives a partial order on the set of topologies on a given set. Prove or disprove the following statement: for a given group \(G\), there exists a unique minimal group topology on \(G\) (minimal with respect to the partial order described above) such that \(G\) is a Hausdorff space.
The best solution was submitted by 이정환 (수리과학과 2015학번). Congratulations!
Here is his solution of problem 2019-10.
An incomplete solution was submitted by 채지석 (수리과학과 2016학번, +2).
Let \(G\) be a group acting by isometries on a proper geodesic metric space \(X\). Here \(X\) being proper means that every closed bounded subset of \(X\) is compact. Suppose this action is proper and cocompact. Here, the action is said to be proper if for all compact subsets \(B \subset X\), the set \[\{g \in G | g(B) \cap B \neq \emptyset \}\] is finite. The quotient space \(X/G\) is obtained from \(X\) by identifying any two points \(x, y\) if and only if there exists \(g \in G\) such that \(gx = y\), and is equipped with the quotient topology. Then the action of \(G\) on \(X\) is said to be cocompact if \(X/G\) is compact. Under these assumptions, show that \(G\) is finitely generated.
The best solution was submitted by 이정환 (수리과학과 2015학번). Congratulations!
Here is his solution of problem 2019-08.
Alternative solutions were submitted by 조재형 (수리과학과 2016학번, +3), 채지석 (수리과학과 2016학번, +3), 김태균 (수리과학과 2016학번, +2).
|
Using some suggestions from the other commenters:
The alternating group, $A_4$, has the set $H=\{I,(12)(34),(13)(24),(14)(23)\}\cong V_4$ as a subgroup. If $f\in S_4\supseteq A_4$ is a permutation, then $f^{-1}[(12)(34)]f$ has the effect of swapping $f(1)$ with $f(2)$ and $f(3)$ with $f(4)$. One of these is $1$, and depending on which it is paired with, the conjugated element may be any of $H-\{I\}$, since the other two are also swapped. Thus $H\lhd S_4$ is normal, so $H\lhd A_4$ as well. Similarly, $H$ has three nontrivial subgroups, and taking $K=\{I,(12)(34)\}\cong C_2$, this is normal because $V_4$ is abelian. But $K\not\lhd A_4$, since $$[(123)][(12)(34)][(132)]=(13)(24)\in H-K.$$
Moreover, this is a minimal counterexample, since $|A_4|=12=2\cdot 2\cdot 3$ is the next smallest number which factors into three integers, which is required for $K\lhd H\lhd G$ but $\{I\}\subset K\subset H\subset G$ so that $[G\,:\,H]>1$, $[H\,:\,K]>1$, $|K|>1$ and$$|G|=[G\,:\,H]\cdot[H\,:\,K]\cdot|K|.$$The smallest integer satisfying this requirement is 8, but the only non-abelian groups with $|G|=8$ are the dihedral group $D_4$ and the quaternion group $Q_8$, and neither of these have counterexamples. (Note that if $G$ is abelian, then all subgroups are normal.) Thus $A_4$ is a minimal counterexample. (Edit: Oops, $D_4$ has a counterexample, as mentioned in the comments: $\langle s\rangle\lhd\langle r^2,s\rangle\lhd \langle r,s\rangle=D_4$, but $\langle s\rangle\not\lhd D_4$.)
However, if $H\lhd G$ and $K$ is a characteristic subgroup of $H$, then $K$ is normal in $G$. This is because the group action $f$ defines an automorphism on $G$, $\varphi(g)=f^{-1}gf$, and because $H$ is normal, $\varphi(H)=H$ so that $\varphi|_H$ is an automorphism on $H$. Thus $\varphi(K)=K$ since $K$ is characteristic on $H$ and so $\{f^{-1}kf\mid k\in K\}=K\Rightarrow K\lhd G$.
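The normality claims here are small enough to check by brute force (a Python sketch using 0-indexed permutation tuples; the generators below are my own transcriptions of $(123)$, $(124)$, $(12)(34)$ and $(13)(24)$):

```python
def compose(p, q):
    """(p∘q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generate(gens):
    """Closure of the generators under composition (finite group)."""
    elems = {tuple(range(len(gens[0])))}
    frontier = list(elems)
    while frontier:
        new = []
        for g in frontier:
            for h in gens:
                x = compose(g, h)
                if x not in elems:
                    elems.add(x)
                    new.append(x)
        frontier = new
    return elems

def is_normal(sub, grp):
    # sub is normal in grp iff every conjugate g s g^{-1} stays in sub
    return all(compose(compose(g, s), inverse(g)) in sub
               for g in grp for s in sub)

A4 = generate([(1, 2, 0, 3), (1, 3, 2, 0)])   # (123), (124)
H  = generate([(1, 0, 3, 2), (2, 3, 0, 1)])   # (12)(34), (13)(24)
K  = generate([(1, 0, 3, 2)])                 # (12)(34)
```

Here `is_normal(K, H)` and `is_normal(H, A4)` hold, while `is_normal(K, A4)` fails, confirming that normality is not transitive.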
|
I'm trying to answer this question and you are supposed to use the multiplication rule to solve it:
A deck of 52 playing cards is randomly divided into four piles of 13 cards each. Compute the probability that each pile has exactly 1 ace.
I started off by defining the four events $A_{1}, A_{2}, A_{3}$ and $A_{4}$, where $A_{i}$ denotes the event that exactly one ace is found in the $i^{\text{th}}$ pile. To find the desired probability I need the probability of the intersection of all these events, which is where I can use the multiplication rule.
The multiplication rule says that $$\mathbb{P}(A_{1} \cap A_{2} \cap A_{3} \cap A_{4})= \mathbb{P}(A_{1}) \mathbb{P}(A_{2}|A_{1}) \mathbb{P}(A_{3}|A_{2} \cap A_{1}) \mathbb{P}(A_{4}|A_{3} \cap A_{2} \cap A_{1})$$
To find each of the probabilities on the RHS, I compared the number of favourable card combinations with the total number possible in each situation:
The number of possible card combinations such that $A_{1}$ holds is $\binom{48}{12}$ and the total number of possible card combinations for the first pile is $\binom{52}{13}$. It then follows that $$\mathbb{P}(A_{1})= \frac{\binom{48}{12}}{\binom{52}{13}}=\frac{1406}{4165}$$
Moving on to the second pile, there are now 39 cards remaining, so for pile 2 to have exactly one ace there are $\binom{36}{12}$ favourable combinations out of $\binom{39}{13}$ in total, and so $$\mathbb{P}(A_{2}|A_{1}) = \frac{\binom{36}{12}} {\binom{39}{13}} =\frac{225}{703}.$$
Continuing on in this way I got that
$$\mathbb{P}(A_{3}|A_{2} \cap A_{1}) = \frac{\binom{24}{12}}{\binom{26}{13}}=\frac{13}{50}$$
and by the way I have defined my events, it means that $$\mathbb{P}(A_{4}|A_{3} \cap A_{2} \cap A_{1}) = 1$$
so by the multiplication rule I get that $$\mathbb{P}(A_{1} \cap A_{2} \cap A_{3} \cap A_{4}) = \frac{1406}{4165} \cdot \frac{225}{703} \cdot \frac{13}{50} \approx 0.0281$$
However, the answer I am given says it should be $\approx 0.105$. Can anyone help me see where I have gone wrong? Could it be that defining the events differently leads to different probabilities? Thanks!
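As a sanity check on which number to trust, the target probability can be estimated by simulation; this is an illustrative Monte Carlo sketch (names are my own), dealing a shuffled deck into four piles of 13 and counting deals where every pile holds exactly one ace:

```python
import random

# Monte Carlo estimate of P(each pile of 13 gets exactly one ace).
ACES = {0, 1, 2, 3}  # treat cards 0-3 of the deck as the four aces

def one_ace_per_pile(trials=100_000, seed=1):
    rng = random.Random(seed)
    deck = list(range(52))
    hits = 0
    for _ in range(trials):
        rng.shuffle(deck)
        piles = (deck[13 * i:13 * (i + 1)] for i in range(4))
        if all(sum(c in ACES for c in pile) == 1 for pile in piles):
            hits += 1
    return hits / trials

print(one_ace_per_pile())  # roughly 0.105, consistent with the given answer
```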
|
I would give an economical argument:
1. For every field $F$ of cardinality $q=p^d$ we have $x^q -x=0$ for all $x \in F$. (This is easy: $F^\times$ is a group of order $q-1$, so $x^{q-1}=1$ for every $x \neq 0$.)
2. For every irreducible $P \in \mathbb{F}_p[X]$ of degree $d$ we have $P \mid X^{q}-X$, where $q=p^d$.
Indeed, work in the field $F = \mathbb{F}_p[X]/P(X)$, which has $q=p^d$ elements. The class of $X$ is a root of $P(X)$ in $F$, and by 1. it is also a root of $X^{q}-X$. So $P(X)$ and $X^q-X$ have a common factor over $\mathbb{F}_p$, and since $P(X)$ is irreducible, $P(X) \mid X^q - X$.
3. Let $F'$ be of degree $d'$ and $F$ of degree $d$ over $\mathbb{F}_p$, with $d' \mid d$. Then $F'$ embeds into $F$.
Indeed: the extension $F'/\mathbb{F}_p$ is simple (it is finite and separable), so $F'= \mathbb{F}_p[X]/Q(X)$ for some irreducible $Q$ of degree $d'$.
From $1$. we have:$$X^q - X = \prod_{\alpha \in F} (X-\alpha)$$
However, from $2.$ we have
$$Q(X) \mid (X^{q'}- X) \mid (X^q-X), \qquad q'=p^{d'},\ q=p^d,$$ where the second divisibility holds because $d' \mid d$.
It follows that $Q(X)$ splits into $d'$ distinct linear factors over $F$, and sending $X$ to any root of $Q$ in $F$ gives the embedding $F' \hookrightarrow F$.
We are done.
Finding effectively a root of $Q(X)$ in $F$ might be a bit involved. Here is an example where $p=3$, $Q(X)=1+2 X + X^3$ and $F= \mathbb{F}_3[X]/(2 + X + 2 X^2 + X^4 + X^6)$. We need to find $a + b X + c X^2 + d X^3 + e X^4 + f X^5$ so that the remainder of $$1 + 2(a + b X + c X^2 + d X^3 + e X^4 + f X^5) + (a + b X + c X^2 + d X^3 + e X^4 + f X^5)^3 $$after dividing by $2 + X + 2 X^2 + X^4 + X^6$ is $\ 0\!\!\! \mod 3$.
By brute force we find $3$ solutions
$$(a,b,c,d,e,f)= (0, 1, 0, 1, 2, 0),\ (1, 1, 0, 1, 2, 0), \ \text{or}\ (2, 1, 0, 1, 2, 0)$$
Therefore, we have a morphism of fields
$$\mathbb{F}_3[X]/(1+2X+X^3) \to \mathbb{F}_3[X]/(2 + X + 2 X^2 + X^4 + X^6)\\X\mapsto X + X^3 + 2 X^4$$
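The claimed root can be double-checked with a few lines of hand-rolled polynomial arithmetic over $\mathbb{F}_3$ (a throwaway sketch; function names are mine):

```python
# Check that r = X + X^3 + 2X^4 is a root of Q(Y) = 1 + 2Y + Y^3 in
# F_3[X]/(2 + X + 2X^2 + X^4 + X^6). Coefficient lists, lowest degree first.
P = 3
MOD = [2, 1, 2, 0, 1, 0, 1]  # 2 + X + 2X^2 + X^4 + X^6

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def polyadd(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x + y) % P for x, y in zip(a, b)]

def polymod(a, m):
    a, dm = a[:], len(m) - 1
    for i in range(len(a) - 1, dm - 1, -1):  # m is monic, no inverse needed
        c = a[i]
        if c:
            for j in range(len(m)):
                a[i - dm + j] = (a[i - dm + j] - c * m[j]) % P
    return (a + [0] * dm)[:dm]

r = [0, 1, 0, 1, 2, 0]  # X + X^3 + 2X^4
r3 = polymod(polymul(polymul(r, r), r), MOD)
q_of_r = polymod(polyadd(polyadd([1], [(2 * c) % P for c in r]), r3), MOD)
print(q_of_r)  # [0, 0, 0, 0, 0, 0] -- Q(r) vanishes, so X -> r is a morphism
```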
|
There is a problem with ppX values: they are ambiguous. A ppX value may be w/w, w/v, v/v, or n/n.
Salt water has a density significantly different from $\pu{1 g/mL}$, so $\pu{1 ppt}$ (parts per thousand) may mean $\pu{1000 mg/L}$ or $\pu{1000 mg/kg}$, with the solution density as the conversion factor.
The former ($\pu{ppt}$ w/v, i.e. $\pu{1000 mg/L}$) is more probable, but check it: e.g., prepare a salt solution of known concentration and determine in which ppt variant your refractometry measurement is calibrated.
$$\pu{1 ppt(w/v)} = \pu{1000 mg/L}$$
$$\pu{1 ppt(w/w)} = \pu{1000 mg/kg} = \rho \cdot \pu{1000 mg/L}$$
where $\rho$ is the density of the solution in $\pu{g/mL}$ (a w/w concentration converts to mg/L by multiplying by the density, since $\pu{1 kg}$ of solution occupies $1/\rho$ litres).
$\pu{1000 mg}$ NaCl is equivalent to $1000\cdot \frac {M_{\ce{Cl}}} {M_{\ce{Na}}+M_{\ce{Cl}}}=1000\cdot\frac {35.453}{22.990+35.453}=\pu{606.6 mg}$ Cl
$$\pu{1 ppt NaCl(w/v)} = \pu{606.6 mg/L Cl}$$
$$\pu{1 ppt NaCl(w/w)} = \rho(\pu{g/mL}) \cdot \pu{606.6 mg/L Cl}$$
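The conversions above can be wrapped in a small helper; this is an illustrative sketch (function name and interface are my own), using the molar masses quoted in the text and multiplying a w/w figure by the solution density to obtain mg/L:

```python
# Convert a ppt NaCl reading to mg/L of chloride.
M_NA, M_CL = 22.990, 35.453  # molar masses of Na and Cl, g/mol

def ppt_nacl_as_mg_cl_per_litre(ppt, basis="w/v", density=1.0):
    """basis='w/v': 1 ppt = 1000 mg/L.
    basis='w/w': 1 ppt = 1000 mg/kg; multiply by density (g/mL) for mg/L."""
    mg_nacl_per_litre = ppt * 1000.0 * (density if basis == "w/w" else 1.0)
    return mg_nacl_per_litre * M_CL / (M_NA + M_CL)

print(round(ppt_nacl_as_mg_cl_per_litre(1), 1))  # 606.6
```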
|
As far as I know, the Integer Linear Programming (ILP) problem is NP-complete. According to the following paper, the Binary Linear Programming (BLP) problem can be solved in polynomial time: http://dx.doi.org/10.4236/ajor.2016.61001
I'm not familiar with the convex quadratic problem mentioned in the paper. However, I know that ILP can be converted to BLP in polynomial time, which means ILP would also be in P, rather than NP-complete, if this paper is correct.
If the paper above is rubbish, then can the following specific BLP problem still be solved in time polynomial in $n$?
What makes this BLP special is that the coefficient matrix $A$ is also binary.
\begin{align} \min_{x} \quad & [c]_{1 \times n}[x]_{n \times 1} \\ \text{s.t.} \quad & [A]_{m \times n}[x]_{n \times 1} = [1]_{m \times 1} \\ & x\in\mathbb{B}^{n},\ A\in\mathbb{B}^{m\times n}\\ & \forall j\in\{1,2,\dots,n\}:\ \sum_{i=1}^m A_{i,j} =k,\quad k\in\mathbb{Z}^+ \end{align}
I strongly believe this problem can be solved in polynomial time, as I've tried "bintprog" (Tomlab/CPLEX) in MATLAB on instances of size up to $n\approx 14000$, and they solve in about 5 s on my laptop (Intel i5-4210U). I've searched the IBM CPLEX documentation but still haven't found a clue.
Right now I can only bound the time complexity of this problem by $O(n^m)$, but this is not a polynomial bound, as $m$ can be as large as $n$.
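For concreteness, the constraint structure (binary $A$, every column summing to $k$, all-ones right-hand side) can be illustrated with a tiny brute-force solver. This is a hypothetical sketch for intuition only; it is exponential in $n$, not a polynomial-time method:

```python
import numpy as np
from itertools import product

# Brute-force min c.x s.t. A x = 1, x binary, A binary with column sums k.
def brute_force_blp(c, A):
    m, n = A.shape
    best_val, best_x = None, None
    for bits in product((0, 1), repeat=n):
        x = np.array(bits)
        if np.array_equal(A @ x, np.ones(m, dtype=int)):
            val = int(c @ x)
            if best_val is None or val < best_val:
                best_val, best_x = val, x
    return best_val, best_x

# Example with m = 4, k = 2: every column of A has exactly two ones,
# so feasible x pick columns that partition the rows (an exact cover).
A = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1]])
c = np.array([1, 1, 1, 2])
val, x = brute_force_blp(c, A)
print(val, x)  # 2 [1 1 0 0]
```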
|