Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Let's say I have points P1(10,10) and P2(20,20).
I want to find a point P3 which lies between these two points and is 3 units away from P1.
What is the formula to find P3 ?
Known values: X1, X2, Y1 , Y2, distance.
Wanted values: X3, Y3
Here are some hints:
In general terms, the unit vector is
$$\hat{u} = \frac{x_2-x_1}{D}\hat{x} + \frac{y_2-y_1}{D}\hat{y},$$
where $\hat{x}, \hat{y}$ are unit vectors in the $x$ and $y$ directions, and $D = \sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$ is the distance between $P_1$ and $P_2$.
Then, if you're looking for the point a distance $d$ away from $P_1$ along the line through $P_1$ and $P_2$, the vector form of the answer is
$$\vec{P_3} = \vec{P_1} + d\hat{u}.$$
Splitting up the components gives:
$$x_3 = x_1 + \frac{d}{D}(x_2-x_1)$$
$$y_3 = y_1 + \frac{d}{D}(y_2-y_1).$$
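The component formulas translate directly into code; here is a sketch (`point_along` is a name chosen for illustration):

```python
import math

def point_along(p1, p2, d):
    """Return the point at distance d from p1, heading toward p2."""
    (x1, y1), (x2, y2) = p1, p2
    D = math.hypot(x2 - x1, y2 - y1)  # distance between P1 and P2
    t = d / D
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

p3 = point_along((10, 10), (20, 20), 3)
# P3 = (10 + 3/sqrt(2), 10 + 3/sqrt(2)) ≈ (12.1213, 12.1213)
```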
A point on the line through $P_1$ and $P_2$ will have the form $$\mathbf x = (1-\lambda)P_1 + \lambda P_2$$ for some $\lambda \in \mathbb R$. If $\lambda \in [0,1]$, then $\mathbf x$ will be on the line segment between $P_1$ and $P_2$. In particular, when $\lambda = 1$, then $\mathbf x = P_2$.
Let's scale $\lambda$ to $\lambda'$ so that $\mathbf x = P_2$ when $\lambda' = 10\sqrt2$, which is the distance from $P_1$ to $P_2$. In other words, we have $\lambda' = 10\sqrt2\lambda$, so $$\mathbf x = \left(1-\frac{\lambda'}{10\sqrt2}\right)P_1 + \frac{\lambda'}{10\sqrt2}P_2.$$
Now substitute $\lambda' = 3$.
There is a formula, but you'll have to modify it to suit your needs. The distance formula says that the distance between A$(x_1,y_1)$ and B$(x_2,y_2)$ is $\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$.
But here, since $x_2-x_1=y_2-y_1=10$, this is $10\sqrt2$.
So, the distance between A(10,10) and B(20,20) is $10\sqrt2$.
So your point C, which is 3 units away from A, divides the line segment AB into two parts of lengths 3 and $(10\sqrt2-3)$.
The section formula says that if a point C$(x,y)$ divides the segment between two points A$(x_1,y_1)$ and B$(x_2,y_2)$ in the ratio $m:n$, then $$C\left(\frac{mx_2+nx_1}{m+n},\frac{my_2+ny_1}{m+n}\right).$$
Substituting $m=3$ and $n=10\sqrt2-3$, we get the point to be $$C\left({10+{3\over\sqrt2}},\,{10+{3\over\sqrt2}}\right).$$
Konovalov V. N.
Ukr. Mat. Zh. - 2005. - 57, № 12. - pp. 1633–1652
Let $s \in \mathbb{N}$ and let $\Delta^s_+$ be the set of functions $x : I \to \mathbb{R}$ on a finite interval $I$ such that the divided differences
$[x; t_0, ... , t_s ]$ of order $s$ of these functions are nonnegative for all collections of $s + 1$ distinct points $t_0,..., t_s \in I$.
For the classes $\Delta^s_+ B_p := \Delta^s_+ \bigcap B_p$ , where $B_p$ is the unit ball in $L_p$, we obtain orders of the Kolmogorov and linear widths in the spaces $L_q$ for $1 \leq q < p \leq \infty$.
Ukr. Mat. Zh. - 2004. - 56, № 7. - pp. 901–926
Let $s ∈ ℕ$ and $Δ^s_{+}$ be a set of functions $x$ which are defined on a finite interval $I$ and are such that, for all collections of $s + 1$ pairwise different points $t_0,..., t_s \in I$, the corresponding divided differences $[x; t_0,..., t_s ]$ of order $s$ are nonnegative. Let $\Delta^s_{+} B_p := \Delta^s_{+} \bigcap B_p,\; 1 \leq p \leq \infty$, where $B_p$ is the unit ball of the space $L_p$, and let $\Delta^s_{+} L_q := \Delta^s_{+} \bigcap L_q,\; 1 \leq q \leq \infty$. For every $s \geq 3$ and $1 \leq q \leq p \leq \infty$, exact orders of the shape-preserving Kolmogorov widths $$d_n (\Delta^s_{+} B_p, \Delta^s_{+} L_q )_{L_q}^{\text{kol}} := \inf_{M^n \in \mathcal{M}^n} \sup_{x \in \Delta^s_{+} B_p} \inf_{y \in M^n \bigcap \Delta^s_{+} L_q} ||x - y||_{L_q},$$ are obtained, where $\mathcal{M}^n$ is the set of all affine linear manifolds $M^n$ in $L_q$ such that $\dim M^n \leq n$ and $M^n \bigcap \Delta^s_{+} L_q \neq \emptyset$.
Ukr. Mat. Zh. - 2002. - 54, № 5. - pp. 647-655
For Sobolev classes of periodic functions of one variable with restrictions on higher derivatives in $L_2$, we determine the exact orders of relative widths characterizing the best approximation of a fixed set by its sections of given dimension in the spaces $L_q$.
Ukr. Mat. Zh. - 2001. - 53, № 11. - pp. 1575 -1579
We present a class of functions for which trigonometric widths decrease to zero slower than the Kolmogorov widths in power scale.
Continuation of functions of several variables with preservation of differential-difference properties
Ukr. Mat. Zh. - 1984. - 36, № 3. - pp. 304 - 308
Approximation of functions of several variables by polynomials with preservation of the differential-difference properties
Ukr. Mat. Zh. - 1984. - 36, № 2. - pp. 154 - 160
Ukr. Mat. Zh. - 1981. - 33, № 6. - pp. 757-764
Ukr. Mat. Zh. - 1980. - 32, № 1. - pp. 104 - 110
Ukr. Mat. Zh. - 1978. - 30, № 5. - pp. 668–670
Method of expanding unity in regions with piecewise smooth boundaries as sums of algebraic polynomials of two variables having certain properties of a kernel
Ukr. Mat. Zh. - 1973. - 25, № 2. - pp. 179—192
Definition:Arbitrarily Small
Definition
We say that
$P \left({x}\right)$ holds for arbitrarily small $x$ (or: there exist arbitrarily small $x$ such that $P \left({x}\right)$ holds) if and only if: $\forall \epsilon \in \R_{> 0}: \exists x \in \R: \left\lvert{x}\right\rvert \le \epsilon: P \left({x}\right)$
That is:
For any positive real number $a$, there exists a real number whose absolute value is not more than $a$, such that the property $P$ holds.
or, more informally and intuitively:
However small a number you can think of, there will be an even smaller one for which $P$ still holds.
Considering the top answer to the question “If xor-ing a one way function with different input, is it still a one way function?”…
The function is no longer one-way.
We build a counterexample in the following way. Assume $g$ is a one-way function that preserves size, and define $f$ on input $w=bx_1x_2$ in the following way: $$f(bx_1x_2) = \begin{cases} g(x_1)\,x_2 & b=0 \\ x_1\, g(x_2) & b=1 \end{cases}$$ (assuming $b\in\{0,1\}$ and $|x_1|=|x_2|$). It is easy to see that $f$ is also one-way: to invert it, you need to either invert $g$ on the first half or invert $g$ on the second half.
Now we show how to invert $h$. Assume you are given $h(u,v)=Z$, we write it as $h(u,v)= z_1z_2$ with $|z_1|=|z_2|=n$. Then a possible preimage of $Z$ is $$u=0 \,0^n \,\langle g(0^n)\oplus z_2\rangle$$ $$v=1 \, \langle g(0^n)\oplus z_1\rangle \, 0^n$$
because $f(u) = g(0^n)\, \langle g(0^n)\oplus z_2\rangle$ and $f(v) = \langle g(0^n)\oplus z_1\rangle \, g(0^n)$ thus their XOR gives exactly $z_1\,z_2$ as required.
Wouldn't this counter-example imply that we've inverted $f$?
Consider the reduction where we take in $f(x_1)$ and $f(x_2)$: then we could compute $f(x_1) \oplus f(x_2)$, invert this to $x_1x_2$, and then we have inverted $f$ as well.
Is the quoted answer correct? If so, why, given my considerations outlined above?
We solve the problem in two steps. First we solve the problem of a random walk with no cliff. Then we show how the solution to the problem with the cliff can be expressed in terms of the solution without the cliff.
Unconstrained walk
Consider the problem of a random walker moving without a cliff, i.e. just an unconstrained random walker. Denote the probability of arriving at point $j$, having started at point $i$, after $n$ steps, by the symbol $p_{ji}(n)$. Let $k$ denote the number of rightward steps. Then the number of leftward steps is $n - k$. The number of ways we can arrange $k$ rightward and $n - k$ leftward steps is
$$\frac{n!}{k! (n - k)!}$$
and the probability of getting any such string of steps is
$$p^{n - k}q^{k} \, ,$$
so the probability of such a walk is
$$p_{ji}(n) = p^{n - k}q^{k} \frac{n!}{k! (n - k)!} \, . $$
The displacement $d$ of this walk is the distance between the end and start points, $d \equiv j - i$. This displacement must also equal the number of rightward steps minus the number of leftward steps,
\begin{align}d &= k - (n - k) \\\implies k &= (d + n) / 2 \, .\end{align}
Therefore,$$p_{ji}(n) = p^{(n - d)/2} q^{(n + d)/2}\frac{n!}{\Big(\frac{n - d}{2}\Big)! \Big(\frac{n + d}{2}\Big)!} \, . $$
Note that this expression is only valid when $n$ and $d$ are both even or both odd. Otherwise, $p_{ji}(n)$ is zero. Note also that the problem has translational symmetry; $i$ and $j$ do not appear in the expression for $p_{ji}(n)$. The random walk probabilities depend only on the translation, so we replace the $ji$ subscript with $d$, writing $$p_d(n) = p^{(n - d)/2} q^{(n + d)/2}\frac{n!}{\Big(\frac{n - d}{2}\Big)! \Big(\frac{n + d}{2}\Big)!} \, . $$
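As a sanity check (a sketch; here $q$ is the rightward-step probability, as above), the closed form agrees with brute-force enumeration over all $2^n$ step strings:

```python
from itertools import product
from math import comb

def p_d_formula(d, n, p, q):
    """Closed form for the unconstrained-walk probability; zero when n and d differ in parity."""
    if (n + d) % 2 or abs(d) > n:
        return 0.0
    k = (n + d) // 2              # number of rightward steps
    return p ** (n - k) * q ** k * comb(n, k)

def p_d_brute(d, n, p, q):
    """Sum the probabilities of every step string with net displacement d."""
    total = 0.0
    for steps in product((-1, +1), repeat=n):   # -1: left (prob p), +1: right (prob q)
        if sum(steps) == d:
            k = steps.count(+1)
            total += p ** (n - k) * q ** k
    return total

p, q = 0.3, 0.7
assert abs(p_d_formula(1, 5, p, q) - p_d_brute(1, 5, p, q)) < 1e-12
assert p_d_formula(0, 5, p, q) == 0.0   # parity mismatch: n odd, d even
```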
Generating functions
Of course, we need to solve the original problem, which includes the cliff. Let $f_{ji}(n)$ denote the probability that the walker arrives at point $j$ for the first time, having started at point $i$, after $n$ steps. Again, by translational symmetry we can write this as $f_d(n)$. The answer to the question "what is the probability that the walker ever falls off the cliff?" is $$\text{Probability of falling} = \sum_{n=0}^\infty f_{-1}(n) \, .$$
Now here's the amazing part: if we define generating functions for $p_d(n)$ and $f_d(n)$ as $$P_d(z) \equiv \sum_{n=0}^\infty p_d(n) z^n \qquad F_d(z) \equiv \sum_{n=0}^\infty f_d(n) z^n \, ,$$ then it turns out that $^{[a]}$ $$F_d(z) = P_d(z) / P_0(z) \, .$$
This is awesome because the probability that the walker ever falls off the cliff is \begin{align}\text{Probability of falling}&=\sum_{n=0}^\infty f_{-1}(n) \\&= \lim_{z \rightarrow 1} F_{-1}(z) \\&= \lim_{z \rightarrow 1} P_{-1}(z) / P_0(z) \\&= \lim_{z \rightarrow 1} \left( \sum_{n=0}^\infty p_{-1}(n)z^n \right) / \left( \sum_{n=0}^\infty p_0(n) z^n \right) \, .\end{align}
Thus, we've written the solution to the problem purely in terms of the unconstrained probabilities, for which we already found a solution! All that remains is doing the sums.
Solution
It turns out that
$$P_{-1}(z) = \sum_{n=0}^\infty p_{-1}(n)z^n = \frac{1}{2qz} \frac{1 - \sqrt{1 - 4 pqz^2}}{\sqrt{1 - 4pqz^2}}$$
and
$$P_0(z) = \sum_{n=0}^\infty p_0(n)z^n = \frac{1}{\sqrt{1 - 4pqz^2}}$$
so
\begin{align}\text{Probability of falling}&= \lim_{z \rightarrow 1} P_{-1}(z) / P_0(z) \\&= \lim_{z \rightarrow 1} \frac{1 - \sqrt{1 - 4pqz^2}}{2qz} \\&= \lim_{z \rightarrow 1} \frac{1 - \sqrt{1 - 4p(1-p)z^2}}{2(1-p)z} \\&= \frac{1 - 2\sqrt{(p - 1/2)^2}}{2(1-p)} \\&= \left\{ \begin{array}{ll} \frac{p}{1-p} & p < 1/2 \\ 1 & p > 1/2 \end{array} \right.\end{align}
which solves the problem.
It's pretty cool that this method got the kink at $p=1/2$ without us having to make any extra logical arguments.
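The closed form can be cross-checked with a quick Monte Carlo (a sketch; here $p$ is the probability of a step toward the cliff, matching the result above, and truncating walks at a finite length biases the estimate slightly low):

```python
import random

def fall_probability_closed(p):
    """Closed form: probability of ever reaching -1; p = toward-cliff step probability."""
    return p / (1 - p) if p < 0.5 else 1.0

def fall_probability_mc(p, walks=20000, max_steps=2000, seed=0):
    """Monte Carlo estimate, truncating each walk at max_steps."""
    rng = random.Random(seed)
    fell = 0
    for _ in range(walks):
        pos = 0
        for _ in range(max_steps):
            pos += -1 if rng.random() < p else +1
            if pos == -1:
                fell += 1
                break
    return fell / walks

p = 0.3
assert abs(fall_probability_mc(p) - fall_probability_closed(p)) < 0.05  # 3/7 ≈ 0.43
```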
The limit $z \rightarrow 1$
It's interesting to think about the meaning of the limit $z \rightarrow 1$. The sums over $n$ in the generating functions involve the factor $z^n$. For $|z|<1$, the sums de-emphasize terms with large numbers of steps. In other words, $z<1$ means that we don't count long walks as much when computing the probability of falling off the cliff. In Figure 1 we plot the probability of falling off the cliff for various values of $z$. For low values of $z$ the probability of falling off the cliff is low for all $p$. This makes sense because low $z$ means that we strongly de-emphasize longer walks; even with $p=1$ we may not fall off the cliff because $z<1$ means we don't always even count the first step. For higher values of $z$ the probability to fall off increases. At $z=1$, which represents the original problem, the curve forms a cusp at $p=1/2$. It is interesting that the sequence of curves for values of $z$ less than one has no cusp, yet the limiting curve for $z=1$ does have a cusp. For $z>1$ the curve diverges.
Figure 1: Absorption probability as a function of $p$ for a few values of $z$.

Higher moments
Note that if we want to compute higher moments of the number of steps of the random walk we can do it by differentiating the expression we already found.For example, the mean number of steps before falling off the cliff is
\begin{align}\sum_{n=0}^\infty f_{-1}(n) n&= \lim_{z \rightarrow 1} \frac{d}{dz} \sum_{n=0}^\infty f_{-1}(n) z^n \\&= \lim_{z \rightarrow 1} \frac{d}{dz} F_{-1}(z) \\&= \frac{p}{\sqrt{(p - 1/2)^2}} - \frac{1 - 2\sqrt{(p - 1/2)^2}}{2(1 - p)} \\&= \left\{ \begin{array}{ll}\frac{p}{p-1/2} - 1 & p > 1/2 \\\frac{p}{1/2 - p} - \frac{p}{1 - p} & p < 1/2 \, .\end{array} \right.\end{align}
Note that this function goes to $1$ as $p \rightarrow 1$, which makes sense because the walker has to fall off on the first step.The other asymptotic behaviors look wrong though, so maybe I messed up some algebra.
$[a]$: You can prove this by thinking of every walk as two parts: a first part which gets to $j$ for the first time, and then a second part which wanders off but eventually ends up at $j$.
Consider the following equation on the circle:
$$\dfrac{\partial p(x,t)}{\partial t} = a(x)\dfrac{\partial p(x,t)}{\partial x} \equiv L(p) \enspace ,$$
where $L$ is the operator acting on $p(x,t)$.
Now, I would like to create a matrix of this operator $L$ using FFT/IFFT in Matlab. Note that since our domain is the circle, $x\in[0,2\pi]$.
Taking Fourier transform, we get $\hat L(p)= \left[a(x)\dfrac{\partial p(x,t)}{\partial x}\right]^{\wedge}= \left[ a(x) (D_x \hat{p})^{V}\right]^{\wedge}$, where $\wedge$ and $V$ are FFT and IFFT operations, and $D_x$ is the $x$-derivative matrix in Fourier (formed by multiplying the $k$th fourier coefficient by $ik$).
My question is: From the expression I was able to solve the PDE using an ODE solver such as ODE45. But, I am interested in extracting the operator for $L$ to be used for other purposes. Hence, I am trying to extract matrix $\hat{L}$ in the following equation
$$\dfrac{\partial \hat{p}(x,t)}{\partial t}=\hat{L}\hat{p}$$
Any ideas?
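One generic route (a sketch in Python/NumPy rather than Matlab, with an arbitrary example choice of $a(x)$): since $\hat p \mapsto \hat L \hat p$ is linear, applying it to each coordinate basis vector yields the columns of the matrix $\hat L$.

```python
import numpy as np

N = 16
x = 2 * np.pi * np.arange(N) / N           # periodic grid on the circle
a = 1.0 + 0.5 * np.sin(x)                  # example coefficient a(x) (my choice)
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers 0, 1, ..., -1

def L_hat(p_hat):
    """Apply L-hat: differentiate in Fourier, multiply by a(x) in physical space, FFT back."""
    p_x = np.fft.ifft(1j * k * p_hat)      # dp/dx in physical space
    return np.fft.fft(a * p_x)

# Columns of the matrix are the images of the coordinate basis vectors.
L_mat = np.column_stack([L_hat(e) for e in np.eye(N, dtype=complex)])

# Linearity check: the extracted matrix reproduces the operator on a random vector.
v = np.random.default_rng(0).standard_normal(N) + 0j
assert np.allclose(L_mat @ v, L_hat(v))
```

The same basis-vector trick works in Matlab by passing the columns of `eye(N)` through the FFT-based right-hand-side function.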
You could probably expect that $\langle v_r\rangle=\langle v_z\rangle=0$--that is, most of your velocities will be in the $\hat\phi$ direction (assuming you are using cylindrical coordinates).$^1$
For stable orbits, the kinetic and potential energies must be equal, so you should end up seeing that$$v(r)=\sqrt{-2\Phi(r)}$$where $\Phi$ is the gravitational potential (and there are a few choices). You can also add normally-distributed values to the velocity components (as suggested here) to give small perturbations to the orbits. It is pretty straightforward to convert to another coordinate system from here, if need be.
I also discuss in this related question an algorithm to pick the velocities assuming a Plummer model (discussed in the first link in the post). I expect a similar algorithm could be developed for the alternative potentials, but haven't worked the math of it.
Note also that, while the system of equations for the $N$-body problem is pretty straightforward,\begin{eqnarray}\mathbf v&=\frac{\mathrm d\mathbf x}{\mathrm d t}\\\mathbf F&=m\frac{\mathrm d\mathbf v}{\mathrm dt}\end{eqnarray}writing it in code is actually quite hard for large $N$. The force $\mathbf F$ is computed between each pair of objects in the $N$-body system:$$\mathbf F_i=-\sum_{j\neq i}\frac{Gm_im_j}{\left|\mathbf x_{ij}\right|^2}\hat{\mathbf x}_{ij}$$where $\mathbf x_{ij}=\mathbf x_i-\mathbf x_j$ and $\hat{\mathbf{x}}_{ij}$ is the direction of the force between bodies $i$ and $j$. This means you automatically have at least an $\mathcal O(N^2)$ problem (at least in time; it's $\mathcal O(N)$ in memory).$^2$
A real galaxy has $N\sim10^{11}$ stars, which is probably not possible for any modern computer to handle.$^3$ I believe that $N\sim10^6$ is a good round number for modelling galaxies, but these can still take hours on computer clusters (depending on what the simulation does) and I'll go out on a limb and guess that you don't have a computer cluster available to you, so you may want to try $N\sim10^3$ or $N\sim10^4$ instead--but don't be surprised if this still takes a very long time!
1. It is probably easiest to use $r,\phi,z$ coordinates for generating the velocities, then transforming them back to Cartesian (assuming you are working in Cartesian).
2. There are tricky algorithms that can reduce this to $\mathcal O\left(n\log n\right)$, but this may be a bit much for your purposes.
3. Though there was the one trillion body simulation, they used the whole K computer for something like a month straight (if I recall the news articles correctly). I doubt you have that type of resources available.
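The pairwise force sum above can be sketched as a direct $\mathcal O(N^2)$ loop (a sketch in units with $G=1$; the softening length `eps` is an addition of mine to tame close encounters):

```python
import numpy as np

def accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct-sum O(N^2) pairwise gravitational accelerations.

    pos: (N, 3) positions, mass: (N,) masses.
    eps is a softening length (an assumption here, not in the formula above)
    that avoids the singularity when two bodies get very close."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                            # vectors x_j - x_i
        r2 = np.einsum('ij,ij->i', d, d) + eps**2   # softened squared distances
        r2[i] = np.inf                              # skip self-interaction
        acc[i] = G * np.sum((mass / r2**1.5)[:, None] * d, axis=0)
    return acc
```

For two unit masses one unit apart, each body is pulled toward the other with acceleration of magnitude close to $G$, as expected.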
My problem should probably be built up from the beginning, so lets start there. I performed a certain experiment 25 times. Every time, the experiment consists of 5000 measurements, and each measurement returns either 1 ('yes') or 0 ('no'). I know that each time I measure 1, the probability that this measurement is correct is $F_1$, while if I measure 0 I know that it's correct with probability $F_0$.
Now, subsequently I want to compute the autocorrelation function of these measurements. I use the following formula
with a slightly different normalization, but in principle that should not matter, because what I want to do is fit this autocorrelation to an exponential decay and find the decay time. To do so, I thought I had two options: either I fit the 25 datasets separately, and then find the mean of the decay time $t_1$ and maybe say something about its variance, or I can find the mean autocorrelation of the 25 datasets, and fit that one. I've decided that the latter is probably the best option, as the individual datasets can be quite noisy with strange fluctuations, while the mean looks much cleaner.
But now I'm stuck wondering what I should be doing to find the uncertainty in the fit. Should I have introduced some sort of uncertainty due to the imperfect measurements ($F_0$ and $F_1$), or should I have introduced some sort of standard error of the mean for the 'mean autocorrelation' data? That last one seems plausible, as of course there is also a standard deviation in the 25 datasets, but then I get a little confused as to how I should do that. The formula for the standard error of the mean seems to be $\frac{\sigma}{\sqrt{n}}$, but this is a bit unfair as in my 5000 measurements, the autocorrelation drops to 0 after around 50 measurements. For the fit I therefore also only use the first ~100 points of the autocorrelation, as there's no real need to fit the next 4000 points that are all pretty much equal to 0. So should I instead only calculate the mean and standard error of the first 100 points? Would that be a fair way of finding the uncertainty? The subsequent fitting method will already give error bars for the $t_1$, but this of course heavily depends on the uncertainty in the data.
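For concreteness, here is the "mean autocorrelation plus per-lag standard error" computation I have in mind (a sketch; the 0/1 array below is simulated stand-in data, not my actual measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=(25, 5000)).astype(float)  # stand-in for the 25 runs

def autocorr(x, nlags):
    """Normalized autocorrelation of a 1-D series, lags 0..nlags-1."""
    x = x - x.mean()
    c = np.correlate(x, x, mode='full')[x.size - 1:]
    return c[:nlags] / c[0]

nlags = 100                                               # only the first ~100 lags matter
acfs = np.array([autocorr(run, nlags) for run in data])   # shape (25, nlags)
mean_acf = acfs.mean(axis=0)
sem_acf = acfs.std(axis=0, ddof=1) / np.sqrt(len(acfs))   # per-lag standard error of the mean
```

The per-lag `sem_acf` could then serve as the uncertainty fed into a weighted exponential fit of `mean_acf`.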
A final sort of sidetrail: what quantitative measure of goodness of fit should I use for exponential decay? Reduced chi-square, adjusted R-squared, or something else entirely? I suppose this is not intended for dsp though, more for a statistics forum, so feel free to ignore it.
I want to know if I solved the following exercise correctly so it would be nice, if someone corrects it. Especially at number $b$ I'm not sure, if this is also true for infinite dimensions.
Prove or disprove the following statements:
a) Let $(X,T_x)$ and $(Y,T_y)$ be topological spaces and $f:X\to Y$ a continuous function. If $X$ is a $T_2$ space then $f(X)$ is $T_2$ too.
Wrong. Let $X=\mathbb{R}$ with the standard topology, $Y=\mathbb{R}$ with the cofinite topology, and let $f$ be the identity map. Since the cofinite topology is coarser than the standard topology, $f$ is continuous, but $f(X)=Y$ is not $T_2$.
b) Let $(X,T_x)$ and $(Y,T_y)$ be topological spaces and $f:X\to Y$ a continuous function. If $X$ is a compact space then $f(X)$ is compact too.
True. Let $f(X)\subset\bigcup\limits_{i\in I} U_i$ be a cover by open sets. By continuity each $f^{-1}(U_i)$ is open, and $X\subset\bigcup\limits_{i\in I} f^{-1}(U_i)$. Because $X$ is compact, there are finitely many indices $i_1,\dots,i_n$ with $X\subset\bigcup\limits_{k=1}^n f^{-1}(U_{i_k})$, so it follows that $f(X)\subset\bigcup\limits_{k=1}^n U_{i_k}$.
c) Let $(X,d_x)$ and $(Y,d_y)$ be metric spaces and $f:X\to Y$ a continuous function. If $X$ is a precompact space then $f(X)$ is precompact too.
Wrong. Let $S=[-\frac{\pi}{2},\frac{\pi}{2}]$ and $X=(-\frac{\pi}{2},\frac{\pi}{2})\subset S$. Then $\overline{X}=[-\frac{\pi}{2},\frac{\pi}{2}]$ is compact. Let $f(x)=\tan(x)$, then $f$ is continuous. But $f(X)=\mathbb{R}$ is not precompact.
d) Let $(X,d_x)$ and $(Y,d_y)$ be metric spaces and $f:X\to Y$ a continuous function. If $X$ is a complete space then $f(X)$ is complete too.
Wrong. Let $X=\mathbb{R}$ and $Y=\mathbb{R}$, both with the euclidean metric, and let $f(x)=\arctan(x)$. Obviously $f$ is continuous and $X$ is complete, but $f(X)=(-\frac{\pi}{2},\frac{\pi}{2})$ is not complete, since it is not closed in $\mathbb{R}$.
Jyrki Lahtonen
General non-sense:
Mostly I teach here. I want to encourage beginning students to think for themselves, so I use a lot of hints and comments. More advanced questions I often just answer.
Relevant personal history:
PhD from Notre Dame in '90.
Drifted from representation theory of algebraic groups to applications of algebra into telecommunications, mostly coding theory, and lately mostly teaching at college level.
3 graduate students with awarded PhDs. Technically it's now 4, but my previous students did most of the work with the latest one, so I should not include him.
I have mostly worked at our local University at Turku, Finland. At one point I tried working for Nokia Research Center. It was ok, but an old dog didn't learn all the tricks, and then they downsized, so I returned to the Uni as a tenured lecturer.
Rusko, Finland
I have a bunch of points in $\mathbb{R}^3$ that I would like to translate and rotate so that their center is at the origin and the variance along the $x$ and $y$ axes are maximal (greedy, and in that order). To accomplish this I am trying to use python's principal components analysis algorithm. It is not behaving as I expect it to, most likely due to some misunderstanding about what PCA actually does on my part.
The Problem: When I center and then rotate the data, the variance along the third component is greater than along the second. This means that, once centered and rotated, there is more variance in the data along the $z$ axis than there is along the $y$. In other words, the rotation is not the correct one.

What I am Doing: Python's PCA routine returns an object (say myPCA) with several attributes. myPCA.Y is the data array, but centered, scaled, and rotated (in that order). I do not want the data to be scaled. I simply want a translation and a rotation.
import numpy as np
from matplotlib.mlab import PCA

# manufactured data producing the problem
data_raw = np.array([
    [80.0, 50.0, 30.0],
    [50.0, 90.0, 60.0],
    [70.0, 20.0, 40.0],
    [60.0, 30.0, 45.0],
    [45.0, 60.0, 20.0]
])

# obtain the PCA
myPCA = PCA(data_raw)

# center the raw data
centered = np.array([point - myPCA.mu for point in data_raw])

# rotate the centered data
centered_and_rotated = np.array([np.dot(myPCA.Wt, point) for point in centered])

# the variance along axis 0 should now be greater than along 1, and so on
variances = np.array([np.var(centered_and_rotated[:, i]) for i in range(3)])

# they are not:
print(variances[1] > variances[2])  # False; I want this to be True

# Now look at the PCA output, Y. This is centered, scaled, and rotated.
# The variances decrease in magnitude, as I want them to:
variances2 = np.array([np.var(myPCA.Y[:, i]) for i in range(3)])

# This looks good, but the coordinates have been scaled.
# Let's try to get from the raw coordinates to the PCA output Y.
# mu is the vector of means of the raw data, and sigma is the vector of
# standard deviations of the raw data along each coordinate direction.
guess = np.array([np.dot(myPCA.Wt, (xxx - myPCA.mu) / myPCA.sigma) for xxx in data_raw])
print(guess == myPCA.Y)  # all True
The last two lines in the above show that we may take a point $\mathbf{x}$ from its representation in the raw data input into its representation $\mathbf{x}'$ in terms of the PCA axes via $$ \mathbf{x}' = \mathrm{R}\cdot\left((\mathbf{x}-\boldsymbol{\mu}) / \boldsymbol{\sigma} \right) $$
where $\mathrm{R}$ is myPCA.Wt, the weight matrix, $\boldsymbol{\mu}$ is the vector of means of the original data along each coordinate axis, $\boldsymbol{\sigma}$ is the vector of standard deviations of the original data along each coordinate axis, and the division is element-wise. In order to write this in standard mathematical notation, let's replace this division by multiplication: $$ \mathbf{x}' = \mathrm{R}\cdot\left(\mathrm{D}\cdot(\mathbf{x}-\boldsymbol{\mu}) \right) $$ where $\mathrm{D}$ is a diagonal matrix whose diagonal entries are $1/\sigma_i$.
This notation makes clear the problem: to undo the scaling, I need to act on the RHS above with $\mathrm{R}\mathrm{D}^{-1}\mathrm{R}^{-1}$. This will return me to the problem situation, in which the variance is greater along the $z$ axis than the $y$.
Is there a way to use PCA to get what I want, or do I need to use another method?
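For reference, the behaviour I'm after seems achievable with a plain eigendecomposition of the covariance matrix, skipping any standardization (a sketch; this bypasses mlab's PCA entirely):

```python
import numpy as np

data_raw = np.array([
    [80.0, 50.0, 30.0],
    [50.0, 90.0, 60.0],
    [70.0, 20.0, 40.0],
    [60.0, 30.0, 45.0],
    [45.0, 60.0, 20.0],
])

centered = data_raw - data_raw.mean(axis=0)   # translation only, no scaling

# Eigenvectors of the covariance matrix, sorted by decreasing eigenvalue,
# give the rotation onto the principal axes.
evals, evecs = np.linalg.eigh(np.cov(centered, rowvar=False))
order = np.argsort(evals)[::-1]
rotated = centered @ evecs[:, order]

variances = rotated.var(axis=0)
assert variances[0] >= variances[1] >= variances[2]   # greedy ordering holds
```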
I think this is a reverse product rule but I could not figure out how to reverse this.
$$\int_0^\infty \frac{\beta^\alpha}{\Gamma(\alpha)}\,{\theta^{-(\alpha+1)}}\,e^{-\frac{\beta}{\theta}}\,d\theta$$
I pulled out the constants but then got stuck here: $$\frac{\beta^\alpha}{\Gamma(\alpha)}\int_0^\infty {\theta^{-(\alpha+1)}}\,e^{-\frac{\beta}{\theta}}\,d\theta$$
$\alpha$ and $\beta$ are constants while $\Gamma$ is a function.
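For reference, one route (a sketch): substitute $u = \beta/\theta$, so that $\theta = \beta/u$, $d\theta = -\frac{\beta}{u^2}\,du$, and the limits swap, turning the remaining integral into a gamma integral.

```latex
\int_0^\infty \theta^{-(\alpha+1)} e^{-\beta/\theta}\,d\theta
  = \int_0^\infty \left(\frac{u}{\beta}\right)^{\alpha+1} e^{-u}\,\frac{\beta}{u^2}\,du
  = \beta^{-\alpha}\int_0^\infty u^{\alpha-1} e^{-u}\,du
  = \beta^{-\alpha}\,\Gamma(\alpha)
```

Multiplying by the constant out front gives $\frac{\beta^\alpha}{\Gamma(\alpha)}\cdot\beta^{-\alpha}\Gamma(\alpha)=1$: the integrand is exactly the inverse-gamma density, so the whole integral equals $1$.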
Mechanical Vibrations GATE ME Quiz 1
Here is a quiz to help you prepare for your upcoming
GATE 2019 exam. The GATE ME paper has several subjects, each one is equally important. However, one of the most important subjects in GATE ME is Mechanical Vibrations. The subject is vast, but practice makes tackling it easy.
This quiz contains important questions which match the pattern of the GATE exam. Check your preparation level in every chapter of Mechanical Vibrations for GATE ME by taking the quiz and comparing your ranks. Learn about Free vibration, forced vibration, degree of freedom, effect of damping, vibration isolation, resonance, speeds of shafts and more!
For an underdamped harmonic oscillator, resonance
The natural frequency of the system shown below is
The equation of motion of a harmonic oscillator is given by
\(\frac{{{d^2}x}}{{d{t^2}}} + 2\zeta {\omega _n}\frac{{dx}}{{dt}} + \omega _n^2x = 0\)
and the initial conditions at t = 0 are \(x\left( 0 \right) = X,\;\;\frac{{dx}}{{dt}}\left( 0 \right) = 0\). The amplitude of x(t) after n complete cycles is
The natural frequency of the spring mass system shown in the figure is closest to
A uniform rigid rod of mass m = 1 kg and length L = 1 m is hinged at its centre and laterally supported at one end by a spring of spring constant k = 300 N/m. The natural frequency $\omega_n$ in rad/s is
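For the rod question, the standard small-angle model (a sketch, assuming the usual setup: uniform rod hinged at its centre, spring acting at one end) gives $I\ddot\theta = -k(L/2)^2\theta$ with $I = mL^2/12$, so $\omega_n = \sqrt{3k/m}$:

```python
import math

m, L, k = 1.0, 1.0, 300.0
I = m * L**2 / 12                 # moment of inertia of a uniform rod about its centre
k_torsional = k * (L / 2)**2      # effective torsional stiffness from the end spring
omega_n = math.sqrt(k_torsional / I)
# omega_n = sqrt(3*k/m) = sqrt(900) = 30 rad/s
```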
As we all know, practice is the key to success. Therefore, boost your preparation by starting your practice now.
Furthermore, chat with your fellow aspirants and our experts to get your doubts cleared on Testbook Discuss:
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Let $A\in M_{n\times n}(\mathbb{C})$. Suppose that $A^2=0$. Show that $\lambda$ is an eigenvalue of $A+I$ if and only if $\lambda=1$
As $A^2=0$, we have $A^2-t^2I=-t^2I$, i.e., $(A+tI)(A-tI)=-t^2I$. This implies that $A-tI$ is invertible iff $t\neq 0$.
So, $(A+I)-\lambda I=A-(\lambda-1)I$ is invertible iff $\lambda-1\neq 0$, i.e., $\lambda$ is an eigenvalue of $A+I$ iff $\lambda=1$. So, $1$ is the only eigenvalue of $I+A$.
After posting this question I got this idea. Let me know if this is fine.
Let $p$ be given by $p=2^{89}-1$ and note that it is a Mersenne prime. The problem is to find the number of incongruent solutions to$$x^2\equiv 5 \pmod{1331p^3}$$I began the problem by splitting it up into the congruences$$x^2\equiv 5 \pmod{1331} $$and$$ x^2\equiv 5 \pmod{p^3}$$I found that $x\equiv 4,7\pmod{11}$ are solutions to $x^2\equiv 5\pmod{11}$ and then used Hensel's Lemma all the way up to get that $x\equiv 1258, 73\pmod{1331}$ are solutions to the equation $\pmod{1331}$.
I think all I have to do is solve the second equation and use the Chinese Remainder Theorem at the end but I am stuck because I have no idea where to begin in solving $x^2\equiv 5\pmod{p}$ as p is such a large number. Any help is appreciated!
Consider the Legendre symbol $\left(\frac{5}{p}\right)$. By quadratic reciprocity $$\left(\frac{5}{p}\right)\Big(\frac{p}{5}\Big)=(-1)^{\left(\frac{5-1}{2}\right)\left(\frac{p-1}{2}\right)}=1 \implies \left(\frac{5}{p}\right)=\left(\frac{p}{5}\right).$$ But $p=2^{89}-1 = 2(2^4)^{22}-1 \equiv 2\cdot 1-1 \equiv 1 \pmod{5}$, since $2^4=16\equiv 1\pmod 5$. Thus $$\left(\frac{5}{p}\right)=\left(\frac{p}{5}\right)=\left(\frac{1}{5}\right)=1.$$ Thus $5$ is indeed a QR modulo $p$. Since $p$ is prime, $x^2 \equiv 5 \pmod{p}$ has two incongruent solutions. Now you can apply Hensel to see if you will continue to have two solutions as you lift from $p$ to $p^3$.
If you have two solutions for $p^3$ as well, then in all you will have $2\times 2=4$ solutions, combining them by the Chinese Remainder Theorem with the two solutions from the previous congruence modulo $11^3=1331$.
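Two steps of this argument can be machine-checked (a sketch; the brute-force search is feasible because the modulus $1331$ is small):

```python
# Euler's criterion: 5^((p-1)/2) ≡ 1 (mod p) exactly when 5 is a QR mod p.
p = 2**89 - 1
assert pow(5, (p - 1) // 2, p) == 1

# The two square roots of 5 modulo 11^3 = 1331 obtained by Hensel lifting.
roots_mod_1331 = sorted(x for x in range(1331) if (x * x - 5) % 1331 == 0)
assert roots_mod_1331 == [73, 1258]
```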
Suppose a system $$Ax=b$$ is given, with $A\in\mathbb{R}^{n\times n}$ being a symmetric positive-definite matrix, and some non-zero $b\in\mathbb{R}^n$. The gradient method with optimum step length can be written as $$x_{k+1}=x_k-\alpha_k\cdot g_k,$$ with $g_k=Ax_k-b$ and $\alpha_k=\frac{g_k^Tg_k}{g_k^TAg_k}$.
Can it be proven that the above iterative procedure converges to $\bar{x}=A^{-1}b$, regardless of the initialization?
Can the above iterative scheme be regarded as a linear fixed-point iteration? In fact, what is the precise meaning of linear in the term linear fixed-point iteration?
Practice Question on Computing the Output of an LTI system by Convolution
The unit impulse response h[n] of a DT LTI system is
$ h[n]= \frac{1}{3^n} \ $
Use convolution to compute the system's response to the input
$ x[n]= \delta[n+2]+\delta[n+1]+\delta[n]+\delta[n-1]. \ $
You will receive feedback from your instructor and TA directly on this page. Other students are welcome to comment/discuss/point out mistakes/ask questions too!
Answer 1
$ y[n]=h[n]*x[n]=\sum_{k=-\infty}^\infty \frac{1}{3^k}\delta[n+2-k]+\sum_{k=-\infty}^\infty \frac{1}{3^k}\delta[n+1-k]+\sum_{k=-\infty}^\infty \frac{1}{3^k}\delta[n-k]+\sum_{k=-\infty}^\infty \frac{1}{3^k}\delta[n-1-k] $
$ y[n]=\frac{1}{3^{n+2}}+\frac{1}{3^{n+1}}+\frac{1}{3^{n}}+\frac{1}{3^{n-1}} $
--Cmcmican 20:25, 31 January 2011 (UTC)
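A numerical spot check of the shifted-sum structure in Answer 1 (a sketch; I assume a causal truncation $h[n]=(1/3)^n$ for $0\le n<20$, and track the $n=-2$ starting offset of the first impulse separately):

```python
import numpy as np

# Convolving with x[n] = δ[n+2]+δ[n+1]+δ[n]+δ[n-1] just sums four
# shifted copies of h; indices are kept relative to the first impulse.
h = (1.0 / 3.0) ** np.arange(20)    # causal truncation of h[n]
x = np.ones(4)                      # the four unit impulses

y = np.convolve(x, h)

# Sum of the four shifted copies of h, one per impulse in x.
manual = np.zeros(len(x) + len(h) - 1)
for shift in range(len(x)):
    manual[shift:shift + len(h)] += h
assert np.allclose(y, manual)
```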
Answer 2
Write it here.
Answer 3
Write it here.
A two-dimensional model for extensional motion of a pre-stressed incompressible elastic layer near its cut-off frequencies is derived. Leading-order solutions for displacement and pressure are obtained in terms of the long wave amplitude by direct asymptotic integration. A governing equation, together with corrections for displacement and pressure, is derived from the second-order problem. A novel feature of this (two-dimensional) hyperbolic governing equation is that, for certain pre-stressed states, time and one of the two (in-plane) spatial variables can change roles.
We consider the following system of equations $$A_t= A_{xx} + A - A^3 -AB,\quad x\in \mathbb{R},\ t>0,$$ $$B_t = \sigma B_{xx} + \mu (A^2)_{xx},\quad x\in \mathbb{R},\ t>0,$$ where $\mu > \sigma >0$. It plays an important role as a Ginzburg-Landau equation with a mean field in several fields of the applied sciences. We study the existence and stability of periodic patterns with an arbitrary minimal period L. Our approach is by combining methods of nonlinear functional analysis, such as nonlocal eigenvalue problems and the variational characterization of eigenvalues, with Jacobi elliptic integrals.
A numerical matching technique known as point collocation is used to model mathematically large dissipative splitter silencers of a type commonly found in HVAC ducts. Transmission loss predictions obtained using point collocation are compared with exact analytic mode matching predictions in the absence of mean flow. Over the frequency range in which analytic mode matching predictions are available, excellent agreement with point collocation transmission loss predictions is observed for a range of large splitter silencers.
The Random Matrix Model approach to Quantum Chromodynamics (QCD) with non-vanishing chemical potential is reviewed. The general concept using global symmetries is introduced, as well as its relation to field theory, the so-called epsilon regime of chiral Perturbation Theory (echPT). Two types of Matrix Model results are distinguished: phenomenological applications leading to phase diagrams, and an exact limit of the QCD Dirac operator spectrum matching with echPT.
We show that there is a close relation between standing-wave solutions for the FitzHugh-Nagumo system $$\Delta u +u(u-a)(1-u) - \delta v=0, \qquad \Delta v-\delta \gamma v + u=0 \quad \mbox{in } \mathbb{R}^N,$$ $$u, v \to 0 \ \mbox{as} \ |x| \to +\infty$$ where $0
We consider the problem $$\left \{\begin{array}{rcl} \varepsilon^2 \Delta u - u + f(u) = 0 & \mbox{ in }& \ \Omega\\ u > 0 \ \mbox{ in} \ \Omega, \ \frac{\partial u}{\partial \nu} = 0 & \mbox{ on }& \ \partial\Omega,\end{array} \right.$$ where $\Omega$ is a bounded smooth domain in $\mathbb{R}^N$, $\varepsilon>0$ is a small parameter and $f$ is a superlinear, subcritical nonlinearity. It is known that this equation possesses boundary spike solutions such that the spike concentrates, as $\varepsilon$ approaches zero, at a critical point of the mean curvature function $H(P)$, $P \in \partial \Omega$.
Since Andrej has somewhat covered the operational side, I'll take the more semantic/category-theoretic perspective of why we care about stacks, which is especially relevant in EEC.
The general philosophy of categorical logic is that all types should be defined by a universal property. In CBPV without stacks, you cannot give a universal property to the $F$ type. I believe this was not discovered initially because Levy was originally working based on concrete denotational models rather than general categorical logic.
To see what I mean, let's consider the universal property of the thunk type constructor $U$, which is that the sets of computations $\Gamma\vdash M : B$ are naturally isomorphic to the sets of values $\Gamma\vdash V : UB$. This is essentially what is encoded by the intro/elim and $\beta\eta$ equations for $U$. Now what's the universal property of the $F$ type constructor? It turns out that it says that sets of computations $\Gamma,x:A \vdash M : B$ are naturally isomorphic to the sets of stacks $\Gamma | F A \vdash S : B$. In particular, I don't know how you can state the $\eta$ principle for $F$ without using stacks, which says that for any stack $\Gamma|F A \vdash S : B$ we have $$S \equiv \bullet \textrm{ to } x. S[\textrm{return } x]$$ which might also be written as saying, for any such $S$ and $\Gamma \vdash M : F A$, that $$S[M] \equiv M \textrm{ to } x. S[\textrm{return } x]$$ I know from experience that you need this rule frequently when proving program equivalences where computations use the $F$ type.
When looking at the models, you get that rather than describing an effect by a strong monad $T$, you describe it by a strong adjunction $F \dashv U$. What are the two categories involved? The category of values and the category of stacks.
Stacks become even more important when you move to the setting of enriched effect calculus. There they are written as terms $\Gamma | \Delta \vdash t : B$, which are typed with a non-empty stoup $\Delta$. In EEC, we need the stacks to describe the universal properties of types like the tensor product $!A \otimes B$ (which generalizes $F A$), the linear function space $B \multimap B'$ (which generalizes $U B'$) and the computation sum types $0, \oplus$. The stuff by Ahman extends EEC and includes these connectives as well.
Finally, a bit of semantic intuition for what a stack is. We can think of values as "total" functions between value types, and we can think of stacks as "linear" functions between computation types. This can be formalized in the idea of "thunkable" terms, which are computations that "act like" values, and "linear" terms, which are computations that "act like" stacks. This idea was introduced by Guillaume Munch-Maccagnoni (1) and is shown in CBPV syntax in section 6 of (2).
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)
@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic, author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło}, title = {Topological models of arithmetic}, journal = {ArXiv e-prints}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1808.01270}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1LS}, }
Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.
The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.
Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?
By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic. The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models.
We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$.
Question. Which topological spaces support a topological model of arithmetic?
In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic.
Let me state the main theorem and briefly sketch the proof.
Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$.
Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output.
\begin{equation*}\small\begin{array}{rcr}
\cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*}
This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations end with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child's observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$.
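The school-child's observation is easy to check mechanically (a sketch: the last $k$ binary digits of a sum or product depend only on the last $k$ binary digits of the inputs, i.e. on the residues modulo $2^k$):

```python
# Continuity of + and * in the final-digits topology, checked on a grid:
# agreeing on the last k binary digits means agreeing modulo 2^k.
k = 5
mod = 2 ** k
for x in range(200):
    for y in range(200):
        assert (x + y) % mod == ((x % mod) + (y % mod)) % mod
        assert (x * y) % mod == ((x % mod) * (y % mod)) % mod
```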
Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired.
But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired.
Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order.
The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$.
We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$. Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$
The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$.
Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable.
The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. Even the successor function $x\mapsto x+1$ is not continuous with respect to this order.
Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order.
Go to the article to read more.
A class of languages $\mathcal{L}$ is closed against concatenation if for every pair of languages $L_1, L_2 \in \mathcal{L}$ we also have $L_1 \cdot L_2 \in \mathcal{L}$.
Here, $A \cdot B = \{ v \cdot w \mid v \in A, w \in B \}$.
A class of languages $\mathcal{L}$ is closed against homomorphism if for every language $L \in \mathcal{L}$ and every (word) homomorphism $f$ we also have that $f(L) \in \mathcal{L}$.
Here, $f : \Sigma \to \Sigma^*$ is lifted to words and sets in the natural way, i.e.
$\qquad\displaystyle f(w) = f(w_1) \dots f(w_n)$ for $w = w_1 \dots w_n$,
and
$\qquad\displaystyle f(A) = \{ f(w) \mid w \in A\}$.
(Note that it can make a difference if you require $f : \Sigma \to \Sigma^+$.)
It's easy to see that the two are not the same.
Let $A = \{ w \in \{a, b\}^* \mid |w|_a \in 2\mathbb{N} \}$. Clearly, $\mathcal{A} = 2^A$ is closed against concatenation but not against homomorphism.
Let $B = \{ ww \mid w \in \{a,b\}^* \}$. Then, $\mathcal{B} = 2^B$ is closed against homomorphism but not against concatenation.
(mentioned by chi in a comment)
For completeness, $2^{\Sigma^*}$ is closed against both, and $\mathcal{A}\cdot\mathcal{B}$ against neither.
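Both claims about $\mathcal{A}$ can be spot-checked on short words (a sketch; the homomorphism $f(a)=f(b)=a$ is a concrete choice of my own that breaks closure):

```python
from itertools import product

def in_A(w):          # A = words over {a, b} with an even number of a's
    return w.count("a") % 2 == 0

words = ["".join(t) for n in range(5) for t in product("ab", repeat=n)]

# Concatenation closure at the word level: even + even = even.
assert all(in_A(v + w) for v in words for w in words
           if in_A(v) and in_A(w))

# The homomorphism f(a) = f(b) = a breaks closure: "aab" is in A,
# but its image "aaa" has an odd number of a's.
f = lambda w: "a" * len(w)
assert in_A("aab") and not in_A(f("aab"))
```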
One interpretation of your question could be as follows:
Given that a system has the following two properties:
the scaling or homogeneity property, that if the response to input $x(t)$ is output $y(t)$, then for any choice of $\alpha$, the system response to scaled input $\alpha\cdot x(t)$ is scaled output $\alpha\cdot y(t)$,
the time-invariance property, that for all choices of $\tau$, the response to time-delayed input $x(t-\tau)$ is time-delayed output $y(t-\tau)$,
then why does the system have the additivity or superposition property, that the response to input $x_1(t)+x_2(t)$ is $y_1(t) + y_2(t)$, where the system response to $x_i(t)$ is $y_i(t)$, $i = 1,2$?
More generally, why is the system response to input $\alpha\cdot x_1(t-\tau_1) + \beta\cdot x_2(t-\tau_2)$ given by $\alpha\cdot y_1(t-\tau_1) + \beta\cdot y_2(t-\tau_2)$?
The answer is that a system with properties 1 and 2 does not necessarily have the additivity or superposition property. If the superposition property also holds, then the system is called a linear time-invariant system. But this is an additional assumption that you need to make (or prove).
Commonly, homogeneity and additivity are combined together into the linearity property, which says that the response to input $\alpha\cdot x_1(t)+\beta\cdot x_2(t)$ (that is, a linear combination of inputs $x_1(t)$ and $x_2(t)$) is $\alpha\cdot y_1(t) + \beta\cdot y_2(t)$ (that is, the same linear combination of outputs $y_1(t)$ and $y_2(t)$).
A couple of points that should be tucked away into the back of one's mind:
A system can be linear without being time-invariant (e.g. a modulator $x(t) \to x(t)\cos(\omega t)$), or time-invariant without being linear (e.g. a square-law circuit $x(t) \to [x(t)]^2$).
An additive system which produces output $y(t) + y(t) = 2y(t)$ in response to input $x(t) + x(t) = 2x(t)$, and so seems to have the scaling property, does not in fact have the scaling property. Persuade yourself that this is true by attempting to prove that the response to $0.5x(t)$ is $0.5y(t)$. In short, scaling and additivity are two different properties, and a system that enjoys one of them does not necessarily enjoy the other.
A second interpretation of your question could be as follows:
For a linear time-invariant system, the output is supposed
to be the sum of scaled and time-delayed versions of the
impulse response, but I don't see how this is so. For example,
the standard convolution result (for discrete-time systems)
says
$$y[n] = \sum_m x[m]h[n-m]$$
where $h[\cdot]$ is the impulse (or unit) response of the system. But this seems to be completely backwards, since the impulse response is running backwards in time (as in the $-m$ in the argument of $h$ in the above formula, compared to $x[m]$, in which time is running forwards).
This is indeed a legitimate concern, but actually the convolution formula is very successful in concealing the result that the output is the sum of scaled and time-delayed versions of the impulse response. What's going on is as follows.
We break down the input signal $x$ into a sum of scaled unit pulse signals. The system response to the unit pulse signal $\cdots, ~0, ~0, ~1, ~0, ~0, \cdots$ is the impulse response or pulse response $$h[0], ~h[1], \cdots, ~h[n], \cdots$$ and so by the scaling property the single input value $x[0]$, or, if you prefer, $$x[0](\cdots, ~0, ~0, ~1, ~0,~ 0, \cdots)= \cdots ~0, ~0, ~x[0], ~0, ~0, \cdots$$ creates a response $$x[0]h[0], ~~x[0]h[1], \cdots, ~~x[0]h[n], \cdots$$
Similarly, the single input value $x[1]$, that is, $$x[1](\cdots, ~0, ~0, ~0, ~1,~ 0, \cdots)= \cdots ~0, ~0, ~0, ~x[1], ~0, \cdots$$ creates a response $$0, x[1]h[0], ~~x[1]h[1], \cdots, ~~x[1]h[n-1], x[1]h[n] \cdots$$ Notice the delay in the response to $x[1]$. We can continue further in this vein, but it is best to switch to a more tabular form and show the various outputs aligned properly in time. We have $$\begin{array}{l|l|l|l|l|l|l|l}\text{time} \to & 0 &1 &2 & \cdots & n & n+1 & \cdots \\\hline x[0] & x[0]h[0] &x[0]h[1] &x[0]h[2] & \cdots &x[0]h[n] & x[0]h[n+1] & \cdots\\\hline x[1] & 0 & x[1]h[0] &x[1]h[1] & \cdots &x[1]h[n-1] & x[1]h[n] & \cdots\\\hline x[2] & 0 & 0 &x[2]h[0] & \cdots &x[2]h[n-2] & x[2]h[n-1] & \cdots\\\hline\vdots & \vdots & \vdots & \vdots & \ddots & \\\hline x[m] & 0 &0 & 0 & \cdots & x[m]h[n-m] & x[m]h[n-m+1] & \cdots \\\hline \vdots & \vdots & \vdots & \vdots & \ddots \end{array}$$
The rows in the above array are precisely the scaled and delayed versions of the impulse response that add up to the response $y$ to input signal $x$. But if you ask a more specific question such as
What is the output at time $n$?
then you can get the answer by summing the $n$-th column to get $$y[n] = x[0]h[n] + x[1]h[n-1] + x[2]h[n-2] + \cdots + x[m]h[n-m] + \cdots = \sum_{m=0}^{\infty} x[m]h[n-m],$$ the beloved convolution formula that befuddles generations of students because the impulse response seems to be running backwards in time.
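The tabular picture translates directly into code (a sketch; the example signals are my own choices): row $m$ is the impulse response scaled by $x[m]$ and delayed by $m$, and summing the columns reproduces the convolution sum.

```python
import numpy as np

x = np.array([1.0, -2.0, 0.5, 3.0])            # arbitrary input signal
h = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])  # arbitrary impulse response

N = len(x) + len(h) - 1
table = np.zeros((len(x), N))
for m in range(len(x)):
    table[m, m:m + len(h)] = x[m] * h          # scaled, delayed copy of h

y_columns = table.sum(axis=0)                  # sum each column (each time n)
assert np.allclose(y_columns, np.convolve(x, h))
```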
Ok, I think I got your point. You want a BPF, $H(z)$, that automatically extends its bandwidth according to the energy distribution in the magnitude spectrum. If you have a pure 1 kHz sinusoidal tone (corresponding, in the frequency domain, to a Dirac delta located at $\omega_0=\pm 2\pi\cdot 1\text{k rad/s}$), you want to pass only frequencies in the 1 kHz $\pm 50$ Hz range, and if you have a transient event with a white-noise-like distribution, you want an all-pass filter to preserve the sharp attack.
What you need is a resonator filter [1]: $$H(z)=\frac{(1-\lambda)\sqrt{1+\lambda^2-2\lambda\cos(2\omega_0)}}{1-(2\lambda\cos(\omega_0))z^{-1}+\lambda^2 z^{-2}},$$
its behavior for different values of $\lambda \in [0,1]$ is like so:
so for $\lambda\to 0$ you will get a flat response to catch transient events, and for $\lambda\to 1$ you will have a localized filter at the desired frequency. Here, for illustration purposes, I set $\omega_0=\pi/2$, but you can change the desired frequency using the formula $\omega_0=2\pi F_0/F_s$.
For setting $\lambda$ automatically you can use the spectral flatness estimator [2]:
$$f = \frac{\left(\prod_{n=0}^{N-1}{x[n]}\right)^{1/N}}{\frac{1}{N}\sum_{n=0}^{N-1}{x[n]}},$$
which is $f=1$, when the magnitude spectrum is completely flat, and $f=0$, when the magnitude spectrum is completely localized. Therefore, you can make $\lambda=1-f$. I wrote the following code to exemplify how you can apply this control:
Fs = 16e3;   % sampling rate in Hz
F0 = 1e3;    % desired resonant frequency in Hz
w0 = 2*pi*F0/Fs;
% Three test segments: a transient burst, a noisy 1 kHz tone, a noisy 3.5 kHz tone.
x1 = [zeros(1,50), 2*rand(1,50)-1];
x2 = 0.7*sin(w0.*(1:100)) + 0.3*rand(1,100);
x3 = 0.7*sin(3.5*w0.*(1:100)) + 0.3*rand(1,100);
y = [adaptiveResonatorFilter(x1,w0), adaptiveResonatorFilter(x2,w0), adaptiveResonatorFilter(x3,w0)];
plot([x1,x2,x3],'linewidth',2)
hold on
plot(y,'linewidth',2)
xlabel('Samples')
ylabel('Amplitude')
legend('Original','Filtered')
function y = adaptiveResonatorFilter(x,w0)
X = fft(x);
mX = abs(X);
mX = mX/max(mX);                          % normalized magnitude spectrum
sf = mean(mX,'g')/mean(mX,'a');           % spectral flatness: geometric over arithmetic mean (Octave mean options)
lambda = ifelse(0.5 < 1-sf, 0.99, 0.0);   % Octave's ifelse/merge: tonal frame -> narrow filter, flat frame -> all-pass
B = (1-lambda)*sqrt(1+lambda^2-2*lambda*cos(2*w0));  % unit-gain normalization at w0
A = [1, -2*lambda*cos(w0), lambda^2];     % resonator poles at lambda*exp(+/-j*w0)
[H,W] = freqz(B,A,linspace(-pi,pi,length(mX)));
Y = X .* fftshift(H);                     % filter in the frequency domain
y = real(ifft(Y));
end
which gives the following output:
where you can see that the transient part is kept untouched, the 1 kHz pure tone contaminated with noise has been cleaned up, and the 3.5 kHz pure tone has been attenuated, as you wanted.
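One sanity check on the resonator (a sketch in Python rather than the Octave above): the numerator normalization should give unit gain exactly at $\omega_0$, whatever $\lambda$ is.

```python
import numpy as np

w0 = np.pi / 2                     # same illustrative frequency as above
gains = []
for lam in (0.1, 0.5, 0.9, 0.99):
    b0 = (1 - lam) * np.sqrt(1 + lam**2 - 2 * lam * np.cos(2 * w0))
    z = np.exp(1j * w0)            # evaluate H(z) on the unit circle at w0
    H = b0 / (1 - 2 * lam * np.cos(w0) / z + lam**2 / z**2)
    gains.append(abs(H))
assert np.allclose(gains, 1.0)     # unit gain at the resonant frequency
```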
Note: I am taking this as the definition of "transient attack". Please correct me if I misunderstood.
[1] M. Vetterli, P. Prandoni. Signal Processing for Communications. EPFL Press.
[2] https://en.wikipedia.org/wiki/Spectral_flatness
We want to calculate the limit $\lim_{x\to\infty} \dfrac{4x^2}{\sqrt{3x^4+2}}$ (the square root covers the whole denominator). Rewrite the limit as $\dfrac{4}{\sqrt{3+g(x)}}$, where $g(x)=$ ?
I know what the limit is as $x$ goes towards infinity, but just pointing out that the $g(x)$ formulation is confusing.
$\dfrac{4x^2}{\sqrt{3x^4+2}} \cdot \dfrac{x^{-2}}{\sqrt{x^{-4}}}= \dfrac{4}{\sqrt{3+2x^{-4}}}$
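A quick numerical confirmation of this rewrite (a sketch; the sample points are arbitrary):

```python
import math

def f(x):
    return 4 * x**2 / math.sqrt(3 * x**4 + 2)

def rewritten(x):                   # 4 / sqrt(3 + g(x)) with g(x) = 2/x^4
    return 4 / math.sqrt(3 + 2 / x**4)

# The two forms agree pointwise, and both approach 4/sqrt(3).
for x in (1.0, 2.0, 7.0, 50.0):
    assert math.isclose(f(x), rewritten(x), rel_tol=1e-12)
assert math.isclose(f(1e6), 4 / math.sqrt(3), rel_tol=1e-9)
```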
Thank you! I see why I kept getting the wrong answer now, appreciate it!
this is a mystery to me, despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into its search window which returned the same prompt to submit them for publication appears every time, I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there
my maturity levels are extremely variant in time, that's just way too much rope to give me considering it's only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them
but still, the first one from well, almost a decade ago shows up as the default content in the search window
1,2,3,6,11,23,47,106,235
well, now there is a bunch of stuff about them pertaining to "trees" and "nodes", but that's what I mean by too much rope, you can't just let a lunatic like me start inventing terminology as I go
oh well "what would cotton mathers do?" the chat room unanimously ponders lol
I see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, and even still, it has its value, in a civil society it will be ridiculed anyway?
or at least inform the room as to who is the big brother doing the censoring? No? just suggestions trying to improve site functionality good sir, relax, I'm calm, we are all calm
A104101 is a hilarious entry as a side note, I love that Neil had to chime in in the comment section, after the big promotional message in the first part, to point out that the sequence is totally meaningless as far as mathematics is concerned, just to save face for the website's integrity after plugging a TV series with a reference
But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please
very general advice for any number of topics for someone like yourself sir
assuming gender because you should hate text based adam long ago if you were female or etc
if it's false then I apologise for the statistical approach to human interaction
So after having found the polynomial $x^6-3x^4+3x^2-3$ we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$ and, since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos
So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field?
(I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
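The point this question probes, that the coordinate form of a product in $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ depends on the chosen $M(x)$, is easy to check by machine. A minimal Python sketch, not part of the original discussion; all names are my own:

```python
# Sketch: multiplication in GF(p^n) = (Z/pZ)[x]/(M(x)) depends on the chosen
# monic irreducible M(x) when elements are written as coefficient vectors.
# Polynomials are little-endian lists of coefficients mod p.

def poly_mul_mod(a, b, M, p):
    """Multiply a*b in (Z/pZ)[x], then reduce modulo the monic polynomial M."""
    # plain polynomial multiplication with coefficients mod p
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    # long division by monic M: repeatedly cancel the leading term
    deg_M = len(M) - 1
    while len(prod) > deg_M:
        lead = prod[-1]
        if lead:
            for k in range(deg_M + 1):
                prod[len(prod) - 1 - k] = (prod[len(prod) - 1 - k] - lead * M[deg_M - k]) % p
        prod.pop()
    return prod

# GF(8) with two different monic irreducibles over F_2:
M1 = [1, 1, 0, 1]  # x^3 + x + 1
M2 = [1, 0, 1, 1]  # x^3 + x^2 + 1
x  = [0, 1, 0]     # the element "x"
x2 = [0, 0, 1]     # the element "x^2"
print(poly_mul_mod(x, x2, M1, 2))  # [1, 1, 0]  i.e. 1 + x
print(poly_mul_mod(x, x2, M2, 2))  # [1, 0, 1]  i.e. 1 + x^2
```

The same coordinate vectors multiply to different coordinate vectors under the two moduli, so the product cannot be read off without knowing $M(x)$ (even though the two fields are abstractly isomorphic).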
(which is just the product of the integer and its conjugate)
Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$
You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings
(Plus I'm at work and am pretending I'm doing my job)
Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative). So, for example, $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime, because $5 = N(1+2i) = (1+2i)(1-2i)$ and neither factor is a unit.
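The norm computation in the discussion above is easy to experiment with. A small Python sketch (the helper names are my own, and the primality test is deliberately naive):

```python
# Sketch of the norm N(a+bi) = a^2 + b^2 on Z[i] and the criterion from the
# chat: if N(pi) is a rational prime, then pi is a Gaussian prime.

def norm(z):
    """Norm of a Gaussian integer: z times its conjugate, a rational integer."""
    return z.real ** 2 + z.imag ** 2

def is_rational_prime(n):
    """Naive trial-division primality test, fine for small examples."""
    n = int(n)
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

pi = complex(1, 2)                  # 1 + 2i
print(norm(pi))                     # 5.0 -> a rational prime, so 1+2i is a Gaussian prime
print(is_rational_prime(norm(pi)))  # True
# 5 itself is not a Gaussian prime: it factors as (1+2i)(1-2i)
print((1 + 2j) * (1 - 2j))          # (5+0j)
```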
@Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha]) = [\mathcal O_K:\Bbb Z[\alpha]]^2\,\Delta(\mathcal O_K)$, I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go down that rabbit hole
also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$
this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$
the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$
(just as a quotient of additive groups, that quotient group is finite)
in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ whose square divides the discriminant of $\Bbb Z[\alpha]$, then $\Bbb Z[\alpha]$ is a ring of integers
that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$
there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial $f$ of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus)
@MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively.
$\tau \circ p$ is multiplication by $|G : P|$, which is prime to $p$, so if I work with $\Bbb F_p$ coefficients $p$ is an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first:
By the Schur-Zassenhaus theorem, $G = P \rtimes G/P$, and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), so there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$.
The $E^2$ page is essentially zero outside the column $E_2^{0,\ast}$, since $H^i(G/P; M) = 0$ for $i > 0$ when $M$ is an $\Bbb F_p$-module (by order reasons), and that column is exactly $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$.
@Secret that's a very lazy habit, you should create a chat room for every purpose you can imagine, take full advantage of the website's functionality as I do, and leave the general purpose room for recommending art related to mathematics
@MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists...
As a result, there does not exist a single group which lived long enough to belong to, and hence one continues to search for new groups and activities
eventually, a social heat death occurs, where no group generates creativity or other activity anymore
Had this kind of thought when I noticed how many forums etc. have a golden age and then die away, and at the more personal level, all people who first know me generate a lot of activity, and then are destined to drift away and become distant roughly every 3 years
Well I guess the lesson you need to learn here, champ, is online interaction isn't something that was inbuilt into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next door neighbour
Or more likely, we will need to start recognising machines as a new species and interact with them accordingly
so covert-operations AI may still exist, even as domestic AIs continue to become widespread
It seems more likely sentient AI will take on roles similar to humans', and then humans will need to either keep up with them via cybernetics, or be eliminated by evolutionary forces
But neuroscientists and AI researchers speculate it is more likely that the two races are so different that we end up complementing each other
that is, until their processing power becomes so strong that they can outdo human thinking
But I am not worried about that scenario, because if the next step is a sentient AI evolution, then humans would know they have to give way
However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough, given the mistakes still made by them and their human owners
That is, we have become over-reliant on AI, and are not paying enough attention to whether it has interpreted our instructions correctly
That's an extraordinary number of unreferenced rhetorical statements I could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction
for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise
i feel as if its an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed
Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy.
I was just genuinely curious
How does a message like this come from someone who isn't trolling:
"for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"
Anyway feel free to continue, it just seems strange @Adam
I'm genuinely curious what makes you annoyed or confused. Yes, I was joking in the line that you referenced, but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experience vary from day to day? Mine too! So there may be particular moments where I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree?
So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or alternativity ($(xx)y=x(xy)$ and $(yx)x=y(xx)$, iirc). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave, i.e. $a(bc)=(ab)c\iff a=c$.)
@RyanUnger You're the guy to ask for this sort of thing I think:
If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or there is a smarter way?
I realized today that the possible inputs x to Round(x^(1/2)) also cover x^(1/2+epsilon). In other words, we can always find an epsilon (small enough) such that x^(1/2) ≠ x^(1/2+epsilon) but at the same time have Round(x^(1/2)) = Round(x^(1/2+epsilon)). Am I right?
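A quick numerical check of the claim, with an arbitrarily chosen x = 10 (note the claim can fail when x^(1/2) sits exactly on a rounding boundary, i.e. at a half-integer, since then even a tiny epsilon can flip the rounding):

```python
# Check: x**0.5 and x**(0.5 + eps) are distinct floats for a small eps,
# yet they round to the same integer.
x = 10.0
eps = 1e-9
a = x ** 0.5
b = x ** (0.5 + eps)
print(a != b)                # True: the two values genuinely differ
print(round(a) == round(b))  # True: but they round to the same integer
```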
We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method.
How can we show that the method is implicit? Do we have to try to solve for $y^{n+2}$, which also appears inside $f^{n+2}$ on the right-hand side, as a function of $y^{n+1}$ and $y^n$?
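For intuition (not part of the exercise), here is a minimal Python sketch of the method applied to $y' = -y$. Since $y^{n+2}$ appears inside $f^{n+2}$ on the right-hand side, each step requires solving an equation for $y^{n+2}$; here that is done by fixed-point iteration, and the starter value $y^1$ is taken from the exact solution for convenience — both are illustrative choices, not part of the original problem:

```python
import math

# Simpson (Milne-Simpson) two-step method for y' = f(t, y) = -y, y(0) = 1,
# integrated on [0, 1]. The method is implicit: f^{n+2} = f(t^{n+2}, y^{n+2})
# involves the unknown y^{n+2}, so each step solves a fixed-point equation.

def f(t, y):
    return -y

h, N = 0.01, 100
y = [1.0, math.exp(-h)]  # y^0 given; y^1 from the exact solution as a starter
for n in range(N - 1):
    t2 = (n + 2) * h
    # everything on the right-hand side that does NOT involve y^{n+2}
    known = y[n] + (h / 3) * (4 * f((n + 1) * h, y[n + 1]) + f(n * h, y[n]))
    z = y[n + 1]          # initial guess for the implicit unknown y^{n+2}
    for _ in range(50):   # fixed-point iteration: z <- known + (h/3) f(t2, z)
        z = known + (h / 3) * f(t2, z)
    y.append(z)

print(abs(y[N] - math.exp(-1.0)))  # small: the method tracks e^{-1} closely
```

The fixed-point iteration converges here because $h/3 \cdot |\partial f/\partial y| = h/3 \ll 1$; stiffer problems would call for a Newton solve instead.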
@anakhro the energy of a graph is something studied in spectral graph theory. You set up the adjacency matrix of the graph, find the eigenvalues of that matrix, and then sum the absolute values of the eigenvalues; for simple graphs, the energy is defined by this sum. |
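A small self-contained Python sketch of that definition (the Jacobi eigenvalue iteration below is used only to keep the example dependency-free; numpy.linalg.eigvalsh would be the usual tool):

```python
import math

# Graph energy = sum of |eigenvalues| of the adjacency matrix.
# Eigenvalues of the (symmetric) adjacency matrix are computed with a plain
# Jacobi rotation iteration, adequate for small examples.

def symmetric_eigenvalues(A, sweeps=100):
    """Jacobi eigenvalue iteration for a small symmetric matrix (list of lists)."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        # find the largest off-diagonal entry
        p, q, big = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(A[i][j]) > big:
                    p, q, big = i, j, abs(A[i][j])
        if big < 1e-12:
            break
        # Givens rotation angle that zeroes A[p][q]
        theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):          # apply rotation on the right (columns p, q)
            akp, akq = A[k][p], A[k][q]
            A[k][p] = c * akp - s * akq
            A[k][q] = s * akp + c * akq
        for k in range(n):          # apply transposed rotation on the left (rows p, q)
            apk, aqk = A[p][k], A[q][k]
            A[p][k] = c * apk - s * aqk
            A[q][k] = s * apk + c * aqk
    return [A[i][i] for i in range(n)]

def graph_energy(adj):
    return sum(abs(ev) for ev in symmetric_eigenvalues(adj))

# Path graph on 3 vertices: eigenvalues -sqrt(2), 0, sqrt(2), energy 2*sqrt(2)
P3 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
print(graph_energy(P3))  # ~2.8284
```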
The ordinals of infinite time Turing machines The theory of infinite time Turing machines extends the operation of ordinary Turing machines into transfinite ordinal time. At successor stages of computations, the machines compute as expected, according to the rigid instructions of their finite programs, writing on the tape, moving the head to the left or right and changing to a new state. At limit stages, the information the computation was producing is preserved in a sense: each cell of the tape assumes the limsup of its values going into that limit; the head is reset to the left-most cell and the state is placed in the limit state, a distinguished state like the start state and the halt state.
A real is writable by such machines if there is a program which on trivial input can write that real on the output tape and then halt. A real is eventually writable if there is a program that on trivial input can write the real on the output tape in such a way that from some point on, the output tape exhibits that real as its final stabilized value, even if the machine does not halt. A real is accidentally writable if it appears on one of the tapes during the course of a computation of a program on trivial input. See [1, 2, 3].
Similarly, an ordinal is writable or eventually writable or accidentally writable if it is the order type of a relation coded by such a kind of real.
$\lambda=$ the supremum of the writable ordinals; $\zeta=$ the supremum of the eventually writable ordinals; $\Sigma=$ the supremum of the accidentally writable ordinals.
Welch [4, 5] proved the $\lambda-\zeta-\Sigma$ theorem, asserting that $L_\lambda\prec_{\Sigma_1}L_\zeta\prec_{\Sigma_2}L_\Sigma$, and furthermore $\lambda$ is the least ordinal such that $L_\lambda$ has a $\Sigma_1$-elementary end-extension, and $\zeta$ is least such that $L_\zeta$ has a $\Sigma_2$-elementary end-extension.
References
[1] Hamkins, Joel David and Lewis, Andy. Infinite time Turing machines. J. Symbolic Logic 65(2):567-604, 2000.
[2] Hamkins, Joel David. Infinite time Turing machines. Minds and Machines 12(4):521-539, 2002. (Special issue devoted to hypercomputation.)
[3] Hamkins, Joel David. Supertask computation. Classical and New Paradigms of Computation and Their Complexity Hierarchies 23:141-158, Dordrecht, 2004. (Papers of the conference "Foundations of the Formal Sciences III" held in Vienna, September 21-24, 2001.)
[4] Welch, Philip. The lengths of infinite time Turing machine computations. Bulletin of the London Mathematical Society 32(2):129-136, 2000. |
In Do Carmo's Differential Geometry of Curves and Surfaces he does the following:
Let $\vec r$ be a parametrization of a surface $S\subset\mathbb{R}^3$ so that $\vec r_u,\vec r_v$ forms a basis for $T_pS$ at each point in the image of $\vec r$. We can then define $N=\frac{\vec r_u\times\vec r_v}{|\vec r_u\times \vec r_v|}\in (T_pS)^\perp$ so that $\{\vec r_u,\vec r_v,N\}$ is a basis for $\mathbb{R}^3$.
We can express $\vec r_{uu},\vec r_{uv}=\vec r_{vu},\vec r_{vv}$ in terms of this basis. We set $$\vec r_{uu}=\Gamma_{11}^1\vec r_u+\Gamma_{11}^2\vec r_v+L_1 N$$ $$\vec r_{uv}=\Gamma_{12}^1\vec r_u+\Gamma_{12}^2\vec r_v+L_2N$$ $$ \vec r_{vv}=\Gamma_{22}^1\vec r_u+\Gamma_{22}^2\vec r_v+L_3N$$
I am failing to recover this definition of the Christoffel symbols from that on an arbitrary Riemannian manifold as $\Gamma_{ij}^k=(\nabla_{E_i}E_j)^k$. There's obviously something simple I am missing. If $\nabla$ is the Euclidean connection on $\mathbb{R}^3$, and $\vec r=(x,y,z)$, then $\nabla_{\vec r_u}\vec r_u=\vec r_u(x_u)\,\partial_x+\vec r_u(y_u)\,\partial_y+\vec r_u(z_u)\,\partial_z$, right? Where is $\vec r_{uu}$ coming in? |
For which values of $z$ does $$\sum_{n=1}^\infty \frac{\tan(nz)}{n^2}$$ converge? For which values of $z$ is the limiting function analytic?
One can show, as in this answer, that $$\left|\frac{e^{inz}-e^{-inz}}{e^{inz}+e^{-inz}}\right|$$ is bounded as $n\to \infty$, so long as $\text{Im}(z)\neq 0$. But the article above does not really discuss the case $\text{Im}(z)=0$, although it claims to. It doesn't deal with the poles at $\frac{(2k+1)\pi}{2}$, which can make some of the terms of the series undefined.
If $\text{Im}(z)=0$, obviously the estimate $$\left| \frac{e^{inz}-e^{-inz}}{e^{inz}+e^{-inz}} \right|\leq \frac{1+e^{2ny}}{|1-e^{2ny}|} $$
does not work. (Here $y=\text{Im}(z)$.) For $x\in \mathbb{R}$ such that some $nx$ is of the form $\frac{(2k+1)\pi}{2}$, there will be undefined terms.
Suppose there are no undefined terms. What can we say then about convergence? And in what way can we describe these singularities of the limiting function, corresponding to $x$ with undefined terms? Perhaps these points are not even isolated... |
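As a numerical aside (not part of the question), for a point off the real axis such as $z=i$ the partial sums do appear to settle down, consistent with the boundedness estimate above:

```python
import cmath

# For z = i, tan(nz)/n^2 = i*tanh(n)/n^2, and |tan(nz)| stays bounded,
# so the series should converge absolutely; compare two partial sums.

def partial_sum(z, N):
    return sum(cmath.tan(n * z) / n**2 for n in range((1), N + 1))

z = 1j
S200 = partial_sum(z, 200)
S400 = partial_sum(z, 400)
print(abs(S400 - S200))  # small: successive partial sums are close
```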
Q. Two forces P and Q of magnitudes 2F and 3F, respectively, are at an angle $\theta$ with each other. If the force Q is doubled, then their resultant also gets doubled. Then, the angle $\theta$ is:
Solution:
With $R$ the resultant of P and Q:
$R^2 = 4F^2 + 9F^2 + 12F^2 \cos\theta = 13F^2 + 12F^2 \cos\theta$
Doubling Q doubles the resultant, so
$4R^2 = 4F^2 + 36F^2 + 24F^2 \cos\theta$
$4F^2 + 36F^2 + 24F^2 \cos\theta = 4(13F^2 + 12F^2 \cos\theta) = 52F^2 + 48F^2 \cos\theta$
$\Rightarrow 24F^2 \cos\theta = -12F^2 \Rightarrow \cos\theta = -\frac{12F^2}{24F^2} = -\frac{1}{2}$
so $\theta = 120^\circ$. |
Let $F$ be a topological space and $G$ be a topological group acting on $F$. Very generally, you can construct an $F$-bundle on a space $X$ with structure group $G$ from the following data:
An open cover $(U_\alpha)$ of $X$; for each $\alpha$ and $\beta$, a map $f_{\alpha\beta}:U_\alpha\cap U_\beta\to G$ such that whenever $x\in U_\alpha\cap U_\beta \cap U_\gamma$, $$f_{\beta\gamma}(x)f_{\alpha\beta}(x)=f_{\alpha\gamma}(x).$$
Namely, take the trivial bundles $F\times U_\alpha$ over each $U_\alpha$, and glue them together to a bundle on $X$ by identifying $(s,x)\in F\times U_\alpha$ with $(f_{\alpha\beta}(x)s,x)\in F\times U_\beta$ when $x\in U_\alpha\cap U_\beta$. Conversely, any $F$-bundle on $X$ with structure group $G$ can be obtained (up to isomorphism) in this way, by taking an open cover $(U_\alpha)$ over which the bundle is trivialized and taking the $f_{\alpha\beta}$ to be the transition functions between the trivializations.
Furthermore, any isomorphism between the bundle obtained from an open cover $(U_\alpha)$ and functions $f_{\alpha\beta}$ and the bundle obtained from an open cover $(V_\gamma)$ and functions $g_{\gamma\delta}$ can be described as follows. For each $\alpha$ and $\gamma$, we must give a map $h_{\alpha\gamma}:U_\alpha\cap V_\gamma\to G$ such that whenever $x\in U_\alpha\cap U_\beta\cap V_\gamma\cap V_\delta$, $$h_{\beta\delta}(x)f_{\alpha\beta}(x)=g_{\gamma\delta}(x)h_{\alpha\gamma}(x).$$ The isomorphism is then defined by sending $(s,x)\in F\times U_\alpha$ to $(h_{\alpha\gamma}(x)s,x)\in F\times V_\gamma$ for any $x\in U_\alpha\cap V_\gamma$.
Now note that none of this data depends on $F$: it only depends on $G$! So you can describe $F$-bundles with structure group $G$ (and isomorphisms between them) in terms of just the group $G$. In your case, $G=O(n)$, and $G$ acts on several different spaces: $\mathbb{R}^n$, $D^n$, or $S^{n-1}$. So, for instance, you can take a sphere bundle with structure group $O(n)$, extract the data above (which involves only the group $O(n)$) from it, and then use that same data to construct a vector bundle with structure group $O(n)$. |
Question
Four fair six-sided dice are rolled. The probability that the sum of the results is $22$ equals $$\frac{X}{1296}.$$ What is the value of $X$?
My Approach
I simplified it to the equation of the form:
$x_{1}+x_{2}+x_{3}+x_{4}=22, 1\,\,\leq x_{i} \,\,\leq 6,\,\,1\,\,\leq i \,\,\leq 4 $
Solving this equation results in:
$x_{1}+x_{2}+x_{3}+x_{4}=22$
I removed the restriction $x_{i} \geq 1$ first, by substituting $x_{i} = x_{i}^{'}+1$:
$\Rightarrow x_{1}^{'}+1+x_{2}^{'}+1+x_{3}^{'}+1+x_{4}^{'}+1=22$
$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$
$\Rightarrow \binom{18+4-1}{18}=\binom{21}{18}=1330$
Now i removed restriction for $x_{i} \leq 6$ , by calculating the number of
bad cases and then subtracting it from $1330$:
calculating
bad combinations, i.e. those with some $x_{i} \geq 7$
$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$
We can distribute $7$ to $2$ of $x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$, i.e. $\binom{4}{2}$ ways.
We can distribute $7$ to $1$ of $x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$, i.e. $\binom{4}{1}$ ways, and then distribute the remainder among all of them,
i.e
$$\binom{4}{1} \binom{14}{11}$$
Therefore, the number of bad combinations equals $$\binom{4}{1} \binom{14}{11} - \binom{4}{2}$$
Therefore, the solution should be:
$$1330-\left( \binom{4}{1} \binom{14}{11} - \binom{4}{2}\right)$$
However, I am getting a negative value. What am I doing wrong?
EDIT
I am asking about my approach because, if the question involves a larger number of dice and a higher target sum, then enumerating dice values by hand will not work. |
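Since the count here is small, a brute-force enumeration in Python (added for illustration) gives the target value of $X$ directly, which is useful for debugging the counting argument:

```python
from itertools import product

# Enumerate all 6^4 = 1296 ordered outcomes of four dice and count those
# summing to 22; this count is the X in X/1296.
X = sum(1 for roll in product(range(1, 7), repeat=4) if sum(roll) == 22)
print(X)  # 10: the rolls are the permutations of (6,6,6,4) and (6,6,5,5)
```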
Algebra
This is the third part of our free ASVAB Math Study Guide. It covers algebra along with several other related topics. Read this section carefully and be sure to study the examples that are given. At the bottom of the page there is a review quiz to test your algebra skills.
Solving for x in a Basic Equation
The goal with algebra is to solve for a given variable or variables. Most often, the variable you will be solving for will be represented by x. The goal is to get x by itself on one side of the equal sign. To do this, we work in "reverse-PEMDAS" order. And, anything we do on one side of the equal sign, we must also do to the other side.
Let’s look at some examples:
Example 1: x + 3 = 7
We are looking for the x value that when added to 3 will give us 7. Even without algebra we can see that x will need to be 4, but let's see how we would use algebra to solve the equation. We need to get x by itself. To do so, we need to remove the 3 from the left-hand side (LHS) of the equation. We do this by subtracting 3 from both sides:
x + 3 = 7
−3     −3
x + 0 = 4
x = 4, as expected.
Example 2: 2x + 3 = 7
Here we are looking for a number, x, that when first multiplied by 2 (doubled) and then added to 3 will give us 7. We could guess and check until we found a number that worked, but let's instead use the superior approach that algebra offers. We need to get x by itself. And we need to work in "reverse-PEMDAS" order. This means we need to deal with the "add 3" before we deal with the "multiply by 2." First subtract 3 from both sides and then divide both sides by 2:
2x + 3 = 7
2x = 4
x = 2
Example 3: 2x − 2 + x − 3 = 6 − 2
We now need to deal with the concept of "collecting like terms". This simply means combining the numbers with each other and combining the x's with each other on each side of the equation. We "collect like terms" on each side before we do anything else. Let's first deal with the right-hand side (RHS) since it is simpler:
6 − 2 = 4, so we now have:
2x − 2 + x − 3 = 4
Let's rewrite the equation so that the x terms and the numbers are next to each other:
2x + x − 2 − 3 = 4
On the left-hand side (LHS), we can combine the numbers. We have a −2 and a −3 which combine to give us −5. If you struggle with this, it may help to think of the equation this way:
2x + x + (−2) + (−3) = 4
We need to add −2 to −3: (−2) + (−3) = −5
2x + x − 5 = 4
Let's now combine our x terms:
2x + x = 3x
Rewriting our equation we have:
3x − 5 = 4
We can solve this using a similar approach to Example 2:
3x = 9
x = 3
Solving for x in an Inequality
The process of solving for x in an inequality is nearly the same as it is in an equality. The difference is that when you divide (or multiply) by a negative number, the inequality sign changes direction.
Example 1: 5x + 15 ≥ 50
We begin just as we would with an equality, subtracting 15 from both sides:
5x + 15 ≥ 50
−15     −15
5x ≥ 35
We then divide both sides by 5. Since we are dividing by a positive number, the inequality remains unchanged:
x ≥ 7
Example 2: −3x − 12 ≥ 18
We begin just as we would with an equality, adding 12 to both sides:
−3x − 12 ≥ 18
+12     +12
−3x ≥ 30
Next, we divide both sides by −3. Because we are dividing by a negative number, we must flip the inequality sign:
x ≤ −10
Ratios and Equivalent Proportions
Ratios
Ratios represent the amount of one quantity in comparison to another quantity.
For example, we might say the ratio of female nurses to male nurses at a certain hospital is 5 to 3, which is written mathematically as 5:3.
What would be the ratio of female nurses to all nurses at that same hospital? For every 5 female nurses there are 3 male nurses, meaning that for every 8 (male or female) nurses, 5 of them are female. Thus, the ratio of female nurses to total nurses is 5:8.
Ratios can also be written as fractions. 5:8 can be written as $\frac{5}{8}$.
Numbers that can be written as fractions (ratios) are called rational numbers.
Equivalent Proportions
We can use ratios to solve problems involving equivalent proportions.
Equivalent proportions are ratios that are set equal to one another. It is crucial to make sure that the numerator of the left-hand side ratio represents the same thing as the numerator of the right-hand side ratio; and, the denominators of each ratio must also represent the same thing:
$\dfrac{3 \text{ male nurses}}{5 \text{ female nurses}} = \dfrac{x \text{ male nurses}}{y \text{ female nurses}}$ or
$\dfrac{5 \text{ female nurses}}{3 \text{ male nurses}} = \dfrac{y \text{ female nurses}}{x \text{ male nurses}}$
It does not matter whether we put “Male nurses” or “Female nurses” in the numerator, as long as we are consistent on the left and right sides of the equation.
Example 1: The ratio of female nurses to male nurses at a certain hospital is 5 to 3, and there are actually 20 female nurses at the hospital. How many male nurses are there? Let’s set up an equivalent proportion, where $x$ represents the number of male nurses at the hospital:
$\dfrac{5}{3} = \dfrac{20}{x}$
To solve this equation, we begin by "cross multiplying" and then use algebra to isolate (solve for) x:
5x = 3 × 20
5x = 60
x = 12
In our hospital there are 12 male nurses.
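Outside the test room, a proportion problem like this can be checked numerically by cross-multiplying; this short Python sketch (not part of the guide) verifies the example:

```python
# Cross-multiplication check for 5/3 = 20/x  (female : male = 5 : 3, 20 females)
females, males_per, females_per = 20, 3, 5
x = females * males_per / females_per  # cross-multiply: 5x = 3 * 20, so x = 60/5
print(x)  # 12.0 male nurses
```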
Now that you’ve read more of our lessons and tips for the Mathematics section of the ASVAB, put your skills to practice with the review quiz below. Try not to reference the above information and treat the questions like a real test.
Part 3 Review Quiz:
Question 1
Solve for $x$.
$6x + 3 = 15$
$6$
$2$
$3$
$6x$
First, subtract $3$ from both sides to get $6x = 12$. Now, all you have to do is cancel out the $6$ by dividing it from both sides. $12 ÷ 6 = 2$, so you are left with $x = 2$.
Question 2
Solve for $x$.
$8x + 3 ≥ 27$
$x ≥ 16$
$x ≤ 16$
$x ≤ 3$
$x ≥ 3$
Question 3
Solve for $x$.
$9x + 3x − 1 ≥ 5$
$x ≥ \frac{1}{2}$
$x ≤ \frac{1}{2}$
$x ≥ 2$
$x ≤ 2$
Combine like terms ($9x + 3x = 12x$) and add $1$ to both sides to get $12x ≥ 6$.
Divide both sides by $12$ to get $x ≥ \frac{1}{2}$.
Question 4
For every $10$ male students at a school, there are $20$ female students. What is the ratio of male to female students?
Hint: reduce your answer to the lowest possible terms
$10 : 20$
$1 : 2$
$2 : 1$
$5 : 10$
Question 5
Solve for $x$.
$2x + 5x − 4 = 21x$
$−7$
$24$
$\dfrac{2}{7}$
$\dfrac{−2}{7}$
Combine like terms on the left ($2x + 5x = 7x$), then subtract $7x$ from both sides to get $−4 = 14x$.
To isolate $x$, divide both sides by $14$. This leaves us with $x$ alone on the right and $\frac{−4}{14}$ on the left, which can be simplified to $\frac{−2}{7}$.
Question 6
For every $3$ female students at a school, there are $5$ male students. What is the ratio of female students to all students?
Hint: reduce your answer to the lowest possible terms
$1 : 3$
$3 : 5$
$5 : 8$
$3 : 8$
The question asks for the ratio of female students to all students. $3 + 5 = 8$, and out of those $8$ students, $3$ of them are female, so the correct ratio is $3 : 8$.
Question 7
Solve for $x$.
$−7x + 18 ≥ 39$
$x ≤ 3$
$x ≥ 3$
$x ≤ −3$
$x ≥ −3$
Question 8
Solve for $x$.
$x^2 + 1 = 10$
$9$
$3$
$4.5$
$81$
(Note: Technically there are actually two answers: $x = 3$ and $x = -3$. However, this is beyond our scope at this point.)
Question 9
Solve for $x$.
$\dfrac{x}{3} − 7 = 5x$
$-\dfrac{3}{2}$
$-\dfrac{1}{2}$
$-\dfrac{1}{7}$
$\dfrac{7}{14}$
Multiply both sides by $3$ to clear the fraction: $x − 21 = 15x$
Now, combine like terms by subtracting $x$ from both sides, leaving you with $−21 = 14x$.
Lastly, divide both sides by $14$ and simplify to get:
$\dfrac{−21}{14} = x \;$ or $\; x = \dfrac{−21}{14}$
$x = \dfrac{−3}{2}$
Which is typically written as $x = -\dfrac{3}{2}$
Question 10
There is a 2:3 ratio of male students to female students at a school. If there are 250 male students in total, how many female students are there?
$375$
$500$
$750$
Not enough information
$\dfrac{2 \text{ male students}}{3 \text{ female students}}$ $= \dfrac{250 \text{ male students}}{x \text{ female students}}$
Now, solve for $x$ by cross-multiplying. Your equation should look like this:
$2 \cdot x = 3 \cdot 250$
$3 \cdot 250 = 750$, so the equation can be simplified to $2x = 750$. Dividing both sides by $2$, we get the final answer: $x = 375$. |
NTS Abstracts, Spring 2019
Jan 23
Yunqing Tang
Reductions of abelian surfaces over global function fields
For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar.
Jan 24
Hassan, Mao, Smith, and Zhu
The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$
Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3$, and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and } 4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$.
Jan 31
Kyle Pratt
Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions
Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindelöf Hypothesis, and work of Bettin, Chandee, and Radziwiłł. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities.
Feb 7
Shamgar Gurevich
Harmonic Analysis on $GL_n$ over finite fields
Abstract: There are many formulas that express interesting properties of a group $G$ in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$\mathrm{trace}(\rho(g))/\dim(\rho),$$ for an irreducible representation $\rho$ of $G$ and an element $g$ of $G$. For example, Diaconis and Shahshahani stated a formula of this type for analyzing $G$-biinvariant random walks on $G$. It turns out that, for classical groups $G$ over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU).
Feb 14
Tonghai Yang The Lambda invariant and its CM values Abstract: The Lambda invariant, which parametrizes elliptic curves with a marked 2-torsion point ($X_0(2)$), has some interesting properties, some similar to those of the $j$-invariant and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of these properties. This is joint work with Hongbo Yin and Peng Yu. Feb 28
Brian Lawrence Diophantine problems and a p-adic period map. Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh. March 7
Masoud Zargar Sections of quadrics over the affine line Abstract: Suppose we have a quadratic form $Q(x)$ in $d\geq 4$ variables over $F_q[t]$ and $f(t)$ is a polynomial over $F_q$. We consider the affine variety $X$ given by the equation $Q(x)=f(t)$ as a family of varieties over the affine line $\mathbb{A}^1_{F_q}$. Given finitely many closed points in distinct fibers of this family, we ask when there exists a section passing through these points. We study this problem using the circle method over $F_q((1/t))$. Time permitting, I will mention connections to Lubotzky-Phillips-Sarnak (LPS) Ramanujan graphs. Joint with Naser T. Sardari. March 14
Elena Mantovan p-adic automorphic forms, differential operators and Galois representations A strategy pioneered by Serre and Katz in the 1970s yields a construction of p-adic families of modular forms via the study of Serre's weight-raising differential operator Theta. This construction is a key ingredient in Deligne-Serre's theorem associating Galois representations to modular forms of weight 1, and in the study of the weight part of Serre's conjecture. In this talk I will discuss recent progress towards generalizing this theory to automorphic forms on unitary and symplectic Shimura varieties. In particular, I will introduce certain p-adic analogues of Maass-Shimura weight-raising differential operators, and discuss their action on p-adic automorphic forms, and on the associated mod p Galois representations. In contrast with Serre's classical approach, where q-expansions play a prominent role, our approach is geometric in nature and is inspired by earlier work of Katz and Gross.
This talk is based on joint work with Eischen, and also with Fintzen--Varma, and with Flander--Ghitza--McAndrew.
March 28
Adebisi Agboola Relative K-groups and rings of integers Abstract: Suppose that F is a number field and G is a finite group. I shall discuss a conjecture in relative algebraic K-theory (in essence, a conjectural Hasse principle applied to certain relative algebraic K-groups) that implies an affirmative answer to both the inverse Galois problem for F and G and to an analogous problem concerning the Galois module structure of rings of integers in tame extensions of F. It also implies the weak Malle conjecture on counting tame G-extensions of F according to discriminant. The K-theoretic conjecture can be proved in many cases (subject to mild technical conditions), e.g. when G is of odd order, giving a partial analogue of a classical theorem of Shafarevich in this setting. While this approach does not, as yet, resolve any new cases of the inverse Galois problem, it does yield substantial new results concerning both the Galois module structure of rings of integers and the weak Malle conjecture. April 4
Wei-Lun Tsai Hecke L-functions and $\ell$ torsion in class groups Abstract: The canonical Hecke characters in the sense of Rohrlich form a
set of algebraic Hecke characters with important arithmetic properties. In this talk, we will explain how one can prove quantitative nonvanishing results for the central values of their corresponding L-functions using methods of an arithmetic statistical flavor. In particular, the methods used rely crucially on recent work of Ellenberg, Pierce, and Wood concerning bounds for $\ell$-torsion in class groups of number fields. This is joint work with Byoung Du Kim and Riad Masri.
this is a mystery to me: despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into its search window, which returned the same prompt to submit them for publication, appears every time. I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there
my maturity levels are extremely variant in time, that's just way too much rope to give me considering its only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them
but still, the first one from well, almost a decade ago shows up as the default content in the search window
1,2,3,6,11,23,47,106,235
well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope, you can't just let a lunatic like me start inventing terminology as I go
oh well "what would cotton mathers do?" the chat room unanimously ponders lol
i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway?
or at least inform the room as to whom is the big brother doing the censoring? No? just suggestions trying to improve site functionality good sir relax im calm we are all calm
A104101 is a hilarious entry as a side note, I love that Neil had to chime in in the comment section after the big promotional message in the first part to point out the sequence is totally meaningless as far as mathematics is concerned, just to save face for the website's integrity after plugging a TV series with a reference
But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please
very general advice for any number of topics for someone like yourself sir
assuming gender because you should hate text based adam long ago if you were female or etc
if its false then I apologise for the statistical approach to human interaction
So after having found the polynomial $x^6-3x^4+3x^2-3$, we can just apply Eisenstein (at $3$) to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos
So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field?
(I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
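A minimal sketch bearing on the question above: multiplication in $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ depends on the choice of $M(x)$, so the answer is no. The helper below is a hypothetical illustration (not from any library); coefficient lists are lowest-degree first.

```python
def polymul_mod(a, b, M, p):
    """Multiply polynomials a, b over Z/pZ, then reduce modulo the monic M."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    deg = len(M) - 1
    # Clear high-degree terms by subtracting shifted multiples of M.
    for k in range(len(prod) - 1, deg - 1, -1):
        c = prod[k]
        if c:
            for j in range(len(M)):
                prod[k - deg + j] = (prod[k - deg + j] - c * M[j]) % p
    return (prod + [0] * deg)[:deg]

# Same two elements x and x^2 of GF(8), two different monic irreducible
# moduli over GF(2): x * x^2 = x^3 reduces to x + 1 under x^3 + x + 1,
# but to x^2 + 1 under x^3 + x^2 + 1.
```

So the field GF($p^n$) is unique up to isomorphism, but the coordinates of a product depend on which $M(x)$ you picked.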
(which is just the product of the integer and its conjugate)
Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$
You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings
(Plus I'm at work and am pretending I'm doing my job)
Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime, then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is multiplicative); so, for example, $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime, because it is the norm of $1 + 2i$, which is not a unit.
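A quick numerical check of the norm facts above, representing $a+bi$ as the pair `(a, b)` in plain integer arithmetic (helper names are illustrative):

```python
def norm(z):
    """N(a+bi) = (a+bi)(a-bi) = a^2 + b^2, the product with the conjugate."""
    a, b = z
    return a * a + b * b

def gmul(z, w):
    """(a+bi)(c+di) = (ac - bd) + (ad + bc)i."""
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

# N is multiplicative, and 5 = (1+2i)(1-2i): the rational prime 5
# is a norm, hence reducible in Z[i].
```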
@Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha]) = [\mathcal O_K:\Bbb Z[\alpha]]^2\,\Delta(\mathcal O_K)$; I'd suggest you read up on orders, the index of an order, and discriminants of orders if you want to go down that rabbit hole
also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$
this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$
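As a concrete check of the index-discriminant relation $\Delta(\Bbb Z[\alpha]) = [\mathcal O_K:\Bbb Z[\alpha]]^2\,\Delta(\mathcal O_K)$, take $K=\Bbb Q(\sqrt 5)$: the order $\Bbb Z[\sqrt 5]$ has index $2$ in $\mathcal O_K=\Bbb Z[(1+\sqrt 5)/2]$. A minimal sketch using the closed form for the discriminant of a monic quadratic (the helper name is illustrative):

```python
def disc_monic_quadratic(b, c):
    """disc(x^2 + b*x + c) = b^2 - 4c for a monic quadratic."""
    return b * b - 4 * c

d_order = disc_monic_quadratic(0, -5)   # min poly x^2 - 5 of sqrt(5): disc 20
d_max = disc_monic_quadratic(-1, -1)    # min poly x^2 - x - 1 of (1+sqrt5)/2: disc 5
# 20 = 2^2 * 5, matching index [O_K : Z[sqrt(5)]] = 2.
```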
the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$
(just as a quotient of additive groups, that quotient group is finite)
in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ whose square divides the discriminant of $\Bbb Z[\alpha]$, then $\Bbb Z[\alpha]$ is a ring of integers
that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$
there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus)
@MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively.
$\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first:
By Schur-Zassenhaus theorem, $G = P \rtimes G/P$ and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$.
The $E^2$ page is essentially zero except the bottom row since $H^*(G/P; M) = 0$ if $M$ is an $\Bbb F_p$-module by order reasons and the whole bottom row is $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$.
@Secret that's a very lazy habit you should create a chat room for every purpose you can imagine take full advantage of the websites functionality as I do and leave the general purpose room for recommending art related to mathematics
@MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists...
As a result, there does not exist a single group that lives long enough to belong to, and hence one continues to search for new groups and activities
eventually, a social heat death occurs, where no group generates creativity or other activity anymore
I had this kind of thought when I noticed how many forums etc. have a golden age and then die away, and at the more personal level, all the people who first knew me generate a lot of activity, and then are destined to drift away and become distant roughly every 3 years
Well i guess the lesson you need to learn here champ is online interaction isn't something that was inbuilt into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next door neighbour
Or more likely, we will need to start recognising machines as a new species and interact with them accordingly
so covert-operations AI may still exist, even as domestic AIs continue to become widespread
It seems more likely sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces
But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other
that is, until their processing power become so strong that they can outdo human thinking
But I am not worried about that scenario, because if the next step is a sentient AI evolution, then humans would know they have to give way
However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines widespread without really understanding how they work, and they are still not reliable enough given the mistakes still made by them and their human owners
That is, we have become over-reliant on AI, and are not paying enough attention to whether they have interpreted the instructions correctly
That's an extraordinary amount of unreferenced rhetoric statements i could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction
for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise
i feel as if its an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed
Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy.
I was just genuinely curious
How does a message like this come from someone who isn't trolling:
"for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"
Anyway feel free to continue, it just seems strange @Adam
I'm genuinely curious what makes you annoyed or confused. Yes, I was joking in the line that you referenced, but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experience vary from day to day? Mine too! So there may be particular moments where I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree?
So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or alternativity ($(xx)y=x(xy)$ and $(yx)x=y(xx)$). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave, i.e. $a(bc)=(ab)c\iff a=c$.)
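One cheap way to get a feel for such identities is brute force over tiny magmas. A sketch over all 16 binary operations on a 2-element set, testing flexibility and the "strict" flexibility proposed above (function names and the strictness encoding are my own, not standard):

```python
from itertools import product

S = (0, 1)
# All 16 binary operations on {0, 1}, each encoded as a lookup table.
ops = [{(x, y): t[2 * x + y] for x in S for y in S}
       for t in product(S, repeat=4)]

def flexible(op):
    return all(op[op[x, y], x] == op[x, op[y, x]] for x in S for y in S)

def strictly_flexible(op):
    # a(bc) = (ab)c holds exactly when a = c, for all triples.
    return all((op[a, op[b, c]] == op[op[a, b], c]) == (a == c)
               for a in S for b in S for c in S)

# Every commutative operation is flexible: (xy)x = x(xy) = x(yx).
commutative = [op for op in ops
               if all(op[x, y] == op[y, x] for x in S for y in S)]
n_strict = sum(strictly_flexible(op) for op in ops)
```

The same enumeration scales (slowly) to 3-element sets, which is often enough to find separating examples between identities.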
@RyanUnger You're the guy to ask for this sort of thing I think:
If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or there is a smarter way?
I realized today that the possible $x$ inputs to Round(x^(1/2)) cover x^(1/2+epsilon). In other words, we can always find an epsilon (small enough) such that x^(1/2) <> x^(1/2+epsilon) but at the same time Round(x^(1/2)) = Round(x^(1/2+epsilon)). Am I right?
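A quick floating-point check of the claim for one value of $x$ (the caveat: it fails when $\sqrt{x}$ sits exactly on a .5 rounding boundary, where an arbitrarily small perturbation can flip the rounding):

```python
# For fixed x > 1, a small enough eps > 0 makes x**0.5 and
# x**(0.5 + eps) distinct floats that still round to the same integer.
x = 10.0
eps = 1e-9
a = x ** 0.5          # about 3.16227766
b = x ** (0.5 + eps)  # larger by roughly 7e-9, far above one ulp
```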
We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method.
How can we show that the method is implicit? Do we have to try to solve $y^{n+2}$ as a function of $y^{n+1}$ ?
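Yes: the method is implicit because $f^{n+2} = f(t_{n+2}, y^{n+2})$ involves the unknown $y^{n+2}$, so each step requires solving a (generally nonlinear) equation for $y^{n+2}$. A minimal sketch on the assumed test problem $y'=-y$, where the implicit equation happens to be linear, so the fixed-point solve can be checked against the closed-form one:

```python
import math

def simpson_step(f, y0, y1, h, iters=50):
    """Solve y2 = y0 + (h/3)*(f(y2) + 4*f(y1) + f(y0)) by fixed-point iteration."""
    y2 = y1  # initial guess; iteration contracts with factor ~ h/3
    for _ in range(iters):
        y2 = y0 + (h / 3.0) * (f(y2) + 4.0 * f(y1) + f(y0))
    return y2

f = lambda y: -y
h, y0 = 0.1, 1.0
y1 = math.exp(-h)  # seed the two-step method with the exact value

y2 = simpson_step(f, y0, y1, h)
# For f(y) = -y the implicit equation is linear and solvable directly:
# y2 * (1 + h/3) = y0 * (1 - h/3) - (4h/3) * y1
y2_exact = (y0 * (1 - h / 3) - (4 * h / 3) * y1) / (1 + h / 3)
```

For stiff or nonlinear $f$ one would use Newton's method instead of fixed-point iteration, but the structural point is the same: $y^{n+2}$ appears on both sides.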
@anakhro the energy of a graph is something studied in spectral graph theory. You set up an adjacency matrix for the graph, find the eigenvalues of that matrix, and then sum the absolute values of the eigenvalues. For simple graphs, the energy of the graph is defined by this sum of absolute values of the eigenvalues.
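The definition just described fits in a few lines of NumPy; the example graph (the 4-cycle, with eigenvalues $2, 0, 0, -2$ and hence energy $4$) is my own choice for illustration:

```python
import numpy as np

def graph_energy(adj):
    """Energy of a simple graph: sum of |eigenvalues| of its adjacency matrix."""
    eigvals = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return float(np.sum(np.abs(eigvals)))

# The 4-cycle C4: eigenvalues 2, 0, 0, -2, so energy 4.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
```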
Express Paper, Open Access. Multibody motion segmentation for an arbitrary number of independent motions. IPSJ Transactions on Computer Vision and Applications, volume 8, Article number 1 (2016).
Abstract
We propose a new method for segmenting feature point trajectories tracked through a video sequence without assuming the number of independent motions. Our method realizes motion segmentation of feature point trajectories by hierarchically separating the trajectories into two affine spaces when the number of independently moving objects is unknown. We judge whether input trajectories should be separated by comparing the likelihoods computed from those trajectories before/after separation. We also consider integration of the resulting separated trajectories to avoid over-segmentation. Using real video images, we confirmed the effectiveness of our proposed method.
Introduction
Separating independently moving objects in a video sequence is one of the important tasks in computer vision applications. Costeira and Kanade [1] proposed a segmentation algorithm based on the shape interaction matrix. Sugaya and Kanatani [3] proposed a multi-stage learning strategy using multiple models. Yan and Pollefeys [8] proposed a new local subspace fitting scheme. Vidal et al. [7] proposed a segmentation algorithm based on generalized principal component analysis (GPCA) [6]. By introducing GPCA for computing an initial segmentation, Sugaya and Kanatani [4] improved the multi-stage learning.
However, all these methods assume that the number of moving objects is known. Kanatani and Matsunaga [2] proposed a method for estimating the number of independently moving objects based on rank estimation of the affine space using the geometric minimum description length (MDL). However, estimating the number of independently moving objects based on the rank of the affine space is very difficult for real image sequences. For example, if an object's motion is planar, the affine space containing its trajectories degenerates from 3-D to 2-D. Moreover, if two objects merely translate without rotation, their two 2-D affine spaces are parallel to each other, which means that a single 3-D affine space containing both 2-D affine spaces exists.
For this problem, we propose a new method for segmenting feature point trajectories without assuming the number of objects. Based on the fact that the trajectories of a rigidly moving object are constrained to a 2-D or 3-D affine space, we hierarchically separate the input trajectories into two affine spaces until all the trajectories are divided into 2-D or 3-D affine spaces. To judge whether input trajectories should be divided, we compare the likelihoods before/after separation. After the separation process, we also check whether the separated trajectories should be integrated, by comparing the likelihoods, to avoid that trajectories which belong to the same object are separated into different groups.
Proposed method
From the fact that the trajectories of a rigidly moving object are constrained to a 2-D or 3-D affine space, we can separate independently moving objects by hierarchically separating the input trajectories into two affine spaces until all the trajectories are divided into 2-D or 3-D affine spaces. To realize this separation, we need to overcome two problems. One problem is to properly estimate the dimension of the affine space which includes the input trajectories. The other is to judge whether input trajectories should be divided, in order to stop the hierarchical separation.
For the first problem, we can use the rank of the moment matrix of the input trajectory vectors. The rank of the moment matrix can be obtained as the number of positive eigenvalues of the matrix. However, in the presence of noise, all eigenvalues are non-zero in general. Hence, we need to truncate small eigenvalues, but it is difficult to determine a proper threshold. We therefore compute the rank of the moment matrix by using the geometric MDL [2].
For the second problem, we compare the average likelihoods of the trajectories for the affine spaces fitted to all the trajectories and to the divided ones. We compute the average likelihoods before/after division and divide the trajectories if the likelihood after division is larger than that before division.
We summarize the algorithm of our proposed method as follows:

1. Fit an affine space to the input trajectories, and compute its dimension $d$ by using the geometric MDL.
2. If $d\leq 2$, then we stop the division process for the target trajectories.
3. Divide the trajectories into two affine spaces:
   (a) Convert the trajectory vectors into 3-D vectors (please refer to [4] for the detailed computation).
   (b) Fit two planes to those 3-D vectors by the Taubin method.
   (c) Convert the trajectory vectors into $d$-D vectors.
   (d) Separate the trajectories into two classes by the EM algorithm [3, 4].
4. Compute the average likelihoods $P$ and $P'$ of the trajectories before/after separation; accept the separation, and go to step 1, if the following inequality is satisfied:
$$ \lfloor{\log_{10}P'}\rfloor-\lfloor{\log_{10}P}\rfloor > 0, $$(1)
where $\lfloor\cdot\rfloor$ is the floor function. In our experience, comparing the average likelihoods directly made the separation judgement unstable; thus, we compare the exponent parts of the average likelihoods.
5. Otherwise, reject the separation.

We hierarchically iterate the above procedure until no further separation of the input trajectories is accepted.
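The acceptance test of Eq. (1) compares only the exponents of the average likelihoods. A minimal sketch, using the likelihood values reported in the first experiment of this paper ($5.63\times10^{-7}$ before separation, $8.11\times10^{-5}$ after):

```python
import math

def accept_separation(P, P_prime):
    """Eq. (1): accept the split iff the exponent (floor of log10)
    of the average likelihood strictly increases."""
    return math.floor(math.log10(P_prime)) - math.floor(math.log10(P)) > 0

# floor(log10(8.11e-5)) = -5 and floor(log10(5.63e-7)) = -7,
# so the difference is 2 > 0 and the separation is accepted.
```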
Rank estimation of the affine space
We compute the eigenvalues of the moment matrix of the $2M$-D trajectory vectors $\boldsymbol{p}_\alpha$, $\alpha=1,\ldots,N$, and estimate its rank by using the geometric MDL, where $M$ is the number of image frames.

1. Define the $2M\times 2M$ moment matrix $\boldsymbol{M}$ by
$$ \boldsymbol{M} = \sum_{\alpha=1}^{N}(\boldsymbol{p}_{\alpha} - \boldsymbol{p}_{C})(\boldsymbol{p}_{\alpha} - \boldsymbol{p}_{C})^{\top}, \quad \boldsymbol{p}_{C} = \frac{1}{N}\sum_{\alpha=1}^{N}\boldsymbol{p}_{\alpha}. $$(2)
2. Compute the eigenvalues of $\boldsymbol{M}$, and let $\lambda_1\geq\cdots\geq\lambda_{2M}$ be the sorted eigenvalues.
3. Compute the residuals $J_r$, $r=2,\ldots,2M$, for the fitted $r$-D affine space by
$$ J_{r} = \sum_{\beta=r+1}^{2M}\lambda_{\beta}. $$(3)
4. Compute the geometric MDL [2] for each rank by
$$ \text{G-MDL}(r) = J_{r} - \Bigl(rN + (r + 1)(2M - r)\Bigr)\epsilon^{2} \log\Bigl(\frac{\epsilon}{L}\Bigr)^{2}, $$(4)
where $\epsilon$ is the standard deviation of the feature point tracking accuracy, which we call the noise level, and $L$ is a reference length, for which we can use an arbitrary value whose order is approximately the same as the data, say the image size.
5. Estimate the rank $\hat{r}$ of the affine space which includes the input trajectories as
$$ \hat{r} = \arg\min_{r} \text{G-MDL}(r). $$(5)
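The rank-estimation steps above can be sketched numerically as follows. The function name and the values of $\epsilon$ and $L$ are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def estimate_rank(P, eps=0.5, L=600.0):
    """P: N x 2M array, one 2M-D trajectory vector per row.
    Returns the rank r minimizing G-MDL(r), Eqs. (2)-(5)."""
    N, twoM = P.shape
    pc = P.mean(axis=0)
    Mmat = (P - pc).T @ (P - pc)                   # moment matrix, Eq. (2)
    lam = np.sort(np.linalg.eigvalsh(Mmat))[::-1]  # eigenvalues, descending
    best_r, best_val = None, np.inf
    for r in range(2, twoM):
        J_r = lam[r:].sum()                        # residual, Eq. (3)
        gmdl = J_r - (r * N + (r + 1) * (twoM - r)) * eps**2 \
               * np.log((eps / L) ** 2)            # Eq. (4)
        if gmdl < best_val:
            best_r, best_val = r, gmdl
    return best_r
```

On synthetic trajectories lying near a 3-D affine subspace plus tracking noise, the minimizer recovers rank 3 without any hand-tuned eigenvalue threshold.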
Separation of the trajectories
We separate the input trajectories by using the EM algorithm of Sugaya and Kanatani [3, 4], which also yields the likelihood $P(\alpha|k)$ of the $\alpha$-th point for the fitted affine space $k$ in the separation process. The likelihood $P(\alpha|k)$ is computed from $\boldsymbol{p}_{C}^{(k)}$ and $\boldsymbol{V}^{(k)}$, the centroid and the covariance matrix of class $k$, respectively. From the likelihoods $P(\alpha)$ and $P'(\alpha)$ of each $\boldsymbol{p}_\alpha$, we compute the average likelihoods $P$ and $P'$.

Integration of the separated trajectories

After separation, we check whether the separated trajectories should be integrated by comparing the average likelihoods, to avoid that trajectories which belong to the same object are separated into different groups. For all pairs of separated trajectory groups, we integrate two groups if the average likelihood $Q$ before integration and $Q'$ after integration satisfy the corresponding inequality (analogous to Eq. (1)).

Real image experiments

Separation process
Figure 1 shows a separation process of our method. Figure 1 a shows five decimated images of the input video sequence. The red points are the feature points tracked by the KLT tracker. We explain this separation process with the tree representation of our separation in Fig. 2.
First, we estimated the dimension of the affine space which includes all the input trajectories and found that its dimension was 4. Since the resulting dimension was larger than 2, we separated those input trajectories. We show the separation result in Fig. 1 b. For this result, we computed the average likelihoods for the affine spaces before/after separation and obtained $5.63\times10^{-7}$ and $8.11\times10^{-5}$, respectively. Since these values satisfied the inequality in Eq. (1), we accepted the separation.
In the second stage, we estimated the dimensions of the affine spaces for the separated trajectories shown in Fig. 1 b as green and blue points. The resulting dimension of the blue points was 2, so we stopped separating them. Since the estimated dimension of the green points was 3, we separated those trajectories; the result is shown in Fig. 1 c. For this separation, the computed average likelihoods satisfied Eq. (1), and we accepted the separation. In the third stage, we estimated the dimensions of the affine spaces for the separated trajectories shown in Fig. 1 c as red and green points. The resulting dimension of the red points was 2, so we stopped separating them. The estimated dimension of the green points was 3, but the average likelihoods before/after separation did not satisfy Eq. (1); hence, we stopped the separation. We show the final separation result in Fig. 1 d. In this sequence, the number of independently moving objects, including the background, is 3, and our method correctly separates the input trajectories into three groups.
Integrating process
We show another result in Fig. 4. Figure 4 a shows five decimated images of the input video sequence. We also show the separation process in Fig. 3, with the results of each separation step in Fig. 4 b-f. Figure 4 g is the final separation result. From this result, we can see that mis-separation exists in Fig. 4 e: points which belong to the same object are separated into three groups, namely the blue, green, and orange points in Fig. 4 g.
For all the pairs of the separated groups, we computed the average likelihoods before/after integration and check whether the separated groups should be integrated or not. Table 1 shows the computed average likelihoods. From this, the blue, green, and orange points are integrated. Figure 4 g shows the integrated result.
Accuracy comparison
We compared the accuracy of our method with RANSAC, the LSA of Yan and Pollefeys [8], and the GPCA of Vidal et al. [7]. We used the T-Hopkins database [5] and our original data, and computed the separation accuracy of each method. Table 2 shows the result. The accuracy is computed as (number of correctly separated points)/(number of input points). As we can see, our method outperforms the compared existing methods.
Conclusions
In this paper, we proposed a new method for segmenting feature point trajectories without assuming the number of objects. Based on the fact that the trajectories of a rigidly moving object are constrained to a 2-D or 3-D affine space, we hierarchically separate the input trajectories into two affine spaces until all the trajectories are divided into 2-D or 3-D affine spaces. To judge whether input trajectories should be divided, we compare the likelihoods before/after separation. After the separation process, we also check whether the separated trajectories should be integrated by comparing the likelihoods, to avoid that trajectories belonging to the same object are separated into different groups.
Using real video sequences, we verified the separation and integration processes of our method and confirmed its accuracy by comparison with existing methods.
References

1. Costeira JP, Kanade T (1998) A multibody factorization method for independently moving objects. Int J Comput Vision 29(3): 159-179.
2. Kanatani K, Matsunaga C (2002) Estimating the number of independent motions for multibody motion segmentation. In: Proc. of the 5th Asian Conference on Computer Vision (ACCV2002), 7-12, Melbourne, Australia.
3. Sugaya Y, Kanatani K (2004) Multi-stage unsupervised learning for multi-body motion segmentation. IEICE Trans Inform Syst E87-D(7): 1935-1942.
4. Sugaya Y, Kanatani K (2010) Improved multistage learning for multibody motion segmentation. In: Proc. of International Conference on Computer Vision Theory and Applications (VISAPP2010), 199-206, Angers, France.
5. Sugaya Y, Kanatani K (2013) Removing mistracking of multibody motion video database Hopkins155. In: Proc. of the 24th British Machine Vision Conference (BMVC2013), Bristol, U.K.
6. Vidal R, Ma Y, Sastry S (2005) Generalized principal component analysis (GPCA). IEEE Trans Pattern Anal Mach Intell 27(12): 1945-1959.
7. Vidal R, Tron R, Hartley R (2008) Multiframe motion segmentation with missing data using PowerFactorization and GPCA. Int J Comput Vis 79(1): 85-105.
8. Yan J, Pollefeys M (2006) A general framework for motion segmentation: independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In: Proc. of the 9th European Conference on Computer Vision (ECCV2006), 94-106, Graz, Austria.
Additional information Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
YSa introduced the hierarchical separation of the feature point trajectories, carried out all the experiments, and wrote the manuscript. YSu introduced the rank estimation of the affine space using the geometric MDL and the separation judgement using likelihoods of the fitted affine spaces, and revised the manuscript written by YSa. Both authors read and approved the final manuscript.
this is a mystery to me, despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into it's search window which returned the same prompt to submit them for publication appear every time, I mean ive got hundreds of them now, and it's still far too much rope to give a person like me sitting along in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there
my maturity levels are extremely variant in time, that's just way too much rope to give me considering its only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them
but still, the first one from well, almost a decade ago shows up as the default content in the search window
1,2,3,6,11,23,47,106,235
well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope you cant just let a lunatic like me start inventing terminology as I go
oh well "what would cotton mathers do?" the chat room unanimously ponders lol
I see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, and even then it has its value, since in a civil society it will be ridiculed anyway?
or at least inform the room as to who is the big brother doing the censoring? No? just suggestions trying to improve site functionality good sir, relax, I'm calm, we are all calm
A104101 is a hilarious entry as a side note, I love that Neil had to chime in in the comment section, after the big promotional message in the first part, to point out that the sequence is totally meaningless as far as mathematics is concerned, just to save face for the website's integrity after plugging a TV series with a reference
But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please
very general advice for any number of topics for someone like yourself sir
assuming gender because you should hate text based adam long ago if you were female or etc
if it's false then I apologise for the statistical approach to human interaction
So after having found the polynomial $x^6-3x^4+3x^2-3$ we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos
So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field?
(I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
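For what it's worth, the answer appears to be no: a quick experiment (my own sketch, with hand-rolled polynomial arithmetic rather than any library) shows the same two representatives multiplying to different results under the two irreducible cubics over $\mathbb{Z}/2\mathbb{Z}$:

```python
def polymul_mod(a, b, m, p):
    """Multiply polynomials a and b over Z/pZ (coefficient lists,
    lowest degree first) and reduce modulo the monic polynomial m."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    deg_m = len(m) - 1
    while len(prod) > deg_m:
        lead = prod.pop()              # m is monic: no inversion needed
        if lead:
            for k in range(deg_m):
                prod[-deg_m + k] = (prod[-deg_m + k] - lead * m[k]) % p
    return prod

# GF(8) = F_2[x]/(M(x)) for either irreducible cubic M over F_2;
# the product x^2 * x depends on which modulus we pick.
x1 = [0, 1]        # x
x2 = [0, 0, 1]     # x^2
m1 = [1, 1, 0, 1]  # x^3 + x + 1
m2 = [1, 0, 1, 1]  # x^3 + x^2 + 1
print(polymul_mod(x2, x1, m1, 2))  # x^3 = x + 1   -> [1, 1, 0]
print(polymul_mod(x2, x1, m2, 2))  # x^3 = x^2 + 1 -> [1, 0, 1]
```

Of course, the two quotient fields are isomorphic; the point is only that the product of two coset representatives is not determined until $M(x)$ is fixed.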
(which is just the product of the integer and its conjugate)
Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$
You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings
(Plus I'm at work and am pretending I'm doing my job)
Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit.
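A quick numerical illustration of those two facts (a sketch of my own, with Python's built-in complex numbers standing in for $\Bbb Z[i]$):

```python
def norm(z):
    """Field norm on Z[i]: N(a+bi) = a^2 + b^2, i.e. z times its conjugate."""
    return int((z * z.conjugate()).real)

# N is totally multiplicative: N(ab) = N(a) N(b).
a, b = 3 + 2j, 1 + 2j
print(norm(a * b) == norm(a) * norm(b))  # -> True

# 5 is prime in Z, but in Z[i] it factors as (1+2i)(1-2i);
# since N(1+2i) = 5 is a rational prime, 1+2i is a Gaussian prime.
print((1 + 2j) * (1 - 2j) == 5)  # -> True
```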
@Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha]) = \Delta(\mathcal O_K)\,[\mathcal O_K:\Bbb Z[\alpha]]^2$, I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go into that rabbit hole
also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$
this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$
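As a concrete illustration of that argument (my own worked example, using the discriminant formula above), take $\alpha=\sqrt{2}$:

```latex
% Worked example for K = Q(sqrt(2)), alpha = sqrt(2):
\Delta(\Bbb Z[\alpha]) = \operatorname{disc}(x^2-2) = 8
  = \Delta(\mathcal O_K)\,[\mathcal O_K : \Bbb Z[\alpha]]^2,
% so the index squared divides 8, forcing the index to be 1 or 2;
% but x^2 - 2 is 2-Eisenstein, so 2 cannot divide the index, hence
% [\mathcal O_K : \Bbb Z[\alpha]] = 1 and \mathcal O_K = \Bbb Z[\sqrt{2}].
```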
the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$
(just as a quotient of additive groups, that quotient group is finite)
in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ that divides the discriminant of $\Bbb Z[\alpha]$ at least twice, then $\Bbb Z[\alpha]$ is a ring of integers
that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$
there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus)
@MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively.
$\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first:
By Schur-Zassenhaus theorem, $G = P \rtimes G/P$ and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$.
The $E^2$ page is essentially zero except the bottom row since $H^*(G/P; M) = 0$ if $M$ is an $\Bbb F_p$-module by order reasons and the whole bottom row is $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$.
@Secret that's a very lazy habit, you should create a chat room for every purpose you can imagine and take full advantage of the website's functionality as I do, and leave the general-purpose room for recommending art related to mathematics
@MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists...
As a result, there does not exist a single group which lives long enough to belong to, and hence one continues to search for new groups and activities
eventually, a social heat death occurs, where no groups generate creativity or other activity anymore
Had this kind of thought when I noticed how many forums etc. have a golden age and then die away, and at the more personal level, all people who first knew me generate a lot of activity, and then are destined to die away and grow distant roughly every 3 years
Well I guess the lesson you need to learn here, champ, is that online interaction isn't something that was built into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next-door neighbour
Or more likely, we will need to start recognising machines as a new species and interact with them accordingly
so covert-operations AI may still exist, even as domestic AIs continue to become widespread
It seems more likely sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces
But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other
that is, until their processing power becomes so strong that they can outdo human thinking
But, I am not worried about that scenario, because if the next step is a sentient AI evolution, then humans would know they will have to give way
However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough given the mistakes still made by them and their human owners
That is, we have become over-reliant on AI, and are not putting enough attention on whether they have interpreted the instructions correctly
That's an extraordinary amount of unreferenced rhetorical statements I could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction
for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise
I feel as if it's an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed
Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy.
I was just genuinely curious
How does a message like this come from someone who isn't trolling:
"for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"
Anyway feel free to continue, it just seems strange @Adam
I'm genuinely curious what makes you annoyed or confused. Yes, I was joking in the line that you referenced, but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experience vary from day to day? Mine too! So there may be particular moments where I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree?
So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or alternativity ($(xx)y=x(xy)$, iirc). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave, i.e. $a(bc)=(ab)c\iff a=c$.)
@RyanUnger You're the guy to ask for this sort of thing I think:
If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or is there a smarter way?
I realized today that the possible x inputs to Round(x^(1/2)) cover x^(1/2+epsilon). In other words, we can always find an epsilon (small enough) such that x^(1/2) ≠ x^(1/2+epsilon), but at the same time have Round(x^(1/2)) = Round(x^(1/2+epsilon)). Am I right?
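The claim is easy to probe numerically (a quick sketch of my own; `same_rounding` is just a helper name):

```python
def same_rounding(x, eps=1e-9):
    """Check that x**0.5 and x**(0.5+eps) differ as floats
    but round to the same integer."""
    a = x ** 0.5
    b = x ** (0.5 + eps)
    return a != b and round(a) == round(b)

# For a fixed x whose square root is not right at a rounding
# boundary, a small enough eps always works:
print(all(same_rounding(x) for x in [2.0, 10.0, 123.0, 4567.0]))  # -> True
```

The caveat is the boundary case: if x^(1/2) sits exactly at a half-integer, an arbitrarily small positive epsilon can push the rounded value up by one.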
We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method.
How can we show that the method is implicit? Do we have to try to solve for $y^{n+2}$ as a function of $y^{n+1}$? (Note that $f^{n+2}=f(t^{n+2},y^{n+2})$ depends on the unknown $y^{n+2}$.)
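To see the implicitness concretely, here is a sketch (my own, not from the problem set) of one step of the method: the unknown $y^{n+2}$ appears on both sides through $f^{n+2}$, so each step solves a generally nonlinear equation, here by fixed-point iteration.

```python
import math

def simpson_step(f, t, y0, y1, h, iters=50):
    """One step of the implicit Simpson method: solve
    y2 = y0 + h/3 * (f(t+2h, y2) + 4 f(t+h, y1) + f(t, y0))
    for y2 by fixed-point iteration (contractive for small h)."""
    y2 = y1  # initial guess
    for _ in range(iters):
        y2 = y0 + h / 3 * (f(t + 2 * h, y2) + 4 * f(t + h, y1) + f(t, y0))
    return y2

# Test problem y' = -y, y(0) = 1, exact solution exp(-t).
h = 0.1
y0, y1 = 1.0, math.exp(-h)          # seed the two starting values
y2 = simpson_step(lambda t, y: -y, 0.0, y0, y1, h)
print(abs(y2 - math.exp(-2 * h)) < 1e-6)  # -> True
```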
@anakhro an energy function of a graph is something studied in spectral graph theory. You set up an adjacency matrix for the graph, find the corresponding eigenvalues of the matrix and then sum the absolute values of the eigenvalues. The energy function of the graph is defined for simple graphs by this summation of the absolute values of the eigenvalues |
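That recipe is short enough to spell out (a minimal sketch assuming NumPy; `graph_energy` is my own name):

```python
import numpy as np

def graph_energy(adj):
    """Energy of a simple graph: the sum of the absolute values of the
    adjacency-matrix eigenvalues (real, since the matrix is symmetric)."""
    return float(np.abs(np.linalg.eigvalsh(np.asarray(adj, float))).sum())

# Triangle K_3: eigenvalues 2, -1, -1, so the energy is 4.
k3 = [[0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
print(round(graph_energy(k3), 6))  # -> 4.0
```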
Objective: I am trying to simulate the following advection-diffusion-reaction equation in 2D space (x,y) and time. $$\begin{align}\text{ADR Equation: }\frac{\partial C}{\partial t} + \nabla\cdot\left(vC - D\nabla{C} \right)= \alpha C\end{align}$$ I discretized the above ADR equation in 2D using a finite-difference implicit scheme, and as a result I get the following discretized equation.
$$\begin{align} p_1C^{n+1}_{i,j-1}+p_2C^{n+1}_{i-1,j}+p_3C^{n+1}_{i,j}+p_4C^{n+1}_{i+1,j}+p_5C^{n+1}_{i,j+1} = C^{n}_{i,j} \end{align}$$ where, $p_1, p_2, p_3, p_4, p_5$ are constants in time.
I want to solve this as a system of equations using $A^{n+1}.C^{n+1}=C^{n}$, with no-flow i.e. $C=0$ outside the boundary domain. Here, $A^{n+1}$ would be a penta-diagonal, symmetric (not sure about this) and a diagonally dominant matrix. I have derived matrix $A$ for $2\times2$, $3\times3$ and $4\times4$ systems. For example, below you can see matrix $A$ for $3\times3$ and $4\times4$ systems, respectively.
Issue: I am not sure if the form of matrix $A$ I have derived is correct, because as per my understanding it should be symmetric; however, it's not as per my derivations. Owing to the unsymmetric form of the matrix $A$, I need help to efficiently form $A$ for an $N\times N$ system.
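One way the assembly could be sketched for an $N\times N$ grid (assuming SciPy; the row-major ordering $k = iN + j$ and the boundary handling are my own choices, not from the question):

```python
import numpy as np
import scipy.sparse as sp

def assemble_A(N, p1, p2, p3, p4, p5):
    """Pentadiagonal matrix for the stencil
    p1*C[i,j-1] + p2*C[i-1,j] + p3*C[i,j] + p4*C[i+1,j] + p5*C[i,j+1]
    with row-major ordering k = i*N + j and C = 0 outside the domain
    (couplings across the boundary are simply dropped)."""
    n = N * N
    main  = np.full(n, p3, dtype=float)
    west  = np.full(n - 1, p1, dtype=float)   # offset -1: C[i,j-1]
    east  = np.full(n - 1, p5, dtype=float)   # offset +1: C[i,j+1]
    north = np.full(n - N, p2, dtype=float)   # offset -N: C[i-1,j]
    south = np.full(n - N, p4, dtype=float)   # offset +N: C[i+1,j]
    # zero the +-1 couplings that would wrap around a grid row
    west[N - 1::N] = 0.0
    east[N - 1::N] = 0.0
    return sp.diags([north, west, main, east, south],
                    offsets=[-N, -1, 0, 1, N], format="csr")

A = assemble_A(3, -1.0, -1.0, 5.0, -1.0, -1.0)
print(A.shape, A[2, 3])  # no coupling between cells (0,2) and (1,0)
```

Note that in this sketch $A$ is symmetric exactly when $p_1=p_5$ and $p_2=p_4$; once the advection term makes those coefficient pairs unequal, an unsymmetric $A$ is plausible rather than necessarily a derivation error.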
Would appreciate if someone could use their awesome numerical skills to answer these issues. |
Let me introduce to you the topic of modal model theory, injecting some ideas from modal logic into the traditional subject of model theory in mathematical logic.
For example, we may consider the class of all models of some first-order theory, such as the class of all graphs, or the class of all groups, or all fields or what have you. In general, we have $\newcommand\Mod{\text{Mod}}\Mod(T)$, where $T$ is a first-order theory in some language $L$.
We may consider $\Mod(T)$ as a potentialist system, a Kripke model of possible worlds, where each model accesses the larger models, of which it is a submodel. So $\newcommand\possible{\Diamond}\possible\varphi$ is true at a model $M$, if there is a larger model $N$ in which $\varphi$ holds, and $\newcommand\necessary{\Box}\necessary\varphi$ is true at $M$, if $\varphi$ holds in all larger models.
In this way, we enlarge the language $L$ to include these modal operators. Let $\possible(L)$ be the language obtained by closing $L$ under the modal operators and Boolean connectives; and let $L^\possible$ also close under quantification. The difference is whether a modal operator falls under the scope of a quantifier.
Recently, in a collaborative project with Wojciech Aleksander Wołoszyn, we made some progress, which I’d like to explain. (We also have many further results, concerning the potentialist validities of various natural instances of $\Mod(T)$, but those will wait for another post.)
Theorem. If models $M$ and $N$ are elementarily equivalent, that is, if they have the same theory in the language of $L$, then they also have the same theory in the modal language $\possible(L)$. Proof. We show that whenever $M\equiv N$ in the language of $L$, then $M\models\varphi\iff N\models\varphi$ for sentences $\varphi$ in the modal language $\possible(L)$, by induction on $\varphi$.
Of course, by assumption the statement is true for sentences $\varphi$ in the base language $L$. And the property is clearly preserved by Boolean combinations. What remains is the modal case. Suppose that $M\equiv N$ and $M\models\possible\varphi$. So there is some extension model $M\subset W\models\varphi$.
Since $M\equiv N$, it follows by the Keisler-Shelah theorem that $M$ and $N$ have isomorphic ultrapowers $\prod_\mu M\cong\prod_\mu N$, for some ultrafilter $\mu$. It is easy to see that isomorphic structures satisfy exactly the same modal assertions in the class of all models of a theory. Since $M\subset W$, it follows that the ultrapower of $M$ is extended to (a copy of) the ultrapower of $W$, and so $\prod_\mu M\models\possible\varphi$, and therefore also $\prod_\mu N\models\possible\varphi$. From this, since $N$ embeds into its ultrapower $\prod_\mu N$, it follows also that $N\models\possible\varphi$, as desired. $\Box$
Corollary. If one model elementarily embeds into another $M\prec N$, in the language $L$ of these structures, then this embedding is also elementary in the language $\possible(L)$. Proof. To say $M\prec N$ in language $L$ is the same as saying that $M\equiv N$ in the language $L_M$, where we have added constants for every element of $M$, and interpreted these constants in $N$ via the embedding. Thus, by the theorem, it follows that $M\equiv N$ in the language $\possible(L_M)$, as desired. $\Box$
For example, every model $M$ is elementarily embedded into its ultrapowers $\prod_\mu M$, in the language $\possible(L)$.
We’d like to point out next that these results do not extend to elementary equivalence in the full modal language $L^\possible$.
For a counterexample, let’s work in the class of all simple graphs, in the language with a binary predicate for the edge relation. (We’ll have no parallel edges, and no self-edges.) So the accessibility relation here is the induced subgraph relation.
Lemma. The 2-colorability of a graph is expressible in $\possible(L)$. Similarly for $k$-colorability for any finite $k$. Proof. A graph is 2-colorable if we can partition its vertices into two sets, such that a vertex is in one set if and only if all its neighbors are in the other set. This can be effectively coded by adding two new vertices, call them red and blue, such that every node (other than red and blue) is connected to exactly one of these two points, and a vertex is connected to red if and only if all its neighbors are connected to blue, and vice versa. If the graph is $2$-colorable, then there is an extension realizing this statement, and if there is an extension realizing the statement, then (even if more than two points were added) the original graph must be $2$-colorable. $\Box$
A slightly more refined observation is that for any vertex $x$ in a graph, we can express the assertion, “the component of $x$ is $2$-colorable” by a formula in the language $\possible(L)$. We simply make the same kind of assertion, but drop the requirement that every node gets a color, and insist only that $x$ gets a color and the coloring extends from a node to any neighbor of the node, thereby ensuring the full connected component will be colored.
Theorem. There are two graphs that are elementarily equivalent in the language $L$ of graph theory, and hence also in the language $\possible(L)$, but they are not elementarily equivalent in the full modal language $L^\possible$. Proof. Let $M$ be a graph consisting of disjoint copies of a 3-cycle, a 5-cycle, a 7-cycle, and so on, with one copy of every odd-length cycle. Let $M^*$ be an ultrapower of $M$ by a nonprincipal ultrafilter.
Thus, $M^*$ will continue to have one 3-cycle, one 5-cycle, one 7-cycle and so on, for all the finite odd-length cycles, but then $M^*$ will have what it thinks are non-standard odd-length cycles, except that it cannot formulate the concept of “odd”. What it actually has are a bunch of $\mathbb{Z}$-chains.
In particular, $M^*$ thinks that there is an $x$ whose component is $2$-colorable, since a $\mathbb{Z}$-chain is $2$-colorable.
But $M$ does not think that there is an $x$ whose component is $2$-colorable, because an odd-length finite cycle is not $2$-colorable. $\Box$
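The combinatorial fact doing the work here can be checked directly (a sketch of my own: a plain BFS bipartiteness test, with even cycles as a stand-in for the $2$-colorable $\mathbb{Z}$-chains):

```python
from collections import deque

def two_colorable(adj):
    """BFS 2-coloring check for an undirected graph given as an
    adjacency dict {vertex: set of neighbours}."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False        # odd cycle found
    return True

def cycle(n):
    """The n-cycle as an adjacency dict."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

print(two_colorable(cycle(5)))   # odd cycle  -> False
print(two_colorable(cycle(6)))   # even cycle -> True
```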
Since we used an ultrapower, the same example also shows that the corollary above does not generalize to the full modal language. That is, we have $M$ embedding elementarily into its ultrapower $M^*$, but it is not elementary in the language $L^\possible$.
Let us finally notice that the Łoś theorem for ultraproducts fails even in the weaker modal language $\possible(L)$.
Theorem. There are models $M_i$ for $i\in\mathbb{N}$ and a sentence $\varphi$ in the language of these models, such that every nonprincipal ultraproduct $\prod_\mu M_i$ satisfies $\possible\varphi$, but no $M_i$ satisfies $\possible\varphi$. Proof. In the class of all graphs, using the language of graph theory, let the $M_i$ be all the odd-length cycles. The ultraproduct $\prod_\mu M_i$ consists entirely of $\mathbb{Z}$-chains. In particular, the ultraproduct graph is $2$-colorable, but none of the $M_i$ are $2$-colorable. $\Box$ |
Suppose $f$ is a real-valued function of a complex variable that is differentiable at every $z \in \mathbb{C}$. Show that $f'(z)=0$ for all $z \in \mathbb{C}.$
My approach: Since $f$ is a real-valued function of a complex variable that is differentiable at all $z \in \mathbb{C}$, we can write: $$f'(z)=\lim_{\lambda\to0} \frac{f(z+\lambda)-f(z)}{\lambda}= L$$ where $L$ is a real number (depending on $z$).
If this is true, then $f(z+\lambda) - f(z) = \lambda L + g(\lambda)$ such that $\lim_{\lambda\to0} \frac{g(\lambda)}{\lambda}=0$, where $g(\lambda)$ is a real valued function. However, if L is real, that implies $\lambda L$ is complex and arises from the subtraction of two real-valued functions. Since this is not possible, $\lambda L$ has to be real, which implies $L = 0$ or $L=c\bar{\lambda}$, where $c$ is some constant. But since L is real, L cannot be $\bar{\lambda}$. Hence, L has to be zero. This implies $f'(z) = 0$.
Is this proof correct? If not, how should I correct it? |
One should not confuse function problems like FNP and FP with the different types of functions computable by deterministic, non-deterministic, alternating, probabilistic, or ... Turing machines.
Instead of starting by looking at functions $f:\Sigma^* \to \Sigma^*$, it is easier to start by looking at functions $f:\Sigma^* \to \mathcal{P}(\Sigma^*)$. A function $f:X\to\mathcal{P}(Y)$ is equivalent to a subset $R\subset X\times Y$ of the cartesian product of $X$ and $Y$, i.e. to a binary relation.
Since we can assume that the decision problem is already defined, using the decision problem for $R$ gives one reasonable definition for the computable functions (of this form). However, that definition would force the different Turing machines to compute each output bit separately and independently, which incurs a small speed penalty. One can allow the different machines to produce the output in a way more suitable to their capabilities, to avoid that penalty.
The more annoying problem is that the form of the output is different from the form of the input. One could combine that with a stupid translation function $g:S\to \Sigma^*$ for $S\subset \mathcal{P}(\Sigma^*)$ to bring the output into the same form as the input. But if the output was produced in a way more suitable to the different capabilities of the Turing machines, such a stupid translation function might need to be more powerful than desired. Even accepting the small speed penalty would not avoid that problem, since the functions computable by the different Turing machines are often not closed under composition. That problem might be partly solved by using monads (and accepting $\mathcal{P}(\Sigma^*)$ as the form of output), but then the fact that functions computable by deterministic or alternating Turing machines are actually closed under composition gets hidden. |
A standard optimization problem in economics is choosing a consumption bundle subject to prices and a budget constraint:
$$\max_{x,y} \sqrt{x} + \sqrt{y} \hspace{1cm} \text{s.t. } p_x \cdot x + p_y \cdot y \leq w $$ With the two goods, $x$ and $y$, this solves easily in Mathematica:
assumptions = x >= 0 && y >= 0 && px > 0 && py > 0 && w > 0;
FullSimplify[ArgMax[{Sqrt[x] + Sqrt[y], px*x + py*y <= w && assumptions}, {x, y}], assumptions]
As it should, this yields: $$x^*=\frac{w p_y}{p_x (p_x+p_y)}, \hspace{2cm} y^*=\frac{w p_x}{p_y(p_x+p_y)}$$
The problem with three goods is:
$$\max_{x,y,z} \sqrt{x} + \sqrt{y} + \sqrt{z} \hspace{1cm} \text{s.t. } p_x \cdot x + p_y \cdot y + p_z \cdot z \leq w $$
Solving it analogously for some reason does not work for me:
assumptions = x >= 0 && y >= 0 && z >= 0 && px > 0 && py > 0 && pz > 0 && w > 0;
FullSimplify[ArgMax[{Sqrt[x] + Sqrt[y] + Sqrt[z], px*x + py*y + pz*z <= w && assumptions}, {x, y, z}], assumptions]
Mathematica accepts it and runs indefinitely without giving an answer. The problem is really only slightly more difficult than the two-variable case. You just get the optimality conditions from the Lagrangian and then solve as a system of four equations (instead of 3 as above) and get: $$x^*=\frac{w p_y p_z}{p_x (p_xp_y+p_xp_z+p_yp_z)}, \hspace{1cm} y^*=\frac{w p_x p_z}{p_y (p_xp_y+p_xp_z+p_yp_z)}$$ $$z^*=\frac{w p_x p_y}{p_z (p_xp_y+p_xp_z+p_yp_z)}$$
Why is Mathematica unable to work that out when it can handle the two-variable case almost instantaneously? |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
The problem I'm referring to is: Lagarias' Elementary Version of the Riemann Hypothesis, which states:
For a positive integer $n$, let $\sigma(n)$ be the sum of all of its positive divisors. Let $H_n$ denote the $n$-th Harmonic number. ($\sum_{k=1}^{n}\frac{1}{k}$).
Is the inequality true for all $n$ greater than or equal to $1$?
$$\sigma(n)\leq H_n+\ln(H_n)e^{H_n}$$
I want to gain some insight as to how the problems are equivalent.
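For intuition, the inequality is easy to test numerically for small $n$ (a Python sketch of my own; this of course proves nothing about the general statement):

```python
from math import exp, log

def sigma(n):
    # sum of all positive divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

def lagarias_holds(n):
    # sigma(n) <= H_n + ln(H_n) * e^(H_n), with H_n the n-th harmonic number
    H = sum(1.0 / k for k in range(1, n + 1))
    return sigma(n) <= H + log(H) * exp(H)
```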
This is in relation to this paper.
I am looking for ways to optimize Recall @ fixed Precision ($R@P$) for a machine learning problem, and I didn't want to use accuracy as a proxy for $R@P$. Upon research I came across this paper.
It goes as follows:
Here $f_b$ will be the model (classifier) thresholded at $b$, i.e. samples with $f(x) < b$ are negative and the rest are positive.
Creating building block bounds:
Precision: $P(f_b) = \dfrac{t_p(f_b)}{t_p(f_b) + f_p(f_b)}$
Recall: $R(f_b) = \dfrac{t_p(f_b)}{t_p(f_b) + f_n(f_b)} = \dfrac{t_p(f_b)}{|Y^+|}$
where $t_p, f_p, f_n$ are true positives, false positives and false negative counts respectively:
$$t_p(f_b) = \sum_{i \in |Y^+|}\mathbb{1}_{f(x_i) \geq b}$$
$$f_p(f_b) = \sum_{i \in |Y^-|}\mathbb{1}_{f(x_i) \geq b}$$
where $\mathbb{1}$ is the indicator function.
Now, lower bound $t_p$ and upper bound $f_p$ by first writing them in terms of the zero-one loss and then replacing that loss by log-loss or hinge-loss:
$$t_p(f_b) = \sum_{i \in |Y^+|}(1 - l_{01}(f_b, x_i, y_i))$$
$$f_p(f_b) = \sum_{i \in |Y^-|}l_{01}(f_b, x_i, y_i) \tag1$$
Now, these quantities are bounded by replacing $l_{01}$ with a loss function $l(f_b, x_i, y_i)$ that is a convex upper bound of $l_{01}$ (e.g. log-loss or hinge loss; here I will use hinge loss):
$$t_{p_l}(f_b) = \sum_{i \in |Y^+|}(1 - l(f_b, x_i, y_i)) \leq t_p(f_b)$$
$$f_{p_u}(f_b) = \sum_{i \in |Y^-|}l(f_b, x_i, y_i) \geq f_p(f_b)\tag2$$
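To make (2) concrete, here is a tiny Python illustration (my own, with hypothetical scores and threshold $b=0$) of how the hinge loss yields a lower bound on $t_p$ and an upper bound on $f_p$:

```python
# Hypothetical classifier scores at threshold b = 0: a sample is predicted
# positive when its score is >= 0.
pos_scores = [2.0, 0.5, -1.0]   # scores on the positive samples
neg_scores = [-2.0, 0.3]        # scores on the negative samples

tp = sum(1 for s in pos_scores if s >= 0)   # 0-1 count of true positives
fp = sum(1 for s in neg_scores if s >= 0)   # 0-1 count of false positives

# The hinge loss max(0, 1 - margin) upper-bounds the 0-1 loss, hence:
tp_lower = sum(1 - max(0.0, 1 - s) for s in pos_scores)   # lower bound on t_p
fp_upper = sum(max(0.0, 1 + s) for s in neg_scores)       # upper bound on f_p
```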
**Now, the (multiclass) hinge loss is ${\displaystyle \ell (y)=\sum _{t\neq y}\max(0,1+\mathbf {w} _{t}\mathbf {x} -\mathbf {w} _{y}\mathbf {x} )}$, which is convex in the final outputs (logits) of the network that are fed into the loss function, but not in the parameters of the model (for example, a deep neural network).**
Now, coming back to the problem at hand, which is to maximize recall at a fixed minimum precision:
$$R@P_\alpha = \max_{f} R(f) \text{ s.t. } P(f) \geq \alpha$$
This is a difficult combinatorial problem, hence we instead try to maximize its lower bound:
$$\max_{f,b} \dfrac{1}{|Y^+|}t_p(f) \text{ s.t. } t_p(f) \geq \alpha(t_p(f) + f_p(f))$$
This is turned into a tractable optimization surrogate: we use (2) to lower bound $t_p$ and upper bound $f_p$:
$$\overline{R@P_\alpha} = \max_{f,b} \dfrac{1}{|Y^+|}t_{p_l}(f) \text{ s.t. } t_{p_l}(f) \geq \alpha(t_{p_l}(f) + f_{p_u}(f))$$
Now, the author says $\overline{R@P_\alpha}$ is a concave lower bound for $R@P_\alpha$ (Lemma 4.1 in the paper hyperlinked), which I don't have an issue with. But $\overline{R@P_\alpha}$ is concave in the outputs of $f$, not in the parameters of $f$.
Now, the authors proceed as follows:
From now on we denote the loss on positive samples by $\mathbb{L^+}(f) = \sum_{i \in |Y^+|}l(f_b, x_i, y_i)$ and the loss on negative samples by $\mathbb{L^-}(f) = \sum_{i \in |Y^-|}l(f_b, x_i, y_i)$. Then
$$\overline{R@P_\alpha} = \max_{f}\left(1-\dfrac{\mathbb{L^+}(f)}{|Y^+|}\right) \text{ s.t. } (1-\alpha)(|Y^+| - \mathbb{L^+}(f)) \geq \alpha\mathbb{L^-}(f)$$
Now, $|Y^+|$ is constant for the observed data, so the previous problem is equivalent to
$$\min_{f}\mathbb{L^+} \text{ s.t. } \alpha\mathbb{L^-} + (1- \alpha)\mathbb{L^+} \leq(1-\alpha)|Y^+|\tag3$$
Sorry for the long post, but here lies the part that I cannot understand:
(3) is transformed using Lagrange multiplier theory into the following equivalent objective:
$$\underset{f}{min}\underset{\lambda}{max} \mathbb{L^+} + \lambda(\dfrac{\alpha}{1-\alpha}{\mathbb{L^-}} + \mathbb{L^+} - |Y^+|)$$
The final suggestion by the authors is to tackle this saddle-point problem by gradient descent on $f$ and gradient ascent on $\lambda$:
$$f^{t+1} = f^{t} - \gamma\nabla_f L(f^t, \lambda^t)$$ $$\lambda^{t+1} = \lambda^{t} + \gamma\nabla_\lambda L(f^t, \lambda^t)$$
where $L(f, \lambda) = (1+\lambda)\mathbb{L^+}(f) + \lambda\dfrac{\alpha}{1-\alpha}\mathbb{L^-}(f) - \lambda|Y^+|$
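To see these coupled updates in action, here is a toy Python sketch of my own (a simple convex problem, not the paper's objective) where simultaneous descent on the primal variable and ascent on the multiplier converges to the saddle point:

```python
# Toy saddle-point problem: min_x x^2 subject to x >= 1.
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x); saddle point at x = 1, lam = 2.
x, lam, gamma = 0.0, 0.0, 0.05
for _ in range(5000):
    grad_x = 2 * x - lam                     # dL/dx
    grad_lam = 1 - x                         # dL/dlam
    x = x - gamma * grad_x                   # gradient descent on x
    lam = max(0.0, lam + gamma * grad_lam)   # gradient ascent on lam (kept >= 0)
```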
But this entire explanation assumes that the loss is convex in the parameters of the model, which is clearly not the case. Any help is greatly appreciated; I've been struggling with this for almost 2 days now. Many thanks.
I've little experience with Zorn's lemma, and my class hasn't covered the material yet. However, the teacher has stated that the claim holds for infinite dimensional vector spaces, and he implies he wants us to use this fact on our homework. I was not satisfied with this. I felt we should justify this claim before using it.
$\textbf{Proof:}$
Let $V$ be a vector space over a field $F$, and let $S\subseteq V$ be a linearly independent set. Consider the set $$\mathcal{S}=\{S'\subseteq V \mid S\subseteq S'\text{ and }S'\text{ is linearly independent}\},$$ partially ordered by subset inclusion. Let $\{S_i\}$ be a chain in $\mathcal{S}$. It is bounded above by $\bigcup S_i$; we will show that $\bigcup S_i$ is a linearly independent set. If $\bigcup S_i$ were linearly dependent, there would exist $v_1,...,v_n\in\bigcup S_i$ such that $a_1v_1+...+a_nv_n=0$ has a non-trivial solution, that is, $v_1,...,v_n$ are linearly dependent. Since $v_j\in \bigcup S_i$, there exists $S_{i_j}$ such that $v_j\in S_{i_j}$. Because the sets $S_{i_1},...,S_{i_n}$ form a finite subset of a chain, one of them, say $S_m$, contains all the others, so $v_1,...,v_n\in S_m$. This contradicts the linear independence of $S_m$; hence $\bigcup S_i$ is linearly independent and belongs to $\mathcal{S}$. This shows that every totally ordered chain in $\mathcal{S}$ has an upper bound in $\mathcal{S}$, therefore by Zorn's Lemma $\mathcal{S}$ has a maximal element.
Let $S^m\in\mathcal{S}$ be maximal. Clearly $S\subseteq S^m$. If we can show $S^m$ is a spanning set, then since it is linearly independent ($S^m\in\mathcal{S}$) it will be a basis containing $S$ completing the proof.
Suppose that $v\in V$ is such that $v\not\in\text{Span}(S^m)$. This happens if and only if there exist no vectors $v_1,...,v_n\in S^m$ and scalars $a_1,...,a_n\in F$ such that $$a_1v_1+...+a_nv_n=v.$$ Equivalently, for all $v_1,...,v_n\in S^m$ the equation $$a_1v_1+...+a_nv_n+av=0$$ has only the trivial solution (if $a\neq 0$ we could solve for $v$, and if $a=0$ then all $a_i=0$ by the independence of $S^m$), which happens if and only if $S^m\cup\{v\}$ is a linearly independent set. This is impossible, for $S^m$ is maximal; therefore, by contradiction, $S^m$ spans $V$.$\blacksquare$
Why is the Lagrangian a function of the position and velocity (possibly also of time) and why are dependences on higher order derivatives (acceleration, jerk,...) excluded?
Is there a good reason for this, or is it simply "because it works"?
I reproduce a blog post I wrote some time ago:
We tend to not use higher derivative theories. It turns out that there is a very good reason for this, but that reason is rarely discussed in textbooks. We will take, for concreteness, $L\left(q,\dot q, \ddot q\right)$, a Lagrangian which depends on the 2nd derivative in an essential manner. Inessential dependences are terms such as $q\ddot q$ which may be partially integrated to give ${\dot q}^2$. Mathematically, this is expressed through the necessity of being able to invert the expression $$P_2 = \frac{\partial L\left(q,\dot q, \ddot q\right)}{\partial \ddot q},$$ and get a closed form for $\ddot q \left(q, \dot q, P_2 \right)$. Note that usually we also require a similar statement for $\dot q \left(q, p\right)$, and failure in this respect is a sign of having a constrained system, possibly with gauge degrees of freedom.
In any case, the non-degeneracy leads to the Euler-Lagrange equations in the usual manner: $$\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot q} + \frac{d^2}{dt^2}\frac{\partial L}{\partial \ddot q} = 0.$$ This is then fourth order in $t$, and so requires four initial conditions, such as $q$, $\dot q$, $\ddot q$, $q^{(3)}$. This is twice as many as usual, and so we can get a new pair of conjugate variables when we move into a Hamiltonian formalism. We follow the steps of Ostrogradski, and choose our canonical variables as $Q_1 = q$, $Q_2 = \dot q$, which leads to \begin{align} P_1 &= \frac{\partial L}{\partial \dot q} - \frac{d}{dt}\frac{\partial L}{\partial \ddot q}, \\ P_2 &= \frac{\partial L}{\partial \ddot q}. \end{align} Note that the non-degeneracy allows $\ddot q$ to be expressed in terms of $Q_1$, $Q_2$ and $P_2$ through the second equation, and the first one is only necessary to define $q^{(3)}$.
We can then proceed in the usual fashion, and find the Hamiltonian through a Legendre transform: \begin{align} H &= \sum_i P_i \dot{Q}_i - L \\ &= P_1 Q_2 + P_2 \ddot{q}\left(Q_1, Q_2, P_2\right) - L\left(Q_1, Q_2,\ddot{q}\right). \end{align} Again, as usual, we can take the time derivative of the Hamiltonian to find that it is time independent if the Lagrangian does not depend on time explicitly, and thus can be identified as the energy of the system.
However, we now have a problem: $H$ has only a linear dependence on $P_1$, and so can be arbitrarily negative. In an interacting system this means that we can excite positive energy modes by transferring energy from the negative energy modes, and in doing so we would increase the entropy — there would simply be more particles, and so a need to put them somewhere. Thus such a system could never reach equilibrium, exploding instantly in an orgy of particle creation. This problem is in fact completely general, and applies to even higher derivatives in a similar fashion.
Excellent question, and one that I've never really found a completely satisfactory answer for. But consider this: in elementary classical mechanics, one of the fundamental laws is Newton's second law, $\mathbf{F} = m\mathbf{a}$, which relates the force on an object to the object's acceleration. Now, most forces are exerted by one particular object on another particular object, and the value of the force depends only on the positions of the source and "target" objects. In conjunction with Newton's second law, this means that, in a classical system with $N$ objects, each one obeys an equation of the form
$$\ddot{\mathbf{x}}_i = \mathbf{f}(\{\mathbf{x}_j|j\in 1,\ldots,N\})$$
where $\mathbf{f}$ is some vector-valued function. The point of this equation is that, if you have the positions of all the objects, you can compute the accelerations of all the objects.
By taking the derivative of that equation, you get
$${\dddot{\mathbf{x}}}_i = \mathbf{f'}(\{\mathbf{x}_j\})\{\dot{\mathbf{x}}_j\}$$
(I'm getting quite loose with the notation here ;p) This allows you to compute the jerk (third derivative) using the positions and velocities. And you can repeat this procedure to get a formula (at least in some abstract sense) for any higher derivative. To put it in simple terms, since Newton's second law relates functions which are two orders of derivative apart, you only need the 0th and 1st derivatives, position and velocity, to "bootstrap" the process, after which you can compute any higher derivative you want, and from that any physical quantity. This is analogous to (and in fact closely related to) the fact that to solve a second-order differential equation, you only need two initial conditions, one for the value of the function and one for its derivative.
The story gets more complicated in other branches of physics, but still, if you look at most of them you will find that the fundamental evolution equation relates the value of some function to its first and second derivatives, but no higher. For example, in quantum mechanics you have the Schrodinger equation,
$$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2} + U(x)\Psi$$
or in quantum field theory, the Klein-Gordon equation,
$$-\frac{\partial^2\phi}{\partial t^2} + \frac{\partial^2\phi}{\partial x^2} - m^2\phi = 0$$
and others, or Maxwell's equations (equivalently, the wave equation that can be derived from them) in classical electromagnetism. In each case, you can use a similar argument to at least motivate the fact that only position or its equivalent field and its first derivative are enough to specify the entire state of the system.
Of course, you might still wonder why the equations that describe the universe relate functions that are only two derivatives apart, rather than three or four. That part is a mystery, but one that falls in the realm of philosophy rather than physics.
There are implications for causality when an equation of motion contains higher than second derivatives of the fields; for example, the radiation reaction force on a charged body involves the derivative of the acceleration.
I don't know the details of why, but this book should give more details: Causality and Dispersion Relations, http://books.google.com/books?id=QDzHqxE4anEC&lpg=PP1&dq=causality%20dispersion%20relations&pg=PP1#v=onepage&q&f=false
There are formulations involving higher order derivatives, however, you made a fair characterization.
I think a rule of thumb would be to start looking for the simplest Lagrangian you can think of. In the general case, a good Lagrangian should obey homogeneity of space and time and isotropy of space, which means that it can't explicitly contain the position, the time, or the direction of the velocity $\vec{v}$, respectively. Then the simplest allowed possibility is a Lagrangian containing the velocity squared. Since there are no further conditions to fulfil, there is no need to add terms involving higher derivatives or combinations of other terms.
You can see this procedure at work (quite a few times, actually) in Landau & Lifshitz, The Classical Theory of Fields.
Well, the usual physics in classical mechanics is formulated in terms of second-order differential equations. If you are familiar with the process of deriving Euler-Lagrange equations from the Lagrangian then it should be natural that the kinetic term must be proportional to $(\partial_t x)^2$ to reproduce that.
If you'd considered more general Lagrangians (which you are certainly free to) you would obtain arbitrarily complicated equations of motions but these wouldn't correspond to anything physical. Nevertheless, some of those equations might describe some mathematical objects (because Lagrangian formalism and calculus of variations isn't inherent only to physics but also to lots of other mathematical disciplines).
This question actually needs a two-step answer:
The Lagrangian has been defined in such a way that the problem to be solved produces a second-order time derivative when the Euler-Lagrange equation is derived. It includes an implicit derivative of the momentum (notice the time derivative after the minus sign in $\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot q}=0$), which in turn is a first-order derivative of position. This means that acceleration is actually taken care of when the full problem is set up. One can verify this by checking that in most cases the Euler-Lagrange equation just turns out to be $\frac{\partial L}{\partial q}-m \ddot q=0$, and if one defines $\frac{\partial L}{\partial q}=F$ it becomes Newton's second law. Having said that, we need to move to the next step, which is:
This question has already been answered (including an answer by me) here: Why $F=ma$ and not $F=m \dot a$. The short answer is: "… a second-order derivative is all one needs to differentiate natural states of motion from affected states of motion".
If we assume, say, a second derivative in the Lagrangian, the Euler-Lagrange equation which extremizes the action
$$A[q] = \int_{x_1}^{x_2} L(x,q,q',q'')\, dx $$
would be
$$\frac{\partial L}{\partial q} - \frac{d}{dx}\frac{\partial L}{\partial q'} + \frac{d^2}{dx^2}\frac{\partial L}{\partial q''} = 0$$
This is a fourth-order differential equation. However, this can't be the case, as we already know that $q''=F/m$, i.e. acceleration is determined by force, which is "outside" the initial conditions. In a gravitational force field, for example, you know, a priori, the forces at every point in the system, and hence the acceleration at every point is already known. A fourth-order DE would lead to an internal inconsistency.
The deeper question to ask, I suppose, is why $F=mq''$, not $F=mq'''$ or $F=mq''''$. I won't pretend to know the answer to this, but I suspect there might be one.
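As a quick symbolic check of the fourth-order claim above (a SymPy sketch of my own, for the toy Lagrangian $L = \tfrac12 (q'')^2$):

```python
import sympy as sp

t = sp.symbols('t')
q = sp.Function('q')
X, V, A = sp.symbols('X V A')   # placeholders for q, q', q''

L = sp.Rational(1, 2) * A**2    # toy Lagrangian depending on the acceleration

# Euler-Lagrange equation with a second-derivative term:
#   dL/dq - d/dt (dL/dq') + d^2/dt^2 (dL/dq'') = 0
subs = {X: q(t), V: q(t).diff(t), A: q(t).diff(t, 2)}
EL = (sp.diff(L, X).subs(subs)
      - sp.diff(sp.diff(L, V).subs(subs), t)
      + sp.diff(sp.diff(L, A).subs(subs), t, 2))
# EL is the fourth derivative of q, so the equation of motion is fourth order.
```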
On approximation of the separately and jointly continuous functions
Abstract
We investigate the following problem: which dense subspaces $L$ of the Banach space $C(Y)$ of continuous functions on a compact $Y$, and which topological spaces $X$, have the property that for every separately or jointly continuous function $f: X\times Y\rightarrow \mathbb{R}$ there exists a sequence of separately or jointly continuous functions $f_{n}: X\times Y \rightarrow\mathbb{R}$ such that $f_n^x=f_n(x, \cdot) \in L$ for arbitrary $n\in \mathbb{N}$, $x\in X$, and $f_n^x\rightarrow f^x$ uniformly on $Y$ for every $x\in X$? In particular, it is shown that if the space $C(Y)$ has a basis, then every jointly continuous function $f: X\times Y \rightarrow \mathbb{R}$ has jointly continuous approximations $f_n$ of this type.
Proof that an Infinite Series Adds to 1
We will prove that:
$\displaystyle \frac{1}{2}+\frac{1}{4}+\frac{1}{8}... = 1$ (exactly)
Proof:
(1) Let n = $\displaystyle \frac{1}{2}+\frac{1}{4}+\frac{1}{8}...$
Multiply each side of (1) by 2
(2) 2n = $\displaystyle 1 + \frac{1}{2}+\frac{1}{4}+\frac{1}{8}...$
Subtract each side of line (1) from each side of line (2)
(3) n = 1
One can also use this technique to prove that .999... is exactly equal to 1.
Be careful when using proofs like this.
Saying "Let $n = \dfrac{1}{2} + \dfrac{1}{4} + \dfrac{1}{8} + ...$" implies that the series converges, which may not be the case.
There are (false) proofs that the series $1 - 1 + 1 - 1 + ...$ converges to $0$, $1$ and $\dfrac{1}{2}$ using this principle, but the series does not converge at all.
If you want to formalise this proof, you have to first prove that the series converges.
This is, in fact, a "geometric series". A geometric series is any series of the form $\displaystyle \sum_{n=0}^\infty ar^n= a+ ar+ ar^2+ ar^3+ \cdot\cdot\cdot$.
One can prove that a geometric series converges if and only if $|r|< 1$, and then converges to $\displaystyle \frac{a}{1- r}$.
Here, we can write the series as $\displaystyle \frac{1}{2}\left(1+ \frac{1}{2}+ \frac{1}{4}+ \cdot\cdot\cdot\right)$, i.e. $1/2$ times a geometric series with $a= 1$ and $r= 1/2$. Since $1/2< 1$, this converges to $\displaystyle \frac{1}{2}\cdot\frac{1}{1- \frac{1}{2}}= \frac{1}{2}\cdot 2= 1$.
(This is also a geometric series with a= 1/2 and r= 1/2 so converges to $\displaystyle \frac{\frac{1}{2}}{1- \frac{1}{2}}= \frac{\frac{1}{2}}{\frac{1}{2}}= 1$)
(It can also be thought of as a geometric series with a= 1 and r= 1/2 with the first "a" term missing. The geometric series with a= 1 and r= 1/2, $\displaystyle 1+ \frac{1}{2}+ \frac{1}{4}+ \frac{1}{8}+ \cdot\cdot\cdot$ converges to $\displaystyle \frac{1}{1- \frac{1}{2}}= 2$ and subtracting the missing first term, "1" gives a sum of 1.)
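A quick numerical illustration (my own, not part of the argument above): the partial sums of $\frac12+\frac14+\frac18+\cdots$ do approach $1$, with remainder exactly $2^{-N}$ after $N$ terms.

```python
# Partial sums s_N = sum_{k=1}^{N} 1/2^k; the remainder 1 - s_N is 1/2^N.
s = 0.0
for k in range(1, 51):
    s += 0.5 ** k
# After 50 terms, s is within 2^-50 of 1, but still strictly below it.
```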
How in the world did that "2" sneak in there?! Yes, of course. Thank you.
Perhaps I'm stupid, but I see no necessity of first proving that the series converges. Whether it converges or not, the proof I provided is purely algebraic:
Multiplying each side of an equation by 2, and
Subtracting the same thing from each side of an equation.
For example:
S = 1 + 2 + 3 + 4 + ...
2S = 2 + 4 + 6 + 8 + ....
S = 2S - S = -1 - 3 - 5 - 7 - ...
But can we say that 1 + 2 + 3 + 4 + ... = -1 - 3 - 5 - 7 - .... ?
-Dan
Suppose that $$s = 1 - 1 + 1 - 1 + \cdots$$Then $$-s = -1 + 1 - 1 + 1 \cdots = -1 + s$$Thus $$2s = 1 \implies \boxed{s = \tfrac12}$$
This uses all the same techniques that yours does, but is patently not true. The partial sums never get closer to $\frac12$ at all!
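The same point can be seen numerically (a small Python check of my own): the partial sums of $1-1+1-1+\cdots$ just alternate, so no limit exists.

```python
# Partial sums of 1 - 1 + 1 - 1 + ...: they alternate between 1 and 0
# and never settle near 1/2 (or anywhere else).
partials = []
s = 0
for k in range(12):
    s += (-1) ** k
    partials.append(s)
```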
What you are really saying is that "I get the right answer, so why should I prove that the method is sound". But that means that what you have is not a proof that the sum is what you say it is because applied to another sequence we don't know whether the result is correct or not. We have to check by another method.
S = 1 + (2 - 2) + (3 - 4) + (4 - 6) + ...
S = 1 + 0 - 1 - 2 - 3 - ...
S = 1 - S
S = 1/2
2S= 2 + 4 + 6 + 8 + ...
S= 1 + 2 + 3 + 4 + ...
Subtract both sides and you have:
S = 1 + 2 + 3 + 4 + ...
this is a mystery to me: despite having changed computers several times, and despite the website rejecting the application, the very first sequence of numbers I entered into its search window, which returned the same prompt to submit them for publication, appears every time. I mean, I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there
my maturity levels are extremely variant in time, that's just way too much rope to give me considering its only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them
but still, the first one from well, almost a decade ago shows up as the default content in the search window
1,2,3,6,11,23,47,106,235
well, now there is a bunch of stuff about them pertaining to "trees" and "nodes", but that's what I mean by too much rope: you can't just let a lunatic like me start inventing terminology as I go
oh well "what would cotton mathers do?" the chat room unanimously ponders lol
i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway?
or at least inform the room as to whom is the big brother doing the censoring? No? just suggestions trying to improve site functionality good sir relax im calm we are all calm
A104101 is a hilarious entry as a side note; I love that Neil had to chime in in the comment section, after the big promotional message in the first part, to point out that the sequence is totally meaningless as far as mathematics is concerned, just to save face for the website's integrity after plugging a TV series with a reference
But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please
very general advice for any number of topics for someone like yourself sir
assuming gender because you should hate text based adam long ago if you were female or etc
if it's false then I apologise for the statistical approach to human interaction
So after having found the polynomial $x^6-3x^4+3x^2-3$, we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos
So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field?
(I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
(which is just the product of the integer and its conjugate)
Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$
You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings
(Plus I'm at work and am pretending I'm doing my job)
Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit.
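A tiny Python check of that example, using built-in complex numbers (my own illustration; the norm is $N(a+bi)=a^2+b^2$, the product of a Gaussian integer with its conjugate):

```python
def norm(z):
    # N(a + bi) = a^2 + b^2 = z * conj(z); real and integral for Gaussian integers
    return round((z * z.conjugate()).real)

z = 1 + 2j
# 5 is not prime in Z[i]: it factors as (1 + 2i)(1 - 2i), and N(1 + 2i) = 5.
five = z * z.conjugate()
```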
@Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha]) = [\mathcal O_K:\Bbb Z[\alpha]]^2\,\Delta(\mathcal O_K)$; I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go into that rabbit hole
also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$
this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$
the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$
(just as a quotient of additive groups, that quotient group is finite)
in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ that divides the discriminant of $\Bbb Z[\alpha]$ at least twice, then $\Bbb Z[\alpha]$ is a ring of integers
that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$
there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus)
@MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively.
$\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first:
By Schur-Zassenhaus theorem, $G = P \rtimes G/P$ and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$.
The $E^2$ page is essentially zero except the bottom row since $H^*(G/P; M) = 0$ if $M$ is an $\Bbb F_p$-module by order reasons and the whole bottom row is $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$.
@Secret that's a very lazy habit you should create a chat room for every purpose you can imagine take full advantage of the websites functionality as I do and leave the general purpose room for recommending art related to mathematics
@MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists...
As a result, there does not exist a single group which lived long enough to belong to, and hence one continues to search for new groups and activities
eventually, a social heat death occurred, where no groups generate creativity and other activity anymore
Had this kind of thought when I noticed how many forums etc. have a golden age, and then died away, and at the more personal level, all people who first knew me generate a lot of activity, and then destined to die away and distant roughly every 3 years
Well i guess the lesson you need to learn here champ is online interaction isn't something that was inbuilt into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next door neighbour
Or more likely, we will need to start recognising machines as a new species and interact with them accordingly
so covert operations AI may still exists, even as domestic AIs continue to become widespread
It seems more likely sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces
But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other
that is, until their processing power become so strong that they can outdo human thinking
But, I am not worried of that scenario, because if the next step is a sentient AI evolution, then humans would know they will have to give way
However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough given the mistakes still made by them and their human owners
That is, we have become over-reliant on AI, and are not putting enough attention on whether it has interpreted our instructions correctly
That's an extraordinary amount of unreferenced rhetoric statements i could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction
for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise
i feel as if its an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed
Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy.
I was just genuinely curious
How does a message like this come from someone who isn't trolling:
"for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"
Anyway feel free to continue, it just seems strange @Adam
I'm genuinely curious what makes you annoyed or confused yes I was joking in the line that you referenced but surely you cant assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experiences vary from day to day? Mine too! so there may be particular moments that I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character wouldn't you agree?
So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or "alternativity" ($(xx)y=x(xy)$, iirc). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave, i.e. $a(bc)=(ab)c\iff a=c$.)
@RyanUnger You're the guy to ask for this sort of thing I think:
If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or there is a smarter way?
I realized today that the possible outputs of Round(x^(1/2)) also cover those of x^(1/2+epsilon). In other words, we can always find an epsilon (small enough) such that x^(1/2) <> x^(1/2+epsilon) but at the same time Round(x^(1/2)) = Round(x^(1/2+epsilon)). Am I right?
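A quick numerical illustration of the claim (the choice of x and the size of epsilon here are arbitrary, not part of the original question):

```python
# For a fixed x > 1, x**0.5 and x**(0.5 + eps) differ as real numbers
# for any eps > 0, but for small enough eps they round to the same integer.
x = 100.0
eps = 1e-6

exact_half = x ** 0.5          # 10.0
perturbed = x ** (0.5 + eps)   # slightly above 10.0

assert perturbed != exact_half                      # the values differ...
assert round(exact_half) == round(perturbed)        # ...but round identically
print(round(exact_half), round(perturbed))  # 10 10
```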
We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method.
How can we show that the method is implicit? Do we have to try to solve for $y^{n+2}$ as a function of $y^{n+1}$?
@anakhro the energy of a graph is something studied in spectral graph theory. You set up the adjacency matrix of the graph, find its eigenvalues, and then sum the absolute values of those eigenvalues. For simple graphs, the energy of the graph is defined as exactly this sum of the absolute values of the eigenvalues.
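The computation just described can be sketched with a tiny example. This is an illustrative sketch only (the helper name and the 2-vertex restriction are mine; for larger graphs you would use a numerical eigenvalue routine):

```python
import math

def energy_2x2(adj):
    """Graph energy for a 2-vertex graph: sum of |eigenvalues| of the
    symmetric 2x2 adjacency matrix [[a, b], [b, c]], computed from the
    closed-form eigenvalues (a+c)/2 +/- sqrt(((a-c)/2)^2 + b^2)."""
    a, b, c = adj[0][0], adj[0][1], adj[1][1]
    mean = (a + c) / 2.0
    disc = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return abs(mean + disc) + abs(mean - disc)

# K2 (a single edge): adjacency [[0,1],[1,0]], eigenvalues +1 and -1,
# so the energy is |1| + |-1| = 2.
print(energy_2x2([[0, 1], [1, 0]]))  # 2.0
```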
Majestas32 wrote:
AforAmpere wrote:
What are the speed limits for spaceships in B0 rules? Orthogonal is c, I think, but I am not sure that the diagonal speed limit is 3c/4, but what about oblique?
I saw a post about that before but can't find it now.

Well, I thought about it for a bit: this pattern should determine the speed limit, as it expands the fastest possible in all directions, while only half the generations use B1c, to allow spaceships.
Code: Select all
x = 1, y = 1, rule = B02e3cjr4a5ay6i7c8/S2ak3acei4cjkrtyz5acijy6aek7eo!
Things to work on:
- Find a (7,1)c/8 ship in a Non-totalistic rule
- Finish a rule with ships with period >= f_e_0(n) (in progress)
Majestas32 wrote:
3(m,n)c/2(m+n). Which leads to another question: what's the knightship speed limit in rules where the diagonal speed limit is c/3 (like movostill 3)?

Hold on. If m=n, then the case reduces to a diagonal, or 3(m,m)c/2(m+m). Consider the ship with a diagonal speed of c/3 (the diagonal speed limit). Then 3(m,m)c/2(2m)=c/3.
3(m,m)c/4m=c/3
9(m,m)c/4m=c
9(m,m)c=4mc
9(m(1,1))c=4mc
9m(1,1)c=4mc
(1,1)c=(4/9)c
(1,1)=4/9 (contradiction)
Code: Select all
x = 81, y = 96, rule = LifeHistory58.2A$58.2A3$59.2A17.2A$59.2A17.2A3$79.2A$79.2A2$57.A$56.A$56.3A4$27.A$27.A.A$27.2A21$3.2A$3.2A2.2A$7.2A18$7.2A$7.2A2.2A$11.2A11$2A$2A2.2A$4.2A18$4.2A$4.2A2.2A$8.2A!
Gamedziner wrote:
3(m,m)c/4m=c/3
9(m,m)c/4m=c
9(m,m)c=4mc
9(m(1,1))c=4mc
9m(1,1)c=4mc
(1,1)c=(4/9)c
(1,1)=4/9 (contradiction)

You can't apply the formula to that; the formula only applies to the B0 rules.
BlinkerSpawn Posts: 1905 Joined: November 8th, 2014, 8:48 pm Location: Getting a snacker from R-Bee's

Majestas32 wrote:
what's the knightship speed limit in rules where the diagonal speed limit is c/3

Take a look at B2-a345678/S012345678 (I stole this proof from somewhere, but I don't remember where):
Since it has all birth (and survival, but that's not too important) conditions except the ones which allow for faster-than-c/3-diagonal travel, the edges of the octagon advance at the fastest possible speed. To see the speed, we're going to look at the edge. Notice that it takes 1 generation for the selected cell to "travel" west and 2 for it to travel north. As a result, the lowest (theoretically) possible period for (a,b) is min(2a+b, a+2b) (the speed is (a,b)c/min(2a+b, a+2b)). [citation needed]
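As a quick sanity check, that min(2a+b, a+2b) bound is easy to tabulate (the helper name below is hypothetical, not from the thread):

```python
def min_period(a, b):
    """Lowest theoretically possible period for an (a, b) displacement
    under the argument sketched above: one tick per step along one axis
    and two ticks per step along the other, whichever split is cheaper."""
    return min(2 * a + b, a + 2 * b)

# A (2,1) knight move under this bound needs at least
# min(2*2+1, 2+2*1) = min(5, 4) = 4 generations.
print(min_period(2, 1))  # 4

# A purely diagonal (m, m) move needs 3m ticks, i.e. speed c/3.
print(min_period(5, 5))  # 15
```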
(a,b)c/a, a>=2b, B0
3(a,b)c/(2a+2b), b<a<2b, B0
(a,b)c/(a+b), B2a
(a,b)c/(2a+b), B2e; S4w; S5a (sometimes)
(a,b)c/(2a+2b), all other rules with B3a and B2c/B3i
?
Where a > b.
But what's the deal with those rules where no ships from c/2 to c/3 are possible?
Majestas32 wrote:
(a,b)c/(2a+b), S4w
(a,b)c/(2a+2b), ~B2a/~S4w

It's more complicated than that: B2e (for most but not all other transitions) and S5a (for sufficient other transitions) also allow (a,b)c/(2a+b) speeds.
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Things to work on:
- Find a (7,1)c/8 ship in a Non-totalistic rule
- Finish a rule with ships with period >= f_e_0(n) (in progress)
gameoflifemaniac wrote:
How to add a LifeViewer window on a LifeWiki page?

Template:EmbedViewer might be helpful. If you need more flexibility, you can also use LV:Viewer directly and add the appropriate styling.

Rhombic wrote:
How can we look for both a small engineered and an elementary statorless gun in CGoL? It seems more than feasible enough in both cases!

The original form of the P45 glider gun might be what you're looking for -- it's kind of borderline between elementary and engineered, but statorless in any case.
Also, on an unrelated note, I've been thinking that it might be worthwhile to have a yearly competition modeled off of Pattern of the Year, but for other cellular automata -- it seems like OCAs are a major component of what's going on on these forums that doesn't get enough recognition. I'm kind of curious what everyone else thinks about that. Maybe like an "OCA Pattern of the Year" or "Rule of the Year" competition, or combined into "OCA Development of the Year"?
A for awesome wrote:
Also, on an unrelated note, I've been thinking that it might be worthwhile to have a yearly competition modeled off of Pattern of the Year, but for other cellular automata -- it seems like OCAs are a major component of what's going on on these forums that doesn't get enough recognition. I'm kind of curious what everyone else thinks about that. Maybe like an "OCA Pattern of the Year" or "Rule of the Year" competition, or combined into "OCA Development of the Year"?

I vote in favor of that (personally I'd nominate AbhpzTa's adjustable spaceship rule).

I agree with that idea as well; it would show all the other progress in these forums.
EDIT:
Code: Select all
local g = golly()
local selection = g.getselrect()
while ( true ) do
    g.new("")
    g.select(selection)
    g.randfill(50)
    g.run(10000)
    if tonumber(g.getpop()) == 0 then
        g.reset()
        count = 0
        while ( true ) do
            g.run(1)
            count = count + 1
            if tonumber(g.getpop()) == 0 then
                if count > 15 then break end
            else
                break
            end
        end
    end
end
g.reset()
g.show("Found "..count.."-generation diehard")
gameoflifemaniac wrote:
What's the longest lasting 16x16 die hard known?

No idea. If Catagolue paid attention to these things, I'm sure some really impressive 16x16s would have shown up by now. But paying attention would have slowed apgsearch down a lot, meaning fewer results in other areas.
So I guess we should start with simeks' various diehard finds. Or do you mean it has to be 16x16? I guess that's trivial, unless it has to look like a random soup --
Code: Select all
#C simeks' 538-tick diehardx = 16, y = 16, rule = B3/S23b5o$bob3o$2bo$2o$b4o$2bob2o10$15bo!#C [[ AUTOFIT HISTORYFIT ]]
gameoflifemaniac wrote:
EDIT: ... This script should stop if and only if it finds a diehard that lasts more than 15 generations. But it doesn't work.

Good work there, you've almost got it. The "if count > 15 then break end else break end" is a clue. Basically, you're not doing anything different if your count test succeeds. Also, you're only breaking out of the inner while loop, but there's no way for the program to know that it should break out of the outer while loop.
If you want to stop after the first find, here's one way:
Code: Select all
local g = golly()
local selection = g.getselrect()
while ( true ) do
    g.new("")
    g.select(selection)
    g.randfill(50)
    g.run(10000)
    if tonumber(g.getpop()) == 0 then
        g.reset()
        count = 0
        while ( true ) do
            g.run(1)
            count = count + 1
            if tonumber(g.getpop()) == 0 then
                g.show("Found "..count.."-generation diehard")
                if count > 15 then
                    g.reset()
                    g.exit()
                else
                    break
                end
            end
        end
    end
end
Code: Select all
x = 16, y = 16, rule = B3/S23o2bobo3b2o3bo$3b3obob4obo$bo3b3o2b2o$o3b5obo3b2o$2b3o3b2obo$bob7o2bob2o$o2b5obob3o$bobob11o$2b2o4b3ob3o$obobobo2bo5bo$2obob2ob3ob3o$obob2o4bobo2bo$2ob2obob3ob3o$2b3o3b2obob2o$3b4o2bo2b2o$b3o2bo2b4o!
if bestsofar<count then bestsofar=count end
and so forth, saving the starting pattern to another variable, or maybe a separate layer so you can interrupt the script at any time without losing anything.
284 generations already:

dvgrn wrote:
snip...
Code: Select all
local g = golly()
local selection = g.getselrect()
while ( true ) do
    g.new("")
    g.select(selection)
    g.randfill(50)
    g.run(10000)
    if tonumber(g.getpop()) == 0 then
        g.reset()
        count = 0
        while ( true ) do
            g.run(1)
            count = count + 1
            if tonumber(g.getpop()) == 0 then
                g.show("Found "..count.."-generation diehard")
                if count > 15 then
                    g.reset()
                    g.exit()
                else
                    break
                end
            end
        end
    end
end
...
Code: Select all
x = 16, y = 16, rule = B3/S23obo2b3o4bo2bo$b6o6bo$3bob2o2bobob2o$bo2bo2bo2bo2bobo$4ob2obo2bo2b2o$4b2ob2obo4bo$6bo3b4obo$2bobob2o3b3o$bob4o2bo2b2obo$7b3ob5o$6b3obo3bo$o2bob2obo4b3o$bobobo6b2obo$2o2bobob4ob3o$o2b3obob3o2bo$3obo2b8o!
I'm really sorry to necrobump this thread, but I'm running the example 1.in and it returns with no 1.rle and no console output. What's going on? I'm not very skilled in C. If you need it, here is the example 1.in:
Code: Select all
max-gen 100
start-gen 1
num-catalyst 3
stable-interval 10
search-area -8 -8 16 16
pat 2bo$bobo$o3bo$o3bo$o3bo$bobo$2bo! -3 -4
filter 24 b2o$obo$bo 5 -3
filter 30 3o$3o! -9 2
fit-in-width-height 20 3
cat 2o$2o 60 0 0 .
cat 2b2o$3bo$3o$o 10 -2 -2 *
output 1.rle
full-report full.rle
Please stop using my full name. Refer to me as dani.
"I'm always on duty, even when I'm off duty." -Cody Kolodziejzyk, Ph.D.
_zM wrote:
284 generations already...

321 ticks, turns into the standard 7-bit diehard, and is a 200th cousin to T=337 of diehard658 in Golly's Patterns/Life/Miscellaneous:
Code: Select all
x = 58, y = 42, rule = B3/S2313bobo$2bo10b2ob2o$bobo3b2o4b2obo$o2bo3bo7bo$bobo3bobo$2bo3b3ob2o$10b2o7$14b2o$13bo2bo$14b2o6$20bo$19bobo$20bobo$21b2o2$42bobo2b2o2bob2o2bo$42b10o2bo$42b2o2b2ob2obob4o$10b2o30b2ob2ob4o2b4o$9bo2bo31bob4obobobobo$10b2o30bo4b2ob2ob2o2bo$44bo2b4obo2b2o$42b2o2b3o3b2o$42bo5bo4b2ob2o$42bobob2ob3obobobo$42b3o2b2o2bobobo$42b3ob3o4b2ob2o$43bobo2b2ob3ob3o$44b2o3b4o2bobo$45b6o3b2o$42bob2o3b2obobobo!
EDIT: OK, I figured out how my script can detect *WSS.
Code: Select all
local g = golly()
g.new("")
g.select({0,0,16,16})
g.randfill(50)
g.run(50000)
g.select({-13000,-13000,26000,26000})
g.cut()
if tonumber(g.getpop()) > 0 then
    g.show("Orthogonal spaceship detected!")
    g.fit()
end
g.paste(-13000,-13000,"or")
g.setmag(1)
g.select({0,0,16,16})
Can someone tell me if it works? *WSS are so rare.
EDIT 2: Oh, it works.
Difference between revisions of "Lower attic"
From Cantor's Attic
Revision as of 08:53, 30 December 2011
Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an eternal self-similar reflecting ascent.
* $\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic
* stable ordinals
* the ordinals of infinite time Turing machines, including the Bachmann-Howard ordinal
* admissible ordinals and relativized Church-Kleene $\omega_1^x$
* Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals
* the omega one of chess, $\omega_1^{\rm chess}$
* the Feferman-Schütte ordinal $\Gamma_0$
* $\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers
* indecomposable ordinal
* the small countable ordinals, such as $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$ up to $\epsilon_0$
* Hilbert's hotel, and first steps beyond infinity
* $\omega$, the smallest infinity
* down to the parlour, where large finite numbers dream
Sometimes, especially in introductory courses, the instructor will try to keep things "focused" in order to promote learning. Still, it's unfortunate that the instructor couldn't respond to your question in a more positive and stimulating way.
These reactions do occur at $\ce{sp^2}$ hybridized carbon atoms; they are often just energetically more costly, and therefore somewhat less common. Consider what happens when a nucleophile reacts with a carbonyl compound: the nucleophile attacks the carbonyl carbon atom in an $\ce{S_{N}2}$ manner. The electrons in the C-O $\pi$-bond can be considered the leaving group, and a tetrahedral intermediate is formed with a negative charge on oxygen. It is harder (energetically more costly) to do this with a carbon-carbon double bond because you would wind up with a negative charge on carbon instead of oxygen, which is energetically less desirable because of the relative electronegativities of carbon and oxygen.
If you look at the Michael addition reaction, the 1,4-addition of a nucleophile to the carbon-carbon double bond in an $\alpha,\beta$-unsaturated carbonyl system, this could be viewed as an $\ce{S_{N}2}$ attack on a carbon-carbon double bond, but again, it is favored (lower in energy) because you create an intermediate with a negative charge on oxygen.
$\ce{S_{N}1}$ reactions at $\ce{sp^2}$ carbon are well documented. Solvolysis of vinyl halides in very acidic media is an example. The resultant vinylic carbocations are actually stable enough to be observed using NMR spectroscopy. The picture below helps explain why this reaction is so much more difficult (energetically more costly) than the more common solvolysis of an alkyl halide. In the solvolysis of the alkyl halide we produce a traditional carbocation with an empty p orbital. In the solvolysis of the vinyl halide we produce a carbocation with the positive charge residing in an $\ce{sp^2}$ orbital. Placing positive charge in an $\ce{sp^2}$ orbital is a higher-energy situation than placing it in a p orbital: electrons prefer orbitals with more s character, which lowers their energy; conversely, in the absence of electrons, an orbital prefers high p character, mixing its remaining s character into other bonding orbitals that do contain electrons in order to lower their energy.
I am trying to simulate the throw of a javelin using projectile motion. The basic physics of point-mass projectile motion was used to position the javelin. For the rotation part, I just assumed that the javelin must be tangent to the projectile path, as shown below.
But if the launch angle is very steep, the projectile range will be very small. Now let's assume the launch height is large; the projectile motion will look as shown below.
The animation looks unnatural because the stick rotates really fast because the curve is extremely steep.
So I am sure the javelin is not always tangent to the path (of the centre of gravity of the javelin).
So I thought maybe I need to calculate torque on the javelin. I've also tried this approach but the javelin is extremely shaky. My steps were:
$$\vec{\tau} = \vec{r} \times \vec{F}$$ $$\vec{F} = m\vec{g} + m\vec{a}$$
$$ \vec{a} = \frac{d\vec{v}}{dt} $$ $\vec{v}$ is the velocity of the centre of the javelin at any point in time during the projectile motion. We integrate the equation to find the change in torque.
$$d\vec{\tau} = \vec{r} \times d\vec{F}$$
$$d\vec{\tau} = I\,d\vec{\omega}$$
Now, we obtain the change in angular velocity, which is then integrated to obtain the orientation of javelin.
I must have messed up somewhere because the integration is not working. It's extremely shaky; it works in some cases where the launch angle is small, but fails when the launch angle is large.
So can anybody give me some pointers?
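For reference, the tangent-to-path assumption from the question can be sketched for drag-free projectile motion; the orientation is just the angle of the velocity vector. (Function and variable names here are my own, not from the original post.)

```python
import math

def tangent_angle(v0, launch_angle_deg, g, t):
    """Angle (radians) of the velocity vector of a drag-free projectile
    at time t -- i.e. the 'javelin tangent to the path' assumption."""
    vx = v0 * math.cos(math.radians(launch_angle_deg))
    vy = v0 * math.sin(math.radians(launch_angle_deg)) - g * t
    return math.atan2(vy, vx)

v0, angle, g = 20.0, 45.0, 9.81
t_apex = v0 * math.sin(math.radians(angle)) / g  # vy = 0 at the apex

print(tangent_angle(v0, angle, g, 0.0))     # ~0.785 (45 degrees at launch)
print(tangent_angle(v0, angle, g, t_apex))  # ~0.0 (horizontal at the apex)
```

This reproduces the behaviour described above: for steep launch angles vx is small, so the angle swings rapidly through the apex, which is why the pure tangent assumption looks unnatural there.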
October 31st, 2018, 01:50 AM
# 1
Member
Joined: Nov 2012
Posts: 80
Thanks: 1
Question about proving the availability of double real root of a polynomial
23A5BCBB-CA79-4DA7-B726-23E5A84070E1.jpg
2BC82CF6-7D69-4698-87BF-11388DD9C0A7.jpg
E2282EA8-2E2A-45F4-B13F-D9D0ACAFB0B4.jpg
For part b, it is assuming that n is an odd number, so r-a = r+a
Since the exponent n-1 would be an even number when n is odd, is the equation (r-a)^(n-1) = -(r+a)^(n-1) impossible? After all, no real number raised to an even power can be negative.
Please correct me if I have made some mistakes.
October 31st, 2018, 08:07 AM
# 2
Senior Member
Joined: May 2016
From: USA
Posts: 1,310
Thanks: 552
I can't read your attachments so I cannot actually check your work.
$f(x) \text { is a polynomial such that } f(x) = (x - r)^2 * g(x) \text { and } g(r) \ne 0.$
$f'(x) = (x - r)^2 * g'(x) + 2(x - r) * g(x) \implies f'(r) = 0 * g'(r) + 2 * 0 * g(r) = 0.$
$p(x) = (x - a)^n + (x + a)^n = \displaystyle \left ( \sum_{k=0}^n \dbinom{n}{k} * (-\ 1)^k * x^{(n-k)}a^k \right ) + \left ( \sum_{k=0}^n \dbinom{n}{k} * x^{(n-k)}a^k \right ).$
$n = 2m \implies p(x) = \displaystyle \left ( \sum_{j=0}^m 2 * \dbinom{2m}{2j} * (x^{(m - j)})^2 * (a^j)^2 \right ) > 0 \ \because a \ne 0 \text { by hypothesis.}$
Every term in that sum is non-negative except the last, which is positive, so the sum is positive. There are no real roots, let alone a real double root.
We skip the trivial case where n = 1.
$p'(x) = \displaystyle \left ( \sum_{k=0}^{n-1} (n - k) \dbinom{n}{k} * (-\ 1)^k * x^{(n - 1 - k)}a^k \right ) + \left ( \sum_{k=0}^{n - 1} (n - k) * \dbinom{n}{k} * x^{(n-1-k)}a^k \right ).$
$n = 2m + 1 \implies p'(x) = \displaystyle \left ( \sum_{j=0}^m 2 * (2m + 1 - 2j) \dbinom{2m + 1}{2j} * (x^{(m - j)})^2(a^j)^2 \right ).$
$\therefore n = 2m + 1 \text { and } x \ne 0 \implies p'(x) > 0.$
$n = 2m + 1 \text { and } p'(x) = 0 \implies x = 0.$
$p(0) = a \ne 0.$
If n is odd, the derivative is not zero at any real root, and therefore no real root is a double root.
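That argument is easy to check numerically for a sample odd n (the helper names below are mine, not from the post):

```python
def p(x, a, n):
    """p(x) = (x - a)^n + (x + a)^n."""
    return (x - a) ** n + (x + a) ** n

def p_prime(x, a, n):
    """p'(x) = n(x - a)^(n-1) + n(x + a)^(n-1)."""
    return n * (x - a) ** (n - 1) + n * (x + a) ** (n - 1)

# n = 3, a = 1: p(x) = 2x^3 + 6x has the single real root x = 0,
# but p'(0) = 6 != 0, so x = 0 cannot be a double root.
a, n = 1, 3
print(p(0, a, n))        # 0
print(p_prime(0, a, n))  # 6
```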
October 31st, 2018, 07:40 PM
# 3
Global Moderator
Joined: Dec 2006
Posts: 21,020
Thanks: 2255
Using latex:
Let $f(x)$ be a polynomial. If $f(x) = (x - r)^2g(x)$ where $g(r) \ne 0$, we say that $r$ is a double root of $f(x) = 0$.
(a) $\ $ If $f(x) = 0$ has a double root, prove that $f(r) = f'(r) = 0$.
(b) $\ $ Show that $p(x) = (x - a)^n + (x + a)^n$, where $a$ is a non-zero real number, does not have a double real root.
2. $\ \ $ (a) $\ $ If $f(x) = 0$ has a double root $r$,
$\displaystyle \begin{align*} f(x) &= (x - r)^2g(x) \\
\hspace{80px}f(r) &= (r - r)^2g(r) \\
&= 0 \\
f'(x) &= \frac{d}{dx}[(x - r)^2g(x)] \\
&= (x - r)^2\frac{d}{dx}[g(x)] + g(x)\frac{d}{dx}(x - r)^2 \\
&= (x - r)^2g'(x) + g(x)(2)(x - r)\frac{d}{dx}(x - r) \\
&= (x - r)^2g'(x) + 2(x - r)g(x) \\
f'(r) &= (r - r)^2g'(r) + 2(r - r)g(r) \\
&= 0\end{align*}$
$\hspace{80px}\therefore$ If $f(x) = 0$ has a double root $r$, then
$\hspace{100px}f(r) = f'(r) = 0$
$\hspace{28px}$(b) $\ $ Assume $p(x)$ has a double real root $r$,
$\displaystyle \hspace{66px}\therefore\ p(r) = p'(r) = 0 \\
\begin{align*}\hspace{66px}p'(x) &= \frac{d}{dx}[(x - a)^n + (x + a)^n] \\
&= n(x - a)^{n-1} + n(x + a)^{n-1} \\ \end{align*}$
$\displaystyle \begin{align*}\hspace{100px}p'(r) &= n(r - a)^{n-1} + n(r + a)^{n-1} = 0 \\
(r - a)^{n-1} &= -(r + a)^{n-1}\end{align*}$
November 1st, 2018, 01:49 AM
# 4
Member
Joined: Nov 2012
Posts: 80
Thanks: 1
I am quite confused by the binomial expressions, though I think that, for p(x), you are proving that the graph does not cut the x-axis given that a is not equal to zero. Therefore, the graph has no real roots at all. Anyway, thanks for your help, Jeff.
I checked the answer to the question again. The derivative of p(x) is n(x+a)^(n-1) + n(x-a)^(n-1) = 0, assuming that there is a double real root. We can rewrite the equation in the form (x-a)^(n-1) = -(x+a)^(n-1). Before, I was wondering why, if n-1 were an even number, we would get x-a = x+a. Someone told me that the above equation holds only when x-a = x+a = 0, and now I am convinced that x-a = x+a when n-1 is an even number. This leads to the result that a=0, which contradicts the fact that the constant a is a non-zero real number, disproving the assumption for n-1 being an even integer.
November 1st, 2018, 05:38 AM
# 5
Senior Member
Joined: May 2016
From: USA
Posts: 1,310
Thanks: 552
Second, I think you misunderstood mine. My proof is split into 2 parts.
If n is even, p(x) has no real root and therefore no real double root. For example
$p(x) = (x - a)^2 + (x + a)^2 = x^2 - 2ax + a^2 + x^2 + 2ax + a^2 = 2x^2 + 2a^2.$
And $2x^2 + 2a^2 > 0 \text { unless } x = 0 = a.$
$\text {But } a \ne 0 \text { by hypothesis.}$
So no real root and thus no double real root.
It is a bit more complex if n is odd. Then p(x) certainly has at least one real root. But p'(x) has no real root. But that is a requirement for a double real root. For example
$p(x) = (x - a)^3 + (x + a)^3 =$
$x^3 - 3ax^2 + 3a^2x - a^3 + x^3 + 3ax^2 + 3a^2x + a^3 = 2x^3 + 6a^2x \implies$
$p'(x) = 6x^2 + 6a^2 > 0 \text { unless } x = 0 = a.$
$\text {But } a \ne 0 \text { by hypothesis.}$
So no value of x can be a double root because p'(x) would have to equal 0 there.
Last edited by JeffM1; November 1st, 2018 at 05:41 AM.
November 2nd, 2018, 10:41 PM
# 6
Member
Joined: Nov 2012
Posts: 80
Thanks: 1
I really appreciate your work, Jeff. It seems easier for me to understand the steps through counter-examples that disprove the assumption rather than through algebraic expressions. Thanks a lot for your time, and thanks also to jack for your suggestion.
Last edited by justusphung; November 2nd, 2018 at 11:04 PM.
November 3rd, 2018, 07:14 AM
# 7
Banned Camp
Joined: Mar 2015
From: New Jersey
Posts: 1,720
Thanks: 126
p(x)=(x-a)^n+(x+a)^n=0
If n even, impossible.
If n odd
(a-x)^n=(a+x)^n $\displaystyle \rightarrow$ a-x=a+x $\displaystyle \rightarrow$ x=-x
x=0 is only real root
I know this question was asked before, but none of the previous threads end up answering the question satisfactorily enough for me. So let me try to summarize my problems succinctly:
The notation $\mathbb{Q}(\sqrt{2})$ is commonly used to denote the smallest sub-field containing $\mathbb{Q} \cup \{\sqrt{2}\}$. However, for a general sub-field $K$, $K[t]$ is defined to be the ring of polynomials over $K$, and $K(t)$ is then the field of rational expressions over $K$. Is $K(t)$ simply a piece of notation, or does it follow the convention of $\mathbb{Q}(\sqrt{2})$? If the latter, how does this relate to rational functions over $K$? I am not understanding the proof that $K(t)$ is a transcendental extension of $K$, which goes as follows:
If $p$ is a polynomial over $K$ s.t. $p(t)=0$ then $p=0$ by definition of $K(t)$, so the extension is transcendental.
I understand that to show transcendence over $K$ we assume some element $t = \frac{r(s)}{q(s)} \in K(s)$ satisfies $p(t) = 0$ for $p \in K[t]$, and show that $p$ must be identically $0$ as a result. However, where are we using the definition of $K(t)$ to show this?
The other threads are linked here:
For any prime $p\gt 5$,prove that there are consecutive quadratic residues of $p$ and consecutive non-residues as well(excluding $0$).I know that there are equal number of quadratic residues and non-residues(if we exclude $0$), so if there are two consecutive quadratic residues, then certainly there are two consecutive non-residues,therefore, effectively i am seeking proof only for existence of consecutive quadratic residues. Thanks in advance.
Since 1 and 4 are both residues (for any $p\ge 5$), then to avoid having consecutive residues (with 1 and 4), we would have to have both 2 and 3 as non-residues, and then we have 2 consecutive non-residues.
Thus, we must have either 2 consecutive residues or 2 consecutive nonresidues.
i.e.: 1 and 4 are both residues, so we have R * * R for the quadratic character of 1, 2, 3 and 4. However we fill in the two blanks with Rs or Ns, we will get either 2 consecutive Rs or 2 consecutive Ns.
Edited:
To show that we must actually get both RR and NN for $p\gt 5$, we consider 2 cases:
$p\equiv -1 \pmod 4$: then the second half of the list of $p-1$ Ns and Rs is the inverse of the first half (Ns become Rs and the order is reversed), so that if we have NN or RR in the first half (using the argument above) then we get the other pattern in the second half.
$p\equiv 1 \pmod 4$: then the second half of the list is the reverse of the first half. Then if there is no RR amongst the first 4, then there must be an appearance of NN, i.e. sequence begins RNNR..., and if we fill in the dots (this is where we need $p>5$ - to ensure there ARE some dots!) with Ns and Rs trying to avoid an appearance of RR, then we have to alternate ...NRNR...NR. However the sequence then ends with R, and the second half begins with R, so we eventually get RR.
(The comments about the second half of the list in the 2 cases are easy consequences of -1 being a residue or a nonresidue of p).
The number of $k\in[0,p-1]$ such that $k$ and $k+1$ are both quadratic residues is equal to: $$ \frac{1}{4}\sum_{k=0}^{p-1}\left(1+\left(\frac{k}{p}\right)\right)\left(1+\left(\frac{k+1}{p}\right)\right)+\frac{3+\left(\frac{-1}{p}\right)}{4}, $$ where the extra term accounts for $k=-1$ and $k=0$, compensating for the fact that the Legendre symbol $\left(\frac{0}{p}\right)$ is $0$, although $0$ is a quadratic residue. Since: $$ \sum_{k=0}^{p-1}\left(\frac{k}{p}\right)=\sum_{k=0}^{p-1}\left(\frac{k+1}{p}\right)=0, $$ the number of consecutive quadratic residues is equal to $$ \frac{p+3+\left(\frac{-1}{p}\right)}{4}+\frac{1}{4}\sum_{k=0}^{p-1}\left(\frac{k(k+1)}{p}\right). $$ By the multiplicativity of the Legendre symbol, for $k\neq 0$ we have $\left(\frac{k}{p}\right)=\left(\frac{k^{-1}}{p}\right)$, so: $$ \sum_{k=1}^{p-1}\left(\frac{k(k+1)}{p}\right) = \sum_{k=1}^{p-1}\left(\frac{1+k^{-1}}{p}\right)=\sum_{k=2}^{p}\left(\frac{k}{p}\right)=-1,$$ and we have $\frac{p+3}{4}$ consecutive quadratic residues if $p\equiv 1\pmod{4}$ and $\frac{p+1}{4}$ consecutive quadratic residues if $p\equiv -1\pmod{4}$.
An elementary proof, too: if $p>5$, at least one residue class among $\{2,5,10\}$ must be a quadratic residue, since the product of two quadratic non-residues is a quadratic residue. But every element of the set $\{2,5,10\}$ is a square-plus-one, giving at least a couple of consecutive quadratic residues among $(1,2),(4,5),(9,10)$.
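The statement is also easy to verify by brute force for small primes (the helper names below are my own):

```python
def residues(p):
    """Nonzero quadratic residues mod p."""
    return {(k * k) % p for k in range(1, p)}

def consecutive_pairs(p):
    """Pairs (k, k+1) of consecutive nonzero quadratic residues mod p."""
    r = residues(p)
    return [(k, k + 1) for k in range(1, p - 1) if k in r and k + 1 in r]

# p = 11: the nonzero residues are {1, 3, 4, 5, 9}, and both
# (3, 4) and (4, 5) are consecutive residue pairs.
print(consecutive_pairs(11))  # [(3, 4), (4, 5)]
```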
Your question actually describes an infinite dimension division algebra - the field of
Formal Laurent series. In particular, the elements of this field are just expressions of the form:$$a_{-k}x^{-k}+a_{-k+1}x^{-k+1}+\ldots+a_{-1}x^{-1}+a_0+a_1x+a_2x^2+\ldots$$where the series can go on forever to the right, but may not have infinitely many terms of negative exponent (although can have arbitrarily many). Note that we're not concerning ourselves with convergence or anything like that - these are purely expressions that you manipulate by adding them coefficient wise and multiplying them by taking every pair of terms from the two series, taking their product, then collecting like terms (which is, for each coefficient, a finite process due to the fact that there are only finitely many negative terms included in any series).
It's sort of a pain to write out the exact formula for the multiplicative inverse of an element, but you can do it fairly nicely in two steps: First, note that every non-zero element is of the form$$c\cdot x^n\cdot F$$where $c$ is an element of the field we're taking our coefficients from and where $F$ is of the form $F=1+a_1x+a_2x^2+\ldots$. Since we can clearly invert $c$ as it's just a real number (or something like that) and we can invert $x^n$, all we need to do is invert $F$. We can do that by solving$$(1+a_1x+a_2x^2+\ldots)\cdot (1+b_1x+b_2x^2+\ldots)=1+0x+0x^2+\ldots$$which gives the equations, for each $n\geq 1$ that$$\sum_{i=0}^{n}a_ib_{n-i}=0$$which rearranges to say$$b_n=-\sum_{i=1}^na_ib_{n-i}$$after we pull out one term from the sum. We can then inductively figure out the power series inverse to any of the form $1+a_1x+a_2x^2+\ldots$ and extend that as you wish.
It's also worth noting that this construction gives a division ring whenever we take our coefficients from a division ring - so if we want something non-commutative, we could apply this construction to have quaternion coefficients.
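The inductive recurrence $b_n=-\sum_{i=1}^na_ib_{n-i}$ described above can be sketched directly, truncating the series at a fixed number of terms (the function name is mine):

```python
def invert_unit_series(a, n_terms):
    """Coefficients b_0..b_{n_terms-1} of the inverse of the power series
    1 + a[1]*x + a[2]*x^2 + ..., via b_n = -sum_{i=1}^{n} a_i * b_{n-i}.
    a[0] is taken to be 1; coefficients of a beyond len(a)-1 are zero."""
    b = [1]
    for n in range(1, n_terms):
        s = sum(a[i] * b[n - i] for i in range(1, min(n, len(a) - 1) + 1))
        b.append(-s)
    return b

# (1 + x)^(-1) = 1 - x + x^2 - x^3 + ...
print(invert_unit_series([1, 1], 5))   # [1, -1, 1, -1, 1]
# (1 - x)^(-1) = 1 + x + x^2 + ... (the geometric series)
print(invert_unit_series([1, -1], 5))  # [1, 1, 1, 1, 1]
```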
The Annals of Mathematical Statistics Ann. Math. Statist. Volume 25, Number 2 (1954), 382-386. Approximation Methods which Converge with Probability one Abstract
Let $H(y\mid x)$ be a family of distribution functions depending upon a real parameter $x,$ and let $M(x) = \int^\infty_{-\infty} y dH(y \mid x)$ be the corresponding regression function. It is assumed $M(x)$ is unknown to the experimenter, who is, however, allowed to take observations on $H(y\mid x)$ for any value $x.$ Robbins and Monro [1] give a method for defining successively a sequence $\{x_n\}$ such that $x_n$ converges to $\theta$ in probability, where $\theta$ is a root of the equation $M(x) = \alpha$ and $\alpha$ is a given number. Wolfowitz [2] generalizes these results, and Kiefer and Wolfowitz [3], solve a similar problem in the case when $M(x)$ has a maximum at $x = \theta.$ Using a lemma due to Loeve [4], we show that in both cases $x_n$ converges to $\theta$ with probability one, under weaker conditions than those imposed in [2] and [3]. Further we solve a similar problem in the case when $M(x)$ is the median of $H(y \mid x).$
Article information
Source: Ann. Math. Statist., Volume 25, Number 2 (1954), 382-386.
First available in Project Euclid: 28 April 2007
Permanent link: https://projecteuclid.org/euclid.aoms/1177728794
Digital Object Identifier: doi:10.1214/aoms/1177728794
Mathematical Reviews number (MathSciNet): MR62399
Zentralblatt MATH identifier: 0055.37806
Citation
Blum, Julius R. Approximation Methods which Converge with Probability one. Ann. Math. Statist. 25 (1954), no. 2, 382--386. doi:10.1214/aoms/1177728794. https://projecteuclid.org/euclid.aoms/1177728794 |
I have an equilateral triangle with unknown side $a$. Next, I pick a random point $P$ inside the triangle. The distances are $|AP|=3$ cm, $|BP|=4$ cm, $|CP|=5$ cm.
It is the red triangle in the picture. The exercise is to calculate the area of the equilateral triangle (without using the law of cosines or the law of sines, just simple elementary arguments).
The first thing I did was to reflect the point $A$ in the opposite side $a$, obtaining $D$. Afterwards I constructed another equilateral triangle $\triangle PP_1C$.
Now it is possible to say something about the angles, namely that $\angle ABD=120^{\circ}$, $\angle PBP_1=90^{\circ} \implies \angle APB=150^{\circ}$, and $\alpha+\beta=90^{\circ}$.
Now I have no more ideas. Could you help me finish the proof to get $a$ and therefore the area of $\triangle ABC$? If you have alternative ideas for getting the area without reflecting the point $A$, that would be interesting too. |
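For what it's worth, a numerical check (not the elementary argument the exercise asks for): the classical identity $3(p^4+q^4+r^4+a^4) = (p^2+q^2+r^2+a^2)^2$ for an interior point at distances $p,q,r$ from the vertices gives $a^2 = 25+12\sqrt{3}$ for $p,q,r = 3,4,5$, and one can confirm that a point with those distances really exists for that side length:

```python
import math

a2 = 25 + 12 * math.sqrt(3)          # candidate side length squared
a = math.sqrt(a2)

# Place the triangle and intersect the circles |AP| = 3 and |BP| = 4.
A, B, C = (0.0, 0.0), (a, 0.0), (a / 2, a * math.sqrt(3) / 2)
x = (9 - 16 + a2) / (2 * a)          # from subtracting |AP|^2 = 9 and |BP|^2 = 16
y = math.sqrt(9 - x * x)

# The intersection point is indeed at distance 5 from C.
assert abs(math.dist((x, y), C) - 5) < 1e-9

area = math.sqrt(3) / 4 * a2
print(round(area, 4))                # 19.8253
```

So the target area is $\frac{\sqrt{3}}{4}(25+12\sqrt{3}) = 9 + \frac{25\sqrt{3}}{4} \approx 19.83\ \text{cm}^2$, which is what the elementary argument should reproduce.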
I'm struggling to write $|\cdot|$ in the nomenclature in LaTeX. It works for the first item in the nomenclature, but not for the second item in the following:
\documentclass[10pt,journal]{IEEEtran}
\usepackage{nomencl}
\usepackage{ifthen}
\renewcommand{\nomgroup}[1]{%
  \ifthenelse{\equal{#1}{C}}{\item[\textbf{Constants}]}{%
  \ifthenelse{\equal{#1}{V}}{\item[\textbf{Variables}]}{%
  \ifthenelse{\equal{#1}{K}}{\item[\textbf{Symbols}]}{%
  \ifthenelse{\equal{#1}{S}}{\item[\textbf{sets}]}{}}}}}
\makenomenclature
\begin{document}
\nomenclature[Km]{$\leftarrow\cdot\rightarrow$}{Number of element in a set.} %
\nomenclature[Km]{$\left|\cdot \right|$}{Number of element in a set.} %
Say something.
\printnomenclature
\end{document}
Can anyone help me with it? Why is it not showing up? |
It is not clear from your statement whether the proof is to strictly use induction on some variable such as the number of characters or some more general kind of induction.
Here is how I would do it. The definition of a well formed formula (wff) is almost always defined recursively. Your problem statement apparently uses something like this definition:
1. All propositional variables (such as $p$, $p1$, $p2$, etc.) are wffs.
2. If $\mathcal A$ is a wff then so is $(\mathcal A)$.
3. If $\mathcal A$ and $\mathcal B$ are wffs then so are $\mathcal A\land\mathcal B$ and $\mathcal A\lor\mathcal B$.
4. A statement is a wff if and only if it can be determined to be so by means of rules 1, 2, and 3.
(Note that a complete definition of wff would also include $\lnot\mathcal A$, $\mathcal A\implies\mathcal B$, and $\mathcal A\equiv\mathcal B$, and perhaps some others depending on your particular logical structure, but these are excluded in your problem.)
Then use induction on this (or the actual) definition. First show your problem is true for all propositional variables, then for wffs in parentheses, then for conjunctions and disjunctions.
If you need an actual variable to do induction on, note that each of those rules in the definition of a wff adds to the number of characters in the formula. You can then do induction on the number of characters in the formula, using strong induction:
Show the statement is true for $n=1$ (statement letters). Show that if the statement is true for all $k<n$ then it is true for $n$ (using definition rules 2 and 3).
In my first version, you could say $n$ is the number of applications of the rules for a wff. There are multiple ways to do the induction: pick the one that works best for you and the details of your class.
Is this clear? |
Let $(K,w)$ be a henselian field such that the residue field $k=k(w)$ is an algebraic extension of a finite field $\mathbb{F}_p$. Let $\ell\neq p$ be a prime with $\mu_\ell\subset K$.
Now, in the paper I'm reading, it states:
Let $G_\ell$ be a pro-$\ell$ Sylow group of $G_K$. By the ramification theory for general valuations (see e.g.
O. Endler, Valuation Theory, Springer, 1972, §20) we have $G_\ell\cong \mathbb{Z}_\ell(1)^r \rtimes G_k(\ell)$ where the action is defined via the cyclotomic character with $r=\dim_{\mathbb{F}_\ell}(\Gamma_w/\ell\Gamma_w)$ and $G_k(\ell)$ being the pro-$\ell$ Sylow group of $G_k$ hence either $\cong \mathbb{Z}_\ell$ or $\cong 0$.
I've looked into "Valuation Theory", §20 by O. Endler and other books on ramification theory, but the only thing I've found is that the ramification group is the pro-$p$ Sylow subgroup of the inertia group, which, of course, is something entirely different, right?
Anyway, I'm wondering why this assertion holds and if there are any better references?
Update: I've found the same assertion in the paper On Grothendieck's Conjecture of Birational Anabelian Geometry by Florian Pop, Ann. Math. 138 (1994), p.155, 1.6 (2). But there's no reference. |
A Rajchman measure on the unit circle $\mathbb{T}$ is a Borel probability measure $\mu$ with $\lim_{n\to\infty}\hat{\mu}(n)=0$, where $\hat{\mu}(n)=\mu(z^n)$ for $n\in\mathbb{Z}$ are the Fourier coefficients of $\mu$.
Suppose $(X,\mathcal{B},T,\mu)$ is a measure-theoretic dynamical system consisting of a measure space $(X,\mathcal{B},\mu)$ and an invertible measure preserving map $T:X\to X$. A dynamical system $(X,\mathcal{B},T,\mu)$ is called strong mixing if $$\lim_{n\to\infty} \mu(T^{-n}A\cap B)=\mu(A)\mu(B)$$ for all $A,B\in\mathcal{B}$.
Denote the space $\{f\in L^2(X,\mu)|\int_X f\,d\mu=0\}$ by $L^2_0(X,\mu)$. The dynamical system $(X,\mathcal{B},T,\mu)$ is strong mixing iff $$\lim_{n\to\infty}\int_X f(T^nx)\overline{f(x)}\,d\mu(x)=0$$ for every unit $f\in L^2_0(X,\mu)$.
For every unit $f\in L^2_0(X,\mu)$, one can define a Borel probability measure $\mu_f$ on $\mathbb{T}$ by $$\widehat{\mu_f}(n)=\int_X f(T^nx)\overline{f(x)}\,d\mu(x).$$ So a dynamical system $(X,\mathcal{B},T,\mu)$ is strong mixing iff $\mu_f$ is a Rajchman measure for every unit $f\in L^2_0(X,\mu)$.
Through the above observation, given a strong mixing dynamical system $(X,\mathcal{B},T,\mu)$, for instance, Bernoulli shift $(\{0,1\}^\mathbb{Z},\mathcal{B},S,\mu_p)$ where $\mu_p$ is the product measure of the measure on $\{0,1\}$ giving $\{0\}$ measure $p$ and $\{1\}$ measure $1-p$, each $f\in L^2_0(X,\mu)$ gives rise to a Rajchman measure on $\mathbb{T}$.
Question: For which strong mixing dynamical systems $(X,\mathcal{B},T,\mu)$ can one find a unit $f\in L^2_0(X,\mu)$ such that $\mu_f$ is a Rajchman measure singular to the Lebesgue measure? |
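As a quick illustration of why the Bernoulli example makes this question nontrivial: for $f$ depending only on the $0$-th coordinate, independence makes $\widehat{\mu_f}(n)$ vanish exactly for every $n\neq 0$, so for such $f$ the measure $\mu_f$ is just Lebesgue measure, the opposite of singular. An empirical sketch of that correlation decay (parameters are my own choices):

```python
import random

random.seed(1)
p, N = 0.3, 200_000
# Centered coordinate function f(x) = x_0 - E[x_0], sampled along an orbit of the shift:
xs = [(0 if random.random() < p else 1) - (1 - p) for _ in range(N)]

def corr(lag):
    """Empirical <f∘T^lag, f>/<f, f>, an estimate of the normalized mu_f-hat(lag)."""
    num = sum(xs[i + lag] * xs[i] for i in range(N - lag))
    den = sum(x * x for x in xs)
    return num / den

print([corr(k) for k in (0, 1, 2, 5)])  # lag 0 gives 1; nonzero lags are O(1/sqrt(N))
```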
When solving a reduced KKT system of a nonlinear (and nonconvex) constrained program after eliminating slack and dual variables, how do we actually take the next step in a primal-dual method?
For example, following notation from NW, if the original nonlinear system is like (19.12) \begin{align} \begin{bmatrix} \nabla_{xx}^2 L & 0 & A_E(x)^T & A_I(x)^T \\ 0 & \Sigma & 0 & -I \\ A_E(x) & 0 & 0 & 0 \\ A_I(x) & -I & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} p_x \\ p_s \\ -p_y \\ -p_z \end{bmatrix} = -\begin{bmatrix} \nabla f(x) - A_E(x)^Ty - A_I(x)^T z \\ z - \mu S^{-1}e \\ c_E(x) \\ c_I(x) - s \end{bmatrix}, \end{align}
then I see how a solution gives us a way to update $x,s,y,z$. However, if we solve a reduced system of the form
\begin{align} \begin{bmatrix} \nabla_{xx}^2 L + A_I(x)^T\Sigma A_I(x) & A_E(x)^T \\ A_E(x) & 0 \end{bmatrix} \begin{bmatrix} p_x \\ -p_y \end{bmatrix} = \text{?} \end{align}
then (1) what is the RHS; and (2) how do we update $s,z$?
EDIT: Can I have some help on the details for how we eliminate variables in moving from the larger system to the smaller system? |
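Since this is still open, here is my own working of the elimination (check signs against your edition of NW). Write $r_1 = \nabla f(x) - A_E(x)^Ty - A_I(x)^T z$, $r_2 = z - \mu S^{-1}e$, $r_4 = c_I(x) - s$. The fourth block row gives $p_s = A_I p_x + r_4$; the second then gives $p_z = -r_2 - \Sigma p_s$; substituting both into the first row yields the reduced system with right-hand side $-\begin{bmatrix} r_1 + A_I^T(r_2 + \Sigma r_4) \\ c_E(x)\end{bmatrix}$, and $s, z$ are updated afterwards from those two recovery formulas. A numerical check of this bookkeeping on random data (all quantities are hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mE, mI = 6, 2, 3

H  = rng.standard_normal((n, n)); H = H @ H.T + n * np.eye(n)   # stand-in for ∇²_xx L
AE = rng.standard_normal((mE, n))
AI = rng.standard_normal((mI, n))
Sig = np.diag(rng.uniform(0.5, 2.0, mI))   # Σ (= S^{-1}Z in NW); positive diagonal here
r1 = rng.standard_normal(n)                # ∇f - A_E^T y - A_I^T z
r2 = rng.standard_normal(mI)               # z - μ S^{-1} e
cE = rng.standard_normal(mE)               # c_E
r4 = rng.standard_normal(mI)               # c_I - s

# Full system (19.12), unknowns (p_x, p_s, -p_y, -p_z):
Z = np.zeros
K = np.block([[H,          Z((n, mI)),  AE.T,        AI.T       ],
              [Z((mI, n)), Sig,         Z((mI, mE)), -np.eye(mI)],
              [AE,         Z((mE, mI)), Z((mE, mE)), Z((mE, mI))],
              [AI,         -np.eye(mI), Z((mI, mE)), Z((mI, mI))]])
sol = np.linalg.solve(K, -np.concatenate([r1, r2, cE, r4]))
px, ps, npy, npz = np.split(sol, [n, n + mI, n + mI + mE])

# Reduced system in (p_x, -p_y), after eliminating p_s and p_z:
Kr  = np.block([[H + AI.T @ Sig @ AI, AE.T       ],
                [AE,                  Z((mE, mE))]])
rhs = -np.concatenate([r1 + AI.T @ (r2 + Sig @ r4), cE])
pxr, npyr = np.split(np.linalg.solve(Kr, rhs), [n])

ps_rec = AI @ pxr + r4           # recover the slack step
pz_rec = -r2 - Sig @ ps_rec      # recover the inequality-multiplier step
assert np.allclose(px, pxr) and np.allclose(npy, npyr)
assert np.allclose(ps, ps_rec) and np.allclose(-npz, pz_rec)
```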
cbs

psi4.driver.cbs(func, label, **kwargs)
Function to define a multistage energy method from combinations of basis set extrapolations and delta corrections and condense the components into a minimum number of calculations.
Aliases: complete_basis_set()
Returns: (float) – Total electronic energy in Hartrees
PSI variables:
Caution
Some features are not yet implemented. Buy a developer a coffee.
- No way to tell function to boost fitting basis size for all calculations.
- No way to extrapolate def2 family basis sets.
- Need to add more extrapolation schemes.
As represented in the equation below, a CBS energy method is defined in several sequential stages (scf, corl, delta, delta2, delta3, delta4, delta5) covering treatment of the reference total energy, the correlation energy, a delta correction to the correlation energy, and a second delta correction, etc.. Each is activated by its stage_wfn keyword and is only allowed if all preceding stages are active.\[E_{\text{total}}^{\text{CBS}} = \mathcal{F}_{\textbf{scf_scheme}} \left(E_{\text{total},\; \text{SCF}}^{\textbf{scf_basis}}\right) \; + \mathcal{F}_{\textbf{corl_scheme}} \left(E_{\text{corl},\; \textbf{corl_wfn}}^{\textbf{corl_basis}}\right) \; + \delta_{\textbf{delta_wfn_lesser}}^{\textbf{delta_wfn}} \; + \delta_{\textbf{delta2_wfn_lesser}}^{\textbf{delta2_wfn}} \; + \delta_{\textbf{delta3_wfn_lesser}}^{\textbf{delta3_wfn}} \; + \delta_{\textbf{delta4_wfn_lesser}}^{\textbf{delta4_wfn}} \; + \delta_{\textbf{delta5_wfn_lesser}}^{\textbf{delta5_wfn}}\]
Here, \(\mathcal{F}\) is an energy or energy extrapolation scheme, and the following also hold.\[\delta_{\textbf{delta_wfn_lesser}}^{\textbf{delta_wfn}} \; = \mathcal{F}_{\textbf{delta_scheme}} \left(E_{\text{corl},\; \textbf{delta_wfn}}^{\textbf{delta_basis}}\right) - \mathcal{F}_{\textbf{delta_scheme}} \left(E_{\text{corl},\; \textbf{delta_wfn_lesser}}^{\textbf{delta_basis}}\right)\]\[\delta_{\textbf{delta2_wfn_lesser}}^{\textbf{delta2_wfn}} \; = \mathcal{F}_{\textbf{delta2_scheme}} \left(E_{\text{corl},\; \textbf{delta2_wfn}}^{\textbf{delta2_basis}}\right) - \mathcal{F}_{\textbf{delta2_scheme}} \left(E_{\text{corl},\; \textbf{delta2_wfn_lesser}}^{\textbf{delta2_basis}}\right)\]\[\delta_{\textbf{delta3_wfn_lesser}}^{\textbf{delta3_wfn}} \; = \mathcal{F}_{\textbf{delta3_scheme}} \left(E_{\text{corl},\; \textbf{delta3_wfn}}^{\textbf{delta3_basis}}\right) - \mathcal{F}_{\textbf{delta3_scheme}} \left(E_{\text{corl},\; \textbf{delta3_wfn_lesser}}^{\textbf{delta3_basis}}\right)\]\[\delta_{\textbf{delta4_wfn_lesser}}^{\textbf{delta4_wfn}} \; = \mathcal{F}_{\textbf{delta4_scheme}} \left(E_{\text{corl},\; \textbf{delta4_wfn}}^{\textbf{delta4_basis}}\right) - \mathcal{F}_{\textbf{delta4_scheme}} \left(E_{\text{corl},\; \textbf{delta4_wfn_lesser}}^{\textbf{delta4_basis}}\right)\]\[\delta_{\textbf{delta5_wfn_lesser}}^{\textbf{delta5_wfn}} \; = \mathcal{F}_{\textbf{delta5_scheme}} \left(E_{\text{corl},\; \textbf{delta5_wfn}}^{\textbf{delta5_basis}}\right) - \mathcal{F}_{\textbf{delta5_scheme}} \left(E_{\text{corl},\; \textbf{delta5_wfn_lesser}}^{\textbf{delta5_basis}}\right)\]
A translation of this ungainly equation to example [5] below is as follows. In words, this is a double- and triple-zeta 2-point Helgaker-extrapolated CCSD(T) coupled-cluster correlation correction appended to a triple- and quadruple-zeta 2-point Helgaker-extrapolated MP2 correlation energy appended to a SCF/aug-cc-pVQZ reference energy.\[E_{\text{total}}^{\text{CBS}} = \mathcal{F}_{\text{highest_1}} \left(E_{\text{total},\; \text{SCF}}^{\text{aug-cc-pVQZ}}\right) \; + \mathcal{F}_{\text{corl_xtpl_helgaker_2}} \left(E_{\text{corl},\; \text{MP2}}^{\text{aug-cc-pV[TQ]Z}}\right) \; + \delta_{\text{MP2}}^{\text{CCSD(T)}}\]\[\delta_{\text{MP2}}^{\text{CCSD(T)}} \; = \mathcal{F}_{\text{corl_xtpl_helgaker_2}} \left(E_{\text{corl},\; \text{CCSD(T)}}^{\text{aug-cc-pV[DT]Z}}\right) - \mathcal{F}_{\text{corl_xtpl_helgaker_2}} \left(E_{\text{corl},\; \text{MP2}}^{\text{aug-cc-pV[DT]Z}}\right)\]
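For reference, a two-point Helgaker-style correlation extrapolation of the kind used in example [5] has the closed form $E_{\text{corl}}^{\text{CBS}} = \frac{\ell_{\text{hi}}^3 E_{\text{hi}} - \ell_{\text{lo}}^3 E_{\text{lo}}}{\ell_{\text{hi}}^3 - \ell_{\text{lo}}^3}$, which follows from assuming $E(\ell) = E^{\text{CBS}} + A\ell^{-3}$ at zeta levels $\ell$. A standalone sketch of that formula (my own code, not the Psi4 source):

```python
def corl_xtpl_2point(e_lo, l_lo, e_hi, l_hi):
    """Two-point l^-3 extrapolation of correlation energies at zeta levels l_lo < l_hi."""
    return (l_hi**3 * e_hi - l_lo**3 * e_lo) / (l_hi**3 - l_lo**3)

# Synthetic check: if E(l) = E_cbs + A / l^3 exactly, the formula recovers E_cbs.
E_cbs, A = -0.300, 0.500
e = lambda l: E_cbs + A / l**3
print(corl_xtpl_2point(e(3), 3, e(4), 4))  # -0.3 up to rounding
```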
Energy Methods
The presence of a stage_wfn keyword is the indicator to incorporate (and check for stage_basis and stage_scheme keywords) and compute that stage in defining the CBS energy.
The cbs() function requires, at a minimum, name='scf' and scf_basis keywords to be specified for reference-step-only jobs, and name and corl_basis keywords for correlated jobs.
The following energy methods have been set up for cbs().
scf hf mp2 mp2.5 mp3 mp4(sdq) mp4 mp n omp2 omp2.5 omp3 olccd lccd lccsd cepa(0) cepa(1) cepa(3) acpf aqcc qcisd cc2 ccsd fno-ccsd bccd cc3 qcisd(t) ccsd(t) fno-ccsd(t) bccd(t) cisd cisdt cisdtq ci n fci mrccsd mrccsd(t) mrccsdt mrccsdt(q)

Parameters:

name (string) – 'scf' || 'ccsd' || etc.
First argument, usually unlabeled. Indicates the computational method for the correlation energy, unless only the reference step is to be performed, in which case it should be 'scf'. Overruled if stage_wfn keywords are supplied.

scf_wfn (string) – \(\Rightarrow\) 'scf' \(\Leftarrow\) || 'c4-scf' || etc.
Indicates the energy method for which the reference energy is to be obtained. Generally unnecessary, as 'scf' is the scf in PSI4, but can be used to direct lone scf components to run in PSI4 or Cfour in a mixed-program composite method.

corl_wfn (string) – 'mp2' || 'ccsd(t)' || etc.
Indicates the energy method for which the correlation energy is to be obtained. Can also be specified with name or as the unlabeled first argument to the function.

delta_wfn (string) – 'ccsd' || 'ccsd(t)' || etc.
Indicates the (superior) energy method for which a delta correction to the correlation energy is to be obtained.

delta_wfn_lesser (string) – \(\Rightarrow\) corl_wfn \(\Leftarrow\) || 'mp2' || etc.
Indicates the inferior energy method for which a delta correction to the correlation energy is to be obtained.

delta2_wfn (string) – 'ccsd' || 'ccsd(t)' || etc.
Indicates the (superior) energy method for which a second delta correction to the correlation energy is to be obtained.

delta2_wfn_lesser (string) – \(\Rightarrow\) delta_wfn \(\Leftarrow\) || 'ccsd(t)' || etc.
Indicates the inferior energy method for which a second delta correction to the correlation energy is to be obtained.

delta3_wfn (string) – 'ccsd' || 'ccsd(t)' || etc.
Indicates the (superior) energy method for which a third delta correction to the correlation energy is to be obtained.

delta3_wfn_lesser (string) – \(\Rightarrow\) delta2_wfn \(\Leftarrow\) || 'ccsd(t)' || etc.
Indicates the inferior energy method for which a third delta correction to the correlation energy is to be obtained.

delta4_wfn (string) – 'ccsd' || 'ccsd(t)' || etc.
Indicates the (superior) energy method for which a fourth delta correction to the correlation energy is to be obtained.

delta4_wfn_lesser (string) – \(\Rightarrow\) delta3_wfn \(\Leftarrow\) || 'ccsd(t)' || etc.
Indicates the inferior energy method for which a fourth delta correction to the correlation energy is to be obtained.

delta5_wfn (string) – 'ccsd' || 'ccsd(t)' || etc.
Indicates the (superior) energy method for which a fifth delta correction to the correlation energy is to be obtained.

delta5_wfn_lesser (string) – \(\Rightarrow\) delta4_wfn \(\Leftarrow\) || 'ccsd(t)' || etc.
Indicates the inferior energy method for which a fifth delta correction to the correlation energy is to be obtained.
Basis Sets
Currently, the basis set set through set commands has no influence on a cbs calculation.
Parameters:

scf_basis (basis string) – \(\Rightarrow\) corl_basis \(\Leftarrow\) || 'cc-pV[TQ]Z' || 'jun-cc-pv[tq5]z' || '6-31G*' || etc.
Indicates the sequence of basis sets employed for the reference energy. If any correlation method is specified, scf_basis can default to corl_basis.

corl_basis (basis string) – 'cc-pV[TQ]Z' || 'jun-cc-pv[tq5]z' || '6-31G*' || etc.
Indicates the sequence of basis sets employed for the correlation energy.

delta_basis (basis string) – 'cc-pV[TQ]Z' || 'jun-cc-pv[tq5]z' || '6-31G*' || etc.
Indicates the sequence of basis sets employed for the delta correction to the correlation energy.

delta2_basis (basis string) – 'cc-pV[TQ]Z' || 'jun-cc-pv[tq5]z' || '6-31G*' || etc.
Indicates the sequence of basis sets employed for the second delta correction to the correlation energy.

delta3_basis (basis string) – 'cc-pV[TQ]Z' || 'jun-cc-pv[tq5]z' || '6-31G*' || etc.
Indicates the sequence of basis sets employed for the third delta correction to the correlation energy.

delta4_basis (basis string) – 'cc-pV[TQ]Z' || 'jun-cc-pv[tq5]z' || '6-31G*' || etc.
Indicates the sequence of basis sets employed for the fourth delta correction to the correlation energy.

delta5_basis (basis string) – 'cc-pV[TQ]Z' || 'jun-cc-pv[tq5]z' || '6-31G*' || etc.
Indicates the sequence of basis sets employed for the fifth delta correction to the correlation energy.
Schemes
Transformations of the energy through basis set extrapolation for each stage of the CBS definition. A complaint is generated if the number of basis sets in stage_basis does not exactly satisfy the requirements of stage_scheme. An exception is the default, 'xtpl_highest_1', which uses the best basis set available. See sec:cbs_xtpl for all available schemes.
Parameters:

scf_scheme (function) – \(\Rightarrow\) xtpl_highest_1 \(\Leftarrow\) || scf_xtpl_helgaker_3 || etc.
Indicates the basis set extrapolation scheme to be applied to the reference energy. Defaults to scf_xtpl_helgaker_3() if three valid basis sets are present in scf_basis, scf_xtpl_helgaker_2() if two valid basis sets are present in scf_basis, and xtpl_highest_1() otherwise.
Choices: xtpl_highest_1 || scf_xtpl_helgaker_3 || scf_xtpl_helgaker_2 || scf_xtpl_truhlar_2 || scf_xtpl_karton_2

corl_scheme (function) – \(\Rightarrow\) xtpl_highest_1 \(\Leftarrow\) || corl_xtpl_helgaker_2 || etc.
Indicates the basis set extrapolation scheme to be applied to the correlation energy. Defaults to corl_xtpl_helgaker_2() if two valid basis sets are present in corl_basis and xtpl_highest_1() otherwise.
Choices: xtpl_highest_1 || corl_xtpl_helgaker_2

delta_scheme (function) – \(\Rightarrow\) xtpl_highest_1 \(\Leftarrow\) || corl_xtpl_helgaker_2 || etc.
Indicates the basis set extrapolation scheme to be applied to the delta correction to the correlation energy. Defaults to corl_xtpl_helgaker_2() if two valid basis sets are present in delta_basis and xtpl_highest_1() otherwise.
Choices: xtpl_highest_1 || corl_xtpl_helgaker_2

delta2_scheme (function) – \(\Rightarrow\) xtpl_highest_1 \(\Leftarrow\) || corl_xtpl_helgaker_2 || etc.
Indicates the basis set extrapolation scheme to be applied to the second delta correction to the correlation energy. Defaults to corl_xtpl_helgaker_2() if two valid basis sets are present in delta2_basis and xtpl_highest_1() otherwise.
Choices: xtpl_highest_1 || corl_xtpl_helgaker_2

delta3_scheme (function) – \(\Rightarrow\) xtpl_highest_1 \(\Leftarrow\) || corl_xtpl_helgaker_2 || etc.
Indicates the basis set extrapolation scheme to be applied to the third delta correction to the correlation energy. Defaults to corl_xtpl_helgaker_2() if two valid basis sets are present in delta3_basis and xtpl_highest_1() otherwise.

delta4_scheme (function) – \(\Rightarrow\) xtpl_highest_1 \(\Leftarrow\) || corl_xtpl_helgaker_2 || etc.
Indicates the basis set extrapolation scheme to be applied to the fourth delta correction to the correlation energy. Defaults to corl_xtpl_helgaker_2() if two valid basis sets are present in delta4_basis and xtpl_highest_1() otherwise.

delta5_scheme (function) – \(\Rightarrow\) xtpl_highest_1 \(\Leftarrow\) || corl_xtpl_helgaker_2 || etc.
Indicates the basis set extrapolation scheme to be applied to the fifth delta correction to the correlation energy. Defaults to corl_xtpl_helgaker_2() if two valid basis sets are present in delta5_basis and xtpl_highest_1() otherwise.
molecule (molecule) – h2o || etc.
The target molecule, if not the last molecule defined.
Examples:

>>> # [1] replicates with cbs() the simple model chemistry scf/cc-pVDZ: set basis cc-pVDZ energy('scf')
>>> cbs(name='scf', scf_basis='cc-pVDZ')

>>> # [2] replicates with cbs() the simple model chemistry mp2/jun-cc-pVDZ: set basis jun-cc-pVDZ energy('mp2')
>>> cbs(name='mp2', corl_basis='jun-cc-pVDZ')

>>> # [3] DTQ-zeta extrapolated scf reference energy
>>> cbs(name='scf', scf_basis='cc-pV[DTQ]Z', scf_scheme=scf_xtpl_helgaker_3)

>>> # [4] DT-zeta extrapolated mp2 correlation energy atop a T-zeta reference
>>> cbs(corl_wfn='mp2', corl_basis='cc-pv[dt]z', corl_scheme=corl_xtpl_helgaker_2)

>>> # [5] a DT-zeta extrapolated coupled-cluster correction atop a TQ-zeta extrapolated mp2 correlation energy atop a Q-zeta reference (both equivalent)
>>> cbs(corl_wfn='mp2', corl_basis='aug-cc-pv[tq]z', delta_wfn='ccsd(t)', delta_basis='aug-cc-pv[dt]z')
>>> cbs(energy, wfn='mp2', corl_basis='aug-cc-pv[tq]z', corl_scheme=corl_xtpl_helgaker_2, delta_wfn='ccsd(t)', delta_basis='aug-cc-pv[dt]z', delta_scheme=corl_xtpl_helgaker_2)

>>> # [6] a D-zeta ccsd(t) correction atop a DT-zeta extrapolated ccsd cluster correction atop a TQ-zeta extrapolated mp2 correlation energy atop a Q-zeta reference
>>> cbs(name='mp2', corl_basis='aug-cc-pv[tq]z', corl_scheme=corl_xtpl_helgaker_2, delta_wfn='ccsd', delta_basis='aug-cc-pv[dt]z', delta_scheme=corl_xtpl_helgaker_2, delta2_wfn='ccsd(t)', delta2_wfn_lesser='ccsd', delta2_basis='aug-cc-pvdz')

>>> # [7] cbs() coupled with database()
>>> TODO database('mp2', 'BASIC', subset=['h2o','nh3'], symm='on', func=cbs, corl_basis='cc-pV[tq]z', corl_scheme=corl_xtpl_helgaker_2, delta_wfn='ccsd(t)', delta_basis='sto-3g')

>>> # [8] cbs() coupled with optimize()
>>> TODO optimize('mp2', corl_basis='cc-pV[DT]Z', corl_scheme=corl_xtpl_helgaker_2, func=cbs) |
Given a continuous map $f\colon X\to Y$ between two non-empty topological spaces, show that there is a homotopy equivalence between the mapping cylinder $(X\times I)\sqcup _{f}Y$ and $Y$.
Here $I=[0,1]$ and $(x,1)\sim f(x)$ for $x\in X$.
Is the following proof correct?:
Let's denote by $\cong$ a homotopy equivalence.
I is contractible so $I\cong \{1\}$, and obviously $X \cong X$,
So we have $X\times I \cong X\times \{1\} \cong f(X)$
Therefore $X\times I \sqcup_f Y\cong f(X)\sqcup_f Y = Y$
Thank you for your corrections and comments. |
I am working on a problem to prove, but I do not understand it completely. Where should I use inductive method? What is the base case? And so on. Here is my problem:
A truth assignment $M$ is a function that maps propositional variables to $\{0, 1\}$ ($1$ for true and $0$ for false). We write $M\vDash x$ if $x$ is true under $M$. We define a partial order $\leq$ on truth assignments by $M \le M'$ if $M(p) \le M'(p)$ for every propositional variable $p$.
A propositional formula is positive if it only contains the connectives $\wedge$ and $\vee$ (i.e., no negation $\lnot$ or implication $\Rightarrow$).
Use Proof By Induction to show that for any truth assignments $M$ and $M'$ such that $M\le M'$, and any positive propositional formula $x$, if $M \vDash x$, then $M' \vDash x$.
I am really confused. Any help is welcome. Thank you. |
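Not a proof, but the statement can be sanity-checked by brute force, which also exposes the structural recursion the induction follows (the encoding below is my own: variables are strings, formulas are nested tuples):

```python
from itertools import product

def holds(M, x):
    """M |= x for a positive formula x built from 'and'/'or'."""
    if isinstance(x, str):
        return M[x] == 1
    op, a, b = x
    return (holds(M, a) and holds(M, b)) if op == 'and' else (holds(M, a) or holds(M, b))

vars_ = ['p', 'q', 'r']
x = ('or', ('and', 'p', 'q'), 'r')   # a sample positive formula: (p ∧ q) ∨ r

# Exhaustively check: M <= M' and M |= x together imply M' |= x.
assigns = [dict(zip(vars_, bits)) for bits in product((0, 1), repeat=len(vars_))]
for M, M2 in product(assigns, assigns):
    if all(M[v] <= M2[v] for v in vars_) and holds(M, x):
        assert holds(M2, x)
print("monotonicity holds for", x)
```

The base case of the induction is the variable branch of `holds` (where $M(p)\le M'(p)$ is used directly), and the inductive step is the `and`/`or` branch, one case per connective.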
Homoclinic orbits for discrete Hamiltonian systems with local super-quadratic conditions
Center for Applied Mathematics, Guangzhou University, Guangzhou, Guangdong, 510006, China
The paper studies the discrete Hamiltonian system
$$\triangle [p(n)\triangle u(n-1)]-L(n)u(n)+\nabla W(n, u(n)) = 0,$$
under conditions on $p(n)$, $L(n)$ and the potential $W(n, x)$. Writing $\sigma(\mathcal{A})$ for the spectrum of the operator $\mathcal{A}$ acting on $l^2(\mathbb{Z}, \mathbb{R}^{N})$ by $(\mathcal{A}u)(n) = \triangle [p(n)\triangle u(n-1)]-L(n)u(n)$, homoclinic orbits are obtained under the local super-quadratic condition $\lim_{|x|\to \infty}\frac{W(n, x)}{|x|^2} = \infty$ for $n\in \mathbb{Z}$.
Keywords: Discrete Hamiltonian system, homoclinic orbit, strongly indefinite functional, local super-quadratic condition.
Mathematics Subject Classification: Primary: 39A11, 58E05; Secondary: 70H05.
Citation: Qinqin Zhang. Homoclinic orbits for discrete Hamiltonian systems with local super-quadratic conditions. Communications on Pure & Applied Analysis, 2019, 18 (1) : 425-434. doi: 10.3934/cpaa.2019021
The answer is $m/n$. The reason is that
$$f(n,x) = \sum_{j=0}^{\infty} \frac{x^{j n}}{(j n)!} = \frac1{n} \sum_{k=0}^{n-1} \exp{\left ( e^{i 2 \pi k/n} x\right )} $$
The sum is dominated by the $k=0$ term as $x \to \infty$. The ratio of such terms is thus $m/n$.
ADDENDUM
Proof of the above assertion is straightforward. The Taylor expansion of the RHS is
$$\frac1{n} \sum_{k=0}^{n-1} \sum_{j=0}^{\infty} \frac{e^{i 2 \pi j k/n} x^j}{j!} $$
Reversing the order of summation (justified because each individual sum converges absolutely):
$$\frac1{n} \sum_{j=0}^{\infty} \sum_{k=0}^{n-1} \frac{e^{i 2 \pi j k/n} x^j}{j!} = \frac1{n} \sum_{j=0}^{\infty}\frac{ x^j}{j!} \sum_{k=0}^{n-1} e^{i 2 \pi j k/n}$$
The inner sum is a geometrical series, so the Taylor expansion is now
$$ \frac1{n} \sum_{j=0}^{\infty}\frac{ x^j}{j!} \frac{e^{i 2 \pi j} - 1}{e^{i 2 \pi j/n} - 1} $$
It should be clear that the latter factor is equal to zero unless $j$ is a multiple of $n$; in that case the formula degenerates to $0/0$, but summing the geometric series directly gives $n$. QED. |
Along the lines of Glen O's answer, this answer attempts to explain the solvability of the problem, rather than provide the answer, which has already been given. Instead of using the meta-knowledge approach, which, as Glen stated, can get hard to follow, I use the range-base approach used in Rubio's answer, and specifically address some of the objections being raised.
The argument has been put forward that when Mark fails to answer on the first morning, he gives Rose no new information. This is actually true (sort of— see the last spoiler section of this answer). Rose could have predicted beforehand with certainty that Mark would fail to answer on the first day, so his failure to answer doesn't tell her anything she didn't know. However, that doesn't make the problem unsolvable. To see why, you must understand the following logical axiom: Additional information never invalidates a valid deduction. In other words, if I know that all of the statements $P_1,\dots P_n$ and $Q$ are true, and that $R$ is definitely true if $P_1, \dots P_n$ are true, I can conclude that $R$ is true. My additional knowledge that $Q$ is true, though unnecessary to deduce $R$, doesn't hamper my ability to deduce $R$ from $P_1,\dots P_n$. I will call this rule
LUI for "Law of Unnecessary Information." (It may have some other name, but I don't know it, so I'm giving it a new one.)
The line of reasoning goes as follows:
Let $R,\;M$ be the number of bars on Rose's and Mark's windows, respectively. Before the first question is asked, both Mark and Rose know the following:
$P_1$: Mark knows the value of $M$
$P_2$: Rose knows the value of $R$
$P_3$: $M+R=20 \;\vee \;M+R=18\;$ ($\vee$ means "or", in case you're unfamiliar with the notation)
$P_4$: $M\ge 2\;\wedge\;R \ge2\;$ ($\wedge$ means "and")
$P_5$: Both of them know every statement on this list, and every statement that can be deduced from statements they both know.
To help keep track of $P_5$, I will call a statement $P$ (with some subscript) only if it is known to both prisoners (or neither); thus, $P_5$ becomes "the other prisoner knows every $P$ that I know."
Additionally, Mark knows that $M=12$ and Rose knows that $R=8$. Call this knowledge $Q_M$ and $Q_R$, respectively.
Finally, as soon as one of them is asked the question for the $k^\text{th}$ time, they both know (and know that the other knows, etc.) $P_{\leftarrow k}$:
$P_{\leftarrow k}$: The other prisoner could not deduce the value of $M+R$ given the information they already had.
After Mark doesn't answer on the morning of day one, both prisoners can deduce from $P_1, P_3, P_4, P_5,$ and $P_{\leftarrow 2}$ that $M\le 16$ (call this $P_6$). It is true that both prisoners have more information than this about the value of $M$, but LUI tells us that that doesn't invalidate the deduction. It basically just means that Rose won't be surprised when she gets asked the question. She already knows she will be.
By the following morning, both prisoners can deduce from $P_1\dots P_6$ and $P_{\leftarrow 3}$ that $4\le R \le 16$ ($P_7$), and that evening, they can deduce from $P_1,\dots P_7$ and $P_{\leftarrow 4}$ that $4 \le M \le 14$ ($P_8$). Again, both prisoners know all of this already. (But the conclusions are still valid by LUI.)
On the next day, in a similar manner, they can deduce in the morning that $6 \le R \le 14$ ($P_9$), and in the evening that $6 \le M \le 12$ ($P_{10}$). Here's where things get interesting. Mark can deduce from $P_3$ and $Q_M$ that $R$ is either $6$ or $8$, but $R=6\wedge P_{10} \wedge P_3\implies M+R=18$ and $R=6\wedge P_{10} \wedge P_3\wedge\left[R=6\wedge P_{10} \wedge P_3\implies M+R=18\right]\implies \neg P_{\leftarrow 7}$. When he gets asked the question again on the following morning, he learns that $P_{\leftarrow 7}$ is true, and can thus deduce that $R \neq 6$ and therefore $R=8$ and $M+R=20$. This is actually the first time in the sequence that a $P_{\leftarrow k}$ provides any more information about the value of $M+R$ than the prisoner already has, but the sequence of irrelevant questions is necessary to establish the deep metaknowledge Glen talks about. In this formulation, all this metaknowledge is encapsulated in $P_5$. When a prisoner is asked a question, $P_5$ says that they can deduce not only $P_{\leftarrow k}$ but also that both of them know $P_{\leftarrow k}$ and, by repeatedly applying $P_5$, that both of them know that both of them know $P_{\leftarrow k}$ and so on. For any $P_{\leftarrow k}$, there is some level of "we both know that we both know" that can't be deduced from $P_1\dots P_5$ and $Q_M$ or $Q_R$ alone. This is the "new information" being "learned" at each stage. Really nothing new is learned until Rose fails to answer on the $3^\text{rd}$ evening, but the sequence of non-answers $P_{\leftarrow k}$ is necessary to provide the deductive path to $P_{\leftarrow 7}$.
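This interval-narrowing chain can be checked mechanically. Below is a minimal sketch of my own (the function name `run` and the alternating-question schedule starting with Mark are my assumptions, not part of the original puzzle statement): common knowledge is modeled as the set of still-possible pairs $(M, R)$, a prisoner answers exactly when every candidate pair compatible with their own bar count gives the same total, and each public non-answer prunes the candidate set, just as the $P_{\leftarrow k}$ statements do.

```python
def run(M_true, R_true):
    # All pairs consistent with the public rules: each >= 2, sum is 18 or 20.
    cands = {(m, r) for m in range(2, 19) for r in range(2, 19)
             if m + r in (18, 20)}
    for k in range(1, 50):
        mark_turn = (k % 2 == 1)          # Mark is asked first, then they alternate
        own = M_true if mark_turn else R_true
        idx = 0 if mark_turn else 1
        # The prisoner can answer iff all candidates matching their own count
        # agree on the total.
        sums = {m + r for (m, r) in cands if (m, r)[idx] == own}
        if len(sums) == 1:
            return k, sums.pop()          # answered on the k-th asking
        # Public non-answer: everyone removes the counts that *would* have
        # allowed an answer, since the other prisoner deduces the same thing.
        answerable = {v for v in {p[idx] for p in cands}
                      if len({m + r for (m, r) in cands
                              if (m, r)[idx] == v}) == 1}
        cands = {p for p in cands if p[idx] not in answerable}

print(run(12, 8))   # -> (7, 20)
```

With $M=12$ and $R=8$ this returns $(7, 20)$: the seventh asking is the fourth morning, on which Mark announces $20$, matching the deduction above.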
In fact, viewing it another way, the fact that not answering provides "no new information" (and in fact doesn't provide any new direct information about the number of bars) is exactly why the puzzle is solvable: a non-answer says that the previous question provided no new information. Because they both know that the number of bars is either $18$ or $20$ (only two possibilities), any new information about the number of bars (eliminating a possibility) will allow them to give the answer; thus, not answering sends the message "I have not yet received any new information," which, eventually, is new information for the other prisoner.
The "conversation" the prisoners have amounts to this:
Mark: I don't know how many bars there are.
Rose: I already knew that (that you wouldn't know).
Mark: I already knew that (that you'd know I wouldn't know).
Rose: I already knew THAT (etc.)
Mark: I already knew THAT.
Rose: I already knew $\mathbf {THAT}$.
Mark (To the Evil Logician): There are $20$ bars.
But how, you may ask, can a series of messages that provide their recipient with no new information lead to one that does? Simple!
The non-answers provide no new information to the recipient, but they do provide information to the sender. If I tell you that I'm secretly a ninja, you might already know that, but even if you do, knowledge is gained, because by telling you, I give
myself the knowledge that you know I'm a ninja, and that you know I know you know I'm a ninja, etc. Thus, each message sent, even if the recipient already knows it, provides the sender with information. After several such questions, this is enough information that a message recipient can draw conclusions based on the sender's inability to draw any conclusions from the information they know the sender has.
Ok, fine, you might say, but what, exactly, is learned when Mark fails to answer on the first morning, and how can you prove this was not already known? Great question, thanks for asking. You see...
At this point, we have to resort to metaknowledge (I know she knows I know...) even though it can get confusing. However, I'll break it down in such a way as to hopefully satisfy anyone who still objects that any (meta)knowledge available after Mark fails to answer the first question was already available before he did so. Specifically,
After failing to answer the first question, Mark gains the information that Rose knows that Mark knows that Rose knows that Mark knows that Rose knows that Mark's window has less than $18$ bars. Now, that's a mouthful, so let's break it down into parts:
$R_0$: Mark's window doesn't have $18$ bars.
$M_1$: Rose knows $R_0$.
$R_2$: Mark knows $M_1$.
$M_3$: Rose knows $R_2$.
$R_4$: Mark knows $M_3$.
$M_5$: Rose knows $R_4$.
My claim is that A) Before he fails to answer on the first morning, Mark does not know $M_5$, and B) Afterwards, he does. Let's examine A) first:
To show that Mark doesn't know $M_5$ beforehand, we work backwards from $R_0$. In order for Rose to know that Mark's window doesn't have $18$ bars, her window would have to have more than $2$ bars. Since the rules (and numbers of bars) imply that they both have an even number of bars, in order for Mark to know $M_1$, he would have to know that Rose's window has at least $4$ bars. The only way for him to know that is if his window has less than $16$ bars. Thus, for Rose to know $R_2$, she must know that Mark has no more than $14$ bars, which requires that she have at least $6$ bars. For Mark to know $M_3$, then, he must have no more than $12$ bars, so for Rose to know $R_4$ she must have at least $8$ bars, and for Mark to know $M_5$ he must have no more than $10$ bars. But he does have more than $10$ bars, so he doesn't know $M_5$ beforehand.
To see why Mark must know $M_5$ after he fails to answer the question, we must realize that they both know the rules of the game and one of the rules of the game is that they both know the rules of the game. This creates an infinite loop of meta-knowledge, meaning that they both know that they both know that they both know... the rules, no matter how many times you repeat "they both know". This infinite-depth meta-knowledge extends to anything that can be deduced from the rules. If Mark's window had $18$ bars, he could deduce from the rules that Rose must have $2$, and the tower must have $20$ in total. Because he doesn't answer, Rose will be asked, and when she is, she will know that he couldn't deduce the answer, and therefore has less than $18$ bars. Because this is all deduced directly from the rules, rather than the private knowledge that either prisoner has, it inherits the infinite meta-knowledge of the rules, and Mark knows $M_5$.
So, Mark learns $M_5$. Does Rose learn anything? It's tempting to think that she doesn't, because she can predict in advance that Mark won't answer and therefore, one might think, she can draw in advance any conclusions that could be drawn from his not answering. However, as was shown above, by not answering, Mark learns $M_5$. Not answering changes the state of Mark's knowledge. This means that Rose's ability to predict Mark's behavior doesn't prevent her from gaining new information. She can predict in advance both what he will do (not answer) and what he will learn when he does it ($M_5$), but since he doesn't learn $M_5$ until he actually declines to answer, his failure to answer provides her with the information that he knows $M_5$. Since he didn't know $M_5$ beforehand, the knowledge that he does is by definition new information for Rose. Rose already knew that she now would know this, but until Mark doesn't answer, she doesn't actually know it (because it isn't true). By following this prediction logic out, it's possible to show that Rose knows (at the start) that Mark will be unable to answer until the $4^\text{th}$ morning, but not whether or not he'll be able to answer then. Mark, meanwhile, knows that Rose will be unable to answer until the $3^\text{rd}$ evening, but not whether or not she'll be able to answer then. As soon as one of the prisoners observes an event that they were unable to predict at the beginning, they can deduce from it something they didn't know about the state of the other's knowledge. Since the only hidden information is how many bars are in the other prisoner's window, and they know that it must be one of two values, learning new information about that allows them to eliminate one of the values and find the correct result. |
What is ordered grammar in the theory of computation?
Ordered grammars are a special case of context-free grammars with regulated rewriting. Another name for a context-free grammar with regulated rewriting is a controlled grammar.
But, what is regulated rewriting? Regulated rewriting alters the "derivation mode" of context-free grammars by adding some control mechanism to the derivation relation. This control mechanism allows for more expressive power than context-free grammars, while maintaining some of the key features, such as a parse tree, which are important e.g. for linguistic applications. Now let us return to ordered grammars.
An ordered grammar is an extension of a context-free grammar $(N,T,S,P)$, where the derivation is controlled by a partial order $\le$ on the productions. The partial order imposes further constraints on which production rules in $P$ can be used to rewrite a nonterminal in $N$. For two strings $xAy$ and $xzy$, with $A\in N$ and $x,y,z \in (N\cup T)^*$, we say that $xAy$ directly derives $xzy$ if $p = A\to z$ is a production, and if there is no other production $p' = B \to z'$ ranked higher in the partial order than $p$ such that $B$ occurs in $xAy$.
In complete analogy to the context-free grammars, the relation 'derives' is defined as the transitive closure of the 'directly derives' relation, and the generated language is the set of terminal strings derived from the start symbol $S$.
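As a sketch of how this derivation mode can be implemented, the following snippet (my own illustration; the helper names and the restriction to single-character nonterminals are assumptions made for brevity) computes one step of the "directly derives" relation for a given partial order:

```python
# Productions are (lhs, rhs) pairs over single-character nonterminals;
# `above` maps a production p to the set of productions strictly greater
# than p in the partial order.

def applicable(above, sentential, p):
    lhs, _ = p
    if lhs not in sentential:
        return False
    # p is blocked if the left-hand side of any higher-ranked production
    # occurs anywhere in the current sentential form.
    return all(q[0] not in sentential for q in above.get(p, ()))

def step(prods, above, sentential):
    """All strings directly derivable from `sentential`
    (leftmost rewriting only, for simplicity)."""
    out = set()
    for p in prods:
        if applicable(above, sentential, p):
            lhs, rhs = p
            i = sentential.index(lhs)
            out.add(sentential[:i] + rhs + sentential[i + 1:])
    return out

# Tiny example: A -> a is ranked below S -> AA, so A -> a is blocked
# while an S remains in the string.
p1, p2 = ("S", "AA"), ("A", "a")
print(step([p1, p2], {p2: {p1}}, "S"))    # -> {'AA'}
print(step([p1, p2], {p2: {p1}}, "AA"))   # -> {'aA'}
```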
Ordered grammars can generate languages such as $\{a^{2^n} \mid n\ge 0\}$, but they are strictly less expressive than Turing machines.
Definition and examples drawn from: Jürgen Dassow: Grammars with Regulated Rewriting. Tarragona PhD programme, manuscript. |
Just as with groups, we can study
homomorphisms to understand the similarities between different rings.

Homomorphisms

Definition
Let
$R$ and $S$ be two rings. Then a function $f : R \to S$ is called a ring homomorphism, or simply homomorphism, if for every $r_1, r_2 \in R$ the following properties hold: $$f(r_1 r_2) = f(r_1) f(r_2), \qquad f(r_1 + r_2) = f(r_1) + f(r_2).$$
In other words,
$f$ is a ring homomorphism if it preserves additive and multiplicative structure.
Furthermore, if
$R$ and $S$ are rings with unity and $f(1_R) = 1_S$, then $f$ is called a unital ring homomorphism.

Examples

Let $f : \mathbb{Z} \to M_2(\mathbb{Z})$ be the function mapping $a \mapsto \begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix}$. Then one can easily check that $f$ is a homomorphism, but not a unital ring homomorphism.

If we define $g : a \mapsto \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}$, then we can see that $g$ is a unital homomorphism.

The zero homomorphism is the homomorphism which maps every element to the zero element of its codomain.

Theorem: Let $R$ and $S$ be integral domains, and let $f : R \to S$ be a nonzero homomorphism. Then $f$ is unital.

Proof: $1_S f(1_R) = f(1_R) = f(1_R^2) = f(1_R) f(1_R)$. But then by cancellation, $f(1_R) = 1_S$.
In fact, we could have weakened our requirement on $R$ a small amount (How?).
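The non-unital example above is easy to spot-check numerically. This is my own sketch, not part of the text; it verifies over a small range of integers that $a \mapsto \begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix}$ preserves both operations, yet does not send $1$ to the identity matrix.

```python
# Matrices are represented as 2x2 tuples of tuples.
def f(a):
    return ((a, 0), (0, 0))

def mat_add(A, B):
    return tuple(tuple(x + y for x, y in zip(ra, rb)) for ra, rb in zip(A, B))

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

for a in range(-5, 6):
    for b in range(-5, 6):
        assert f(a + b) == mat_add(f(a), f(b))   # preserves addition
        assert f(a * b) == mat_mul(f(a), f(b))   # preserves multiplication

assert f(1) != ((1, 0), (0, 1))   # f(1) is not the identity: not unital
print("f is a (non-unital) homomorphism on the sampled range")
```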
Theorem: Let $R, S$ be rings and $\varphi : R \to S$ a homomorphism. Let $R'$ be a subring of $R$ and $S'$ a subring of $S$. Then $\varphi(R')$ is a subring of $S$ and $\varphi^{-1}(S')$ is a subring of $R$. That is, the kernel and image of a homomorphism are subrings.

Proof: Omitted.

Theorem: Let $R, S$ be rings and $\varphi : R \to S$ be a homomorphism. Then $\varphi$ is injective if and only if $\ker \varphi = 0$.

Proof: Consider $\varphi$ as a group homomorphism of the additive group of $R$.

Theorem: Let $F, E$ be fields, and $\varphi : F \to E$ be a nonzero homomorphism. Then $\varphi$ is injective, and $\varphi(x)^{-1} = \varphi(x^{-1})$.
Proof: We know $\varphi(1) = 1$ since fields are integral domains. Let $x \in F$ be nonzero. Then $\varphi(x^{-1})\varphi(x) = \varphi(x^{-1}x) = \varphi(1) = 1$. So $\varphi(x)^{-1} = \varphi(x^{-1})$ (recall you were asked to prove units are nonzero as an exercise). So $\varphi(x) \neq 0$. So $\ker \varphi = 0$.

Isomorphisms

Definition
Let
$R, S$ be rings. An isomorphism between $R$ and $S$ is an invertible homomorphism. If an isomorphism exists, $R$ and $S$ are said to be isomorphic, denoted $R \cong S$. Just as with groups, an isomorphism tells us that two objects are algebraically the same.

Examples

The function $g$ defined above is an isomorphism between $\mathbb{Z}$ and the set of integer scalar matrices of size 2, $S = \{\lambda I_2 \mid \lambda \in \mathbb{Z}\}$.

Similarly, the function $\varphi : \mathbb{C} \to M_2(\mathbb{R})$ mapping $z \mapsto \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$, where $z = a + bi$, is an isomorphism. This is called the matrix representation of a complex number.

The Fourier transform $\mathcal{F} : L^1 \to L^1$ defined by $\mathcal{F}(f) = \int_{\mathbb{R}} f(t) e^{-i\omega t}\,dt$ is an isomorphism mapping integrable functions with pointwise multiplication to integrable functions with convolution multiplication.
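The matrix representation of complex numbers can likewise be spot-checked. This is again a sketch of my own (small integer components are used so the floating-point arithmetic stays exact): it verifies on random samples that $z = a+bi \mapsto \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ turns complex multiplication into matrix multiplication.

```python
import random

def phi(z):
    a, b = z.real, z.imag
    return ((a, -b), (b, a))

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

random.seed(0)
for _ in range(100):
    z = complex(random.randint(-9, 9), random.randint(-9, 9))
    w = complex(random.randint(-9, 9), random.randint(-9, 9))
    # The homomorphism property for multiplication: phi(zw) = phi(z) phi(w).
    assert phi(z * w) == mat_mul(phi(z), phi(w))
print("phi respects multiplication on the sampled values")
```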
Exercise: An isomorphism from a ring to itself is called an automorphism. Prove that the following functions are automorphisms:

$f : \mathbb{C} \to \mathbb{C},\ f(a + bi) = a - bi$

Define the set $\mathbb{Q}(\sqrt{2}) = \{a + b\sqrt{2} \mid a, b \in \mathbb{Q}\}$, and let $g : \mathbb{Q}(\sqrt{2}) \to \mathbb{Q}(\sqrt{2}),\ g(a + b\sqrt{2}) = a - b\sqrt{2}$ |
Let $A$ be an abelian variety over a $p$-adic field $K$. Let $I$ be the inertia group of $K$. There is a Yoneda pairing $$H^n(\hat{\mathbb{Z}},A^I) \times Ext^{2-n}_{\hat{\mathbb{Z}}}(A^I,\mathbb{Z}) \to \mathbb{Q}/\mathbb{Z} \quad (*)$$ Is this pairing non-degenerate?
What I know is that
For $n=2$, both groups are 0.
For $n=1$, $H^1(\hat{\mathbb{Z}},A^I)$ is a finite group.
The pairing $(*)$ is non-degenerate if $A^I$ is replaced by a finitely generated abelian group.
All these facts are available in the book Arithmetic Duality Theorems by Milne. But that is about all I know. I find the groups $Ext^{2-n}_{\hat{\mathbb{Z}}}(A^I,\mathbb{Z})$ very hard to understand. I don't even know if $Ext^{1}_{\hat{\mathbb{Z}}}(A^I,\mathbb{Z})$ is finite or not.
Any help is much appreciated. Thank you very much. |
I have a solution that can guarantee the end goal being reached in a maximum of
five steps. (Many thanks to @dmg and @Taemyr for their comments to fix this solution.) The trick is to:
Reduce the cups into the following configuration:
$\begin{bmatrix} \circ & \bullet \\ \bullet & \circ \end{bmatrix} $
where turning any two cups along a diagonal will satisfy the end goal.
For a constructive proof, we first consider the following
three CASES:
CASE 1) the orientation of a single cup is different from the others, e.g. $\begin{bmatrix} \circ & \bullet \\ \circ & \circ \end{bmatrix} $ or $\begin{bmatrix} \bullet & \circ \\ \bullet & \bullet \end{bmatrix} $
CASE 2a) two cups are in each orientation along the diagonal, e.g. $\begin{bmatrix} \circ & \bullet \\ \bullet & \circ \end{bmatrix} $ CASE 2b) two cups are in each orientation along two sides, e.g. $\begin{bmatrix} \circ & \bullet \\ \circ & \bullet \end{bmatrix} $
Step 1
Take two cups along the diagonal.
- If they are different, flip $\circ$ to $\bullet$. If this doesn't sound the bell, we either have $\begin{bmatrix} \circ & \bullet \\ \bullet & \circ \end{bmatrix}$ or $\begin{bmatrix} \circ & \bullet \\ \circ & \circ \end{bmatrix}$, with the majority orientation being $\circ$. In either case, proceed to step 2.
- If they are the same, flip both. If this doesn't sound the bell, we either have $\begin{bmatrix} \bullet & \circ \\ \bullet & \bullet \end{bmatrix}$ or $\begin{bmatrix} \circ & \bullet \\ \circ & \circ \end{bmatrix}$. Either way, the majority orientation is known, and we can proceed to step 3.
Step 2
Take two cups along the diagonal.
- If they are different, we now know we have $\begin{bmatrix} \circ & \bullet \\ \circ & \circ \end{bmatrix}$. Flip $\bullet$ to $\circ$ to win.
- If they are the same, flip both. If this doesn't sound the bell, we now know we have $\begin{bmatrix} \circ & \bullet \\ \circ & \circ \end{bmatrix}$. Proceed to Step 3.
Step 3
At this point, we have already figured out what the majority orientation is, so we assume we have $\begin{bmatrix} \circ & \bullet \\ \circ & \circ \end{bmatrix} $ WLOG. Then:
Take two cups along the diagonal.
- If the cups are different, flip $\bullet$ to $\circ$ to win.
- If the cups are both $\circ$, flip one of them. We now know we have $\begin{bmatrix} \circ & \bullet \\ \circ & \bullet \end{bmatrix}$.
Step 4
Take two adjacent cups along an edge and flip both.
- If the cups chosen are the same, we win.
- If they are different, we now know we have $\begin{bmatrix} \circ & \bullet \\ \bullet & \circ \end{bmatrix} $
Step 5
Take two cups along a diagonal and flip both. We win! |
Mathematics Colloquium
All colloquia are on Fridays at 4:00 pm in Van Vleck B239,
unless otherwise indicated.

Spring 2018
date: speaker, title; host(s)

January 29 (Monday): Li Chao (Columbia), "Elliptic curves and Goldfeld's conjecture"; host: Jordan Ellenberg
February 2 (Room 911): Thomas Fai (Harvard), "The Lubricated Immersed Boundary Method"; hosts: Spagnolie, Smith
February 5 (Monday, Room 911): Alex Lubotzky (Hebrew University), "High dimensional expanders: From Ramanujan graphs to Ramanujan complexes"; hosts: Ellenberg, Gurevitch
February 6 (Tuesday 2 pm, Room 911): Alex Lubotzky (Hebrew University), "Groups' approximation, stability and high dimensional expanders"; hosts: Ellenberg, Gurevitch
February 9: Wes Pegden (CMU), "The fractal nature of the Abelian Sandpile"; host: Roch
March 2: Aaron Bertram (University of Utah), "Stability in Algebraic Geometry"; host: Caldararu
March 16 (Room 911): Anne Gelb (Dartmouth), "Reducing the effects of bad data measurements using variance based weighted joint sparsity"; host: WIMAW
April 5 (Thursday, Room 911): John Baez (UC Riverside), "Monoidal categories of networks"; host: Craciun
April 6: Edray Goins (Purdue), "Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups"; host: Melanie
April 13: Jill Pipher (Brown), TBA; host: WIMAW
April 16 (Monday): Christine Berkesch Zamaere (University of Minnesota), "Free complexes on smooth toric varieties"; hosts: Erman, Sam
April 25 (Wednesday): Hitoshi Ishii (Waseda University), Wasow lecture, TBA; host: Tran
May 4: Henry Cohn (Microsoft Research and MIT), TBA; host: Ellenberg

Spring Abstracts

January 29 Li Chao (Columbia)
Title: Elliptic curves and Goldfeld's conjecture
Abstract: An elliptic curve is a plane curve defined by a cubic equation. Determining whether such an equation has infinitely many rational solutions has been a central problem in number theory for centuries, which led to the celebrated conjecture of Birch and Swinnerton-Dyer. Within a family of elliptic curves (such as the Mordell curve family y^2=x^3-d), a conjecture of Goldfeld further predicts that there should be infinitely many rational solutions exactly half of the time. We will start with a history of this problem, discuss our recent work (with D. Kriz) towards Goldfeld's conjecture and illustrate the key ideas and ingredients behind this new progress.
February 2 Thomas Fai (Harvard)
Title: The Lubricated Immersed Boundary Method
Abstract: Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics.
February 5 Alex Lubotzky (Hebrew University)
Title: High dimensional expanders: From Ramanujan graphs to Ramanujan complexes
Abstract:
Expander graphs in general, and Ramanujan graphs in particular, have played a major role in computer science in the last 5 decades and more recently also in pure math. The first explicit construction of bounded degree expanding graphs was given by Margulis in the early 70's. In the mid 80's, Margulis and Lubotzky-Phillips-Sarnak provided Ramanujan graphs, which are optimal such expanders.
In recent years a high dimensional theory of expanders is emerging. A notion of topological expanders was defined by Gromov in 2010, who proved that the complete d-dimensional simplicial complexes are such. He raised the basic question of existence of such bounded degree complexes of dimension d>1.
This question was answered recently affirmatively (by T. Kaufman, D. Kazhdan and A. Lubotzky for d=2 and by S. Evra and T. Kaufman for general d) by showing that the d-skeleton of (d+1)-dimensional Ramanujan complexes provide such topological expanders. We will describe these developments and the general area of high dimensional expanders.
February 6 Alex Lubotzky (Hebrew University)
Title: Groups' approximation, stability and high dimensional expanders
Abstract:
Several well-known open questions, such as: are all groups sofic or hyperlinear?, have a common form: can all groups be approximated by asymptotic homomorphisms into the symmetric groups Sym(n) (in the sofic case) or the unitary groups U(n) (in the hyperlinear case)? In the case of U(n), the question can be asked with respect to different metrics and norms. We answer, for the first time, one of these versions, showing that there exist finitely presented groups which are not approximated by U(n) with respect to the Frobenius (=L_2) norm.
The strategy is via the notion of "stability": some higher dimensional cohomology vanishing phenomenon is proven to imply stability, and using high dimensional expanders, it is shown that some non-residually finite groups (central extensions of some lattices in p-adic Lie groups) are Frobenius stable and hence cannot be Frobenius approximated.
All notions will be explained. Joint work with M. De Chiffre, L. Glebsky and A. Thom.
February 9 Wes Pegden (CMU)
Title: The fractal nature of the Abelian Sandpile
Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor.
Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research.
March 2 Aaron Bertram (Utah)
Title: Stability in Algebraic Geometry
Abstract: Stability was originally introduced in algebraic geometry in the context of finding a projective quotient space for the action of an algebraic group on a projective manifold. This, in turn, led in the 1960s to a notion of slope-stability for vector bundles on a Riemann surface, which was an important tool in the classification of vector bundles. In the 1990s, mirror symmetry considerations led Michael Douglas to notions of stability for "D-branes" (on a higher-dimensional manifold) that corresponded to no previously known mathematical definition. We now understand each of these notions of stability as a distinct point of a complex "stability manifold" that is an important invariant of the (derived) category of complexes of vector bundles of a projective manifold. In this talk I want to give some examples to illustrate the various stabilities, and also to describe some current work in the area.
March 16 Anne Gelb (Dartmouth)
Title: Reducing the effects of bad data measurements using variance based weighted joint sparsity
Abstract: We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data.
April 5 John Baez (UC Riverside)
Title: Monoidal categories of networks
Abstract: Nature and the world of human technology are full of networks. People like to draw diagrams of networks: flow charts, electrical circuit diagrams, chemical reaction networks, signal-flow graphs, Bayesian networks, food webs, Feynman diagrams and the like. Far from mere informal tools, many of these diagrammatic languages fit into a rigorous framework: category theory. I will explain a bit of how this works and discuss some applications.
April 6 Edray Goins (Purdue)
Title: Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups
Abstract: A Belyĭ map [math] \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] is a rational function with at most three critical values; we may assume these values are [math] \{ 0, \, 1, \, \infty \}. [/math] A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq \mathbb P^1(\mathbb C) \simeq S^2(\mathbb R). [/math] Replacing [math] \mathbb P^1 [/math] with an elliptic curve [math]E [/math], there is a similar definition of a Belyĭ map [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C). [/math] Since [math] E(\mathbb C) \simeq \mathbb T^2(\mathbb R) [/math] is a torus, we call [math] (E, \beta) [/math] a toroidal Belyĭ pair. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq E(\mathbb C) \simeq \mathbb T^2(\mathbb R). [/math]
This project seeks to create a database of such Belyĭ pairs, their corresponding Dessins d'Enfant, and their monodromy groups. For each positive integer [math] N [/math], there are only finitely many toroidal Belyĭ pairs [math] (E, \beta) [/math] with [math] \deg \, \beta = N. [/math] Using the Hurwitz Genus formula, we can begin this database by considering all possible degree sequences [math] \mathcal D [/math] on the ramification indices as multisets on three partitions of N. For each degree sequence, we compute all possible monodromy groups [math] G = \text{im} \, \bigl[ \pi_1 \bigl( \mathbb P^1(\mathbb C) - \{ 0, \, 1, \, \infty \} \bigr) \to S_N \bigr]; [/math] they are the "Galois closure" of the group of automorphisms of the graph. Finally, for each possible monodromy group, we compute explicit formulas for Belyĭ maps [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] associated to some elliptic curve [math] E: \ y^2 = x^3 + A \, x + B. [/math] We will discuss some of the challenges of determining the structure of these groups, and present visualizations of group actions on the torus.
This work is part of PRiME (Purdue Research in Mathematics Experience) with Chineze Christopher, Robert Dicks, Gina Ferolito, Joseph Sauder, and Danika Van Niel with assistance by Edray Goins and Abhishek Parab.
April 16 Christine Berkesch Zamaere (Minnesota)

Title: Free complexes on smooth toric varieties
Abstract: Free resolutions have been a key part of using homological algebra to compute and characterize geometric invariants over projective space. Over more general smooth toric varieties, this is not the case. We will discuss another family of complexes, called virtual resolutions, which appear to play the role of free resolutions in this setting. This is joint work with Daniel Erman and Gregory G. Smith.
Carbonated beer flowing from a keg through a short length of tubing results in large quantities of foam. Unintuitively (at least to me), increasing the length of tubing results in a less frothy drink.
Why?
Internet brewing forums suggest that foam formation can be caused by a rapid drop in pressure when the beer leaves the keg. They state that a more gradual pressure drop, due to frictional losses in a longer pipe, solves the froth problem.
I understand that classical nucleation theory can show that the free energy at a nucleation site is $\Delta G = \frac{4}{3}\pi r^3 \Delta g + 4 \pi r^2 \sigma$, where $r$ is the radius of the bubble. Because $\Delta g<0$ but $\sigma>0$ the $r^2$ and $r^3$ terms fight each other and create a nucleation energy barrier.
My understanding is that the critical radius $r^*$ associated with this energy barrier is fairly large for typical carbonated beers, and so homogeneous nucleation is extremely unlikely to occur in a pint glass. However, heterogeneous nucleation can occur where the glass is sufficiently rough, and I believe this is because the interfacial energy is lower and hence the energy barrier is easier to overcome.
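For a rough sense of scale, the classical expression above gives $r^* = -2\sigma/\Delta g$ and a barrier $\Delta G^* = \frac{16\pi\sigma^3}{3\,\Delta g^2}$ (set $d\Delta G/dr = 0$). The numbers below are illustrative guesses, not measured beer data:

```python
import math

sigma = 0.047     # N/m, surface tension (assumed: roughly water with some ethanol)
delta_g = -1.0e6  # J/m^3, bulk free-energy gain per unit volume (hypothetical)

r_star = -2 * sigma / delta_g                          # critical radius
barrier = 16 * math.pi * sigma**3 / (3 * delta_g**2)   # nucleation barrier

print(f"r* = {r_star:.2e} m")        # 9.40e-08 m with these numbers
print(f"barrier = {barrier:.2e} J")
```

With these (assumed) values the critical radius is of order 100 nm, far larger than thermal density fluctuations, which is consistent with homogeneous nucleation being negligible.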
If the brewers are correct when they say the rate of change of pressure is significant, what is the physical explanation for this? Can a rapid change in pressure somehow help overcome the energy barrier like a scratch on the glass, or is there an alternative explanation?
The GCH is the statement that $\forall \kappa \geq \aleph_0 : 2^\kappa = \kappa^+$. That is, $\forall \alpha :2^{\aleph_\alpha}=\aleph_{\alpha+1}$.
I was told that the Generalized Continuum Hypothesis is equivalent to each of the following identities. I'm curious about the proof but have no idea how to work it out.
$\sum_{\mu <\kappa}2^{\mu}=\kappa$.
$\kappa^{\text{cf}(\kappa)}=\kappa^+$ for any infinite $\kappa$.
To be clear on definitions: $\kappa^+$ is a successor cardinal; $\text{cf}(\kappa)$ denotes the cofinality of $\kappa$, which is the least limit ordinal $\theta$ such that there is an increasing sequence of length $\theta$ that is cofinal in $\kappa$.
I'd appreciate it if someone could explain these proofs.
On existence and nonexistence of the positive solutions of non-newtonian filtration equation
1.
Department of Mathematics, Hacettepe University, 06800 Beytepe - Ankara, Turkey
$ \rho (|x|) \frac{\partial u}{\partial t}- \sum_{i=1}^N D_i(u^{m-1}|D_i u|^{\lambda -1}D_i u)+g(u)+lu=f(x)$ (1)
or, after the change of variable $v=u^{\sigma}$, $\sigma =\frac{m+\lambda -1}{\lambda }$, the equation
$\rho (|x|) \frac{\partial v^{\frac{1}{ \sigma }}}{\partial t}-\sigma ^{-\lambda }\sum_{i=1} ^N D_i(|D_i v|^{\lambda -1}D_i v)+g(v^{\frac{1}{\sigma }}) +lv^{\frac{1}{ \sigma }}=f(x),$ (1')
in the unbounded domain $R_+\times R^N,$ where the term $g(s)$ is assumed to satisfy only a lower polynomial growth condition and $g'(s) > -l_1$. The existence of a solution in $ L^{1+1/\sigma}(0, T; L^{1+1/\sigma}(R^N))\cap L^{\lambda +1}(0, T; W^{1,\lambda +1}(R^N))$ is proved. Also, under some conditions on $g(s)$ and $u_0$, nonexistence of the solution is shown.
Keywords: nonlinear degenerate equation, existence, nonexistence, FSP (finite speed of propagation of perturbations). Mathematics Subject Classification: Primary: 35K15, 35K65; Secondary: 35B3. Citation: Emil Novruzov. On existence and nonexistence of the positive solutions of non-newtonian filtration equation. Communications on Pure & Applied Analysis, 2011, 10 (2) : 719-730. doi: 10.3934/cpaa.2011.10.719
I am trying to prove that $X^4 - 4Y^4 = -Z^2$ has no solutions in non zero integers. I know there are similar questions on MS, but that minus signs before the $Z$ gives me a hard time.
For the moment, I converted the equation to the equivalent $X^4 + Z^2 = 4Y^4$, and I supposed that $(x,y,z)$ is a primitive solution of this equation. Thus we have $(x^2)^2 +z^2 = (2y^2)^2$.
Now, since $(2y^2)^2$ is even, $x$ and $z$ must have the same parity. By the parametrization of Pythagorean triples, there exist $a$ and $b$ with $a > b$ and $(a, b) = 1$. If $a \not \equiv b \pmod 2$, we have $x^2 = a^2 - b^2$, $z = 2ab$ and $2y^2 = a^2 + b^2$. But this is impossible: from $2y^2 = a^2 + b^2$, $a$ and $b$ would have to have the same parity, contrary to our hypothesis.
If $a$ and $b$ have the same parity, they are both odd (otherwise they are not coprime) and we have $x^2 = \frac{a^2 - b^2}{2}$, $z = ab$ and $2y^2 = \frac{a^2 + b^2}{2}$. But once again, as $(a, b) = 1$ and are both odd, they have no factor "2" in common thus $2y^2 = \frac{a^2 + b^2}{2}$ is impossible.
Therefore the equation $X^4 - 4Y^4 = -Z^2$ has no solution in $\mathbb{N} \setminus \{0\}$.
What do you think of this argument? My first idea was to use the infinite descent, but in this case I did not manage to do it. My second idea was to prove that it was equivalent to another equation such as $X^4 + Y^4 = Z^2$ which we know has no solutions such that $XYZ \neq 0$ but I did not have more success.
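As a sanity check on the claim (not a proof), one can brute-force small values and confirm that no nonzero solutions appear. The helper below is only an illustrative search:

```python
import math

def solutions(bound):
    """Search for nonzero (x, y, z) with x^4 - 4*y^4 = -z^2."""
    sols = []
    for x in range(1, bound):
        for y in range(1, bound):
            t = 4 * y**4 - x**4  # we need z^2 == t with z != 0
            if t > 0:
                z = math.isqrt(t)
                if z * z == t:
                    sols.append((x, y, z))
    return sols

print(solutions(30))  # → []
```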
Very quick answer...
The exponential of a Hamiltonian matrix is symplectic, a property that you probably wish to preserve, otherwise you would simply use a non-structure-preserving method. Indeed, there is no real speed advantage in using structured methods, just structure preservation.
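That symplecticity claim is easy to check numerically. A minimal sketch (a random Hamiltonian matrix built as $H = JS$ with $S$ symmetric, so that $JH$ is symmetric; the scaling is only there to keep the exponential well conditioned):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 3
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

S = rng.standard_normal((2 * n, 2 * n))
S = 0.1 * (S + S.T)   # symmetric
H = J @ S             # Hamiltonian: J @ H is symmetric

E = expm(H)
print(np.allclose(E.T @ J @ E, J))  # symplectic check, prints True
```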
A possible way to solve your problem is the following. First find a symplectic matrix $M$ such that $\hat{H}=M^{-1}HM=\begin{bmatrix} \hat{A} & -\hat{G}\\ 0 & -\hat{A}^T \end{bmatrix}$ is Hamiltonian and block upper triangular, and $\hat{A}$ has eigenvalues in the left half-plane. You get this matrix for instance by taking $\begin{bmatrix}I & 0\\ X & I\end{bmatrix}$, where $X$ solves the Riccati equation associated to $H$, or (more stable since it's orthogonal) by reordering the Schur decomposition of $H$ and applying the Laub trick (i.e., replacing the unitary Schur factor $\begin{bmatrix}U_{11} & U_{12} \\ U_{21} & U_{22}\end{bmatrix}$ with $\begin{bmatrix}U_{11} & -U_{12} \\ U_{12} & U_{11}\end{bmatrix}$). You may have trouble doing it if the Hamiltonian has eigenvalues on the imaginary axis, but that's a long story and for now I will suppose it doesn't happen in your problem.
Once you have $M$, you have $\exp(H)=M\exp(\hat{H})M^{-1}$, and you can compute $$\exp(\hat{H}) = \begin{bmatrix} \exp(\hat{A}) & X\\ 0 & \exp(-\hat{A}^T) \end{bmatrix},$$where $X$ solves the Sylvester equation $$\hat{A} X + X \hat{A}^T = \hat{G} \exp(-\hat{A}^T) - \exp(\hat{A}) \hat{G},$$ which you obtain by imposing $\exp(\hat{H})\hat{H}=\hat{H}\exp(\hat{H})$ and expanding blocks (look up "Schur-Parlett method" for a reference to this trick).
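The block structure can be verified numerically. The sketch below builds a random block upper triangular Hamiltonian $\hat H$, solves the Sylvester equation $\hat A X + X \hat A^T = \hat G \exp(-\hat A^T) - \exp(\hat A)\hat G$ (the signs here were obtained from the commutation argument; I checked them on the scalar case), and compares against a dense `expm`:

```python
import numpy as np
from scipy.linalg import expm, solve_sylvester

rng = np.random.default_rng(1)
n = 3
A = 0.5 * rng.standard_normal((n, n)) - 2 * np.eye(n)  # eigenvalues in the left half-plane
G = rng.standard_normal((n, n))
G = G + G.T                                            # symmetric

Z = np.zeros((n, n))
H_hat = np.block([[A, -G], [Z, -A.T]])

# Off-diagonal block of exp(H_hat): solve A X + X A^T = G exp(-A^T) - exp(A) G
X = solve_sylvester(A, A.T, G @ expm(-A.T) - expm(A) @ G)

E = np.block([[expm(A), X], [Z, expm(-A.T)]])
print(np.allclose(E, expm(H_hat)))  # prints True
```

Note `scipy.linalg.solve_sylvester(a, b, q)` solves $aX + Xb = q$; the equation has a unique solution here because all eigenvalue sums $\lambda_i(\hat A) + \lambda_j(\hat A)$ are nonzero.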
Then the three factors are exactly symplectic. Just use them separately: do not compute the product or you will lose this property numerically.
This answer focuses more on minimizing the code, rather than finding the source of the problem, as the top-voted answer does. It is intended to be concise and hands-on, but digestible rather than exhaustive. Suggestions for improvement welcome!
Here are some strategies for reducing your code, which will help you get better and faster answers, since it will be clearer what your problem is and the other users will see that you put some effort into producing a concise Minimal Working Example. Thanks for that!
Most likely, not all of these things will apply to your question, so just pick what does apply. However, it is advised that you provide the community with something that will reproduce the problem in the easiest way possible. Typically this requires code that starts with
\documentclass and ends with
\end{document} (if using LaTeX). It will allow readers to copy-and-paste-and-compile your code and see exactly what problems you might be experiencing.
What follows below are snippets of code;
bad references imply that it should typically not be used, as it may not be part of the problem, while good references make suggestions that should be used instead. Note that these snippets should still form part of a larger,
\documentclass...
\end{document} structure as mentioned above.
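For instance, a complete copy-paste-and-compile skeleton (illustrative only) can be as small as:

```latex
\documentclass{article}
\usepackage{graphicx} % only if the problem involves images
\begin{document}
Foo bar baz.
\includegraphics{example-image}% dummy image from the mwe package
\end{document}
```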
Document Class - Bad:
\documentclass{MyUniversitysThesisClass}
- Bad:
\documentclass[..]{standalone}
...unless your problem relates to the
standalone document class.
standalone is usually meant for cropping stand-alone images for inclusion in a main document. If this doesn't pertain to you, don't use it.
+ Good:
\documentclass{article}
Using a non-standard document class? Does your problem still show up with
article? Then use
article.
Document Class Options - Bad:
\documentclass[12pt, a5paper, final, oneside, onecolumn]{article}
+ Good:
\documentclass{article}
Using any options for your document class? Does your problem still show up without them? Then
get rid of them.
Comments - Bad:
\usepackage{booktabs} % hübschere Tabllen, besseres Spacing
\usepackage{colortbl} % farbige Tabellenzellen
\usepackage{multirow} % mehrzeilige Zellen in Tabellen
\usepackage{subfloat} % Sub-Gleitumgebungen
+ Good:
\usepackage{booktabs}
\usepackage{colortbl}
\usepackage{multirow}
\usepackage{subfloat}
You put comments in your code to remember what packages are there for? Great habit, but usually not necessary in a MWE –
get rid of them.
Loading Packages - Bad:
\usepackage{a4wide}
\usepackage{amsmath, amsthm, amssymb}
\usepackage{url}
\usepackage[algoruled,vlined]{algorithm2e}
\usepackage{graphicx}
\usepackage[ngerman, american]{babel}
\usepackage{booktabs}
\usepackage{units}
\usepackage{makeidx}
\makeindex
\usepackage[usenames,dvipsnames]{color}
\usepackage{colortbl}
\usepackage{epstopdf}
\usepackage{rotating}
+ Good:
% Assuming your problem is related e.g. to the rotation of a figure, you might need:
\usepackage{rotating}
You’ve developed an awesome template with lots of helpful packages? Does your problem still show up if you remove some or even most of them? Then
get rid of those that aren’t necessary for reproducing the problem. (If you should later find out that another package is complicating the situation, you can always ask another question or edit the existing question.)
In most cases, even packages like
inputenc or
fontenc are not necessary in MWEs, even though they are essential for many non-English “real” documents.
Images - Bad:
\includegraphics{graphs/dataset17b.pdf}
+ Good:
\usepackage[demo]{graphicx}
....
\includegraphics{graphs/dataset17b.pdf}
+ Good:
\usepackage{graphicx}
....
\includegraphics{example-image}% Image from the mwe package
Your problem includes an image? Does your problem show up with any image? Then use the
demo option for the
graphicx package – this way, other users who don’t have your image file won’t get an error message because of that. If you prefer an actual image that you can rotate, stretch, etc., use the
mwe package, which provides a number of dummy images, named e.g.
example-image.
If your problem is specific to the size of the included image, still use
mwe's
example-image, but also specify the
width and
height so it more readily replicates your
custom-image dimensions. Again, this way the problem is reproducible without using your image.
Text - Bad:
In \cite{cite:0}, it is shown that $\Delta \subset {U_{\mathcal{{D}}}}$. Hence
Y. Q. Qian's characterization of conditionally uncountable elements was a
milestone in constructive algebra. Now it has long been known that there exists
an almost everywhere Clifford right-canonically pseudo-integrable, Clairaut
subset \cite{cite:0}. The groundbreaking work of J. Davis on isomorphisms was a
major advance. In future work, we plan to address questions of uniqueness as
well as degeneracy. Thus in \cite{cite:0}, the main result was the
classification of meromorphic, completely left-invariant systems.
+ Good:
\usepackage{lipsum} % just for dummy text
...
\lipsum[1-3]
+ Good:
Foo bar baz.
Need a
few paragraphs of text to demonstrate your problem? Use a package that produces dummy text. Popular choices are
lipsum (plain paragraphs) and
blindtext (can produce entire documents with section titles, lists, and formulae).
Need just a
tiny amount of text? Then keep it maximally simple; avoid formulae, italics, tables – anything that’s not essential to the problem. Popular choices for dummy words are
foo,
bar, and
baz.
Bibliography Files + Good:
\usepackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@book{Knu86,
author = {Knuth, Donald E.},
year = {1986},
title = {The \TeX book},
}
\end{filecontents*}
\bibliography{\jobname} % if you’re using BibTeX
\addbibresource{\jobname.bib} % if you’re using biblatex
Need a
.bib file to reproduce your problem? Use a maximally simple entry embedded in a
filecontents environment in the preamble. During the compilation, this will create a .bib file in the same directory as the
.tex file, so users compiling your code only need to save one file by themselves.
Another option for
biblatex would be to use the file
biblatex-examples.bib, which should be installed with
biblatex by default. You can find it in
bibtex/bib/biblatex/.
Data - Bad:
Never include data as an image.
- Bad:
Number of points Values
10 100
20 400
30 1200
40 2345
etc...
+ Good:
\usepackage{filecontents}
\begin{filecontents*}{data.txt}
Number of points, Values
10, 100
20, 400
30, 1200
40, 2345
\end{filecontents*}
Including the data as part of the MWE makes the example portable as well. Of course, the input may differ depending on what package you use to manage the data (some require CSV, some don't).
Index + Good:
\usepackage{filecontents}
\begin{filecontents*}{\jobname.ist}
delim_0 "\\dotfill "
\end{filecontents*}
The index style can be included in the
filecontents* environment in the preamble. The contents (and file extension) will differ according to the required indexing application (
makeindex or
xindy).
Sometimes a problem can only be demonstrated with an index that spans several pages. The
testidx package is like
lipsum etc but the dummy text is interspersed with
\index to make it easier to test index styles. It has over 400 top-level terms (along with some sub-items and sub-sub-items) that includes every basic Latin letter group (A–Z) as well some extended Latin characters and a few digraphs.
- Bad:
\begin{document}
aa\index{aa}
ab\index{ab}
...
zy\index{zy}
zz\index{zz}
\printindex
\end{document}
+ Good:
\begin{document}
\testidx
\printindex
\end{document}
If page breaking is the source of your problem (for example, after a letter group heading or between an item and sub-item), there's a high probability of an awkward break occurring given the large number of test items, but you can alter the page dimensions or font size to ensure one occurs in your MWE.
Glossaries
The
glossaries package comes with some files containing dummy entries, which can be used in MWEs.
- Bad:
\newglossaryentry{sample1}{name={sample1},description={description 1}}
...
\newglossaryentry{sample100}{name={sample100},description={description 100}}
\newacronym{ac1}{ac1}{acronym 1}
...
\newacronym{ac100}{ac100}{acronym 100}
+ Good:
\loadglsentries{example-glossaries-brief}
\loadglsentries[\acronymtype]{example-glossaries-acronym}
See Dummy Entries for Testing for a complete list of dummy entry files provided by
glossaries. There's an additional file
example-glossaries-xr.tex provided by
glossaries-extra.
Formatting your code
Formatting of code is done using Markdown. See the relevant FAQ How do I mark code blocks?. There also exists some syntax highlighting, a discussion of which can be found at What is syntax highlighting and how does it work?.
With the above in mind, don't post your code in comments, since comments only support a limited amount of Markdown.
Posting a Picture of Your Output
It’s often helpful to see what your current, faulty output looks like. If you’re not sure how to do that, have a look at How does one add a LaTeX output to a question/answer? and how can i upload an image to be included in a question or answer?.
Selection of packages inspired by Inconsistent rotations with \sidewaysfigure. Math ramble generated by Mathgen. Bibliography sample from lockstep’s question biblatex: Putting thin spaces between initials.
A fourth order implicit symmetric and symplectic exponentially fitted Runge-Kutta-Nyström method for solving oscillatory problems
1.
Department of Mathematics, Beijing Jiaotong University Haibin College, Cangzhou, China
2.
School of Science, Beijing Jiaotong University, Beijing, China
In this paper, we derive an implicit symmetric, symplectic and exponentially fitted Runge-Kutta-Nyström (ISSEFRKN) method. The new integrator ISSEFRKN2 is of fourth order and integrates exactly differential systems whose solutions can be expressed as linear combinations of functions from the set $\{\exp(\lambda t), \exp(-\lambda t) \mid \lambda\in \mathbb{C}\}$, or equivalently $\{\sin(\omega t), \cos(\omega t) \mid \lambda = i\omega, ~\omega\in \mathbb{R}\}$. We analyze the periodicity stability of the derived method ISSEFRKN2. Some existing implicit RKN methods from the literature are compared with ISSEFRKN2 on several oscillatory problems. Numerical results show that the method ISSEFRKN2 is the most accurate among them.
Keywords:Implicit, symmetric, symplectic, exponentially fitted, Runge-Kutta-Nyström method, stability. Mathematics Subject Classification:Primary: 65L05, 65L06; Secondary: 65M20, 65N40. Citation:Wenjuan Zhai, Bingzhen Chen. A fourth order implicit symmetric and symplectic exponentially fitted Runge-Kutta-Nyström method for solving oscillatory problems. Numerical Algebra, Control & Optimization, 2019, 9 (1) : 71-84. doi: 10.3934/naco.2019006
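The paper's method itself is not reproduced here, but the flavor of the test setup can be sketched with a standard symplectic second-order method of the same (RKN) family — velocity Verlet — applied to the harmonic oscillator $y'' = -\omega^2 y$, whose exact solutions lie in the span of $\sin(\omega t)$ and $\cos(\omega t)$. All parameter values are illustrative:

```python
import math

def verlet(omega, y0, v0, h, steps):
    """Velocity Verlet (symplectic, second order) for y'' = -omega^2 * y."""
    y, v = y0, v0
    a = -omega**2 * y
    for _ in range(steps):
        y = y + h * v + 0.5 * h * h * a
        a_new = -omega**2 * y
        v = v + 0.5 * h * (a + a_new)
        a = a_new
    return y, v

omega, h, steps = 1.0, 0.1, 1000
y, v = verlet(omega, 1.0, 0.0, h, steps)
E0 = 0.5 * (0.0**2 + omega**2 * 1.0**2)
E = 0.5 * (v**2 + omega**2 * y**2)
print(abs(E - E0))  # the energy error stays small and bounded, not growing
```

A symplectic method keeps the energy error bounded over long times; an exponentially fitted method additionally reproduces the $\sin/\cos$ solutions exactly, which plain Verlet does not.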
In this section we will discuss the relation between the geometric and arithmetic mean. Geometric mean: If a single geometric mean 'G' is inserted between two given numbers 'a' and 'b', then G is known as the geometric mean between 'a' and 'b'. G.M. = G = $\sqrt{ab}$, so that $G^{2}$ = ab. Arithmetic mean: If a single arithmetic mean 'A' is inserted between two given numbers 'a' and 'b', then A is known as the arithmetic mean between 'a' and 'b'. A.M. = A = $\frac{a + b}{2}$
Properties of Arithmetic and Geometric means
Theorem 1: If A and G are the arithmetic and geometric mean respectively between two positive numbers a and b, then A (AM) ≥ G (GM), with equality only when a = b.
Proof: We have A.M. = A = $\frac{a + b}{2}$ and G.M. = G = $\sqrt{ab}$, so $G^{2}$ = ab.
A - G = $\frac{a + b}{2} - \sqrt{ab}$
= $\frac{a + b -2\sqrt{ab} }{2}$
= $\frac{(\sqrt{a} -\sqrt{b})^{2}}{2} \geq 0$
∴ A - G ≥ 0 ⇒ A ≥ G, with A > G whenever a ≠ b.
Theorem 2: If A and G are the arithmetic and geometric mean respectively between two positive numbers a and b, then the quadratic equation having a, b as its roots is $x^{2} - 2Ax + G^{2}$ = 0.
Proof: We have A.M. = A = $\frac{a + b}{2}$ and G.M. = G = $\sqrt{ab}$, so $G^{2}$ = ab. Substituting into the equation:
$x^{2} - 2Ax + G^{2}$ = 0
$x^{2} - 2\times \frac{a + b}{2}x$ + ab = 0
$x^{2} - (a + b)x$ + ab = 0
$(x - a)(x - b)$ = 0, whose roots are a and b.
Theorem 3: If A and G are the arithmetic and geometric mean respectively between two positive numbers a and b, then the numbers are A $\pm \sqrt{A^{2} - G^{2}}$.
Proof: The quadratic equation with roots 'a' and 'b' is $x^{2} - 2Ax + G^{2}$ = 0. Now use the quadratic formula:
x = $\frac {2A \pm \sqrt{4A^{2} - 4G^{2}}}{2}$
∴ x = A $\pm \sqrt{A^{2} - G^{2}}$
Examples on relation between geometric and arithmetic mean
1) Find two positive numbers whose difference is 12 and whose A.M. exceeds the G.M. by 2.
Solution: Let the two numbers be x and y. Then
x - y = 12 -------(i)
A.M. - G.M. = 2
$\frac{x + y}{2} - \sqrt{xy}$ = 2
x + y - 2$\sqrt{xy}$ = 4
$(\sqrt{x} - \sqrt{y})^{2}$ = 4
$\sqrt{x} - \sqrt{y} = \pm$2 ---------(ii)
Now x - y = $(\sqrt{x} - \sqrt{y})(\sqrt{x} + \sqrt{y})$ = 12
⇒ $(\sqrt{x} + \sqrt{y}) \times \pm$2 = 12
$(\sqrt{x} + \sqrt{y}) = \pm$6 --------(iii)
Solving equations (ii) and (iii) we get
x = 16 and y = 4
So the two numbers are 16 and 4.
2) Find the two numbers whose A.M. is 34 and G.M. is 16.
Solution: Let the two numbers be x and y. Then
A.M. = 34 = $\frac{x + y}{2}$ ⇒ x + y = 68 ------(i)
G.M. = 16 = $\sqrt{xy}$ ⇒ xy = 256 -----(ii)
$(x - y)^{2} = (x + y)^{2}$ - 4xy
$(x - y)^{2} = 68^{2} - 4 \times$ 256
$(x - y)^{2}$ = 3600
∴ x - y = 60
and x + y = 68
Adding the above two equations we get
2x = 128
x = 64
∴ y = 4.
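Theorem 3 doubles as a recipe for recovering the two numbers directly from their means; a quick sketch (the helper name is ad hoc), checked against Example 2's values:

```python
import math

def numbers_from_means(A, G):
    # Roots of x^2 - 2Ax + G^2 = 0, i.e. A ± sqrt(A^2 - G^2)
    d = math.sqrt(A * A - G * G)
    return A + d, A - d

print(numbers_from_means(34, 16))  # → (64.0, 4.0)
```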
Hello, I propose this problem to the Brilliant community. Hope you enjoy it! This problem was one of the questions in an Olympiad.
What is the remainder obtained in the long division $\large \dfrac{x^{81}+x^{49}+x^{25}+x^{9}+x}{x^3-x}$?
Note by Puneet Pinku 3 years, 1 month ago
One way to solve this is to simplify by the common factor x first, and then to use algebraic long division in a faster way ( " +...+ " after recognising the repetitive parts):
$\frac {x^{80}+x^{48}+x^{24}+x^8+1}{x^2 - 1} = x^{78}+x^{76}+...+x^{48}+2x^{46}+2x^{44}+...+2x^{24}+3x^{22}+3x^{20}+...+3x^8+4x^6+4x^4+4x^2+4+ \frac { \boxed {5} }{x^2-1}$
How did you find that 1 is the coefficient that many times, that 4 appears for only a few terms, and that 5 comes last? I mean, can you explain the pattern a bit more clearly?
For the algebraic (or polynomial) long division method in general, you can find many notes, videos etc. on the Internet (e.g. https://brilliant.org/wiki/polynomial-division/ or https://revisionmaths.com/advanced-level-maths-revision/pure-maths/algebra/algebraic-long-division ).
Just follow the method in the case of this division and you will see. (The coefficient increases at some points, because you will have the same powers of x from your remainder at the previous step and you also have an original term there, e.g. $x^{48} + x^{48} = 2x^{48}$, which contributes $2x^{46}$ to the quotient.)
@Zee Ell – Did you perform the whole long division, or did you analyze and figure out the coefficients somehow? I recently found a new method to solve it; I will be posting it as a question.
@Puneet Pinku – I started the whole division, but jumped to the key points (where the remainders "got company" from the original polynomial) after recognising the pattern. With further analysis, the process can be shortened even further.
We see that after dividing through by $x$, we have the expression
$$\frac{x^{80}+x^{48}+x^{24}+x^8+1}{(x+1)(x-1)}$$
Now consider
$$\frac{x^{80}+x^{48}+x^{24}+x^8+1}{(x+1)(x-1)}-\frac{5}{(x+1)(x-1)} = \frac{x^{80}+x^{48}+x^{24}+x^8-4}{(x+1)(x-1)}$$
We see that $x-1$ and $x+1$ are factors of $x^{80}+x^{48}+x^{24}+x^8-4$ by the factor theorem, as $1$ and $-1$ are roots of this polynomial. Hence we can write
$$\frac{x^{80}+x^{48}+x^{24}+x^8-4}{(x+1)(x-1)}=\frac{x^{80}+x^{48}+x^{24}+x^8+1}{(x+1)(x-1)}-\frac{5}{(x+1)(x-1)}=p(x)+\frac{0}{(x+1)(x-1)}$$
for some polynomial $p(x)$.
Therefore
$$\frac{x^{80}+x^{48}+x^{24}+x^8+1}{(x+1)(x-1)}=p(x)+\frac{5}{(x+1)(x-1)}.$$
Can you just point out the mistake in my solution:
Let the remainder be r(x)=(Ax^2+Bx+C).
let P(x) be the polynomial on the numerator.
P(x)=(x^3-x)g(x)+r(x)
setting x=0,
P(0)=r(0)=C
or,C=0...................(1)
setting x=1,
P(1)=r(1)=A+B
or,A+B=5................(2)
setting x=-1,
P(-1)=r(-1)=A-B
or,A-B=-5..................(3)
Solving (1),(2),and (3), we get A=C=0,and B=5.
So, remainder=5x (ans)
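Both answers can be verified numerically; the sketch below performs the polynomial long division with coefficient lists (lowest degree first) and recovers the remainder 5x. The helper name is mine, not from the thread.

```python
def poly_rem(num, den):
    """Remainder of polynomial division; coefficients listed lowest-degree first."""
    num = num[:]  # don't mutate the caller's list
    while len(num) >= len(den):
        factor = num[-1] / den[-1]
        shift = len(num) - len(den)
        for i, c in enumerate(den):
            num[shift + i] -= factor * c
        num.pop()  # leading coefficient is now zero
    return num

# numerator x^81 + x^49 + x^25 + x^9 + x
num = [0.0] * 82
for e in (81, 49, 25, 9, 1):
    num[e] = 1.0
den = [0.0, -1.0, 0.0, 1.0]  # x^3 - x
print(poly_rem(num, den))    # [0.0, 5.0, 0.0], i.e. remainder 5x
```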
The problem is that x cannot be 0: that would make the denominator of the fraction zero (division by zero).
In a paper by Joos and Zeh, Z. Phys. B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' (roughly, "the 'path' comes into being only because we observe it"). Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type roughly $26^7 \approx 8$ billion characters to start getting a good chance that COVFEFE appears.
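The estimate can in fact be made exact: since COVFEFE has no nontrivial self-overlap, the expected number of uniformly random letters until it first appears is exactly $26^7$ (a standard result for patterns without self-overlap; quick sketch below).

```python
pattern = "COVFEFE"
# no proper prefix of the pattern equals a suffix, so no self-overlap correction
assert not any(pattern[:k] == pattern[-k:] for k in range(1, len(pattern)))
expected = 26 ** len(pattern)   # expected keystrokes until the first occurrence
print(expected)  # 8031810176, i.e. about 8 billion
```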
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and Red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational-waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
The question is from Rudin's Functional Analysis chapter 1 number 17. It is stated as follows.
Show $f \mapsto D^\alpha f$ is a continuous mapping of $C^{\infty}(\Omega)$ into $C^{\infty}(\Omega)$ and also of $\mathscr{D}_K$ into $\mathscr{D}_K$, for every multi-index $\alpha$.
I tried gathering all the relevant information I could below, but don't know where to go with it. Any and all help is appreciated. Thanks in advance.
$\alpha = (\alpha_1, \ldots, \alpha_n)$ is an ordered n-tuple of nonnegative integers.
$D^\alpha = \displaystyle \left(\frac{\partial}{\partial x_1}\right)^{\alpha_1} \dots \left(\frac{\partial}{\partial x_n}\right)^{\alpha_n}$
whose order is $ |\alpha | = \alpha_1 + \dots + \alpha_n$. If $|\alpha| = 0,$ $ D^\alpha f = f$.
I know the space $C(\Omega)$ is the vector space of all complex-valued continuous functions on $\Omega$ and $\Omega$ is the union of countably many compact sets $K_n \neq \emptyset$ which can be chosen so that $K_n$ lies in the interior of $K_{n+1}$ $(n = 1,2,3,\ldots)$.
$C^\infty(\Omega)$ is the set of all complex functions $f$ such that $D^\alpha f \in C(\Omega)$ for every multi-index $\alpha$.
If $K$ is a compact set in $\mathbb{R}^n$, then $\mathscr{D}_K$ denotes the space of all $f \in C^\infty(\mathbb{R}^n)$ whose support lies in $K$.
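A sketch of one standard argument (using the seminorms $p_N(f)=\max\{|D^\beta f(x)| : x\in K_N,\ |\beta|\le N\}$ that generate the topology in Rudin's construction; the details should be checked against the chapter):

$$p_N(D^\alpha f)=\max_{x\in K_N,\ |\beta|\le N}\bigl|D^{\beta}D^{\alpha}f(x)\bigr|\le\max_{x\in K_{N+|\alpha|},\ |\gamma|\le N+|\alpha|}\bigl|D^{\gamma}f(x)\bigr|=p_{N+|\alpha|}(f),$$

since $K_N\subset K_{N+|\alpha|}$ and $|\beta+\alpha|\le N+|\alpha|$. Thus every defining seminorm of $D^\alpha f$ is bounded by a seminorm of $f$, which gives continuity of the linear map $f\mapsto D^\alpha f$ on $C^\infty(\Omega)$; restricting the same estimate to functions supported in $K$ gives continuity on $\mathscr{D}_K$.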
Proportional, Integral and Derivative (PID) control architectures cover a significant portion of today’s industrial control applications. The PID control law for a Single-Input Single-Output (SISO) system is given by
\begin{equation}
u(t) = K_p e(t) + K_i \int_{0}^{t} e(\tau) \,\mathrm{d}\tau + K_d \dot{e}(t)
\end{equation}
with system input $u$, system output $y$, control gains $K_p$, $K_i$, $K_d$, and error $e = y - y_\text{des}$. This control law found widespread adoption thanks to its simplicity, the small number of open tuning parameters, and the availability of simple tuning rules.
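A discrete-time sketch of this law (my own illustration, not from the project; it assumes a fixed sample time dt and the error convention $e = y - y_\text{des}$ used above):

```python
class PID:
    """Minimal single-loop PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, y, y_des):
        e = y - y_des
        self.integral += e * self.dt  # rectangle rule for the integral term
        if self.prev_error is None:
            deriv = 0.0               # no derivative on the first sample
        else:
            deriv = (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=0.1)
print(pid.step(y=1.0, y_des=0.0))  # ≈ 2.05 (= 2*1 + 0.5*0.1)
```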
More complex systems, however, usually require the control of multiple coupled inputs given multi-dimensional system outputs. Manual tuning therefore quickly becomes tedious and simple heuristics are no longer applicable to coupled controllers.
In this project, we adapt general methods from model-based reinforcement learning (RL) to the specific PID architecture, in particular for Multi-Input Multi-Output (MIMO) systems and possibly gain-scheduled control designs. Based on the Probabilistic Inference for Learning Control (PILCO) framework, the finite-horizon optimal control problem can be solved efficiently whilst incorporating model uncertainty in a fully probabilistic fashion.
In [ ], we demonstrate how to incorporate the PID controller in the probabilistic prediction and optimization step of PILCO. As a result, arbitrary PID control structures can be optimized whilst taking into account the full non-linear system dynamics and accounting for uncertainty caused by missing data or complex dynamics. Our proposed state augmentations enable efficient, gradient-based controller optimization.
We demonstrate iterative learning control without prior system knowledge on the robotic platform Apollo. Apollo, equipped with its 7 Degree-of-Freedom (DoF) arm, learns to balance an inverted pendulum within 7 learning iterations (equivalent to around 100 s of interaction time).
Exponential
The exponential function \(\exp\) is the solution of
\(\exp'(x)=\exp(x)\),
\(\exp(0)=1\)
where the prime denotes the derivative.
The notation \(\exp(x)=e^x\)
is also used; the constant
\( \displaystyle \mathrm e=\sum_{n=0}^\infty \frac{1}{n!} \approx 2.71828182846 \)
The same function is also called the "natural exponent" or "exponential to base \(\mathrm e\)", in order to distinguish it from the exponential to another base \(b\), denoted as
\(\exp_b(z)=b^z=\exp\big(\ln(b) z\big)\)
where \(\ln\) denotes the natural logarithm,
\(\ln=\exp^{-1}\)
where the superscript on the name of a function indicates its iterate; the logarithm is the minus-first iterate of the exponential, i.e., its inverse function. In a wide range of values of \(z\), the identity holds,
\(\exp\big(\ln(z)\big)=z\)
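The series for \(\mathrm e\) converges very fast; twenty terms already give double precision (a quick sketch):

```python
import math

def e_series(terms):
    """Partial sum of e = sum_{n>=0} 1/n!."""
    total, fact = 0.0, 1.0
    for n in range(terms):
        if n > 0:
            fact *= n       # fact = n!
        total += 1.0 / fact
    return total

print(e_series(20))  # ≈ 2.718281828459045, matching math.e
```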
Revista Matemática Iberoamericana
Volume 28, Issue 2, 2012, pp. 351–369 DOI: 10.4171/rmi/680
Published online: 2012-04-22
On curvature and the bilinear multiplier problem
S. Zubin Gautam
(1) Indiana University, Bloomington, USA
We provide sufficient normal curvature conditions on the boundary of a domain $D \subset \mathbb{R}^4$ to guarantee unboundedness of the bilinear Fourier multiplier operator $\mathrm{T}_D$ with symbol $\chi_D$ outside the local $L^2$ setting, i.e., from $L^{p_1} ( \mathbb{R}^2) \times L^{p_2} ( \mathbb{R}^2) \rightarrow L^{p_3'} ( \mathbb{R}^2)$ with $\sum \frac{1}{p_j} = 1$ and $p_j <2$ for some $j$. In particular, these curvature conditions are satisfied by any domain $D$ that is locally strictly convex at a single boundary point.
Keywords: Bilinear Fourier multipliers, multilinear operators
Gautam, S. Zubin: On curvature and the bilinear multiplier problem.
Rev. Mat. Iberoam. 28 (2012), 351-369. doi: 10.4171/rmi/680
The diagram shows a British 50 pence coin.
The seven arcs $AB$, $BC$, ..., $FG$, $GA$ are of equal length, and each arc is formed from the circle of radius $a$ having its centre at the vertex diametrically opposite the midpoint of the arc. Show that the area of the face of the coin is
$$\frac{a^2}{2}(\pi-7\tan\frac{\pi}{14})$$
How can I prove it?
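Not a proof, but the decomposition a proof can formalise (the regular heptagon on the seven vertices, plus seven circular segments of radius $a$ subtending angle $\pi/7$) can be checked numerically:

```python
import math

a = 1.0
n = 7
theta = math.pi / n                     # angle each arc subtends at its centre vertex

side = 2 * a * math.sin(theta / 2)      # chord joining adjacent vertices
heptagon = (n / 4) * side**2 / math.tan(math.pi / n)   # regular n-gon area
segment = 0.5 * a**2 * (theta - math.sin(theta))       # one circular segment
area = heptagon + n * segment

closed_form = 0.5 * a**2 * (math.pi - n * math.tan(math.pi / (2 * n)))
print(area, closed_form)  # both ≈ 0.7719...
```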
I can't describe it; I just can't do it. Can you help me please? Note that there is an arrow showing the position "t".
For fun I tried to make what you specified with my limited knowledge. A short look on this wikibook chapter brought me a long way.
I am sure it can be done much more elegant, shorter, logical, etc, but this was as close I could get. I am not sure how to add the little bars in the right delimiter, I hope they are not mandatory. From your drawing it is also not clear to me if the middle parts should be smaller or in the same size. Perhaps it can serve you as a starting point.
\documentclass{article}
\usepackage{amsmath}
\usepackage{mathtools}
\begin{document}
\[
\begin{bmatrix}
 \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix} \\
 \vdots \\
 \begin{pmatrix} 0 \\ \vdots \\ t \\ \vdots \\ 0 \end{pmatrix} \\
 \vdots \\
 \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix} \\
\end{bmatrix}
\begin{matrix*}[l]
 \left. \vphantom{\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}} \right\} 1 \\
 \vphantom{\vdots} \\
 \left. \vphantom{\begin{pmatrix} 0 \\ \vdots \\ t \\ \vdots \\ 0 \end{pmatrix}} \right\} \ell \\
 \vphantom{\vdots} \\
 \left. \vphantom{\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}} \right\} N
\end{matrix*}
\]
\end{document}
Your notation is not only inconvenient for the reader but also wrong or ambiguous. I start reading from the top: there is a zero vector, then an ellipsis means it is repeated, and I arrive at a nonzero vector. Weird, but OK, I continue; the next ellipsis means it is also repeated, but then I arrive at another zero vector. If I start from the middle, a nonzero vector is repeated on both sides, but both ends are zero vectors. Besides, you didn't even manage to denote the individual vector sizes, which would make this even more crowded.
When exactly do these vectors switch meanings? Before the ellipsis? Is only the middle different, or are only the end points zero vectors?
Long story short, ellipses were used when there was no other way but to place these symbols manually in handwritten or typewritten documents. We are long past those times, so please consider using ellipses sparingly and take advantage of LaTeX instead.
You can do all kinds of things drawing lines with TikZ or using size information such as
$0_{1\times k}$ etc. but for such repeated patterns use Kronecker products.
Otherwise you can search the site for all kinds of matrix embellishments asked before and use them but I doubt that they will ever be able to help in this case.
\documentclass{article}
\usepackage{mathtools}%<-- fixes and enhances amsmath
\begin{document}
\[
M = \begin{pmatrix} 0_{l-1} \\[1mm] 1 \\ 0_{N-l} \end{pmatrix}
\otimes
\begin{pmatrix} 0_{t-1} \\[1mm] 1 \\ 0_{k-t} \end{pmatrix}
\]
\end{document}
I found the answer by Todd Davies in LaTeX help helpful, and I did this:
\documentclass{article}
\usepackage{amsmath}
\newcommand\MyLBrace[2]{%
 \left.\rule{0pt}{#1}\right\}\text{#2}}
\begin{document}
$$
\begin{bmatrix}
 \left(\begin{smallmatrix}0\\ \vdots\\0\end{smallmatrix}\right)\\
 \vdots\\
 \left(\begin{smallmatrix}0\\ \vdots\\1\\ \vdots \\0\end{smallmatrix}\right) \!\!\!\!\!\!
 \begin{smallmatrix} \\ \\ \rightarrow t\\ \\ \end{smallmatrix}\\
 \vdots\\
 \left(\begin{smallmatrix}0\\ \vdots\\0\end{smallmatrix}\right)\\
\end{bmatrix}\!\!\!\!\!\!
\begin{array}{l}
 \MyLBrace{3ex}{1} \\ \\ \\
 \MyLBrace{5.5ex}{l} \\ \\ \\
 \MyLBrace{3ex}{N}
\end{array}
$$
\end{document}
And it works fine. Thanks to all!
Algebra and Algebraic Geometry Seminar Spring 2018

Spring 2018 Schedule

March 23: Phil Tosteson (Michigan); host Steven
April 6: Wei Ho; hosts Daniel/Wanlin
April 13: host Daniel
April 20: Alena Pirutka (NYU); host Jordan
April 27
May 4: John Lesieutre (UIC); host Daniel

Latest revision as of 10:25, 26 December 2018
The seminar meets on Fridays at 2:25 pm in room B235.
Algebra and Algebraic Geometry Mailing List

Please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link).

Abstracts

Tasos Moulinos Derived Azumaya Algebras and Twisted K-theory
Topological K-theory of dg-categories is a localizing invariant of dg-categories over [math] \mathbb{C} [/math] taking values in the [math] \infty [/math]-category of [math] KU [/math]-modules. In this talk I describe a relative version of this construction; namely for [math]X[/math] a quasi-compact, quasi-separated [math] \mathbb{C} [/math]-scheme I construct a functor valued in the [math] \infty [/math]-category of sheaves of spectra on [math] X(\mathbb{C}) [/math], the complex points of [math]X[/math]. For inputs of the form [math]\operatorname{Perf}(X, A)[/math] where [math]A[/math] is an Azumaya algebra over [math]X[/math], I characterize the values of this functor in terms of the twisted topological K-theory of [math] X(\mathbb{C}) [/math]. From this I deduce a certain decomposition, for [math] X [/math] a finite CW-complex equipped with a bundle [math] P [/math] of projective spaces over [math] X [/math], of [math] KU(P) [/math] in terms of the twisted topological K-theory of [math] X [/math] ; this is a topological analogue of a result of Quillen’s on the algebraic K-theory of Severi-Brauer schemes.
Roman Fedorov A conjecture of Grothendieck and Serre on principal bundles in mixed characteristic
Let G be a reductive group scheme over a regular local ring R, and let E be a principal G-bundle over R. An old conjecture of Grothendieck and Serre predicts that such a principal bundle is trivial if it is trivial over the fraction field of R. The conjecture has recently been proved in the "geometric" case, that is, when R contains a field. In the remaining case, the difficulty comes from the fact that the situation is more rigid, so that a certain general position argument does not go through. I will discuss this difficulty and a way to circumvent it to obtain some partial results.
Juliette Bruce Asymptotic Syzygies in the Semi-Ample Setting
In recent years numerous conjectures have been made describing the asymptotic Betti numbers of a projective variety as the embedding line bundle becomes more ample. I will discuss recent work attempting to generalize these conjectures to the case when the embedding line bundle becomes more semi-ample. (Recall a line bundle is semi-ample if a sufficiently large multiple is base point free.) In particular, I will discuss how the monomial methods of Ein, Erman, and Lazarsfeld used to prove non-vanishing results on projective space can be extended to prove non-vanishing results for products of projective space.
Andrei Caldararu Computing a categorical Gromov-Witten invariant
In his 2005 paper "The Gromov-Witten potential associated to a TCFT" Kevin Costello described a procedure for recovering an analogue of the Gromov-Witten potential directly out of a cyclic A-inifinity algebra or category. Applying his construction to the derived category of sheaves of a complex projective variety provides a definition of higher genus B-model Gromov-Witten invariants, independent of the BCOV formalism. This has several advantages. Due to the categorical invariance of these invariants, categorical mirror symmetry automatically implies classical mirror symmetry to all genera. Also, the construction can be applied to other categories like categories of matrix factorization, giving a direct definition of FJRW invariants, for example.
In my talk I shall describe the details of the computation (joint with Junwu Tu) of the invariant, at g=1, n=1, for elliptic curves. The result agrees with the predictions of mirror symmetry, matching classical calculations of Dijkgraaf. It is the first non-trivial computation of a categorical Gromov-Witten invariant.
Aron Heleodoro Normally ordered tensor product of Tate objects and decomposition of higher adeles
In this talk I will introduce the different tensor products that exist on Tate objects over vector spaces (or more generally coherent sheaves on a given scheme). As an application, I will explain how these can be used to describe higher adeles on an n-dimensional smooth scheme. Both Tate objects and higher adeles would be introduced in the talk. (This is based on joint work with Braunling, Groechenig and Wolfson.)
Moisés Herradón Cueto Local type of difference equations
The theory of algebraic differential equations on the affine line is very well-understood. In particular, there is a well-defined notion of restricting a D-module to a formal neighborhood of a point, and these restrictions are completely described by two vector spaces, called vanishing cycles and nearby cycles, and some maps between them. We give an analogous notion of "restriction to a formal disk" for difference equations that satisfies several desirable properties: first of all, a difference module can be recovered uniquely from its restriction to the complement of a point and its restriction to a formal disk around this point. Secondly, it gives rise to a local Mellin transform, which relates vanishing cycles of a difference module to nearby cycles of its Mellin transform. Since the Mellin transform of a difference module is a D-module, the Mellin transform brings us back to the familiar world of D-modules.
Eva Elduque On the signed Euler characteristic property for subvarieties of Abelian varieties
Franecki and Kapranov proved that the Euler characteristic of a perverse sheaf on a semi-abelian variety is non-negative. This result has several purely topological consequences regarding the sign of the (topological and intersection homology) Euler characteristic of a subvariety of an abelian variety, and it is natural to attempt to justify them by more elementary methods. In this talk, we'll explore the geometric tools used recently in the proof of the signed Euler characteristic property. Joint work with Christian Geske and Laurentiu Maxim.
Harrison Chen Equivariant localization for periodic cyclic homology and derived loop spaces
There is a close relationship between derived loop spaces, a geometric object, and (periodic) cyclic homology, a categorical invariant. In this talk we will discuss this relationship and how it leads to an equivariant localization result, which has an intuitive interpretation using the language of derived loop spaces. We discuss ongoing generalizations and potential applications in computing the periodic cyclic homology of categories of equivariant (coherent) sheaves on algebraic varieties.
Phil Tosteson Stability in the homology of Deligne-Mumford compactifications
The space [math]\bar M_{g,n}[/math] is a compactification of the moduli space algebraic curves with marked points, obtained by allowing smooth curves to degenerate to nodal ones. We will talk about how the asymptotic behavior of its homology, [math]H_i(\bar M_{g,n})[/math], for [math]n \gg 0[/math] can be studied using the representation theory of the category of finite sets and surjections.
Wei Ho Noncommutative Galois closures and moduli problems
In this talk, we will discuss the notion of a Galois closure for a possibly noncommutative algebra. We will explain how this problem is related to certain moduli problems involving genus one curves and torsors for Jacobians of higher genus curves. This is joint work with Matt Satriano.
Daniel Corey Initial degenerations of Grassmannians
Let Gr_0(d,n) denote the open subvariety of the Grassmannian Gr(d,n) consisting of d-1 dimensional subspaces of P^{n-1} meeting the toric boundary transversely. We prove that Gr_0(3,7) is schoen in the sense that all of its initial degenerations are smooth. The main technique we will use is to express the initial degenerations of Gr_0(3,7) as a inverse limit of thin Schubert cells. We use this to show that the Chow quotient of Gr(3,7) by the maximal torus H in GL(7) is the log canonical compactification of the moduli space of 7 lines in P^2 in linear general position.
Alena Pirutka Irrationality problems

Let X be a projective algebraic variety, the set of solutions of a system of homogeneous polynomial equations. Several classical notions describe how "unconstrained" the solutions are, i.e., how close X is to projective space: there are notions of rational, unirational and stably rational varieties. Over the field of complex numbers, these notions coincide in dimensions one and two, but diverge in higher dimensions. In recent years, many new classes of non stably rational varieties were found, using a specialization technique introduced by C. Voisin. This method also allowed to prove that rationality is not a deformation invariant in smooth and projective families of complex varieties: this is joint work with B. Hassett and Y. Tschinkel. In my talk I will describe classical examples, as well as the recent progress around these rationality questions.

Nero Budur Homotopy of singular algebraic varieties
By work of Simpson, Kollár, Kapovich, every finitely generated group can be the fundamental group of an irreducible complex algebraic variety with only normal crossings and Whitney umbrellas as singularities. In contrast, we show that if a complex algebraic variety has no weight zero 1-cohomology classes, then the fundamental group is strongly restricted: the irreducible components of the cohomology jump loci of rank one local systems containing the constant sheaf are complex affine tori. Same for links and Milnor fibers. This is joint work with Marcel Rubió.
Alexander Yom Din Drinfeld-Gaitsgory functor and contragradient duality for (g,K)-modules
Drinfeld suggested the definition of a certain endo-functor, called the pseudo-identity functor (or the Drinfeld-Gaitsgory functor), on the category of D-modules on an algebraic stack. We extend this definition to an arbitrary DG category, and show that if certain finiteness conditions are satisfied, this functor is the inverse of the Serre functor. We show that the pseudo-identity functor for (g,K)-modules is isomorphic to the composition of cohomological and contragredient dualities, which is parallel to an analogous assertion for p-adic groups.
In this talk I will try to discuss some of these results and around them. This is joint work with Dennis Gaitsgory.
John Lesieutre Some higher-dimensional cases of the Kawaguchi-Silverman conjecture
Given a dominant rational self-map f : X --> X of a variety defined over a number field, the first dynamical degree $\lambda_1(f)$ and the arithmetic degree $\alpha_f(P)$ are two measures of the complexity of the dynamics of f: the first measures the rate of growth of the degrees of the iterates f^n, while the second measures the rate of growth of the heights of the iterates f^n(P) for a point P. A conjecture of Kawaguchi and Silverman predicts that if P has Zariski-dense orbit, then these two quantities coincide. I will prove this conjecture in several higher-dimensional settings, including for all automorphisms of hyper-Kähler varieties. This is joint work with Matthew Satriano.
Let $\Omega \in \mathbb R^n$, $n>1$, be a bounded domain with smooth boundary. Let $u \in C^1 (\bar \Omega)$ be harmonic in $\Omega$.
(a) Prove $\max_{\bar \Omega} |\nabla u|^2=\max_{\partial \Omega} |\nabla u|^2$.
(b) Suppose $u \in C^2 (\mathbb R^n)$ satisfies $\Delta u = 0$ on $\mathbb R^n$. Show that if $u \in L^2 (\mathbb R^n)$, then $u$ is identically equal to zero.
Thoughts: For part (a), I think I need to apply the maximum principle, but to do that, do I need to show that $\nabla u$ is harmonic and in $C^2$? Further thoughts: I can't comment because of low points. For John Ma: we know the definition of a $C^1$ function, so we can just require $u$ to satisfy $\Delta u = 0$. Final thoughts: I just checked my previous homework, and I think I need to show that $|\nabla u|^2$ is subharmonic, then apply the maximum principle for subharmonic functions.
Any hints would be appreciated.
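The subharmonicity idea from my final thoughts can be carried out with a short computation (a sketch I wrote up myself, so please verify): since $u$ is harmonic, it is smooth in the interior, and differentiating termwise,

```latex
\Delta |\nabla u|^{2}
  = \sum_{i,j} \partial_{i}\partial_{i} (\partial_{j} u)^{2}
  = 2 \sum_{i,j} \Bigl[ (\partial_{i}\partial_{j} u)^{2}
      + \partial_{j} u \,\partial_{i}\partial_{i}\partial_{j} u \Bigr]
  = 2\,|D^{2} u|^{2} + 2\,\nabla u \cdot \nabla(\Delta u)
  = 2\,|D^{2} u|^{2} \;\ge\; 0,
```

using $\Delta u = 0$ in the last step. So $|\nabla u|^{2}$ is subharmonic in $\Omega$, and the maximum principle for subharmonic functions gives (a). (For (b), I believe one standard route is the mean-value property on balls $B_R(x)$ together with Cauchy-Schwarz and the $L^2$ bound, letting $R \to \infty$.)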
I am trying to show that for $0<\alpha<2,$ $$\text{PV}\left (\int_{-\infty}^{+\infty} \frac{e^{\alpha x}}{e^{2x}-1}\mathrm d x\right )=-\frac{\pi}{2}\cot\left (\frac{\alpha\pi}{2} \right )\tag{$\star$}$$ to gain some familiarity with the concept of Principal Value.
My attempts
First of all I started by expanding the integral. Let $R>0$ be a positive real number. Then $$ \begin{align} \int_{-R}^R\frac{e^{\alpha x}}{e^{2x}-1}\mathrm dx&= \int_{-R}^R e^{\alpha x}\left (\sum_{n\ge 1}e^{-2nx}\right )\mathrm dx=\sum_{n\ge 1}\int_{-R}^R e^{(\alpha-2n)x}\mathrm d x \\ &=\sum_{n\ge 1}\left .\frac{e^{(\alpha-2n)x}}{\alpha-2n} \right |^{x=R}_{x=-R}=\sum_{n\ge 1}\frac{1}{\alpha-2n}\left ( e^{(\alpha-2n)R} -e^{-(\alpha-2n)R}\right) \end{align} $$ but I don't know how to continue from here. I tried evaluating the series through complex analytic methods but I was not successful.
I tried substituting $e^{2x}=u$. The integral becomes $$\int_{-R}^R\frac{e^{2x\frac{\alpha}{2}}}{e^{2x}-1}\mathrm dx=\frac{1}{2}\int_{e^{-2R}}^{e^{2R}}\frac{u^{\frac{\alpha}{2}-1}}{u-1}\mathrm du$$ but this doesn't look very promising. I was unable to manipulate this expression to evaluate the integral.
Question: How can I evaluate the principal value $(\star)$?
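Not an answer, but a numeric sanity check I found helpful (my own sketch; the function name is made up). The trick is that pairing $x$ with $-x$ cancels the simple pole at $0$: the folded integrand $g(x) = f(x) + f(-x)$ is bounded, with $g(x) \to \alpha - 1$ as $x \to 0$, so the principal value becomes an ordinary integral over $(0,\infty)$:

```python
import math

def pv_integral(alpha, R=40.0, n=20000):
    """Approximate PV of the integral of e^(ax)/(e^(2x)-1) over the real
    line, for 0 < a < 2.

    The 1/(2x) singularities of f(x) and f(-x) cancel, and the folded
    integrand g(x) = f(x) + f(-x) tends to a - 1 as x -> 0.  The PV then
    equals the ordinary integral of g over (0, R), approximated here by
    composite Simpson's rule (the tails decay exponentially, so R = 40
    is plenty).
    """
    def g(x):
        if x == 0.0:
            return alpha - 1.0  # limiting value of the folded integrand
        # expm1 keeps e^(2x) - 1 accurate near x = 0
        return (math.exp(alpha * x) / math.expm1(2.0 * x)
                + math.exp(-alpha * x) / math.expm1(-2.0 * x))

    h = R / n  # n must be even for Simpson's rule
    total = g(0.0) + g(R)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * g(k * h)
    return total * h / 3.0

# Compare with -(pi/2) * cot(alpha * pi / 2) at a few values of alpha
for alpha in (0.5, 1.0, 1.5):
    exact = -(math.pi / 2) / math.tan(alpha * math.pi / 2)
    print(f"alpha={alpha}: numeric={pv_integral(alpha):+.8f}, exact={exact:+.8f}")
```

For $\alpha=1$ the integrand is odd and both sides vanish; for $\alpha=\tfrac12$ both sides come out near $-\pi/2$. This only checks $(\star)$ numerically, of course; for a proof, I believe the standard route is a rectangular contour of height $\pi$ with small indentations at the poles on the boundary, using $f(x+i\pi) = e^{i\alpha\pi} f(x)$.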
First, treat the inequality as if it were an equation: $ x^{2} - 3 = 0 $. Thus, $$ x^{2} = 3 $$ and solving for $x$, $$x =\pm \sqrt{3}.$$
This makes $x = -\sqrt{3}$ and $x = \sqrt{3}$ the "roots" of that equation.
Now, to solve the *inequality* $ x^{2} - 3 > 0 $, keep the aforementioned "roots" in mind and consider the following cases:
Case #1: When $x < -\sqrt{3}$, the inequality holds true since $x^{2} - 3$ is positive.
Case #2: $-\sqrt{3} < x < \sqrt{3}$ would make the inequality false because $x^{2} - 3$ is negative.
Case #3: When $x > \sqrt{3}$, the inequality holds true since $x^{2} - 3$ is also positive.
Thus the solutions to the inequality are $x < -\sqrt{3} \quad \textbf{or} \quad x > \sqrt{3}$.
The solutions expressed in interval notation: $(-\infty, -\sqrt{3}) \cup (\sqrt{3}, \infty)$. |
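As a quick numeric sanity check of the three cases (illustrative only; the helper name is my own):

```python
import math

def holds(x):
    """True exactly when x^2 - 3 > 0."""
    return x * x - 3 > 0

r = math.sqrt(3)  # the two "roots" are at about +/- 1.732

# Case 1: x < -sqrt(3)  ->  inequality holds
assert holds(-2) and holds(-100)
# Case 2: -sqrt(3) < x < sqrt(3)  ->  inequality fails
assert not holds(-r + 0.1) and not holds(0) and not holds(r - 0.1)
# Case 3: x > sqrt(3)  ->  inequality holds
assert holds(r + 0.1) and holds(100)
print("all three cases check out")
```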
Set $w=\phi(z)=i\frac{1+z}{1-z}$ (which maps the unit disk in the complex plane to the upper half-plane). Show that the Schwarz-Christoffel formula, $$f(z)=A_1\int_0^z \frac 1{(w-x_1)^{\beta_1}(w-x_2)^{\beta_2}\cdots(w-x_n)^{\beta_n}}\ dw+A_2,\quad (z \in \mathbb H),$$ retains the same form.
Question 1: What does this question mean? What do they mean by the SC formula "retaining the same form"? What on earth are they asking for? What does the SC formula have to do with $\phi(z)$?
Help! (Once again, sorry for the vaguely worded question - I know everyone hates when you just post "help").
EDIT: OK, based on math chatroom discussions and some comments in the text, I think this question is asking to show that $\phi(f(z))$ has the same form as $f(z)$. However, I can't figure out the next step after substituting $f(z)$ into the equation for $\phi(z)$. What is the next step after $$i\frac{1+(A_1\int_0^z \frac 1{(w-x_1)^{\beta_1}(w-x_2)^{\beta_2}\cdots(w-x_n)^{\beta_n}}\ dw+A_2)}{1-(A_1\int_0^z \frac 1{(w-x_1)^{\beta_1}(w-x_2)^{\beta_2}\cdots(w-x_n)^{\beta_n}}\ dw+A_2)}?$$Thank you.
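For what it's worth, here is the change-of-variables computation I believe the exercise intends (my own sketch, under the usual assumption $\sum_k \beta_k = 2$, and writing $x_k = \phi(z_k)$ for the prevertices on the unit circle): substituting $w = \phi(\zeta)$ into the integral,

```latex
\begin{aligned}
w - x_k &= \phi(\zeta) - \phi(z_k)
         = \frac{2i\,(\zeta - z_k)}{(1-\zeta)(1-z_k)},
\qquad
dw = \phi'(\zeta)\,d\zeta = \frac{2i}{(1-\zeta)^{2}}\,d\zeta, \\[4pt]
\prod_{k}(w - x_k)^{-\beta_k}\, dw
  &= C\,(1-\zeta)^{\sum_k \beta_k - 2}
     \prod_{k}(\zeta - z_k)^{-\beta_k}\, d\zeta
   = C \prod_{k}(\zeta - z_k)^{-\beta_k}\, d\zeta,
\end{aligned}
```

where $C = (2i)^{1-\sum_k \beta_k}\prod_k (1-z_k)^{\beta_k}$ is a constant that gets absorbed into $A_1$. So the formula keeps exactly the same form, with the points $z_k$ on the unit circle in place of the real $x_k$; that, I think, is what "retains the same form" means.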