I'm not sure where I could pose a challenge to find the best $f(n)$ so people will join in. $n\ge 5$ will probably never be proven optimal, but some lucky computations or out-of-the-box analysis might give nice results.
(Given $n$ fixed digits and the operations $(+,-,\times,\div)$, what's the highest $N\in\mathbb N$ such that all numbers $1\dots N$ can be built? $f(n)=N$)
@TheSimpliFire You mentioned base, is it true that using digits $\lt b$ means we can represent some number $N$ using $\le (b+1)\log_b N$ digits, if only $+,\times$ are allowed?
If $b=2$, $3\log_2 N$ bound is given: https://arxiv.org/pdf/1310.2894.pdf and explained: " The upper bound can be obtained by writing $N$ in binary and finding a representation using Horner’s algorithm."
So if we actually allow $\le b$ digits, we have $\log_b N$ digits and that many bases, so the bound would be $2\log_b N$? https://en.wikipedia.org/wiki/Horner%27s_method @TheSimpliFire
The problem is inverting the bound which is not trivial if $b\ne 2$.
For example, we can build $1=2-1$ using the digits $1,2$, but adding $5$ so that the set is now $1,2,5$ does NOT allow us to rebuild $1$, since all digits must be used.
So keeping consecutive integers from $n-1$ digit case is not guaranteed. This is the issue.
The $d$ is fixed at $n$ digits and all need to be used.
That's why I took the digit sets $d_i=2^{i-1}$, $i=1,\dots,n$: we can divide the two largest to get the $n-1$ case, and this eventually allows us to obtain the bound $f(n)\ge2^n-1$.
Inductively.
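(Quick illustration of that bound for $n=3$ — my own check, just to make the construction concrete: with digits $\{1,2,4\}$ we get $1=4-2-1$, $2=4-2\cdot 1$, $3=4-2+1$, $4=(2-1)\cdot 4$, $5=4+2-1$, $6=(4+2)\cdot 1$, $7=4+2+1$, so every value $1,\dots,2^3-1$ is reachable while using all three digits.)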
This is not an issue if all digits are $1$'s either; in that case they give the bound $3\log_2 N\ge a(N)$, which can be translated to $f(n)\ge 2^{n/3}$, since multiplying two $1$'s reduces the case to $n-1$ and allows induction.
We need to inductively build digits $d_i$ so next set can achieve at least what previous one did.
Otherwise, it is hard to prove the next step is better when adding more digits.
For example we can add $d_0,d_0/2,d_0/2$ where $d_0$ can be anything since $d_0-d_0/2-d_0/2$ reduces us to case $n-3$. The comments discuss setting better bounds using similar construction (on my last question)
I'm not sure if you have the full context of the question or if this makes sense so sorry for clogging up the chat :P
|
This question already has an answer here:
If $X= \sqrt{A}^{\sqrt{A}^{\sqrt{A}^{\cdot^{\cdot^{\cdot}}}}}$ then what is the value of $X^2-e^{1/X}$?
You have $X= \sqrt{A}^X.$ So
$$\ln X = X \ln \sqrt{A} = \frac{X}{2}\ln A$$
$$\frac{2\ln X}{X} = \ln A$$
$$ A = \exp\left(\frac{2\ln X}{X}\right).$$
Firstly, this has to converge, which occurs when $e^{-2e}\le A\le e^{2e^{-1}}$. More elaboration on the convergence is discussed in this question.
$$X=A^{X/2}=e^{\frac12\ln(A)X}$$
$$Xe^{-\frac12\ln(A)X}=1$$
$$-\frac12\ln(A)Xe^{-\frac12\ln(A)X}=-\frac12\ln(A)$$
$$-\frac12\ln(A)X=W\left(-\frac12\ln(A)\right)$$
$$X=\frac{W\left(-\frac12\ln(A)\right)}{-\frac12\ln(A)}=e^{-W\left(-\frac12\ln(A)\right)}$$
Where I used the Lambert W function. Now it's easy to compute the rest.
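As a quick sanity check of this formula (my own example, not part of the original answer), take $A=2$, which lies in the convergence range since $2\le e^{2/e}\approx 2.087$. Because $(-\ln 2)e^{-\ln 2}=-\frac{1}{2}\ln 2$, we have $W\left(-\frac12\ln 2\right)=-\ln 2$, so
$$X=e^{-W\left(-\frac12\ln 2\right)}=e^{\ln 2}=2,$$
which indeed satisfies $X=\sqrt{2}^{\,X}$. The quantity asked about is then $X^2-e^{1/X}=4-e^{1/2}\approx 2.351$.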
|
The First Isomorphism Theorem, Intuitively
Welcome back to our little discussion on quotient groups! (If you're just now tuning in, be sure to check out "What's a Quotient Group, Really?" Part 1 and Part 2!) We're wrapping up this mini series by looking at a few examples. I'd like to take my time emphasizing intuition, so I've decided to give each example its own post. Today we'll take an intuitive look at the quotient given in the First Isomorphism Theorem.
Example #1: The First Isomorphism Theorem
Suppose $\phi:G\to H$ is a homomorphism of groups (let's assume it's not the map that sends everything to the identity, otherwise there's nothing interesting to say) and recall that $\ker\phi\subset G$ means "You belong to $\ker\phi$ if and only if you map to the identity $e_H$ in $H$." I'd like to convince you why it's helpful to think of the quotient $G/\ker\phi$ as consisting of all the stuff in $G$ that doesn't map to $e_H$.
First notice that every element of $G$ is either 1) in $\ker\phi$ or 2) is not. And there's only one way to satisfy 1)---you're simply
in the kernel. This is why we have exactly one "trivial" coset, $\ker\phi$. On the other hand, there may be many ways to satisfy 2), and that's why there may be many "nontrivial" cosets.
But just
how might an element $g\in G$ satisfy 2)? Well, $\phi(g)\neq e_H$, of course! But notice! There could be many elements besides $g$ who also map to the same $\phi(g)$ under $\phi$. (After all, we haven't required that $\phi$ be injective.) In fact, every element of the form $gg'$ where $g'\in\ker\phi$ fits the bill. So we group all those elements together in one pile, one coset, and denote it $g\ker\phi$. The notation for this is quite good: the little $g$ reminds us, "Hey, these are all the folks that map to the value of $\phi$ at that $g$." And multiplying $g$ by $\ker\phi$ on the right is suggestive of what we just observed: we can obtain other elements with the same image $\phi(g)$ by multiplying $g$ on the right by things in $\ker\phi$.
I like to imagine the elements of $G$ as starting off as dots scattered everywhere,
which we can then organize into little piles according to their image under $\phi$. In fact, let's color-code them:
(Notice $\phi$ isn't necessarily surjective.) Now here's the key observation: we get one such pile for every element in the set $\phi(G)=\{h\in H|\phi(g)=h \text{ for some $g\in G$}\}$. The idea, then, behind forming the quotient $G/\ker\phi$ is that we might as well consider the collection of green dots as a
single green dot and call it the coset $\ker\phi$. And we might as well consider the collection of pink dots as a single pink dot and call it the coset $g_1\ker\phi$, and so on. So we get a nice, clean picture like this:
Intuitively, then, we should expect a one-to-one correspondence between the cosets of $G/\ker\phi$ and the elements of $\phi(G)$. That's what the image above is indicating. And that's
exactly what the First Isomorphism Theorem means when it tells us there is a bijection $$G/\ker \phi \cong \phi(G).$$ (In fact, it's richer than a bijection of sets---it's actually an isomorphism of groups!) Pretty cool, huh? We should also notice that there are exactly $|\phi(G)\smallsetminus\{e_H\}|$ ways to "fail" to be in $\ker\phi$, and exactly $1=|\{e_H\}|$ way to be in $\ker\phi$. Typically, $|\phi(G)\smallsetminus\{e_H\}|$ is bigger than one, and so the interesting or substantial part of the quotient $G/\ker\phi$ lies in its subset of nontrivial cosets, $g_1\ker\phi,g_2\ker\phi,\ldots.$ The First Isomorphism Theorem implies that this is the same as viewing the interesting or substantial part of $\phi(G)$ as lying in the images of all the elements of $G$ that don't map to the identity in $H$.
And this is why I like to think of $G/\ker\phi$ as "things in $G$ that don't map to the identity."
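Here's a tiny concrete instance (my own example, not from the original post): let $\phi:\mathbb Z\to\mathbb Z/3\mathbb Z$ be reduction mod 3. Then $\ker\phi=3\mathbb Z$, the "piles" are the three cosets, and the theorem says $$\mathbb Z/\ker\phi=\{3\mathbb Z,\ 1+3\mathbb Z,\ 2+3\mathbb Z\}\;\cong\;\mathbb Z/3\mathbb Z=\phi(\mathbb Z).$$ The trivial pile $3\mathbb Z$ collects everything mapping to the identity, while the two nontrivial piles carry the substantial part of the quotient.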
A closing remark
At some point, you may have seen the First Isomorphism Theorem stated as follows:
Theorem: Let $\phi:G\to H$ be a group homomorphism and let $\pi:G\to G/\ker\phi$ be the canonical (surjective) homomorphism $g\mapsto g\ker\phi$. Then there is a unique isomorphism $\psi:G/\ker\phi\to \phi(G)$ so that $\phi=\psi\circ\pi$, i.e. so that the diagram on the right commutes.
On the surface, this sounds fancy, but it's really just the concise version of our discussion above. We can see this by matching up our pictures with the diagram:
The diagram simply states the obvious: the map $\phi$ from $G$ naturally partitions the elements of $G$ into little (color-coded) piles, according to where they land in $H$. This therefore gives us
two ways to get from an arbitrary $g\in G$ to its image $\phi(g)\in H$. We can either send it directly there via $\phi$. This is the diagonal arrow in the diagram. Or we can first send it to its corresponding color/pile/coset, and then realize, "Aha, everyone in that particular color/pile/coset maps to $\phi(g)$, therefore $g$ goes there too." This is the composition of the horizontal and vertical maps, $\pi$ and $\psi$. And the uniqueness of $\psi$ just says, "This is super obvious!"
(By the way, there's nothing really special about groups here. There's a first isomorphism theorem for other algebraic objects, and the same intuition holds.)
|
The Sierpinski Space and Its Special Property
The Basic Idea
Last time we chatted about a pervasive theme in mathematics, namely that
objects are determined by their relationships with other objects, or more informally, you can learn a lot about an object by studying its interactions with other things. Today I'd like to give an explicit illustration of this theme in the case when "objects" = topological spaces
and
"relationships with other objects" = continuous functions.
The goal of this post, then, is to convince you that
The topology on a space X is completely determined by the set* of all continuous functions to X.
But what do I mean by "is completely determined by"? Well, suppose $Z$ is any topological space, and let hom$(Z,X)$ denote the set of all continuous functions from $Z$ to $X$. Then the above means the following:
#1: The topology on X dictates what hom(Z,X) must be.
#2: hom(Z,X) dictates what the topology on X must be.
Now what exactly do these two notions mean? To answer this, I think it will help to look at a concrete example. In fact, let's consider the case when $X$ is one of the simplest topological spaces out there---the Sierpinski space.
From English to Math
Start with a set $S$ with two elements, say $\{0,1\}$. We can turn this set into a topological space---called the
Sierpinski space---by declaring the open sets to be $\emptyset$, $\{1\}$ and $S$. We'll call this the Sierpinski topology. (Incidentally, there are only three possible topologies on $\{0,1\}$ up to homeomorphism--the discrete one, the indiscrete one, and this one.) Notice that what we call the elements isn't so important. That is, you can replace "0" and "1" by "red" and "blue" or "dog" and "cat," if you like. Either way, we can illustrate the Sierpinski space as shown on the right.

Now suppose $Z$ is any topological space and observe that for any open set $U$ in $Z$, we can construct a function $f_U$ from $Z$ to the Sierpinski space $S$ which sends every point in $U$ to 1 and all other points to 0:$$f_U(z)=\begin{cases}1, &\text{if $z\in U$,}\\0, &\text{if $z\not\in U$}.\end{cases}$$Even better, such a map $f_U$ is continuous! The preimage of each open set in $S$ is open in $Z$: $f_U^{-1}(\emptyset)=\emptyset$, $f_U^{-1}(\{1\})=U,$ and $f_U^{-1}(S)=Z$. On the flip side, if $f:Z\to S$ is any continuous function then we can consider the subset of $Z$ $$U_f=f^{-1}(\{1\})=\{z\in Z:f(z)=1\}.$$ Since $f$ is continuous and $\{1\}$ is open in $S$, the set $U_f$ is necessarily open! These two constructions reveal that there is a bijection of sets:**
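Spelled out in symbols (the displayed bijection itself seems to have been an image in the original post), the correspondence is $$\hom(Z,S)\;\longleftrightarrow\;\{\text{open subsets of }Z\},\qquad f\mapsto f^{-1}(\{1\}),\qquad U\mapsto f_U.$$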
And in fact, the Sierpinski topology on $S$ is
completely characterized by this property! That is, the Sierpinski topology on a two-point set $S$ is the ONLY topology (up to homeomorphism) on $S$ that satisfies the property: "continuous functions from any space $Z$ to $S$ are in one-to-one correspondence with the open sets of $Z$."***This follows from the following two observations that we alluded to earlier:
#1: The topology on S dictates what hom(Z,S) must be.
That is, if we endow $S$ with the Sierpinski topology then the maps of hom$(Z,S)$
must be in one-to-one correspondence with the open sets of $Z$. We saw this above. But what if we were to give $S$ the indiscrete topology? Or the discrete topology? Then the set hom$(Z,S)$ will change accordingly. Indeed, suppose $S$ has the indiscrete topology. Then every function $Z\to S$ is continuous! In other words hom$(Z,S)$ is the set of all functions $f:Z\to S$, and the number of such $f$ may exceed the number of open subsets of $Z$.
Suppose now that $S$ has the discrete topology. Then the number of continuous functions $f:Z\to S$ may be
smaller than the number of open subsets of $Z$. To see this, suppose $U\subset Z$ is open and consider the function $f_U:Z\to S$ that we defined earlier. This map is continuous if and only if $f_U^{-1}(\{1\})=U$ is open---which it is---AND if $f_U^{-1}(\{0\})=Z\smallsetminus U$ is open---which it may not be.
This is a helpful observation: whenever a space $X$ (any $X$, not just $S$) has the indiscrete topology, it's easy to be continuous! In fact,
every function to $X$ will be continuous. But if $X$ has the discrete topology, it's much harder for functions to $X$ to be continuous. Fewer open sets in $X$ = more continuous functions to $X$. More open sets in $X$ = fewer continuous functions to $X$.
#2: hom(Z,S) dictates what the topology on S must be.
To see this, suppose $\tau$ is any topology on the set $S$. If for any space $Z$ the maps of hom$(Z,S)$ are in one-to-one correspondence with the open subsets of $Z$, I claim the topology $\tau$ MUST be the Sierpinski topology. But this follows from our conversation above! If $\tau$ is the Sierpinski topology, then the claim holds. But if $\tau$ is either the indiscrete or the discrete topology, we've just seen that the continuous maps may
not be in bijection with the open subsets of $Z$.
Pretty cool, huh? The Sierpinski topology is
just the right topology to put on a two-point space so that continuous maps from any space $Z$ correspond exactly with the open sets of $Z$. And the punchline is that we can play a similar game on any topological space $X$ to discover that
the data of the topology on a space X is "encoded" in hom(Z,X)
(or hom$(X,Z)$)! In other words, a topological space is completely determined by the continuous functions to it.
Digging Deeper
Those with a little knowledge of category theory may like to know that today's theme---and the theme of our last post---is a consequence of the following proposition (which is not terribly hard to prove and is actually fun to try!):
Proposition: Let $\mathsf{C}$ be a locally small category. The following are equivalent:
#1: $f:X\to Y$ is an isomorphism.
#2: For all objects $Z$ in $\mathsf{C}$, $f^*:\text{hom}(Y,Z)\to\text{hom}(X,Z)$ is an isomorphism.
#3: For all objects $Z$ in $\mathsf{C}$, $f_*:\text{hom}(Z,X)\to\text{hom}(Z,Y)$ is an isomorphism.
Here $f^*$ is called the
pullback of $f$ and it sends a morphism $g\in\text{hom}(Y,Z)$ to the morphism $g\circ f\in\text{hom}(X,Z)$. Likewise, $f_*$ is the pushforward of $f$ and it sends a morphism $h\in\text{hom}(Z,X)$ to the morphism $f\circ h\in\text{hom}(Z,Y)$.
For concreteness, it might help to think of $X$ and $Y$ as topological spaces and hom$(Z,X)$ as the set of continuous functions to $X$ from $Z$. The proposition then tells us that two spaces $X$ and $Y$ have the same topology (i.e. are homeomorphic)
if and only if the set of continuous functions to (or from) $X$ is the same as the set of continuous functions to (or from) $Y$. That is to say, a topological space $X$ is completely determined by the set of all continuous maps to it!
But notice the proposition holds for
any (locally small) category! Thus we recover the statement: an object is completely determined by the set of morphisms to (or from) it. In short,
*If you're concerned with the word "set" here, note that the category of all topological spaces and continuous functions is a
locally small category; that is, for any two spaces X and Y, the collection of continuous functions between them is a bona fide set.
**In fact, if we view
Z and S as plain sets (and not topological spaces), notice all functions from Z to S look like indicator functions on the subsets of Z. So the set of all functions from Z to S is in bijection with the set of all subsets of Z. This is a helpful thing to keep in mind.
***This is actually a key fact. What's important about the Sierpinski topology---or
any topology, for that matter---is not so much its definition but rather the property that the space possesses once it's endowed with the topology.
|
Volume ($V$) is defined as the amount of space occupied by a three-dimensional object, measured in cubic units. The SI unit of volume is the cubic metre ($\text{m}^{3}$). It is a scalar quantity. Density ($\rho$) is defined as the mass ($m$) of a substance per unit volume. The SI unit of density is the kilogram per cubic metre ($\text{kg m}^{-3}$). It is a scalar quantity. Another common unit of density is $\text{g cm}^{-3}$, where $1000 \text{ kg m}^{-3} = 1 \text{ g cm}^{-3}$. $$\rho = \frac{m}{V}$$ The density of a substance does not change as you move from place to place, since mass and volume do not depend on the gravitational acceleration at the object's location.
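The unit conversion quoted above can be checked in one line (a small derivation added here for clarity):
$$1 \text{ g cm}^{-3} = \frac{10^{-3} \text{ kg}}{10^{-6} \text{ m}^{3}} = 1000 \text{ kg m}^{-3}$$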
Examine the figure above.
Why does the cork float and the rock sink in water? Ans: The density of a substance determines whether it will float or sink in different liquids (or gases).
The cork is less dense than water $\rightarrow$ it floats in water.
The rock is denser than water $\rightarrow$ it sinks in water.
Self-Test Questions A block of concrete with dimensions 0.4 m, 0.3 m and 0.1 m has a density of $2500 \text{ kg m}^{-3}$. Calculate the mass of the block.
The volume of the block is given by:
$$\begin{aligned} V &= 0.4 \times 0.3 \times 0.1 \\ &= 0.012 \text{ m}^{3} \end{aligned}$$
From the density equation ($\rho = \frac{m}{V}$), we have:
$$\begin{aligned} m &= \rho V \\ &= 2500 \times 0.012 \\ &= 30 \text{ kg} \end{aligned}$$
|
Osaka Journal of Mathematics, Volume 42, Number 3 (2005), 633-651.
Asymptotic behavior of least energy solutions to a four-dimensional biharmonic semilinear problem
Abstract
In this paper, we study the following fourth order elliptic problem $(E_p)$: \begin{eqnarray*} (E_p) \left \{ \begin{array}{l} \Delta^2 u = u^p \quad \mbox{in} \ \Omega, \\ u > 0 \quad \mbox{in} \ \Omega, \\ u |_{\partial\Omega} = \Delta u |_{\partial\Omega} = 0 \end{array} \right. \end{eqnarray*} where $\Omega$ is a smooth bounded domain in $\mathbf{R}^4$, $\Delta^2 = \Delta\Delta$ is a biharmonic operator and $p >1$ is any positive number.
We investigate the asymptotic behavior as $p \to \infty$ of the least energy solutions to $(E_p)$. Combining the arguments of Ren-Wei [8] and Wei [10], we show that the least energy solutions remain bounded uniformly in $p$, and on convex bounded domains, they have one or two ``peaks'' away from the boundary. If only one peak point appears, we further prove that the peak point must be a critical point of the Robin function of $\Delta^2$ under the Navier boundary condition.
Article information
Source: Osaka J. Math., Volume 42, Number 3 (2005), 633-651.
Dates: First available in Project Euclid: 21 July 2006
Permanent link to this document: https://projecteuclid.org/euclid.ojm/1153494506
Mathematical Reviews number (MathSciNet): MR2166726
Zentralblatt MATH identifier: 1165.35352
Citation
Takahashi, Futoshi. Asymptotic behavior of least energy solutions to a four-dimensional biharmonic semilinear problem. Osaka J. Math. 42 (2005), no. 3, 633--651. https://projecteuclid.org/euclid.ojm/1153494506
|
When you fit a generalized linear model (GLM) in R and call
confint on the model object, you get confidence intervals for the model coefficients. But you also get an interesting message:
Waiting for profiling to be done...
What's that all about? What exactly is being profiled? Put simply, it's telling you that it's calculating a
profile likelihood ratio confidence interval.
The typical way to calculate a 95% confidence interval is to multiply the standard error of an estimate by some normal quantile such as 1.96 and add/subtract that product to/from the estimate to get an interval. In the context of GLMs, we sometimes call that a Wald confidence interval.
Another way to determine an upper and lower bound of plausible values for a model coefficient is to find the minimum and maximum value of the set of all coefficients that satisfy the following:
\[-2\log\left(\frac{L(\beta_{0}, \beta_{1}|y_{1},…,y_{n})}{L(\hat{\beta_{0}}, \hat{\beta_{1}}|y_{1},…,y_{n})}\right) < \chi_{1,1-\alpha}^{2}\]
Inside the parentheses is a ratio of
likelihoods. In the denominator is the likelihood of the model we fit. In the numerator is the likelihood of the same model but with different coefficients. (More on that in a moment.) We take the log of the ratio and multiply by -2. This gives us a likelihood ratio test (LRT) statistic. This statistic is typically used to test whether a coefficient is equal to some value, such as 0, with the null likelihood in the numerator (model without the coefficient, that is, with it set to 0) and the alternative or estimated likelihood in the denominator (model with the coefficient). If the LRT statistic is less than \(\chi_{1,0.95}^{2} \approx 3.84\), we fail to reject the null: the coefficient is statistically not much different from 0. That means the likelihood ratio is close to 1, i.e. the likelihood of the model without the coefficient is almost as high as that of the model with it. On the other hand, if the ratio is small, the likelihood of the model without the coefficient is much smaller than the likelihood of the model with the coefficient. Taking the log and multiplying by -2 then produces an LRT statistic larger than 3.84, and we reject the null.
Now in the formula above, we are seeking all such coefficients in the numerator that would make it a true statement. You might say we're “profiling” many different null values and their respective LRT test statistics.
Do they fit the profile of a plausible coefficient value in our model? The smallest value we can get without violating the condition becomes our lower bound, and likewise with the largest value. When we're done we'll have a range of plausible values for our model coefficient that gives us some indication of the uncertainty of our estimate.
Let's load some data and fit a binomial GLM to illustrate these concepts. The following R code comes from the help page for
confint.glm. This is an example from the classic Modern Applied Statistics with S.
ldose is a dosing level and
sex is self-explanatory.
SF is number of successes and failures, where success is number of dead worms. We're interested in learning about the effects of dosing level and sex on number of worms killed. Presumably this worm is a pest of some sort.
# example from Venables and Ripley (2002, pp. 190-2.)
ldose <- rep(0:5, 2)
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
sex <- factor(rep(c("M", "F"), c(6, 6)))
SF <- cbind(numdead, numalive = 20-numdead)
budworm.lg <- glm(SF ~ sex + ldose, family = binomial)
summary(budworm.lg)
##
## Call:
## glm(formula = SF ~ sex + ldose, family = binomial)
##
## Deviance Residuals:
##      Min        1Q    Median        3Q       Max
## -1.10540  -0.65343  -0.02225   0.48471   1.42944
##
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -3.4732     0.4685  -7.413 1.23e-13 ***
## sexM          1.1007     0.3558   3.093  0.00198 **
## ldose         1.0642     0.1311   8.119 4.70e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 124.8756  on 11  degrees of freedom
## Residual deviance:   6.7571  on  9  degrees of freedom
## AIC: 42.867
##
## Number of Fisher Scoring iterations: 4
The coefficient for
ldose looks significant. Let's determine a confidence interval for the coefficient using the
confint function. We call
confint on our model object,
budworm.lg and use the
parm argument to specify that we only want to do it for
ldose:
confint(budworm.lg, parm = "ldose")
## Waiting for profiling to be done...
##     2.5 %    97.5 %
## 0.8228708 1.3390581
We get our “waiting” message though there really was no wait. If we fit a larger model and request multiple confidence intervals, then there might actually be a waiting period of a few seconds. The lower bound is about 0.82 and the upper bound about 1.34. We might say every increase in dosing level increases the log odds of killing worms by at least 0.8. We could also exponentiate to get a CI for an odds ratio estimate:
exp(confint(budworm.lg, parm = "ldose"))
## Waiting for profiling to be done...
##    2.5 %   97.5 %
## 2.277027 3.815448
The odds of “success” (killing worms) are at least 2.3 times higher at one dosing level versus the next lower dosing level.
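As an aside (my own addition, not part of the original post), we can compare this with the Wald interval described at the top. Base R computes it directly from the estimate and its standard error:
# Wald (normal-approximation) interval, i.e. estimate +/- 1.96 * std. error
confint.default(budworm.lg, parm = "ldose")
This should return roughly 0.807 to 1.321 -- close to, but not identical to, the profile likelihood interval.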
To better understand the profile likelihood ratio confidence interval, let's do it “manually”. Recall the denominator in the formula above was the likelihood of our fitted model. We can extract that with the
logLik function:
den <- logLik(budworm.lg)
den
## 'log Lik.' -18.43373 (df=3)
The numerator was the likelihood of a model with a
different coefficient. Here's the likelihood of a model with a coefficient of 1.05:
num <- logLik(glm(SF ~ sex + offset(1.05*ldose), family = binomial))
num
## 'log Lik.' -18.43965 (df=2)
Notice we used the
offset function. That allows us to fix the coefficient to 1.05 and not have it estimated.
Since we already extracted the
log likelihoods rather than the likelihoods themselves, we subtract them instead of dividing. Remember this rule from algebra?
\[\log\frac{M}{N} = \log M – \log N\]
So we subtract the denominator from the numerator, multiply by -2, and check if it's less than 3.84, which we calculate with
qchisq(p = 0.95, df = 1)
-2*(num - den)
## 'log Lik.' 0.01184421 (df=2)
-2*(num - den) < qchisq(p = 0.95, df = 1)
## [1] TRUE
It is. 1.05 seems like a plausible value for the
ldose coefficient. That makes sense since the estimated value was 1.0642. Let's try it with a larger value, like 1.5:
num <- logLik(glm(SF ~ sex + offset(1.5*ldose), family = binomial))
-2*(num - den) < qchisq(p = 0.95, df = 1)
## [1] FALSE
FALSE. 1.5 seems too big to be a plausible value for the
ldose coefficient.
Now that we have the general idea, we can program a
while loop to check different values until we exceed our threshold of 3.84.
cf <- budworm.lg$coefficients[3] # fitted coefficient 1.0642
cut <- qchisq(p = 0.95, df = 1) # about 3.84
e <- 0.001 # increment to add to coefficient
LR <- 0 # to kick start our while loop
while(LR < cut){
cf <- cf + e
num <- logLik(glm(SF ~ sex + offset(cf*ldose), family = binomial))
LR <- -2*(num - den)
}
(upper <- cf)
##    ldose
## 1.339214
To begin we save the original coefficient to
cf, store the cutoff value to
cut, define our increment of 0.001 as
e, and set
LR to an initial value of 0. In the loop we increment our coefficient estimate which is used in the
offset function in the estimation step. There we extract the log likelihood and then calculate
LR. If
LR is less than
cut (3.84), the loop starts again with a new coefficient that is 0.001 higher. We see that our upper bound of 1.339214 is very close to what we got above using
confint (1.3390581). If we set
e to smaller values we'll get closer.
We can find the LR profile lower bound in a similar way. Instead of adding the increment we subtract it:
cf <- budworm.lg$coefficients[3] # reset cf
LR <- 0 # reset LR
while(LR < cut){
cf <- cf - e
num <- logLik(glm(SF ~ sex + offset(cf*ldose), family = binomial))
LR <- -2*(num - den)
}
(lower <- cf)
##    ldose
## 0.822214
The result, 0.822214, is very close to the lower bound we got from
confint (0.8228708).
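As a variation (my own sketch, not how confint() actually does it), we can treat each bound as the root of a function and let uniroot() do the stepping for us:
# LRT statistic at a fixed ldose coefficient, minus the chi-square cutoff;
# its roots are the profile likelihood confidence limits
lr_gap <- function(b){
  num <- logLik(glm(SF ~ sex + offset(b*ldose), family = binomial))
  as.numeric(-2*(num - den)) - cut
}
# bracket each root on either side of the fitted estimate (about 1.0642)
(lower <- uniroot(lr_gap, interval = c(0.5, 1.0642))$root)
(upper <- uniroot(lr_gap, interval = c(1.0642, 2))$root)
Both roots should land very near the confint() values of 0.8228708 and 1.3390581, without our having to choose a step size.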
This is a
very basic implementation of calculating a likelihood ratio confidence interval. It is only meant to give a general sense of what's happening when you see that message
Waiting for profiling to be done.... I hope you found it helpful. To see how R does it, enter
getAnywhere(profile.glm) in the console and inspect the code. It's not for the faint of heart.
I have to mention the book Analysis of Categorical Data with R, from which I gained a better understanding of the material in this post. The authors have kindly shared their R code at the following web site if you want to have a look: http://www.chrisbilder.com/categorical/
To see how they “manually” calculate likelihood ratio confidence intervals, go to the following R script and see the section “Examples of how to find profile likelihood ratio intervals without confint()”: http://www.chrisbilder.com/categorical/Chapter2/Placekick.R
|
Please help me solve the following Integral :
$$I = \int\dfrac{\sqrt{\cos 2x}}{\cos x}\, dx$$
Note: Anyone and everyone please post your solution; this question appeared in my class test.
Note by Rishu Jaar 1 year, 11 months ago
$$\dfrac{\sqrt{\cos 2x}}{\cos x} = \dfrac{\cos 2x}{\cos x \sqrt{\cos 2x}}$$
$$\dfrac{2\cos^2 x -1}{\cos x \sqrt{2\cos^2 x -1}} = \dfrac{2\cos x }{\sqrt{2\cos^2 x -1}}-\dfrac{1}{\cos x \sqrt{2\cos^2 x -1}}$$
For the first term, setting $\sin x = u$ will convert it into a standard integral. For the second term, first set $\sin x = u$ and then set $u=\dfrac{1}{t}$. This will convert it into a standard integral.
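Carrying those substitutions through (my own completion of the hint above, valid on an interval where $\sin x>0$ and $\cos 2x>0$):
$$\int\dfrac{2\cos x}{\sqrt{2\cos^2 x-1}}\,dx=\int\dfrac{2\,du}{\sqrt{1-2u^2}}=\sqrt{2}\arcsin\left(\sqrt{2}\sin x\right)+C_1$$
$$\int\dfrac{dx}{\cos x\sqrt{2\cos^2 x-1}}=\int\dfrac{du}{(1-u^2)\sqrt{1-2u^2}}=-\int\dfrac{t\,dt}{(t^2-1)\sqrt{t^2-2}}=-\arctan\dfrac{\sqrt{\cos 2x}}{\sin x}+C_2$$
so altogether
$$I=\sqrt{2}\arcsin\left(\sqrt{2}\sin x\right)+\arctan\dfrac{\sqrt{\cos 2x}}{\sin x}+C.$$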
Hey, nice approach, thank you bro. I wonder if you are this good from self-study or do you go to some good coaching? $\ddot\smile$
Self study. Although I have taken DLP from Resonance.
@A Former Brilliant Member – Very nice , you joined any test series?
@Rishu Jaar – Yes , Resonance.
@A Former Brilliant Member – Oh that's great , what is your ranking (in around 8000 students i guess) , just curious , actually here in mathura , i joined only a local coaching(they were frauds) which was disastrous , and thus i want to check my level of preparedness.(i am not much confident)
@Rishu Jaar – My average All resonance rank is around 150. Trying to improve this figure.
@A Former Brilliant Member – Awesome rank bro, your best ? My best in PACE was APR 220 .
@Rishu Jaar – My best is ARR 33.
@A Former Brilliant Member – That's amazing , with ranks around those you can guarantee a top 1000 AIR in JEE for sure , i guess? Are you revising the course , i mean is your syllabus finished entirely?
@Rishu Jaar – No, I do have some topics to be covered. Will finish them soon.
@A Former Brilliant Member – Ok nice , Is the resonance DLP material enough or do you refer some books, what is your revision strategy?
@A Former Brilliant Member – It was nice talking to you , please do share your strategy when you are free , or owing to privacy we could chat at slack if you want.
@A Former Brilliant Member – Mayank , you must be revising as the final stage for JEE , Or do you have to complete some left course topics like me ? I would be very thankful to you if you share your stats. and strategy , so that i can analyze my self better.
@Md Zuhair , @Mayank Singhal , guys any help?
Let me try ... :)
|
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox: \documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
|
Find the centre of mass of a uniform cone of height $h$ and radius $R$. Let the density of the cone be $\rho$.
It is obvious from the diagram that the $x$ and $y$ components of the centre of mass of the cone are 0:
$$\begin{aligned} x_{CM} &= 0 \\ y_{CM} &= 0 \end{aligned}$$
Hence, we just need to find $z_{CM}$. We will need to use the equation for the centre of mass:
$$z_{CM} = \frac{1}{M} \int z \, dm$$
During the computation, we will need this relation (obtained from similar triangles as seen from the diagram):
$$\frac{r}{z} = \frac{R}{h}$$
We have to find $dm$:
$$\begin{aligned} dm &= \rho \left( \pi r^{2} \right) \, dz \\ &= \rho \pi \frac{R^{2}}{h^{2}} z^{2} \, dz \end{aligned}$$
Then, to find $M$:
$$\begin{aligned} M &= \int \, dm \\ &= \int\limits_{0}^{h} \rho \pi \frac{R^{2}}{h^{2}} z^{2} \, dz \\ &= \rho \pi \frac{R^{2}}{h^{2}} \frac{z^{3}}{3} \Big|_{0}^{h} \\ &= \rho \pi R^{2} \frac{h}{3} \end{aligned}$$
Now, we have enough information to compute $z_{CM}$.
$$\begin{aligned} z_{CM} &= \frac{1}{M} \int z \, dm \\ &= \frac{1}{M} \int\limits_{0}^{h} z \rho \pi \frac{R^{2}}{h^{2}} z^{2} \, dz \\ &= \frac{\rho \pi}{M} \frac{R^{2}}{h^{2}} \frac{z^{4}}{4} \Big|_{0}^{h} \\ &= \frac{3}{\rho \pi R^{2}h} \rho \pi \frac{R^{2}}{h^{2}}\frac{h^{4}}{4} \\ &= \frac{3}{4} h \end{aligned}$$
That is, the centre of mass lies on the axis at a distance of $\frac{3}{4}h$ from the apex (equivalently, $\frac{1}{4}h$ above the base).
|
Ineffable cardinal
Ineffable cardinals were introduced by Jensen and Kunen in [1] and arose out of their study of $\diamondsuit$ principles. An uncountable regular cardinal $\kappa$ is ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ is stationary. Equivalently an uncountable regular $\kappa$ is ineffable if and only if for every function $F:[\kappa]^2\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^2$ is constant [1]. This second characterization strengthens a characterization of weakly compact cardinals which requires that there exist such an $H$ of size $\kappa$.
If $\kappa$ is ineffable, then $\diamondsuit_\kappa$ holds and there cannot be a slim $\kappa$-Kurepa tree [1] . A $\kappa$-Kurepa tree is a tree of height $\kappa$ having levels of size less than $\kappa$ and at least $\kappa^+$-many branches. A $\kappa$-Kurepa tree is slim if every infinite level $\alpha$ has size at most $|\alpha|$.
Ineffable cardinals and the constructible universe
Ineffable cardinals are downward absolute to $L$. In $L$, an inaccessible cardinal $\kappa$ is ineffable if and only if there are no slim $\kappa$-Kurepa trees. Thus, for inaccessible cardinals, in $L$, ineffability is completely characterized using slim Kurepa trees. [1]
Ramsey cardinals are stationary limits of completely ineffable cardinals and are weakly ineffable, but the least Ramsey cardinal is not ineffable. Ineffable Ramsey cardinals are limits of Ramsey cardinals, because ineffable cardinals are $Π^1_2$-indescribable and being Ramsey is a $Π^1_2$-statement. The least strongly Ramsey cardinal also is not ineffable, but super weakly Ramsey cardinals are ineffable. $1$-iterable (=weakly Ramsey) cardinals are weakly ineffable and stationary limits of completely ineffable cardinals. The least $1$-iterable cardinal is not ineffable. [2, 4]
Relations with other large cardinals
* Measurable cardinals are ineffable and stationary limits of ineffable cardinals.
* $\omega$-Erdős cardinals are stationary limits of ineffable cardinals, but not ineffable since they are $\Pi_1^1$-describable. [3]
* Ineffable cardinals are $\Pi^1_2$-indescribable. [1]
* Ineffable cardinals are limits of totally indescribable cardinals. [1] ([5] for proof)
* For a cardinal $κ=κ^{<κ}$, $κ$ is ineffable iff it is normal 0-Ramsey. [6]
Weakly ineffable cardinal
Weakly ineffable cardinals (also called almost ineffable) were introduced by Jensen and Kunen in [1] as a weakening of ineffable cardinals. An uncountable regular cardinal $\kappa$ is weakly ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ has size $\kappa$. If $\kappa$ is weakly ineffable, then $\diamondsuit_\kappa$ holds.
* Weakly ineffable cardinals are downward absolute to $L$. [1]
* Weakly ineffable cardinals are $\Pi_1^1$-indescribable. [1]
* Ineffable cardinals are limits of weakly ineffable cardinals.
* Weakly ineffable cardinals are limits of totally indescribable cardinals. [1] ([5] for proof)
* For a cardinal $κ=κ^{<κ}$, $κ$ is weakly ineffable iff it is genuine 0-Ramsey. [6]
Subtle cardinal
Subtle cardinals were introduced by Jensen and Kunen in [1] as a weakening of weakly ineffable cardinals. An uncountable regular cardinal $\kappa$ is subtle if for every $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ and every closed unbounded $C\subseteq\kappa$ there are $\alpha<\beta$ in $C$ such that $A_\beta\cap\alpha=A_\alpha$. If $\kappa$ is subtle, then $\diamondsuit_\kappa$ holds.
* Subtle cardinals are downward absolute to $L$. [1]
* Weakly ineffable cardinals are limits of subtle cardinals. [1]
* Subtle cardinals are stationary limits of totally indescribable cardinals. [1, 7]
* The least subtle cardinal is not weakly compact as it is $\Pi_1^1$-describable.
* $\alpha$-Erdős cardinals are subtle. [1]
* If $δ$ is a subtle cardinal, then:
  * the set of cardinals $κ$ below $δ$ that are strongly uplifting in $V_δ$ is stationary; [8]
  * for every class $\mathcal{A}$, in every club $B ⊆ δ$ there is $κ$ such that $\langle V_δ, \mathcal{A} ∩ V_δ \rangle \models \text{“$κ$ is $\mathcal{A}$-shrewd”}$ (the set of cardinals $κ$ below $δ$ that are $\mathcal{A}$-shrewd in $V_δ$ is stationary); [9]
  * there is an $\eta$-shrewd cardinal below $δ$ for all $\eta < δ$. [9]
Ethereal cardinal
Ethereal cardinals were introduced by Ketonen in [10] (information in this section from there) as a weakening of subtle cardinals.
Definition:
A regular cardinal $κ$ is called ethereal if for every club $C$ in $κ$ and sequence $(S_α|α < κ)$ of sets such that for $α < κ$, $|S_α| = |α|$ and $S_α ⊆ α$, there are elements $α, β ∈ C$ such that $α < β$ and $|S_α ∩ S_β| = |α|$. I.e., symbolically(?):
$$κ \text{ – ethereal} \overset{\text{def}}{⟺} \left( κ \text{ – regular} ∧ \left( \forall_{C \text{ – club in $κ$}} \forall_{S : κ → \mathcal{P}(κ)} \left( \forall_{α < κ} |S_α| = |α| ∧ S_α ⊆ α \right) ⟹ \left( \exists_{α, β ∈ C} α < β ∧ |S_α ∩ S_β| = |α| \right) \right) \right)$$
Properties:
* Every subtle cardinal is obviously ethereal.
* Every ethereal cardinal is weakly inaccessible.
* A strongly inaccessible cardinal is ethereal if and only if it is subtle.
* If $κ$ is ethereal and $2^\underset{\smile}{κ} = κ$, then $♢_κ$ ($♢(κ)$, a variant of the diamond principle) holds (where $2^\underset{\smile}{κ} = \bigcup \{ 2^α | α < κ \}$ is the weak power of $κ$).
To be expanded.
$n$-ineffable cardinal
The $n$-ineffable cardinals for $2\leq n<\omega$ were introduced by Baumgartner in [11] as a strengthening of ineffable cardinals. A cardinal is $n$-ineffable if for every function $F:[\kappa]^n\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^n$ is constant.
* $2$-ineffable cardinals are exactly the ineffable cardinals.
* An $n+1$-ineffable cardinal is a stationary limit of $n$-ineffable cardinals. [11]
A cardinal $\kappa$ is totally ineffable if it is $n$-ineffable for every $n$.
A $1$-iterable cardinal is a stationary limit of totally ineffable cardinals (this follows from material in [4]).
Helix
(Information in this subsection come from [7] unless noted otherwise.)
For $k \geq 1$ we define:
* $\mathcal{P}(x)$ is the powerset (set of all subsets) of $x$.
* $\mathcal{P}_k(x)$ is the set of all subsets of $x$ with exactly $k$ elements.
* $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$ is regressive iff for all $A \in \mathcal{P}_k(\lambda)$, we have $f(A) \subseteq \min(A)$.
* $E$ is $f$-homogenous iff $E \subseteq \lambda$ and for all $B,C \in \mathcal{P}_k(E)$, we have $f(B) \cap \min(B \cup C) = f(C) \cap \min(B \cup C)$.
* $\lambda$ is $k$-subtle iff $\lambda$ is a limit ordinal and for all clubs $C \subseteq \lambda$ and regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous $A \in \mathcal{P}_{k+1}(C)$.
* $\lambda$ is $k$-almost ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous $A \subseteq \lambda$ of cardinality $\lambda$.
* $\lambda$ is $k$-ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous stationary $A \subseteq \lambda$.
$0$-subtle, $0$-almost ineffable and $0$-ineffable cardinals can be defined as “uncountable regular cardinals” because for $k \geq 1$ all three properties imply being uncountable regular cardinals.
* For $k \geq 1$, if $\kappa$ is a $k$-ineffable cardinal, then $\kappa$ is $k$-almost ineffable and the set of $k$-almost ineffable cardinals is stationary in $\kappa$.
* For $k \geq 1$, if $\kappa$ is a $k$-almost ineffable cardinal, then $\kappa$ is $k$-subtle and the set of $k$-subtle cardinals is stationary in $\kappa$.
* For $k \geq 1$, if $\kappa$ is a $k$-subtle cardinal, then the set of $(k-1)$-ineffable cardinals is stationary in $\kappa$.
* For $k \geq n \geq 0$, all $k$-ineffable cardinals are $n$-ineffable, all $k$-almost ineffable cardinals are $n$-almost ineffable and all $k$-subtle cardinals are $n$-subtle.
Completely ineffable cardinal
Completely ineffable cardinals were introduced in [5] as a strengthening of ineffable cardinals. Define that a collection $R\subseteq P(\kappa)$ is a stationary class if
* $R\neq\emptyset$,
* for all $A\in R$, $A$ is stationary in $\kappa$,
* if $A\in R$ and $B\supseteq A$, then $B\in R$.
A cardinal $\kappa$ is completely ineffable if there is a stationary class $R$ such that for every $A\in R$ and $F:[A]^2\to2$, there is $H\in R$ such that $F\upharpoonright [H]^2$ is constant.
Relations:
* Completely ineffable cardinals are downward absolute to $L$. [5]
* Completely ineffable cardinals are limits of ineffable cardinals. [5]
* There are stationarily many completely ineffable, greatly Erdős cardinals below any Ramsey cardinal. [13]
* The following are equivalent: [6]
  * $κ$ is completely ineffable.
  * $κ$ is coherent $<ω$-Ramsey.
  * $κ$ has the $ω$-filter property.
* Every completely ineffable is a stationary limit of $<ω$-Ramseys. [6]
* Completely Ramsey cardinals and $ω$-Ramsey cardinals are completely ineffable. [6]
* $ω$-Ramsey cardinals are limits of completely ineffable cardinals. [2]
References
1. Jensen, Ronald and Kunen, Kenneth. Some combinatorial properties of $L$ and $V$. Unpublished, 1969.
2. Holy, Peter and Schlicht, Philipp. A hierarchy of Ramsey-like cardinals. Fundamenta Mathematicae 242:49-74, 2018.
3. Jech, Thomas J. Set Theory. Third millennium edition, revised and expanded. Springer-Verlag, Berlin, 2003.
4. Gitman, Victoria. Ramsey-like cardinals. The Journal of Symbolic Logic 76(2):519-540, 2011.
5. Abramson, Fred and Harrington, Leo and Kleinberg, Eugene and Zwicker, William. Flipping properties: a unifying thread in the theory of large cardinals. Ann. Math. Logic 12(1):25-58, 1977.
6. Nielsen, Dan Saattrup and Welch, Philip. Games and Ramsey-like cardinals. 2018.
7. Friedman, Harvey M. Subtle cardinals and linear orderings. 1998.
8. Hamkins, Joel David and Johnstone, Thomas A. Strongly uplifting cardinals and the boldface resurrection axioms. 2014.
9. Rathjen, Michael. The art of ordinal analysis. 2006.
10. Ketonen, Jussi. Some combinatorial principles. Trans. Amer. Math. Soc. 188:387-394, 1974.
11. Baumgartner, James. Ineffability properties of cardinals. I. Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. I, pp. 109-130. Colloq. Math. Soc. János Bolyai, Vol. 10, Amsterdam, 1975.
12. Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
13. Sharpe, Ian and Welch, Philip. Greatly Erdős cardinals with some generalizations to the Chang and Ramsey properties. Ann. Pure Appl. Logic 162(11):863-902, 2011.
|
QCD & Heavy Ions
Conveners: Radja Boughezal (Argonne National Laboratory), Daniel Tapia Takaki (University of Kansas), Salvatore Rappoccio (The State University of New York SUNY (US)), Olga Evdokimov (University of Illinois at Chicago (US))
Description
parallel sessions
Collimated jets of hadrons serve as precision tests of the standard model and in particular QCD. For example, jet observables have been applied extensively to constrain parton distribution functions and to probe the hot and dense medium created in heavy-ion collisions, as well as in the search for physics beyond the standard model. In this talk, I will mainly focus on recent higher order...
The latest results from LHC on the vector boson (W or Z) production in association with jets will be discussed.
Collisions of ultra-relativistic heavy ions are the only known way of experimentally studying a new form of QCD matter at high temperatures and energy densities. In such collisions, a state of deconfined quarks and gluons known as the quark-gluon plasma (QGP) is produced. Hard probes or partons from early-time interactions with large momentum transfer are produced prior to the formation of the...
The critical phenomena of strongly interacting matter are presented in the random fluctuation walk model. The phase transitions are considered in systems where the Critical Point (CP) is a distinct singular point whose existence is dictated by the dynamics of conformal symmetry breaking.
The physical approach to the effective CP is predicted through the influence of fluctuations of...
The critical point in particle physics at high temperature is studied through the ideal gas of scalars, the dilatons, in a model that implies the spontaneous breaking of an approximate scale symmetry. The dynamical system of identical particles weakly interacting with each other is considered. We found the critical temperature as a function of the dilaton mass, and the fluctuation of particle...
One of the key signatures of collectivity in heavy-ion collisions is the appearance of a ridge structure over a wide pseudorapidity interval. Recently it was also found in small collision systems such as proton-proton or proton-ion collisions; its origin there is still under debate. In this work, contributions from geometry fluctuations in the initial state in pp collisions to the ridge structure...
The observed thermalization in particle production at colliders,
usually inferred from the presence of the exponential component in the transverse momentum distributions of produced particles and the thermal abundances of the hadron yields, is proposed as due to quantum entanglement inside the proton wave functions in the proton-proton collisions. This presentation will show our...
The BESIII Experiment at the Beijing Electron Positron Collider (BEPC2) collected large data samples for electron-positron collisions with center-of-mass energies above 4 GeV. The analysis of these samples has resulted in a number of surprising discoveries, such as the discoveries of the electrically charged "Zc" structures, which, if resonant, cannot be accommodated in the traditional charm quark and...
I will review recent advances of tools that help improve the performance and analytic understanding of boosted object tagging and background mitigation. I will also discuss the fast developing fields of heavy ion jet substructure studies and machine learning applications at high energy colliders. In the end I will discuss a new class of collinear drop observables which allows systematic...
Theoretical calculations for jet substructure observables with accuracy beyond leading-logarithm have recently become available. Such well-understood observables provide novel probes of QCD in a new, collinear regime at the LHC. In this talk, measurements by the ATLAS, CMS, ALICE and LHCb collaborations of such jet substructure observables are presented. These measurements may be performed in...
In 2010 the proton charge radius was extracted for the first time from muonic hydrogen, a bound state of a muon and a proton. The value obtained was five standard deviations away from the regular hydrogen extraction. Taken at face value, this might be an indication of a new force in nature coupling to muons, but not to electrons. It also forces us to reexamine our understanding of the...
A sample of 1.3 billion J/psi events accumulated in the BESIII detector offers a unique opportunity to study light hadron spectroscopy and decays. In this presentation, recent BESIII results on the production of light hadrons will be highlighted, including amplitude analyses of J/psi radiative and hadronic decays for a variety of channels. Results on light meson decays will also be reported,...
Electromagnetic form factors of baryons provide fundamental information about their structure and dynamics and provide rigorous tests of non-perturbative QCD as well as phenomenological models. However, results in the time-like region have large uncertainties. Production cross sections and form factors of hyperons have only barely been explored. Based on 500 pb^-1 of data collected with the...
The sPHENIX experiment is a major upgrade to the PHENIX experiment that is currently under construction at Brookhaven National Laboratory. It will begin collecting pp, pA, and AA data in early 2023, enabling high statistics measurements of jet modification factors, upsilon suppression, and heavy flavor production. These measurements will complement those from the LHC experiments and help...
The sPHENIX experiment at Brookhaven is a second-generation RHIC
experiment designed to measure jets and the upsilon states in heavy ion collisions with a combination of calorimetry and precision tracking. A compact tungsten-scintillating fiber electromagnetic calorimeter and a steel-scintillator hadronic calorimeter, both read out with silicon photomultipliers, are central to the...
Measurements of the beam-energy dependence of anisotropic flow can provide important constraints for initial state models and for precision extraction of the chemical potential ($\mu_B$) and temperature ($T$) dependence of the QCD matter's specific shear viscosity $\eta/s$. It has been predicted that the $\mu_B$ and $T$ dependence of $\eta/s$ could be sensitive to the critical endpoint (CEP) in...
The observation of multiparticle correlations in heavy-ion collisions has long been related to collective behaviour in the formed medium. Recent results both at RHIC and the LHC provide strong arguments for the formation of such a medium in smaller systems.
RHIC has a broad program to study the physics of small systems by systematically varying its size and energy for a better understanding of the...

Entanglement and related subjects of quantum information science have become a hot topic in QCD. We review how and why this comes from a new point of view with fresh opportunities for experimental and theoretical investigation. The early history of QCD was dominated by makeshift models and case-by-case perturbative calculations. We now have new organizing principles and experimental...
Studies of small collision systems are essential to our understanding of the physics of strongly interacting matter at high temperatures. Proton-proton and proton-lead collisions with high particle multiplicities exhibit striking similarities to large nucleus-nucleus collisions, including apparent collective motion, quarkonium suppression, and similar hadrochemistry. An overview of recent...
In recent years the STAR Collaboration collected a large sample
of ultra-peripheral heavy-ion collisions. The photoproduction of J/Psi vector mesons is sensitive to the gluon content of the target nucleon or nucleus. We will present results from a statistically large sample of J/Psi production in Au+Au collisions. A significant result comes from the study of the pT distributions, which...
The $p_\perp$ dependence of the nuclear modification factor $R_{\rm AA}$ measured in XeXe and PbPb collisions at the LHC exhibits a universal shape, which can be very well reproduced in a simple energy loss model based on the BDMPS medium-induced gluon spectrum. The scaling is observed for various hadron species ($h^\pm$, $D$, $J/\psi$) in different centrality classes and at all colliding...
In relativistic heavy-ion collisions, correlations of particles with opposite quantum numbers provide insight into general charge creation mechanisms, the time scales of quark production, collective motion of the Quark Gluon Plasma (QGP), and re-scattering in the hadronic phase. The longitudinal and azimuthal widths of general charge balance functions for charged pion ($\pi^{\pm}$), kaon...
Measurements of heavy flavor hadrons and quarkonia in heavy ion collisions provide information about the in-medium color interaction inside the quark-gluon plasma (QGP), the high-density QCD medium created in heavy-ion collisions. Quarkonia, which are pairs of a heavy quark and an anti-quark, could be used to study the modification of color potential between the pairs. Heavy quarks are...
A rich set of quarkonia and heavy-flavour results has been observed by LHCb and ALICE in pPb and PbPb collisions collected at 5 and 8.16 TeV nucleon-nucleon centre-of-mass energies. In the case of PbPb collisions, heavy hadrons constitute unique probes for the hot and dense QCD medium produced in heavy-ion collisions: the Quark-Gluon Plasma. This talk presents production measurements of beauty hadrons and...
Recent phenomenological developments in the field of jet substructure have enabled new approaches to utilizing the internal energy density distributions of jets in physics analysis in ATLAS. In this poster, two applications of such substructure observables are presented. First, the performance of new jet substructure observables as inputs to multivariate techniques designed to identify jets...
Quantum entanglement has been proposed as the origin of thermalization in proton-proton collisions at the Large Hadron Collider (Phys. Rev. D98, 054007 (2018)). We present results of the entanglement entropy from charged-particle multiplicity data in nucleon-nucleon collisions at collider energies. These are compared with expected values from gluon distribution functions as well as...
|
In the last lesson, we saw that the derivative was the rate of change.
We saw multiple ways of calculating this rate of change.
Finally, we said that when we have a function $f(x)$, we can calculate the derivative by knowing the starting point and the change in our input, $x$:
$$ \frac{f(x_1 + \Delta x) - f(x_1)}{\Delta x} $$
So we saw previously that the derivative is the rate of change of our function. We express this as $ f'(x) = \frac{\Delta f}{\Delta x}$. So far we have only calculated the derivatives of linear functions. As we'll see, things become trickier when working with more complicated functions.
For example, let's imagine that we are coaching our runner to perform in a track meet.
We may want to know how well our track star does at one part of the race, say the starting point, versus another point later in the race. Then we will know what to focus on in practice. We can represent our track star's distance travelled through time with the function $f(x) = x^2$:
import plotly
from plotly.offline import iplot, init_notebook_mode
init_notebook_mode(connected=True)
from graph import plot, build_layout
from calculus import function_values_trace

x_squared = [(1, 2)]
range_twenty_four = list(range(0, 25))
six = list(map(lambda x: x/4.0, range_twenty_four))
trace_x_squared = function_values_trace(x_squared, six)
layout = build_layout(x_axis = {'title': 'number of seconds'}, y_axis = {'title': 'distance'})
plot([trace_x_squared], layout)
The graph shows that from seconds zero through six, our track runner gets faster over time.
Now if we want to see how quickly our track star is moving at second number two, as opposed to some other second, what would we do? Well, even if we knew nothing about derivatives, we would likely get a stopwatch and use it to calculate the speed at second 2. Let's say that we start our stopwatch at second 2 and stop our stopwatch one second later.
from calculus import delta_traces

delta_layout = build_layout(x_axis = {'title': 'number of seconds'}, y_axis = {'title': 'distance'})
x_squared_delta_traces = delta_traces(x_squared, 2, line_length = 2, delta_x = 1)
plot([trace_x_squared, *x_squared_delta_traces], delta_layout)
As the graph above shows, we measure the change at second two by starting our stopwatch at second 2 and stopping it one second later. So let's turn this into our formula for calculating a derivative:
$$ f'(x) = \frac{f(x + \Delta x) - f(x)}{\Delta x} $$
we set our starting point at $x = 2$ and our change in input at $\Delta x = 1$. Plugging in these values, we have:
$$ f'(2) = \frac{f(2 + 1) - f(2)}{ 1} = \frac{f(3) - f(2)}{1} $$
So our rate of change at second number 2, with a $\Delta x = 1$ is calculated by subtracting the function's output at second 2 from the function's output at second 3 and dividing by delta x, one.
Simplifying our calculation of $f'(x)$ further by calculating the outputs at x = 2 and x = 3 we have:
$$f'(2) = \frac{9 - 4}{1} = \frac{5}{1} = 5 $$
from graph import plot

x_squared = [(1, 2)]
x_squared_delta_traces = delta_traces(x_squared, 2, line_length = 4, delta_x = 1)
layout = build_layout(x_axis = {'title': 'number of seconds', 'range': [1.5, 4]}, y_axis = {'title': 'distance', 'range': [0, 12]})
plot([trace_x_squared, *x_squared_delta_traces], layout)
Take a close look at the straight line in the graph above. That straight line is supposed to be the rate of change of the function at the point $x = 2$. And it comes close. But it doesn't exactly line up. Our orange line quickly begins to move above the blue line, indicating that it has a faster rate of change than the blue line at $x = 2$. This means that our calculation that $f'(2) = 5$ is a little high.
Here is the problem:
x_squared_delta_traces = delta_traces(x_squared, 2, line_length = 4, delta_x = 1)
layout = build_layout(x_axis = {'title': 'number of seconds', 'range': [1.5, 3.5]}, y_axis = {'title': 'distance', 'range': [0, 12]})
plot([trace_x_squared, *x_squared_delta_traces], layout)
In other words,
the runner would tell us that we are not capturing their speed at precisely second two:
This is because in between the clicks of our stopwatch from seconds two to three, our runner is getting faster and while we are supposed to be calculating his speed just at second 2, our calculation includes his increase in speed from seconds two to three.
Therefore, the orange line has a larger rate of change than the blue line because we have included this increase in speed at second three.
A mathematician would make the same point that we are not actually calculating the derivative:
Our derivative means we are calculating how fast a function is changing at any given moment, and precisely at that moment. And unlike before, where our functions were linear, here the rate of change of our function is always changing. The larger our value of $\Delta x$, the less our derivative reflects the rate of change at just that point.
If you were holding a stopwatch and someone asked you to calculate their speed at second number 2, how could you be more accurate? Well, you would want to decrease the change in seconds. Of course, our runner could continue to protest and say that we are still influenced by the speed at other times.
However, the mathematician has a solution to this. To calculate the rate of change at precisely one point, the solution is to use our imagination. We calculate the derivative with a $\Delta x$ of 1, then calculate it again with a $\Delta x$ of .1, then again with a $\Delta x$ of .01, then again with a $\Delta x$ of .001. Our derivative calculation should show convergence on a single number as our $\Delta x$ approaches zero, and that number is our derivative.
**That is, the derivative of a function is the change in the function's output across $\Delta x$, as $\Delta x$ approaches zero.**
In this example, by decreasing $\Delta x$ we can see a fairly clear pattern.
| $\Delta x$ | $\frac{\Delta y}{\Delta x}$ |
| --- | --- |
| 1 | 5 |
| .1 | 4.1 |
| .01 | 4.01 |
| .001 | 4.001 |
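We can reproduce this table numerically. The short sketch below is an illustrative addition (plain Python, not our plotting helpers): it evaluates the difference quotient of $f(x) = x^2$ at $x = 2$ for shrinking values of $\Delta x$.

def f(x):
    return x**2

x = 2
for delta_x in [1, .1, .01, .001]:
    # difference quotient (f(x + delta_x) - f(x)) / delta_x
    print(delta_x, (f(x + delta_x) - f(x)) / delta_x)
# 1      5.0
# 0.1    4.1000...
# 0.01   4.0100...
# 0.001  4.0010...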
Another way to see how we approach the derivative is by seeing how a line becomes more tangent to the curve as delta x decreases.
Tangent to the curve means that our line is just touching the curve.
The more that a line is tangent to the curve at a point, the more its slope matches the derivative.
Ok, let's get a sense of what we mean by tangent to the curve. The orange line below is a line whose slope is calculated by using our derivative function, with delta x = 1. As you can see, it is *not tangent to our function, $f(x)$*, as it does not just touch the blue line, but rather touches it in two places.
from calculus import derivative_trace

tangent_x_squared_trace = derivative_trace(x_squared, 2, 2, 1)
x_squared = [(1, 2)]
range_twenty_four = list(range(0, 25))
six = list(map(lambda x: x/4.0, range_twenty_four))
trace_x_squared = function_values_trace(x_squared, six)
layout_not_tangent = build_layout(x_axis = {'range': [0, 4]}, y_axis = {'range': [0, 10]})
plot([trace_x_squared, tangent_x_squared_trace], layout_not_tangent)
If our orange line had the same slope, or rate of change, as our function at $x = 2$, it would just touch the blue line. We know from above that we get closer to the rate of change of the function as we decrease delta x in our derivative formula.
Let's look again using a smaller $\Delta x$.
Below are the plots of our lines using our derivative formula for when $\Delta x = 1$ and when $\Delta x = .1$
from graph import plot_figure, make_subplots
from calculus import derivative_trace

range_twelve = list(range(0, 12))
three = list(map(lambda x: x/4.0, range_twelve))
trace_x_squared_to_three = function_values_trace(x_squared, three)
tangent_x_squared = derivative_trace(x_squared, 2, 1, 1)
tangent_x_squared_delta_tenth = derivative_trace(x_squared, 2, 1, .1)
subplots = make_subplots([trace_x_squared_to_three, tangent_x_squared], [trace_x_squared_to_three, tangent_x_squared_delta_tenth])
subplots
The graphs above illustrate when $\Delta x = 1$ and when $\Delta x = .1$. The graph with the smaller $\Delta x$ has a line that is closer to tangent.
Let's keep decreasing $\Delta x$ and see if we can approach the derivative even further.
tangent_x_squared_delta_hundredth = derivative_trace(x_squared, 2, 1, .01)
tangent_x_squared_delta_thousandth = derivative_trace(x_squared, 2, 1, .001)
subplots = make_subplots([trace_x_squared_to_three, tangent_x_squared_delta_hundredth], [trace_x_squared_to_three, tangent_x_squared_delta_thousandth])
$\Delta x = .01$ and $\Delta x = .001$
As you can see, as $\Delta x$ approaches zero, $f'(2)$ approaches $4$. This convergence around one number as we change another number is the *limit*.
So to describe the above, at the point $x = 2 $, the
limit of $\frac{\Delta y}{\Delta x} $ -- that is the number that $\frac{\Delta y}{\Delta x} $ settles upon as $ \Delta x $ approaches zero -- is 4. We can abbreviate this into the following expression:
When $x = 2,\lim_{\Delta x\to0} \frac{\Delta y}{\Delta x} = 4 $.
Or, better yet, we can update and correct our definition of derivative to be:
$$ f'(x) = \lim_{ \Delta x \to0} \frac{f(x + \Delta x) - f(x)}{\Delta x} $$
So the derivative is the change in output as we
just nudge our input. That is how we calculate an instantaneous rate of change. We can determine the runner's speed at precisely second number 2 by calculating the runner's speed over shorter and shorter periods of time, to see what that number approaches.
One final definition before we go. Instead of $\Delta x$, mathematicians sometimes use the variable $h$ to describe the change in inputs. So replacing our $\Delta x$ symbols with $h$'s we have:
$$ f'(x) = \lim_{ h\to0} \frac{f(x + h) - f(x)}{h} $$
Above is the formula for the derivative for all types of functions, linear and nonlinear.
In this section, we learned about derivatives. A derivative is the instantaneous rate of change of a function. To calculate the instantaneous rate of change of a function, we see the value that $\frac{\Delta y}{\Delta x} $ approaches as $\Delta x $ approaches zero. This way, we are not calculating the rate of change of a function across a given distance. Instead we are finding the rate of change at a specific moment.
|
Use Residue theorem to compute contour integral $$\int_C \frac{4e^z}{\sin z} dz$$
I need help figuring out singularities that are within the circle $|z|= 4$. I am stuck at that part. Thanks in advance.
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community
Since $\sin(n\pi)=0$ and $e^{n\pi}\neq0$ for $n=0,\pm1,\pm2,\dots$, and $\lim_{z\to n\pi}\frac{e^z}{\sin(z)}=\infty$, the points $z=n\pi$ are simple poles of $f(z)=\frac{e^z}{\sin(z)}$. Of these, $z=-\pi,0,\pi$ lie inside the circle $|z|=4$. At a simple pole, $\operatorname{Res}_{z=n\pi}\frac{e^z}{\sin z}=\frac{e^{n\pi}}{\cos(n\pi)}=(-1)^n e^{n\pi}$. Then by the residue theorem
$\int_{|z|=4}\frac{e^z}{\sin(z)}dz=2\pi i\big(\operatorname{Res}_{z=-\pi}f(z)+\operatorname{Res}_{z=0}f(z)+\operatorname{Res}_{z=\pi}f(z)\big)=2\pi i(1-e^{\pi}-e^{-\pi})=2\pi i(1-2\cosh(\pi))$. Multiplying by the factor of $4$ in the original integrand gives $\int_C \frac{4e^z}{\sin z}\,dz=8\pi i(1-2\cosh(\pi))$.
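For what it's worth, the pole locations and the value above can be sanity-checked numerically. The sketch below is my own addition (Python/numpy, not part of the original answer): it parametrizes the contour $|z|=4$ and compares a trapezoid-rule value with $8\pi i(1-2\cosh\pi)$.

import numpy as np

# Parametrize |z| = 4 as z = 4*exp(i*theta) and integrate 4*e^z/sin(z) dz.
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
z = 4.0 * np.exp(1j * theta)
dz = 4j * np.exp(1j * theta)                  # dz/dtheta
integrand = 4.0 * np.exp(z) / np.sin(z) * dz

numeric = np.trapz(integrand, theta)
expected = 8j * np.pi * (1.0 - 2.0 * np.cosh(np.pi))
print(numeric, expected)                       # both approximately -557.5i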
|
Show that the following subsets of $(\mathbb{R},\tau_e)$ are homeomorphic.
$(0,1)$ , $(0,+\infty)$, $(-\infty,+\infty)$.
(i) If I take $\varphi:(\mathbb{R},\tau_e) \rightarrow (0,+\infty)$, $x \mapsto e^x$ this map is continuous, bijective, and the inverse is continuous and so is a homeomorphism.
(ii) $\varphi:(0,1) \rightarrow (0,+\infty)$, $x \mapsto \frac{x}{1-|x|}$
I think this should work, in fact if $x=\frac{1}{N}$, $N>0$, I have $\varphi(x)=\frac{1}{N-1}$ and for $N$ large enough this is really close to $0$.
It's a bijective and continuous map and the inverse is also continuous.
(iii) $\varphi:(0,1) \rightarrow (-\infty,+\infty)$... I don't know how to proceed... any hint?
thanks :)
|
I am trying to prove the need of a convexity adjustment to a forward rate by calculating the next expectation:
\begin{align*} P(t_0, T_s)E^{T_s}\big(L(T_s, T_s, T_e) \mid \mathcal{F}_{t_0}\big). \end{align*}
Where $E^{T_s}$ denotes the expectation under the $T_s$-forward measure with $P(t,T_s)$ as numéraire, $t_0< T_s < T_e$, and $L(T_s, T_s, T_e)$ is the Libor rate observed at $T_s$ for the period between $T_s$ and $T_e$.
To do it I would like to apply a change of measure so that I can calculate the expectation under a T*-measure with $P(t,T_e)$ as numéraire.
I know to do this change of measure I need to know the Radon-Nikodym derivative, so I need something like this:
\begin{align*} P(t_0, T_s)E^{T_s}\big(L(T_s, T_s, T_e) \mid \mathcal{F}_{t_0}\big)=P(t_0, T_s)E^{T_e}\big(\frac{dQ^{T_s}}{dQ^{T_e}}L(T_s, T_s, T_e) \mid \mathcal{F}_{t_0}\big) \end{align*} How do I know what value of $\frac{dQ^{T_s}}{dQ^{T_e}}$ changes from $Q^{T_s}$ to $Q^{T_e}$?
From what I've seen so far, the Radon-Nikodym derivative is easy to get when you have the distribution under which you are trying to calculate the expectation. For example, if $X \sim N(0,1)$ with density function $f(x)$, I can calculate $E[X]$ the usual integral way, or I can introduce a measure $G$ where $g(x)$ can be the density function of, say, $X \sim N(0,100)$, and it would be the same if I calculate $E_g[X\frac{f(x)}{g(x)}]$; so here my Radon-Nikodym derivative is the ratio of two density functions. I've seen different publications in which this is used to change from one measure to another, but I still don't seem to understand how you know what value to use for each case, especially in the case I'm asking about now, since I'm not sure of the density functions I should be using.
The only thing that cross through my mind is that $L(T_s, T_s, T_e)$ is a martingale under $Q^{T_e}$. So perhaps I should assign it this dynamics $dL(t, T_s, T_e) = \sigma_s L(t, T_s, T_e) d W_t^s$ from there I can get a density function which would be like the $g(x)$ in my example. Then if I can find how $L(T_s, T_s, T_e)$ dynamics are under $Q^{T_s}$ maybe I could get the $f(x)$ and the division would be my Radon-Nikodym?
Much help appreciated
|
Consider a non-negative continuous process $X = \left (X_t \right)_ {t\geq 0}$ satisfying $ \mathbb E \left \{ \bar X \right\}< \infty $ (where $ \bar X =\sup _{0\leq t \leq T} X_t $) and its Snell envelope
$$ \hat {X_\theta} = \underset {\tau \in \mathcal T _{\theta,T} } {\text{ess sup}} \ \mathbb E \left\{ X_\tau | \mathcal F_\theta \right \}$$
I'd like to understand how justify the following inequality:
$$\mathbb E \left\{ \sup_{0\leq t \leq T} \hat X_t^p\right \} \leq \mathbb E \left\{ \sup_{0\leq t \leq T} \bar X_t^p\right \} $$
where $\bar X_t = \mathbb E \left\{ \bar X | \mathcal F_t \right \}$
|
Solve in integers: $x^3+x^2-16=2^y$.
my attempt:
of course $y\ge 0$, then $2^y\ge 1$, so $x\ge 1$.
for $y=0,1,2,3$ there is no good $x$.
so $y\ge 4$ and we have equation $x^2(x+1)=16(2^z+1)$, where $z=y-4\ge 0$.
what now?
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community
Solve in integers: $x^3+x^2-16=2^y$.
my attempt:
of course $y\ge 0$, then $2^y\ge 1$, so $x\ge 1$.
for $y=0,1,2,3$ there is no good $x$.
so $y\ge 4$ and we have equation $x^2(x+1)=16(2^z+1)$, where $z=y-4\ge 0$.
what now?
$x^3+x^2=2^y+16$. The RHS is positive, so $x^2(x+1)>0\iff x\ge 1$. Since $2^y$ is an integer, we have $y$ is a positive integer too ($y=0$ won't give a solution).
$x,y$ are positive integers.
$x^3+x^2-16=2^y$. You see a cubic polynomial on the LHS that could easily be strictly bounded between two consecutive cubes (namely $x^3$ and $(x+1)^3$) for most values of $x$, making it impossible for it to be a cube itself. So if $2^y$ is a cube, i.e. $3\mid y$, we're done. And indeed it is a cube.
$2^y\equiv 1,2,4\pmod{7}$ for $y\equiv0,1,2\pmod 3$, respectively.
$x^3+x^2-16\equiv 5,0,3,6,1,1,5\pmod{7}$ for $x\equiv 0,1,2,3,4,5,6\pmod{7}$, respectively.
The only common residue is $1$, so $y\equiv 0\pmod{3}$. This implies $x^3+x^2-16$ is a cube.
But $x^3<x^3+x^2-16<(x+1)^3,\forall x\ge 5$, so $1\le x\le 4$, which only give $(x,y)=(4,6)$.
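As a side check (not part of the argument above), a short brute-force search confirms that $(4,6)$ is the only solution in a large range; this is just an illustrative Python sketch, not a proof.

# Search for x with x^3 + x^2 - 16 equal to a power of two.
solutions = []
for x in range(1, 10_000):
    v = x**3 + x**2 - 16
    if v > 0 and v & (v - 1) == 0:          # v is a power of 2
        solutions.append((x, v.bit_length() - 1))
print(solutions)                             # [(4, 6)]  ->  x = 4, y = 6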
This answer only covers even values of $x$:
For $x$ even, $x^2$ is even and $x+1$ is odd. So $\gcd(16, x+1) = 1$. So $16 \mid x^2$. So $4 \mid x$. So $x = 4k$ for some $k$.
$16k^2(4k+1)=16(2^z+1)$
$k^2(4k+1)=2^z+1$
If $k$ is even, then the LHS is even while $2^z + 1$ is odd for $z \geq 1$, and $z = 0$ would force $k^2(4k+1) = 2$, which is impossible; so we have a contradiction. So $k$ must be odd. So $k = 2t+1$ for some $t$.
$(2t+1)^2 (8t+5) = 2^z + 1$
$(4t^2 + 4t + 1)(8t + 5) = 2^z + 1$
$32t^3 + 52t^2 + 28t + 4 = 2^z$
We see $z \geq 2$, since the LHS equals $4(8t^3 + 13t^2 + 7t + 1)$. Let $w = z - 2$.
$8t^3 + 13t^2 + 7t + 1 = 2^w$
For $w \geq 1$, reducing mod $2$ would require $t^2 + t + 1 \equiv 0 \pmod{2}$, which has no solution in $t$, since $t^2 + t = t(t+1)$ is always even.
So $w = 0$.
$8t^3 + 13t^2 + 7t = 0$
$t(8t^2 + 13t + 7) = 0$
Take the discriminant of the quadratic, so we get $169 - 224 < 0$.
So $t = 0$ and $w = 0$.
So for $x$ even, $x = 4k = 4(2t + 1) = 4$ and $y = z + 4 = w + 6 = 6$.
You can also bound $x$, giving it in terms of $y$. Go back to $x^3+x^2−16=2^y$. Consider $f(x,y) = x^3 + x^2 - 16 - 2^y$. Evaluate $f(2^{y/3},y) = 2^y + 2^{2y/3} - 16 - 2^y = 2^{2y/3} - 16$, which equals $0$ when $y = 6$ and is bigger than $0$ when $y > 6$. Evaluate $f(2^{y/3} - 1, y)$, which you can check here is less than $0$ for all $y$. This shows that if $x$ is an integer and $f(x,y) = 0$ then $x = \left \lfloor 2^{y/3} \right \rfloor$.
As far as working with $x = \left \lfloor 2^{y/3} \right \rfloor$, you can write it as $x = \left \lfloor 2^q2^{r/3} \right \rfloor$ where $0 \leq r < 3$ which can be expressed as $\left \lfloor 10_2^q 10_2^{r/3} \right \rfloor$ in binary. As $q$ increases (which increases $y$ by 3), the value of $x$ either gets doubled, or is double plus one. I'm not sure if that's useful.
How to finish this?
Not my answer. Copied.
$x^2(x+1) = 2^y + 16$
Since LHS is an integer, then we must have $y \ge 0$.
Since RHS is a positive integer, then we must have $x \ge 1$. $x^2(x+1)$ is strictly increasing for $x \ge 0$. $2^y + 16$ is strictly increasing for $y \ge 0$.
$$\begin{array}{c|c|c} \hline n & n^2(n+1) & 2^n + 16 \\ \hline 0 & 0 & 17 \\ 1 & 2 & 18 \\ 2 & 12 & 20 \\ 3 & 36 & 24 \\ 4 & 80 & 32 \\ 5 & 150 & 48 \\ 6 & 252 & 80\\ \hline \end{array}$$
Note that the table indicates that $(x,y) = (4,6)$ is a solution.
So any solution involving $y \ge 7$ will require $x \ge 5$.
We will show that there is no solution for $y \ge 7$.
So we can assume now that $x \ge 5$ and $y \ge 7$.
\begin{align} x^2(x+1) &= 2^y + 16\\ x^2(x+1) &= 16(2^{y-4} + 1)\\ \end{align}
Note that $2^{y-4}+1$ must be an odd integer.
So if $x$ is an odd integer, $\gcd(x+1,2^{y-4}+1) = 1$.

Hence $x+1 \mid 16$. Remembering that $x \ge 5$, we must have $x = 7$ or $x = 15$.

Case 1: $x = 7$
\begin{align} 16(2^{y-4}+1) &= 392\\ 2(2^{y-4}+1) &= 49 & \text{Has no solution.}\\ \end{align}
Case 2: $x = 15$
\begin{align} 16(2^{y-4}+1) &= 3600\\ 2^{y-4}+1 &= 225 \\ 2^{y-4} &= 224 \\ 2^{y-4} &= 32 \times 7 & \text{Has no solution.}\\ \end{align}
So if $x$ is an even integer, $\gcd(x^2,2^{y-4}+1) = 1$.
$\quad$ So $x^2 | 16$. This can't happen since $x \ge 5$.
|
I am having trouble with figuring out what my initial conditions should be for a simple finite difference algorithm I wrote in Matlab.
Specifically, I'm trying to show that the regular 1-dimensional wave equation $$\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}$$ looks like diffusion with some drift when the wave speed $c$ is chosen randomly to be $0$ or $1$ on every interval of length $\epsilon$. (The point is to let the length of the intervals $\epsilon \to 0$.)
Normally for a simple equation I would simply have constant boundary conditions where $$u(x,0)=1$$ and watch the wave go on an interval $[0,1]$, but obviously if this is the case, the implicit finite difference equation becomes explicit, and I get nonsense as I mess around with $c$.
My question is, what would suitable initial conditions for this problem be?
(Also, pointing me in the direction of some literature that may help with this problem would be greatly appreciated)
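For concreteness, here is a minimal sketch (in Python/numpy rather than Matlab, purely illustrative, with all parameter values my own choices) of one common choice of initial data: a localized Gaussian displacement with zero initial velocity, homogeneous Dirichlet boundaries, and a wave speed drawn independently as 0 or 1 on each interval of length epsilon. This is not a definitive answer, just the kind of setup that avoids the trivial constant-state case.

import numpy as np

# Explicit (leapfrog) scheme for u_tt = c(x)^2 u_xx on [0, 1].
nx, nt = 401, 2000
dx = 1.0 / (nx - 1)
dt = 0.5 * dx                                  # CFL is satisfied since c <= 1
x = np.linspace(0.0, 1.0, nx)

eps = 0.05                                     # interval length for the random speed
n_cells = int(round(1.0 / eps))
speeds = np.random.choice([0.0, 1.0], size=n_cells)
cell = np.minimum((x / eps).astype(int), n_cells - 1)
c = speeds[cell]
r2 = (c * dt / dx) ** 2

u_prev = np.exp(-((x - 0.5) ** 2) / (2 * 0.02 ** 2))   # Gaussian initial displacement
u = u_prev.copy()
# First step uses zero initial velocity: u_t(x, 0) = 0.
u[1:-1] = u_prev[1:-1] + 0.5 * r2[1:-1] * (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2])

for _ in range(nt):
    u_next = np.zeros_like(u)                  # boundaries stay at zero
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2[1:-1] * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print(u[::50])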
|
I try to show that $A_n \times \mathbb{Z} /2 \mathbb{Z} \ \ncong \ S_n$ for $n \geq 3$.
It is not hard to show the statement for $n=3$.
We have $$ A_3 \times \mathbb{Z} /2 \mathbb{Z} \ \cong \ \mathbb{Z} /3 \mathbb{Z} \times \mathbb{Z} /2 \mathbb{Z} \ \cong \ \mathbb{Z} /6 \mathbb{Z} $$ an abelian group, while $(123)(12) = (13) \neq (23)= (12)(123)$ shows that $S_3$ is not abelian.
Here I try to solve the exercise in general. I guess that the only homomorphism$$\phi \quad : \quad A_n \times \mathbb{Z}/2\mathbb{Z} \longrightarrow S_n $$
can be made by mapping $(\sigma,1) \mapsto \sigma$ and $(\sigma,-1) \mapsto (\tau \sigma)$ for some odd permutation $\tau$. I don't know how I could prove that. Please give me a hint to go on.
|
I understand that if the Hamiltonian does not depend on the time, the Schrödinger Equation becomes separable, so you get
$$ H \psi(x) = E \psi(x) $$
and
$$ \Psi(x,t) = \psi(x)\exp\left(-\frac{\imath E}{\hbar}t\right). $$
But $-\frac{\imath E}{\hbar}t$ is a purely imaginary number, so $$ \left|\exp\left(-\frac{\imath E}{\hbar}t\right)\right| = 1 $$
If that is correct,
then how can there be any probability density flow in time? The $\exp$ term is only changing the phase of $\psi$, but does not contribute anything to its absolute value.
What did I understand wrong?
|
The expectation clearly is proportional to the product of the squared scale factors $\sigma_{11}\sigma_{22}$. The constant of proportionality is obtained by standardizing the variables, which reduces $\Sigma$ to the correlation matrix with correlation $\rho = \sigma_{12}/\sqrt{\sigma_{11}\sigma_{22}}$.
Assuming bivariate normality, then according to the analysis at https://stats.stackexchange.com/a/71303 we may change variables to
$$X_1 = X,\ X_2 = \rho X + \left(\sqrt{1-\rho^2}\right) Y$$
where $(X,Y)$ has a standard (uncorrelated) bivariate Normal distribution, and we need only compute
$$\mathbb{E}\left(X^2 (\rho X + \left(\sqrt{1-\rho^2}\right) Y)^2\right) = \mathbb{E}(\rho^2 X^4 + (1-\rho^2)X^2 Y^2 + c X^3 Y)$$
where the precise value of the constant $c$ does not matter. ($Y$ is the residual upon regressing $X_2$ against $X_1$.) Using the
univariate expectations for the standard normal distribution
$$\mathbb{E}(X^4)=3,\ \mathbb{E}(X^2) = \mathbb{E}(Y^2)=1,\ \mathbb{E}Y=0$$
and noting that $X$ and $Y$ are
independent yields
$$\mathbb{E}(\rho^2 X^4 + (1-\rho^2)X^2 Y^2 + c X^3 Y) = 3\rho^2 + (1-\rho^2) + 0 = 1 + 2\rho^2.$$
Multiplying this by $\sigma_{11}\sigma_{22}$ gives
$$\mathbb{E}(X_1^2 X_2^2) = \sigma_{11}\sigma_{22} + 2\sigma_{12}^2.$$
The same method applies to finding the expectation of any polynomial in $(X_1,X_2)$, because it becomes a polynomial in $(X, \rho X + \left(\sqrt{1-\rho^2}\right)Y)$ and that, when expanded, is a polynomial in the independent normally distributed variables $X$ and $Y$. From
$$\mathbb{E}(X^{2k}) = \mathbb{E}(Y^{2k}) = \frac{(2k)!}{k!2^k} = \pi^{-1/2} 2^k\Gamma\left(k+\frac{1}{2}\right)$$
for integral $k\ge 0$ (with all odd moments equal to zero by symmetry) we may derive
$$\mathbb{E}(X_1^{2p}X_2^{2q}) = (2q)!2^{-p-q}\sum_{i=0}^q \rho^{2i}(1-\rho^2)^{q-i}\frac{(2p+2i)!}{(2i)! (p+i)! (q-i)!}$$
(with all other expectations of monomials equal to zero). This is proportional to a hypergeometric function (almost by definition: the manipulations involved are not deep or instructive),
$$\frac{1}{\pi} 2^{p+q} \left(1-\rho ^2\right)^q \Gamma \left(p+\frac{1}{2}\right) \Gamma \left(q+\frac{1}{2}\right) \, _2F_1\left(p+\frac{1}{2},-q;\frac{1}{2};\frac{\rho ^2}{\rho ^2-1}\right).$$
The hypergeometric function times $\left(1-\rho ^2\right)^q$ is seen as a multiplicative correction for nonzero $\rho$.
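As a quick empirical check of the basic identity (my addition, not part of the derivation above), one can compare a Monte Carlo estimate of $\mathbb{E}(X_1^2X_2^2)$ against $\sigma_{11}\sigma_{22}+2\sigma_{12}^2$; the numpy sketch below uses arbitrary illustrative values for $\Sigma$.

import numpy as np

# Monte Carlo check of E[X1^2 X2^2] = s11*s22 + 2*s12^2 for a bivariate normal.
rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.5]])                 # illustrative covariance matrix
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=1_000_000)

estimate = np.mean(X[:, 0] ** 2 * X[:, 1] ** 2)
exact = Sigma[0, 0] * Sigma[1, 1] + 2 * Sigma[0, 1] ** 2
print(estimate, exact)                          # both close to 2.0*1.5 + 2*0.8**2 = 4.28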
|
Please use this identifier to cite or link to this item:
http://buratest.brunel.ac.uk/handle/2438/570
Title: On the Gierer-Meinhardt System with Saturation
Authors: Winter, M; Wei, J
Keywords: Saturation; Nonlocal Eigenvalue Problem; Stability
Issue Date: 2004
Publisher: World Scientific
Citation: Comm Contemp Math 6 (2004), 403-430
Abstract: We consider the following shadow Gierer-Meinhardt system with saturation: $$\left\{\begin{array}{l} A_t=\epsilon^2 \Delta A -A + \frac{A^2}{ \xi (1+k A^2)} \ \ \mbox{in} \ \Omega \times (0, \infty),\\ \tau \xi_t= -\xi +\frac{1}{|\Omega|} \int_\Omega A^2\,dx \ \ \mbox{in} \ (0, +\infty),\\ \frac{\partial A}{\partial \nu} =0 \ \mbox{on} \ \partial \Omega\times(0,\infty), \end{array} \right.$$ where $\epsilon>0$ is a small parameter, $\tau \geq 0$, $k>0$ and $\Omega \subset \mathbb{R}^n$ is a smooth bounded domain. The case $k=0$ has been studied by many authors in recent years. Here we give some sufficient conditions on $k$ for the existence and stability of stable spiky solutions. In the one-dimensional case we have a complete answer to the stability behavior. Central to our study are a parameterized ground-state equation and the associated nonlocal eigenvalue problem (NLEP) which is solved by functional analysis arguments and the continuation method.
URI: http://bura.brunel.ac.uk/handle/2438/570
Appears in Collections: Dept of Mathematics Research Papers
Mathematical Sciences
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.
|
Learning Objective
Convert from one unit to another unit of the same type.
In Section 2.2, we showed some examples of how to replace initial units with other units of the same type to get a numerical value that is easier to comprehend. In this section, we will formalize the process.
Consider a simple example: how many feet are there in 4 yards? Most people will almost automatically answer that there are 12 feet in 4 yards. How did you make this determination? Well, if there are 3 feet in 1 yard and there are 4 yards, then there are 4 × 3 = 12 feet in 4 yards.
This is correct, of course, but it is informal. Let us formalize it in a way that can be applied more generally. We know that 1 yard (yd) equals 3 feet (ft):
\[1\, yd = 3\, ft\]
In math, this expression is called an
equality. The rules of algebra say that you can change (i.e., multiply or divide or add or subtract) the equality (as long as you do not divide by zero) and the new expression will still be an equality. For example, if we divide both sides by 2, we get
\[\dfrac{1}{2}\,yd= \dfrac{3}{2}\, ft\]
We see that one-half of a yard equals 3/2, or one and a half, feet-something we also know to be true, so the above equation is still an equality. Going back to the original equality, suppose we divide both sides of the equation by 1 yard (number
and unit):
\[\dfrac{1\,yd}{1\,yd}= \dfrac{3\,ft}{1\,yd}\]
The expression is still an equality, by the rules of algebra. The left fraction equals 1. It has the same quantity in the numerator and the denominator, so it must equal 1. The quantities in the numerator and denominator cancel, both the number
and the unit:
\[\dfrac{\cancel{1\,yd}}{\cancel{1\,yd}}= \dfrac{3\,ft}{1\,yd}\]
When everything cancels in a fraction, the fraction reduces to 1:
\[1= \dfrac{3\,ft}{1\,yd}\]
Conversion Factors
We have an expression that equals 1.
\[\dfrac{3\,ft}{1\,yd}=1\]
This is a strange way to write 1, but it makes sense: 3 ft equal 1 yd, so the quantities in the numerator and denominator are the same quantity, just expressed with different units.
The expression $\dfrac{3\,ft}{1\,yd}$ is called a conversion factor, and it is used to formally change the unit of a quantity into another unit. (The process of converting units in such a formal fashion is sometimes called
dimensional analysis or the factor label method.)
To see how this happens, let us start with the original quantity: 4 yd
Now let us multiply this quantity by 1. When you multiply anything by 1, you do not change the value of the quantity. Rather than multiplying by just 1, let us write 1 as
\[\dfrac{3\,ft}{1\,yd}\]
\[4\,yd\times \dfrac{3\,ft}{1\,yd}\]
The 4 yd term can be thought of as $\dfrac{4\,yd}{1}$, that is, as a fraction with 1 in the denominator. The yd unit in the numerator cancels with the yd unit in the denominator of the conversion factor.
That is all that we can cancel. Now, multiply and divide all the numbers to get the final answer:
\[\dfrac{4\times 3\, ft}{1}= \dfrac{12\,ft}{1}= 12\,ft\]
Again, we get an answer of 12 ft, just as we did originally. But in this case, we used a more formal procedure that is applicable to a variety of problems.
How many millimeters are in 14.66 m? To answer this, we need to construct a conversion factor between millimeters and meters and apply it correctly to the original quantity. We start with the definition of a millimeter, which is:
\[1\,mm= \dfrac{1}{1000}\,m\]
The 1/1000 is what the prefix
milli- means. Most people are more comfortable working without fractions, so we will rewrite this equation by bringing the 1,000 into the numerator of the other side of the equation:
\[1000\,mm=1\,m\]
Now we construct a conversion factor by dividing one quantity into both sides. But now a question arises: which quantity do we divide by? It turns out that we have two choices, and the two choices will give us different conversion factors, both of which equal 1:
\[\dfrac{1000\,mm}{1000\,mm}= \dfrac{1\,m}{1000\,mm} \]
or
\[\dfrac{1000\,mm}{1\,m}= \dfrac{1\,m}{1\,m}\]
\[1=\dfrac{1\,m}{1000\,mm}\]
or
\[\dfrac{1000\,mm}{1\,m}=1\]
Which conversion factor do we use? The answer is based on
what unit you want to get rid of in your initial quantity. The original unit of our quantity is meters, which we want to convert to millimeters. Because the original unit is assumed to be in the numerator, to get rid of it, we want the meter unit in the denominator; then they will cancel. Therefore, we will use the second conversion factor. Canceling units and performing the mathematics, we get:
\[14.66m\times \dfrac{1000\,mm}{1\,m}= 14660\,mm\]
Note how \(m\) cancels, leaving \(mm\), which is the unit of interest.
The ability to construct and apply proper conversion factors is a very powerful mathematical technique in chemistry. You need to master this technique if you are going to be successful in this and future courses.
Example \(\PageIndex{1}\)
Convert 35.9 kL to liters. Convert 555 nm to meters. Solution
We will use the fact that 1 kL = 1,000 L. Of the two conversion factors that can be defined, the one that will work is 1000L/ 1kL. Applying this conversion factor, we get:
\[35.9\, kL\times \dfrac{1000\,L}{1\,kL}= 35,900\, L \nonumber\]
We will use the fact that 1 nm = 1/1,000,000,000 m, which we will rewrite as 1,000,000,000 nm = 1 m, or $10^9$ nm = 1 m. Of the two possible conversion factors, the appropriate one has the nm unit in the denominator:
\[\dfrac{1\,m}{10^{9}\,nm} \nonumber\]
Applying this conversion factor, we get
\[555\,nm\times \dfrac{1\,m}{10^{9}\,nm}= 0.000000555\,m= 5.55\times 10^{-7}\,m \nonumber\]
In the final step, we expressed the answer in scientific notation.
Exercise \(\PageIndex{1}\)
Convert 67.08 μL to liters. Convert 56.8 m to kilometers.
Answer a
6.708 × 10\(^{-5}\) L
Answer b
5.68 × 10\(^{-2}\) km
What if we have a derived unit that is the product of more than one unit, such as m\(^2\)? Suppose we want to convert square meters to square centimeters? The key is to remember that m\(^2\) means m × m, which means we have two meter units in our derived unit. That means we have to include two conversion factors, one for each unit. For example, to convert 17.6 m\(^2\) to square centimeters, we perform the conversion as follows:
\[\begin{align} 17.6m^{2} &= 17.6(m\times m)\times \dfrac{100cm}{1m}\times \dfrac{100cm}{1m} \nonumber \\[4pt] &= 176000\,cm \times cm \nonumber \\[4pt] &= 1.76\times 10^{5} \,cm^2\end{align}\]
Example \(\PageIndex{2}\)
How many cubic centimeters are in 0.883 m\(^3\)?
Solution
With an exponent of 3, we have three length units, so by extension we need to use three conversion factors between meters and centimeters. Thus, we have
\[0.883\,m^{3}\times \dfrac{100\,cm}{1\,m}\times \dfrac{100\,cm}{1\,m}\times \dfrac{100\,cm}{1\,m}= 883000\,cm^{3} = 8.83\times 10^{5}\,cm^{3}\]
You should demonstrate to yourself that the three meter units do indeed cancel.
Exercise \(\PageIndex{2}\)
How many cubic millimeters are present in 0.0923 m\(^3\)?
Answer
9.23 × 10\(^7\) mm\(^3\)
Suppose the unit you want to convert is in the denominator of a derived unit; what then? Then, in the conversion factor, the unit you want to remove must be in the
numerator. This will cancel with the original unit in the denominator and introduce a new unit in the denominator. The following example illustrates this situation.
Example \(\PageIndex{3}\)
Convert 88.4 m/min to meters/second.
Solution
We want to change the unit in the denominator from minutes to seconds. Because there are 60 seconds in 1 minute (60 s = 1 min), we construct a conversion factor so that the unit we want to remove, minutes, is in the numerator: 1min/60s. Apply and perform the math:
\[\dfrac{88.4m}{min}\times \dfrac{1\,min}{60\,s}= 1.47\dfrac{m}{s}\]
Notice how the 88.4 automatically goes in the numerator. That's because any number can be thought of as a fraction with 1 in the denominator.
Exercise \(\PageIndex{3}\)
Convert 0.203 m/min to meters/second.
Answer
0.00338 m/s
or
3.38 × 10\(^{-3}\) m/s
Sometimes there will be a need to convert from one unit with one numerical prefix to another unit with a different numerical prefix. How do we handle those conversions? Well, you could memorize the conversion factors that interrelate all numerical prefixes. Or you can go the easier route: first convert the quantity to the base unit, the unit with no numerical prefix, using the definition of the original prefix. Then convert the quantity in the base unit to the desired unit using the definition of the second prefix. You can do the conversion in two separate steps or as one long algebraic step. For example, to convert 2.77 kg to milligrams:
\[2.77\,kg\times \dfrac{1000\,g}{1\,kg}= 2770\,g\] (convert to the base units of grams)
\[2770\,g\times \dfrac{1000\,mg}{1\,g}= 2770000\,mg = 2.77\times 10^{6}\,mg\] (convert to desired unit)
Alternatively, it can be done in a single multistep process:
\[\begin{align} 2.77\, \cancel{kg}\times \dfrac{1000\,\cancel{g}}{1\,\cancel{kg}}\times \dfrac{1000\,mg}{1\,\cancel{g}} &= 2770000\, mg \nonumber \\[4pt] &= 2.77\times 10^{6}\,mg \end{align}\]
You get the same answer either way.
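Purely as an illustration (not part of the original text), the same chain of conversion factors can be written out in a few lines of Python; the factor 1,000 appears once per prefix step, exactly as in the algebra above.

# Convert 2.77 kg to milligrams by chaining conversion factors.
mass_kg = 2.77
g_per_kg = 1000       # 1 kg = 1000 g (exact, from the kilo- prefix)
mg_per_g = 1000       # 1 g  = 1000 mg (exact, from the milli- prefix)

mass_mg = mass_kg * g_per_kg * mg_per_g
print(mass_mg)        # 2770000.0, i.e. 2.77 x 10^6 mg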
Example \(\PageIndex{4}\)
How many nanoseconds are in 368.09 μs?
Solution
You can either do this as a one-step conversion from microseconds to nanoseconds or convert to the base unit first and then to the final desired unit. We will use the second method here, showing the two steps in a single line. Using the definitions of the prefixes
micro- and nano-,
\[368.09\,\mu s\times \dfrac{1\,s}{1,000,000\,\mu s}\times \dfrac{1,000,000,000\,ns}{1\,s}= 3.6809\times 10^{5}\,ns\]
Exercise \(\PageIndex{4}\)
How many milliliters are in 607.8 kL?
Answer
6.078 × 10\(^8\) mL
When considering the significant figures of a final numerical answer in a conversion, there is one important case where a number does not impact the number of significant figures in a final answer-the so-called
exact number. An exact number is a number from a defined relationship, not a measured one. For example, the prefix kilo- means 1,000- exactly 1,000, no more or no less. Thus, in constructing the conversion factor
\[\dfrac{1000\,g}{1\,kg}\]
neither the 1,000 nor the 1 enter into our consideration of significant figures. The numbers in the numerator and denominator are defined exactly by what the prefix
kilo- means. Another way of thinking about it is that these numbers can be thought of as having an infinite number of significant figures, such as
\[\dfrac{1000.0000000000 \dots \,g}{1.0000000000 \ldots \,kg}\]
The other numbers in the calculation will determine the number of significant figures in the final answer.
Example \(\PageIndex{5}\)
A rectangular plot in a garden has the dimensions 36.7 cm by 128.8 cm. What is the area of the garden plot in square meters? Express your answer in the proper number of significant figures.
Solution
Area is defined as the product of the two dimensions, which we then have to convert to square meters and express our final answer to the correct number of significant figures, which in this case will be three.
\[36.7\,cm\times 128.8\,cm\times \dfrac{1\,m}{100\,cm}\times \dfrac{1\,m}{100\,cm}= 0.472696\,m^{2}= 0.473\,m^{2}\]
The 1 and 100 in the conversion factors do not affect the determination of significant figures because they are exact numbers, defined by the centi- prefix.
Exercise \(\PageIndex{5}\)
What is the volume of a block in cubic meters whose dimensions are 2.1 cm × 34.0 cm × 118 cm?
Answer
0.0084 m\(^3\)
Chemistry Is Everywhere: The Gimli Glider
On July 23, 1983, an Air Canada Boeing 767 jet had to glide to an emergency landing at Gimli Industrial Park Airport in Gimli, Manitoba, because it unexpectedly ran out of fuel during flight. There was no loss of life in the course of the emergency landing, only some minor injuries associated in part with the evacuation of the craft after landing. For the remainder of its operational life (the plane was retired in 2008), the aircraft was nicknamed "the Gimli Glider."
The Gimli Glider is the Boeing 767 that ran out of fuel and glided to safety at Gimli Airport. The aircraft ran out of fuel because of confusion over the units used to express the amount of fuel. Source: Photo courtesy of Will F., Image used with permission (CC BY-SA 2.5; Aero Icarus).
The 767 took off from Montreal on its way to Ottawa, ultimately heading for Edmonton, Canada. About halfway through the flight, all the engines on the plane began to shut down because of a lack of fuel. When the final engine cut off, all electricity (which was generated by the engines) was lost; the plane became, essentially, a powerless glider. Captain Robert Pearson was an experienced glider pilot, although he had never flown a glider the size of a 767. First Officer Maurice Quintal quickly determined that the aircraft would not be able make it to Winnipeg, the next large airport. He suggested his old Royal Air Force base at Gimli Station, one of whose runways was still being used as a community airport. Between the efforts of the pilots and the flight crew, they managed to get the airplane safely on the ground (although with buckled landing gear) and all passengers off safely.
What happened? At the time, Canada was transitioning from the older English system to the metric system. The Boeing 767s were the first aircraft whose gauges were calibrated in the metric system of units (liters and kilograms) rather than the English system of units (gallons and pounds). Thus, when the fuel gauge read 22,300, the gauge meant kilograms, but the ground crew mistakenly fueled the plane with 22,300
pounds of fuel. This ended up being just less than half of the fuel needed to make the trip, causing the engines to quit about halfway to Ottawa. Quick thinking and extraordinary skill saved the lives of 61 passengers and 8 crew members, an incident that would not have occurred if people were watching their units.
Key Takeaways
Units can be converted to other units using the proper conversion factors.
Conversion factors are constructed from equalities that relate two different units.
Conversions can be a single step or multistep.
Unit conversion is a powerful mathematical technique in chemistry that must be mastered.
Exact numbers do not affect the determination of significant figures.
|
The (spherical) loxodrome, or the rhumb line, is the curve of constant bearing on the sphere; that is, it is the spherical curve that cuts the meridians of the sphere at a constant angle. A more picturesque way of putting it is that if one wants to travel from one point of a (spherical) globe to the antipodal point (say, from the North Pole to the South Pole) in a fixed direction, the path one would be taking on the globe would be a loxodrome.
For a unit sphere, the loxodrome that cuts the meridians at an angle $\varphi\in\left(0,90^\circ\right]$ is given by
$$\begin{align*}x&=\mathrm{sech}(t\cot\;\varphi)\cos\;t\\y&=\mathrm{sech}(t\cot\;\varphi)\sin\;t\\z&=\tanh(t\cot\;\varphi)\end{align*}$$
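For anyone who wants to experiment, the parametrization above is easy to sample numerically; the following sketch (my own addition, in Python/numpy) simply evaluates the three coordinates and confirms the points lie on the unit sphere.

import numpy as np

def loxodrome(phi_deg, t):
    """Points on the unit-sphere loxodrome that cuts meridians at angle phi."""
    k = 1.0 / np.tan(np.radians(phi_deg))      # cot(phi)
    sech = 1.0 / np.cosh(k * t)
    return np.column_stack((sech * np.cos(t),
                            sech * np.sin(t),
                            np.tanh(k * t)))

pts = loxodrome(60.0, np.linspace(-20.0, 20.0, 2001))
print(np.allclose(np.sum(pts**2, axis=1), 1.0))   # True: points lie on the sphere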
While playing around with loxodromes, I noted that for certain values of $\varphi$, one can orient two identical loxodromes such that they do not intersect (that is, one can position two ships such that if both take similar loxodromic paths, they can never collide). Here for instance are two loxodromes whose constant angle $\varphi$ is $60^\circ$, oriented such that they do not cross each other:
On the other hand, for the (extreme!) case of $\varphi=90^{\circ}$, the two loxodromes degenerate to great circles, and it is well known that two great circles must always intersect (at two antipodal points).
Less extreme, but seemingly difficult, would be the problem of positioning two 80° loxodromes such that they do not intersect:
This brings me to my first question:
1)
For what values of $\varphi$ does it become impossible to orient two loxodromes such that they do not cross each other?
For simplicity, one can of course fix one of the two loxodromes to go from the North Pole to the South Pole, and try to orient the other loxodrome so that it does not cross the fixed loxodrome.
That's the simpler version of my actual problem. Some experimentation seems to indicate that it is not possible to orient
three loxodromes such that they do not cross each other. So...
2)
Is it true that for all (admissible) values of $\varphi$, one cannot position three loxodromes such that none of them cross each other?
I've tried a bit of searching around to see if the problem has been previously considered, but I have not had any luck. Any pointers to the literature will be appreciated.
|
Answer
.36 centimeters
Work Step by Step
We know that the height difference will be equal to 2 cm times $1-\frac{\rho_{oil}}{\rho_{water}}$. Thus, it follows: $h = 2 \times (1 - .82) = .36 \ cm$
|
The cone frustum is the figure resulting from having made a cut parallel to the basis of a cone.
To calculate the area of this figure we have to add up the areas of the two bases and the area of the lateral face: $$$A_{total}=A_{base}+A_{minor \ base}+A_{lateral} \\ A_{lateral}=\pi (R+r)\cdot g$$$
In many exercises we will have the height $$h$$ of the frustum but not its generatrix $$g$$ (the slant height). The Pythagorean theorem in the right triangle formed by $$h$$, $$R-r$$ and $$g$$ relates them: $$$g^2=h^2+(R-r)^2$$$
Finally, the volume of the frustum of a cone is given by the following expression: $$$V=\dfrac{1}{3}\pi \cdot h(R^2+r^2+R\cdot r)$$$
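As an illustration only (not from the original lesson), these small Python helpers compute the slant height, total surface area, and volume from $$R$$, $$r$$ and $$h$$.

import math

def slant_height(R, r, h):
    # Pythagorean relation between the height and the generatrix.
    return math.sqrt(h**2 + (R - r)**2)

def frustum_area(R, r, h):
    g = slant_height(R, r, h)
    return math.pi * R**2 + math.pi * r**2 + math.pi * (R + r) * g

def frustum_volume(R, r, h):
    return math.pi * h * (R**2 + r**2 + R * r) / 3.0

print(frustum_area(5, 2, 4), frustum_volume(5, 2, 4))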
|
Let's call the base of the first triangle $x$, the base of the second $\frac{8}{7}x$. The other side of the first and of the second will be $y$ and $z$. We'll have:
$$ \left\{\begin{matrix}x+2y=\frac{8}{7}x+2z\\ x\sqrt{y^2-\frac{x^2}{4}}=\frac{8}{7}x\sqrt{z^2-\frac{16x^2}{49}}\end{matrix}\right.\Rightarrow \left\{\begin{matrix}x+2y=\frac{8}{7}x+2z\\ \left(y-\frac{x}{2}\right)\left(y+\frac{x}{2}\right)=\frac{64}{49}\left(z-\frac{4x}{7}\right)\left(z+\frac{4x}{7}\right)\end{matrix}\right.$$
[The first equation is the equality of the two perimeters, the second the equality of the areas (in which I calculated the height with the Pythagorean theorem). Then we squared the second equation and factored the two differences of squares.]
Now there is a little trick! Multiply both sides of the second equation by $2$:$$ \left\{\begin{matrix}x+2y=\frac{8}{7}x+2z\\ \left(y-\frac{x}{2}\right)\left(x+2y\right)=\frac{64}{49}\left(z-\frac{4x}{7}\right)\left(\frac{8x}{7}+2z\right)\end{matrix}\right.$$
Since $\left(x+2y\right)$ and $\left(\frac{8x}{7}+2z\right)$ are equal for the first equation we can simplify them and obtain a linear system:$$ \left\{\begin{matrix}x+2y=\frac{8}{7}x+2z\\ \left(y-\frac{x}{2}\right)=\frac{64}{49}\left(z-\frac{4x}{7}\right)\end{matrix}\right.$$
Now we can easily solve this for $(y,z)$ and obtain:
$$ \left\{\begin{matrix}x=x\\ y=\frac{233}{210}x \\z=\frac{109}{105}x \end{matrix}\right.$$ Notice that since the sides are integers and $\gcd(233,210)=1$, $x$ must be a multiple of $210$; the smallest choice is $x=210$. Since the perimeter is:
$$P(x)=x+2y=x+2\frac{233}{210}x=\frac{338}{105}x$$
And the minimum is:
$$P(210)=676$$
:)
|
Given that $\sin\theta = \dfrac{1}{2}$ and that $\cos\theta = -\dfrac{\sqrt{3}}{2}$ and $0^\circ \leq \theta \leq 360^\circ$, find the value of $\theta$.
Since $\sin\theta > 0$ and $\cos\theta < 0$, you have correctly concluded that $\theta$ is a second-quadrant angle. You also took the inverse cosine of $-\dfrac{\sqrt{3}}{2}$, from which you can conclude that $\theta = 150^\circ$.
Let's see why.
I will be working in radians.
The arccosine function (inverse cosine function) $\arccos: [-1, 1] \to [0, \pi]$ is defined by $\arccos x = \theta$ if $\theta$ is the unique angle in $[0, \pi]$ such that $\cos\theta = x$.
Since $\dfrac{5\pi}{6}$ is the unique angle $\theta \in [0, \pi]$ such that $\cos\theta = -\dfrac{\sqrt{3}}{2}$, $$\theta = \arccos\left(-\dfrac{\sqrt{3}}{2}\right) = \dfrac{5\pi}{6}$$Converting to degrees yields $\theta = 150^\circ$.
To reiterate, since there is only one angle $\theta$ in $[0, \pi]$ such that $\cos\theta = -\dfrac{\sqrt{3}}{2}$, we may conclude that $$\theta = \arccos\left(-\dfrac{\sqrt{3}}{2}\right) = \frac{5\pi}{6}$$
While it is not needed to solve this problem, consider the diagram below.
Two angles in standard position (vertex at the origin, initial side on the positive $x$-axis) have the same sine if the $y$-coordinates of the points where their terminal sides intersect the unit circle are equal. By symmetry, $$\sin(\pi - \theta) = \sin\theta$$ Any angle coterminal with one of these angles will also have the same sine. Hence, $\sin\theta = \sin\varphi$ if $$\varphi = \theta + 2n\pi, n \in \mathbb{Z}$$or $$\varphi = \pi - \theta + 2n\pi, n \in \mathbb{Z}$$Two angles in standard position have the same cosine if the $x$-coordinates of the points where their terminal sides intersect the unit circle are equal. By symmetry, $$\cos(-\theta) = \cos\theta$$Any angle coterminal with one of these angles will also have the same cosine. Hence, $\cos\theta = \cos\varphi$ if $$\varphi = \theta + 2n\pi, n \in \mathbb{Z}$$or $$\varphi = -\theta + 2n\pi, n \in \mathbb{Z}$$
|
Is there either a closed-form expression or fast/elegant algorithm for computing the positive root of the polynomial $$f(x)=x^q + \beta x - \beta,$$ where $\beta>0$ and $q\geq2$? How about the $q\in\mathbb R$ case with $q\geq1$?
Note there is exactly one positive root of the function $f(x)$, since $f(0)=-\beta<0$, $\lim_{x\to\infty} f(x)=\infty$, and $f(x)$ is a convex function on $\mathbb R_+$ given our bound on $q$.
Bracketing/bisection will give an estimate in linear time, and by the argument in this response I guess Newton's method will have global quadratic convergence specifically for this function so long as $q\geq2$. Just wanted to make sure I'm not missing a more slick approach!
I'll be embarrassed if there's an obvious closed-form expression that I missed :-)
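Not a closed form, but for concreteness, here is the kind of Newton iteration I had in mind (my own sketch; the values of `beta` and `q` are illustrative). Starting from $x_0=1$, where $f(1)=1>0$, convexity of $f$ on $\mathbb R_+$ should keep the iterates above the root and monotonically decreasing.

def positive_root(beta, q, tol=1e-14, max_iter=100):
    """Newton iteration for the positive root of f(x) = x**q + beta*x - beta."""
    x = 1.0
    for _ in range(max_iter):
        f = x**q + beta * x - beta
        fp = q * x**(q - 1) + beta      # f'(x) > 0 for x > 0
        step = f / fp
        x -= step
        if abs(step) < tol * max(1.0, x):
            break
    return x

r = positive_root(beta=3.0, q=2.0)
print(r, r**2 + 3*r - 3)   # root and residual (residual should be ~ 0)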
|
I am having the following equation:
\begin{equation} Q(\lambda,\hat{\lambda}) = -\frac{1}{2} P(O \mid \lambda ) \sum_s \sum_m \sum_t \gamma_m^{(s)} (t) \left( n \log(2 \pi ) + \log \left| C_m^{(s)} \right| + \left( \mathbf{o}_t - \hat{\mu}_m^{(s)} \right) ^T C_m^{(s)-1} \left(\mathbf{o}_t - \hat{\mu}_m^{(s)}\right) \right)\end{equation}
which does not fit very well on one line. How can I split this over two lines? What I have in mind is that I specify the splitting place, and that the first line is left-aligned and the second line right-aligned, to make clear that it is still the same equation.
The linebreak
\\ does not work.
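One standard option, assuming the amsmath package is available, is the multline environment: its first line is set flush left and its last line flush right, which matches the layout described above, and the break goes wherever you place \\. One caveat: a \left( ... \right) pair cannot straddle the break, so the outer delimiters in this sketch are written with \Big instead (adjust sizes to taste).

% in the preamble
\usepackage{amsmath}

\begin{multline}
Q(\lambda,\hat{\lambda}) = -\frac{1}{2} P(O \mid \lambda ) \sum_s \sum_m \sum_t \gamma_m^{(s)} (t) \Big( n \log(2 \pi ) + \log \left| C_m^{(s)} \right| \\
+ \left( \mathbf{o}_t - \hat{\mu}_m^{(s)} \right)^T C_m^{(s)-1} \left(\mathbf{o}_t - \hat{\mu}_m^{(s)}\right) \Big)
\end{multline}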
|
Here, we characterize the data compression benefits of projection onto a subset of the eigenvectors of our traffic system’s covariance matrix. We address this compression from two different perspectives: First, we consider the partial traces of the covariance matrix, and second we present visual comparisons of the actual vs. projected traffic plots.
Partial traces: From the footnote to our last post, we have $\text{Tr}(H^{-1}) \equiv \sum_i e_i = \sum_a \left (\delta v_a \right)^2$. That is, the trace of the covariance matrix tells us the net variance in our traffic system's speed, summed over all loops. More generally$^1$, the fraction of system variance contained within some subset of the modes is given by the eigenvalue sum over these modes, all divided by $\text{Tr}(H^{-1})$. The eigenvalues thus provide us with a simple method for quantifying the significance of any particular mode. At right is a log-log plot of the fractional variance captured by each mode, ordered from largest to smallest (we also include analogous plots for the covariance matrices associated with just one week and one month$^2$). As shown, the eigenvalues decay like one over eigenvalue index at first, but eventually begin to decay much more quickly. Only 5 modes are needed to capture 50% of the variance; 25 for 65%; 793 for 95%.
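For readers who want to reproduce this kind of curve, the calculation is just an eigenvalue sum over the covariance matrix. The numpy sketch below is illustrative only, with a random matrix standing in for our actual traffic covariance; it returns the fraction of variance captured by the top k modes.

import numpy as np

def variance_captured(cov, k):
    """Fraction of total variance carried by the k largest covariance modes."""
    eigvals = np.linalg.eigvalsh(cov)[::-1]      # descending order
    return eigvals[:k].sum() / eigvals.sum()     # partial trace / full trace

# Illustrative stand-in for the traffic covariance matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(300, 500))
cov = A.T @ A / 300.0
print([round(variance_captured(cov, k), 3) for k in (5, 25, 100)])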
Visualizations: The above discussion suggests that the basic essence of a given set of traffic conditions is determined by only the first few modes, but that a large number might be needed to get correct all details. We tested this conclusion by visually inspecting plots of projected traffic conditions (again, for Jan 15, at 5:30 pm), and comparing across number of modes retained. The results are striking: upon projecting the 2,000+ original features to only 25, minimal error appears to be introduced. Further, the error that does occur tends to be highly localized to the particularly slow regions, where projected speeds are overestimates (
e.g., the traffic jams east/south-bound out of Oakland). Increasing the mode count to 100 or greater, these problem spots are quickly ameliorated, and the error is no longer systematic in slow regions (see insets). See below for a slideshow of these results.
Conclusions: The data provided by the PEMS system is highly redundant — as anticipated — in the sense that traffic conditions can be determined from far fewer measurements than it provides. If other states wanted to replicate this system, they could probably get away with reducing the number of measures by at least one order of magnitude per mile of highway. For our part, we intend to project our data onto the top 10% of the modes, or fewer: We anticipate that this will provide minimal loss, but substantial speedups.
[1]
Partial eigenvalue sums, physics perspective: Consider suppressing all modes that you don’t want to include in a projection. This can be done by setting the energies of these modes to $\infty$, which will result in their corresponding $H^{-1}$ eigenvalues going to zero. When this altered system is thermally driven, its variance will again be given by its covariance trace. This altered system trace is precisely equal to the retained-mode partial sum of the original matrix.
[2]
Sampling time dependence of covariance matrix: The first figure above shows the mode variance ratio for the first 1000 principal components over three time scales: 1 week, 1 month, and 2014 year to date (~9.5 months). Notice that the plots become more shallow given a longer sampling period. This is because the larger data sets exhibit a more diverse class of fluctuations, and more modes are needed to capture these.
|
Using the relation $\phi - 1 = 1/\phi$, we find that
\begin{align*}&\int_{0}^{1} \frac{(1-x)(1-2x^{\phi})+\phi(x-x^{\phi})}{(1-x)^2}\cdot\left(\frac{1-x^{\phi}}{1-x}\right)^{1/\phi} \, \mathrm{d}x \\&= \int_{0}^{1} \left( 2 - \frac{\phi^2}{1-x^{\phi}} + \frac{\phi}{1-x} \right) \left(\frac{1-x^{\phi}}{1-x}\right)^{\phi} \, \mathrm{d}x \\&= \bigg[ x \left(\frac{1-x^{\phi}}{1-x}\right)^{\phi} \bigg]_{0}^{1} \\&= \phi^\phi.\end{align*}
As a corollary, we have
$$ \int \frac{(1-x)(1-2x^{\phi})+\phi(x-x^{\phi})}{(1-x)^2}\cdot\left(\frac{1-x^{\phi}}{1-x}\right)^{1/\phi} \, \mathrm{d}x = x \left(\frac{1-x^{\phi}}{1-x}\right)^{\phi} + C. $$
Here is my line of reasoning that led to this solution:
I tried to simplify the integrand so that it minimizes the amount of cancellation as well as mimics partial fraction decomposition.
Now, the integrand in the second line looks similar to what we obtain when we apply the logarithmic differentiation$$ \frac{d}{dx}\left(\frac{1-x^{\phi}}{1-x}\right)^{\phi} = \left( \frac{\phi}{1-x} - \frac{\phi^2 x^{\phi-1}}{1-x^\phi} \right) \left(\frac{1-x^{\phi}}{1-x}\right)^{\phi}. \tag{1} $$Although this is not exactly the same as what we want, it hints that we might actually compute the antiderivative.
Playing a little bit, we find that$$ \frac{d}{dx}\frac{(1-x^{\phi})^{\phi}}{(1-x)^{\phi-1}} = \left( -2 -\frac{\phi^2 x^{\phi-1}}{1-x^{\phi}} + \frac{\phi^2}{1-x^{\phi}} \right) \left(\frac{1-x^{\phi}}{1-x}\right)^{\phi}. \tag{2}$$Bingo! $\text{(1)} - \text{(2)}$ gives exactly what we want and we are done.
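For the skeptical reader, a quick numerical sanity check (using scipy; the apparent singularity of the integrand at $x=1$ cancels, so straightforward quadrature should behave) confirms the value $\phi^\phi \approx 2.1784$:

import numpy as np
from scipy.integrate import quad

phi = (1 + np.sqrt(5)) / 2

def integrand(x):
    num = (1 - x) * (1 - 2 * x**phi) + phi * (x - x**phi)
    return num / (1 - x)**2 * ((1 - x**phi) / (1 - x))**(1 / phi)

value, error = quad(integrand, 0, 1)
print(value, phi**phi)   # both approximately 2.1784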
|
Difference between revisions of "Vopenka"
Revision as of 08:50, 10 October 2019
Vopěnka's principle is a large cardinal axiom at the upper end of the large cardinal hierarchy that is particularly notable for its applications to category theory. In a set theoretic setting, the most common definition is the following:
For any language $\mathcal{L}$ and any proper class $C$ of $\mathcal{L}$-structures, there are distinct structures $M, N\in C$ and an elementary embedding $j:M\to N$.
For example, taking $\mathcal{L}$ to be the language with one unary and one binary predicate, we can consider for any ordinal $\eta$ the class of structures $\langle V_{\alpha+\eta},\{\alpha\},\in\rangle$, and conclude from Vopěnka's principle that a cardinal that is at least $\eta$-extendible exists. In fact if Vopěnka's principle holds then there is a stationary proper class of extendible cardinals; bounding the strength of the axiom from above, we have that if $\kappa$ is almost huge, or even almost-high-jump, then $V_\kappa$ satisfies Vopěnka's principle.
Formalizations
As stated above and from the point of view of ZFC, this is actually an axiom schema, as we quantify over proper classes, which from a purely ZFC perspective means definable proper classes. A somewhat stronger alternative is to view Vopěnka's principle as an axiom in second-order set theory capable of dealing with proper classes, such as von Neumann-Gödel-Bernays set theory. This is a strictly stronger assertion. [1] Finally, one may relativize the principle to a particular cardinal, leading to the concept of a Vopěnka cardinal.
Vopěnka's principle can be formalized in first-order set theory as a schema, where for each natural number $n$ in the meta-theory there is a formula expressing that Vopěnka’s Principle holds for all $Σ_n$-definable (with parameters) classes.[1]
The Vopěnka principle VP and the Vopěnka scheme VS are not equivalent, but they are equiconsistent and have the same first-order consequences (GBC+VP is conservative over GBC+VS and over ZFC+VS; VP itself makes no sense in the context of ZFC):[2]
If ZFC and the Vopěnka scheme holds, then there is a class forcing extension, adding classes but no sets, in which GBC and the Vopěnka scheme holds, but the Vopěnka principle fails. If ZFC and the Vopěnka scheme holds, then there is a class forcing extension, adding classes but no sets, in which GBC and the Vopěnka principle holds.
Vopěnka cardinal is an inaccessible cardinal $δ$ such that $\langle V_δ , ∈, V_{δ+1} \rangle$ is a model of VP (and the Morse–Kelley set theory). Vopěnka-scheme cardinal is a cardinal $δ$ such that $\langle V_δ , ∈ \rangle$ is a model of ZFC+VS.[2]
Vopěnka cardinals
An inaccessible cardinal $\kappa$ is a Vopěnka cardinal if and only if $V_\kappa$ satisfies Vopěnka's principle, where we interpret the proper classes of $V_\kappa$ as the subsets of $V_\kappa$ of cardinality $\kappa$. Because of a characterization of Vopěnka's principle in terms of graphs, a cardinal $\kappa$ is Vopěnka if and only if $\kappa$ is inaccessible and any $\kappa$-sized set $G$ of $<\kappa$-sized nonisomorphic graphs has some $g_0$ and $g_1$ with $g_0$ a proper subgraph of $g_1$. (Need to cite sources)
As we mentioned above, every almost huge cardinal is a Vopěnka cardinal.
Equivalent statements
Extendible cardinals
The schema form of Vopěnka's principle is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals for every $n$; indeed there is a level-by-level stratification of Vopěnka's principle, with Vopěnka's principle for a $\Sigma_{n+2}$-definable class corresponding to the existence of a $C^{(n)}$-extendible cardinal greater than the ranks of the parameters (see section "Variants"). [4]
The Vopěnka principle is equivalent over GBC to both following statements:[2]
For every class $A$, there is an $A$-extendible cardinal.
For every class $A$, there is a stationary proper class of $A$-extendible cardinals.
Strong Compactness of Logics
Vopěnka's principle is equivalent to the following statement about logics as well:
For every logic $\mathcal{L}$, there is a cardinal $\mu_{\mathcal{L}}$ such that for any language $\tau$ and any $\mathcal{L}(\tau)$-theory $T$, $T$ is satisfiable if and only if every $t\subseteq T$ such that $|t|<\mu_{\mathcal{L}}$ is satisfiable. [5]
This $\mu_{\mathcal{L}}$ is called the strong compactness cardinal of $\mathcal{L}$. Vopěnka's principle therefore is equivalent to every logic having a strong compactness cardinal. This is very similar in definition to the Löwenheim–Skolem number of $\mathcal{L}$, although it is not guaranteed to exist.
Here are some examples of strong compactness cardinals of specific logics:
If $\kappa\leq\lambda$ and $\lambda$ is strongly compact or $\aleph_0$, then the strong compactness cardinal of $\mathcal{L}_{\kappa,\kappa}$ is at most $\lambda$. Similarly, if $\kappa\leq\lambda$ and $\lambda$ is extendible, then for any natural number $n$, the strong compactness cardinal of $\mathcal{L}^n_{\kappa,\kappa}$ ($\mathcal{L}_{\kappa,\kappa}$ with $n+1$-th order logic) is at most $\lambda$. Therefore for any natural number $n$, the strong compactness cardinal of $n+1$-th order finitary logic is at most the least extendible cardinal.
Locally Presentable Categories
Vopěnka's principle is equivalent to the axiom stating "no large full subcategory $C$ of any locally presentable category is discrete." (Sources needed). Equivalently, no large full subcategory of Graph (the category of all graphs) is discrete; that is, for any proper class of simple directed graphs, there is at least one pair of nonequal graphs $G$ and $H$ in the class such that $G$ is a subgraph of $H$. This is a $\Pi^1_1$ statement, so the least Vopěnka cardinals are not even weakly compact (although the least weakly compact cardinal is much, much, much smaller than the least Vopěnka cardinal, if it exists).
Intuitively, a "category" is just a class of mathematical objects with some notion of "morphism", "homomorphism", "isomorphism", (etc.). For example, in Set, the category of all sets, homomorphisms are just injections, and isomorphisms are bijections. In categories of groups and models, homomorphisms and isomorphisms share their actual names.
A "locally small category" $C$ is one with only set-many morphisms between any two objects of $C$. This is one where the objects of $C$ behave "set-like" in the sense that, usually, the number of morphisms between two set-sized objects is at most the number of functions between their universes (like in groups and in graphs). A "locally presentable category" is a locally small category with a couple more really nice properties; you can "generate" all of the objects from set-many objects in the category.
Vopěnka's principle intuitively states that if you have a locally presentable category $C$, then any proper class of objects of $C$ has some nonisomorphic objects $c$ and $d$ where $c$ has a morphism into $d$.
Woodin cardinals
There is a strange connection between the Woodin cardinals and the Vopěnka cardinals. In particular, Vopěnkaness is equivalent to two strengthening variants of Woodinness, namely the Woodin for Supercompactness cardinals and the $2$-fold Woodin cardinals. As a result, every Vopěnka cardinal is Woodin.
Elementary Embeddings Between Ranks
An equivalent statement to Vopěnka's principle is that for any proper class $C\subseteq ORD$, there are $\alpha\in C$, $\beta\in C$, and a nontrivial elementary embedding $j:\langle V_\alpha;\in,P\rangle\rightarrow\langle V_\beta;\in,P\rangle$. Vopěnka's principle quite obviously implies this. The reason the converse holds is because every elementary embedding can be "encoded" (in a sense) into one of these. For more information, see [6].
Other points to note
Whilst Vopěnka cardinals are very strong in terms of consistency strength, a Vopěnka cardinal need not even be weakly compact. Indeed, the definition of a Vopěnka cardinal is a $\Pi^1_1$ statement over $V_\kappa$ (Vopěnka's principle itself is $\Pi^1_1$), and $\Pi^1_1$-indescribability is one of the equivalent definitions of weak compactness. Thus, the least weakly compact Vopěnka cardinal must have (many) other Vopěnka cardinals less than it.
Variants
(Boldface) $VP(\mathbf{Σ_n})$ denotes the fragment of Vopěnka’s Principle for $Σ_n$-definable classes and (lightface) $VP(Σ_n)$ is the weaker principle, where parameters are not allowed in the definition of the class (with analogous definitions for $Π_n$ and $∆_n$).
Vopěnka-like principles $VP(κ, \mathbf{Σ_n})$ for cardinal $κ$ state that for every proper class $\mathcal{C}$ of structures of the same type that is $Σ_n$-definable with parameters in $H_κ$ (the collection of all sets of hereditary size less than $κ$), $\mathcal{C}$ reflects below $κ$, namely for every $A ∈ C$ there is $B ∈ H_κ ∩ C$ that elementarily embeds into $A$.
Results:
* For every $Γ$, $VP(κ, Γ)$ for some $κ$ implies $VP(Γ)$.
* $VP(κ, \mathbf{Σ_1})$ holds for every uncountable cardinal $κ$.
* $VP(Π_1) \iff VP(κ, Σ_2)$ for some $κ \iff$ there is a supercompact cardinal.
* $VP(\mathbf{Π_1}) \iff VP(κ, \mathbf{Σ_2})$ for a proper class of cardinals $κ \iff$ there is a proper class of supercompact cardinals.
* For $n ≥ 1$, the following are equivalent: $VP(Π_{n+1})$; $VP(κ, \mathbf{Σ_{n+2}})$ for some $κ$; there is a $C(n)$-extendible cardinal.
* The following are equivalent: $VP(Π_n)$ for every $n$; $VP(κ, \mathbf{Σ_n})$ for a proper class of cardinals $κ$ and for every $n$; $VP$; for every $n$, there is a $C(n)$-extendible cardinal.
Generic
(Information in this section from [7] unless noted otherwise)
Definitions:
* The generic Vopěnka principle (in GBC) states that for every proper class $\mathcal{C}$ of first-order structures of the same type there are $B ≠ A$, both in $\mathcal{C}$, such that $B$ elementarily embeds into $A$ in some set-forcing extension.
* The generic Vopěnka scheme (in GBC or ZFC, called generic Vopěnka principle in [7]) states the same about first-order definable proper classes.[8]
* (Boldface) $gVP(\mathbf{Σ_n})$ and (lightface) $gVP(Σ_n)$ (with analogous definitions for $Π_n$ and $∆_n$) as well as $gVP(κ, \mathbf{Σ_n})$ are generic analogues of corresponding weakenings of Vopěnka's principle.
* For transitive $∈$-structures $B$ and $A$ and elementary embedding $j : B → A$, we say that $j$ is overspilling if it has a critical point and $j(crit(j)) > rank(B)$.
* The principle $gVP^∗(Σ_n)$ states that for every $Σ_n$-definable (without parameters) proper class $\mathcal{C}$ of transitive $∈$-structures, there are $B ≠ A$ in $\mathcal{C}$ such that there is an overspilling elementary embedding $j : B → A$ in some set-forcing extension. ($gVP^∗(Π_n)$, $gVP^∗(\mathbf{Π_n})$, and $gVP^∗(κ, \mathbf{Σ_n})$ are defined analogously.)
Results:
* The following are equiconsistent: $gVP(Π_n)$; $gVP(κ, \mathbf{Σ_{n+1}})$ for some $κ$; there is an $n$-remarkable cardinal.
* The following are equiconsistent: $gVP(\mathbf{Π_n})$; $gVP(κ, \mathbf{Σ_{n+1}})$ for a proper class of $κ$; there is a proper class of $n$-remarkable cardinals.
* Unless there is a transitive model of ZFC with a proper class of $n$-remarkable cardinals:
** if for some cardinal $κ$, $gVP(κ, \mathbf{Σ_{n+1}})$ holds, then there is an $n$-remarkable cardinal;
** if $gVP(Π_n)$ holds, then there is an $n$-remarkable cardinal;
** if $gVP(\mathbf{Π_n})$ holds, then there is a proper class of $n$-remarkable cardinals.
* $κ$ is the least for which $gVP^∗(κ, \mathbf{Σ_{n+1}})$ holds $\iff$ $κ$ is the least $n$-remarkable cardinal.
* If $gVP^∗(Π_n)$ holds, then there is an $n$-remarkable cardinal.
* If $gVP^∗(\mathbf{Π_n})$ holds, then there is a proper class of $n$-remarkable cardinals.
* If there is a proper class of $n$-remarkable cardinals, then $gVP(Σ_{n+1})$ holds.[8]
* If $gVP(Σ_{n+1})$ holds, then either there is a proper class of $n$-remarkable cardinals or there is a proper class of virtually rank-into-rank cardinals.[8]
* If $0^\#$ exists, then $L$, equipped with only its definable classes, is a model of $gVP$. (By elementary-embedding absoluteness results. The hypothesis can be weakened, because one can chop off the universe at any Silver indiscernible and use reflection.)[8]
* The generic Vopěnka scheme is equivalent over ZFC to the scheme asserting of every definable class $A$ that there is a proper class of weakly virtually $A$-extendible cardinals.[8]
Open problems:
Must there be an $n$-remarkable cardinal if $gVP(κ, \mathbf{Σ_{n+1}})$ holds for some $κ$? if $gVP(Π_n)$ holds?
References
1. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3--4):213--240, 2012.
2. Hamkins, Joel David. The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme. 2016.
3. Perlmutter, Norman. The large cardinals between supercompact and almost-huge. 2010.
4. Bagaria, Joan; Casacuberta, Carles; Mathias, A. R. D.; Rosický, Jiří. Definable orthogonality classes in accessible categories are small. Journal of the European Mathematical Society 17(3):549--589.
5. Makowsky, Johann. Vopěnka's Principle and Compact Logics. Journal of Symbolic Logic.
6. Kanamori, Akihiro. The higher infinite. Second edition, Springer-Verlag, Berlin, 2009. (Large cardinals in set theory from their beginnings, paperback reprint of the 2003 edition)
7. Bagaria, Joan; Gitman, Victoria; Schindler, Ralf. Generic Vopěnka's Principle, remarkable cardinals, and the weak Proper Forcing Axiom. Archive for Mathematical Logic 56(1-2):1--20, 2017.
8. Gitman, Victoria; Hamkins, Joel David. A model of the generic Vopěnka principle in which the ordinals are not Mahlo. 2018.
|
Physics can be a scary ordeal for some and a very interesting subject for the rest. There are many scholars who want a program or some tips about important topics that support and help them in studying physics. With a proper preparation strategy and an idea of the important questions, derivations and formulas in physics, a student can definitely score well in the main physics exam.
Here we would like to provide the important derivations of CBSE Class 12 Physics to help you score well in boards. Check the table below to learn some important derivations for Class 12 Physics.
List of Physics Derivations
The syllabus of CBSE class 12 physics is vast. It contains a total of 10 units or 15 chapters. It is important for students to know the important topics and derivations in class 12 physics to study efficiently. Given below are the important derivations for class 12 physics.
Electrostatics (Unit 1)
Coulomb’s Law in the form of vectors – \(\vec{F_{12}}=-\vec{F_{21}}\)
Application of Gauss theorem in calculation of electric field
Electric field intensity due to a thin infinite plane sheet of charge
Electric field intensity of an electric dipole on equatorial line – \(E_{R}=\frac{1}{4\pi \varepsilon _{0}}\frac{P}{r^{3}}\)
Electric field intensity of an electric dipole on axial line – \(E=\frac{1}{4\pi \varepsilon _{0}}\frac{2P}{r^{2}}\)
Electric field intensity due to a uniformly charged spherical shell at various points
Capacitance of a parallel plate capacitor with dielectric medium
Capacitance of a parallel plate capacitor without dielectric medium
Electric potential due to a point charge – \(V(r)=\frac{Q}{4\pi \varepsilon _{0}r}\)
Current Electricity (Unit 2)
Application of Gauss theorem in calculation of electric field intensity due to a thin infinite plane sheet of charge and a uniformly charged sphere
Relation between current and drift velocity
Expression for drift velocity
Series and parallel connection of resistors
Magnetic Effect of Current and Magnetism (Unit 3)
Torque experienced by a current loop in a uniform magnetic field
Ampere’s law and its application to infinitely long straight wire, straight and toroidal solenoids
Magnetic dipole moment of a revolving electron
Apply Biot-Savart law to derive the expression for the magnetic field on the axis of a current carrying circular loop
Force between two parallel current carrying conductors
Torque on a magnetic dipole (bar magnet) in a uniform magnetic field
Current loop as a magnetic dipole and its magnetic dipole moment
Force on a current carrying conductor in a uniform magnetic field
Force on a moving charge in uniform magnetic and electric field
Electromagnetic Induction And AC (Unit 4)
Self and mutual induction
Derivation of Emf induced in a rod moving in a uniform magnetic field
Impedance, reactance and average power in series LCR, LR, LC or CR circuit
Relation between peak and rms value of current
Derivation for resonance in a series LCR circuit
Electromagnetic Waves (Unit 5)
Derivation of displacement current
Optics (Unit 6)
Refraction at convex surfaces
Diffraction using single slit
Derivation of refractive index of minimum deviation in the prism
Young Double Slit Experiment
Derivation of the formula for angle of minimum deviation for the prism
Dual Nature of Radiation and Matter (Unit 7)
Derivation of the De-Broglie Relation
Relation between threshold frequency, frequency of the incident photon and cutoff potential
Nuclei and Atoms (Unit 8)
Derive \(N=N_{0}e^{-\lambda t}\)
Expression of radius in hydrogen-like atoms
Electronic Devices (Unit 9)
Common emitter transistor amplifier
Common base transistor amplifier
Communication System (Unit 10)
Frequency modulation and amplitude modulation
The above listed were a few important derivations of CBSE Class 12 Physics that are most likely to be asked in the coming board exams. While studying for the board exam, students can plan accordingly, giving priority to the above mentioned topics.
You can also download the important derivations in physics class 12 CBSE PDF to get well acquainted with all the derivations and answer related questions accurately in the class 12 board exam.
|
I recently came across this in a textbook (NCERT class 12 , chapter: wave optics , pg:367 , example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of interference pattern is$$\frac{s}{S} < \frac{\lambda}{d}$$Where $s$ is the size of ...
The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial.
This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ...
I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $ie(P_A+P_B)^{\mu}$. External Boson: $1$. Photon: $\epsilon_{\mu}$. Multiplying these will give the inv...
I am now studying the history of the discovery of electricity, so I am searching for each scientist on Google, but I am not getting good answers about some of them. Can you recommend a good app for studying the history of these scientists?
I am working on correlation in quantum systems. Consider an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under an assumption that fulfills continuity. My question is whether it would be possib...
@EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc.
Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/…
You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago
So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball.
@ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why?
@AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially...
@vzn for physics/simulation, you may use Blender, that is very accurate. If you want to experiment lens and optics, the you may use Mistibushi Renderer, those are made for accurate scientific purposes.
@RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians *, but I haven't read it myself
@AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that?
@ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions...
When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former.
@RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that
And that is what I mean by "the basics".
Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers
@RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... Kurzgesagt optimistic nihilismyoutube.com/watch?v=MBRqu0YOH14
The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars.== Method ===== One handed ===One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months.Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for...
@vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world.
@Slereah It's like the brain has a limited capacity on math skills it can store.
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life"
I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money
It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge
Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
|
Circularity of Finite Groups without Fixed Points
Abstract.
Let Φ be a fixed point free group given by the presentation \(\langle A, B\,\vert\, A^\mu=1,\, B^\nu=A^t,\, BAB^{-1}=A^\rho\rangle\) where μ and ρ are relatively prime numbers, t = μ/s and s = gcd(ρ − 1, μ), and ν is the order of ρ modulo μ. We prove that if (1) ν = 2, and (2) Φ is embeddable into the multiplicative group of some skew field, then Φ is circular. This means that there is some additive group N on which Φ acts fixed point freely, and |(Φ(a) + b) ∩ (Φ(c) + d)| ≤ 2 whenever a, b, c, d ∈ N, a ≠ 0 ≠ c, are such that Φ(a) + b ≠ Φ(c) + d.
|
I am trying to do Exercise 5.33 on page 64 of Fulton's book on algebraic curves.
5.33 Let $C$ be an irreducible cubic, $L$ a line such that $L\bullet C = P_1+ P_2 + P_3,$ $P_i$ distinct. Let $L_i$ be the tangent line to $C$ at $P_i:$ $L_i \bullet C = 2P_i + Q_i$ for some $Q_i.$ Show that $Q_1, Q_2, Q_3$ lie on a line. ($L^2$ is a conic!)
I would appreciate any help in solving this problem.
Some background information for convenience: We recall that for two projective plane curves $F$ and $G$ with no common component, we have defined $F\bullet G$ to be the formal sum
$$F\bullet G = \sum_{P\in \mathbb{P}^2} I(P,F\cap G) P. $$
where $I(P, F\cap G)$ is the intersection number of $F$ and $G$ at $P.$ The intersection number is $1$ if and only if $F$ and $G$ meet at $P$ transversally, which means that $P$ is a simple point on both $F$ and $G,$ and the tangent to $F$ at $P$ is distinct from the tangent to $G$ at $P.$ The first condition in our problem means that $L$ intersects $C$ transversally at $3$ distinct points $P_i.$ The simplest version of this problem is when $P_i\neq Q_i, i=1,2,3$ (i.e. when each tangent $L_i$ meets $C$ again at another point $\neq P_i$). It then says: Show that the three points where the tangents again intersect $C$ lie on a line.
I'm not sure how to proceed, many things seem like they could be relevant (Max Noether's theorem has just been covered and Bezouts right before that), especially this one:
Proposition 2. Let $C, C'$ be cubics, $C'\bullet C = \sum_{i=1}^9 P_i;$ suppose $Q$ is a conic, and $Q\bullet C = \sum_{i=1}^6 P_i.$ Assume $P_1,\cdots, P_6$ are simple points on $C.$ Then $P_7, P_8, P_9$ lie on a straight line.
Can someone please help me with this question?
|
In randomized-controlled trials, interim analyses are often planned for possible early termination of the trial for claiming superiority or futility of a new therapy. Such formal interim analyses are performed, closely following the specifications in the study protocol to maintain the overall type I error rate at a nominal level. While unblinding is necessary to conduct the formal interim analysis in blinded studies, data before unblinding also have information, to some extent, about the potential treatment difference between the groups.
We develop a blinded data monitoring tool for the trials measuring binary outcome (especially considering the response rate). We assume that one interim analysis is planned for early termination for superiority or futility of the new treatment, and it can be possibly skipped when our tool suggests that early termination is unlikely based on the blinded data.
The functions in the IAbin package give an N-T plot, with which the investigators may decide whether or not to skip some of the planned interim analyses when the interim result at that time point is unlikely to support early termination of the trial for superiority or futility of the new treatment. This package contains two functions
plotNT.sup and plotNT.fut to plot the N-T plane, which is used for expecting the early termination for superiority and futility, respectively.
The arguments in the functions are determined in the design stage of a clinical trial, in which the endpoint is binary, say, response or non-response. In particular, it is assumed that the response rate of the control therapy (p0) is chosen based on sufficient knowledge from the clinical experts. The function is used for a trial expecting an interim analysis. Here is an example of the settings of the arguments.
p0 = 0.6
M = 100
q = 0.5
alpha1 = 0.01
cp1 = 0.2
p0 is a value of the expected response rate in the control therapy. If several possible values are considered for the control efficacy,
p0 can be a vector, i.e.,
p0 = c(0.2, 0.3, 0.4).
M is an expected total sample size in both new therapy and control arms.
q is an allocation ratio of the new therapy arm \((0 < q < 1)\).
alpha1 is a critical alpha at an interim analysis.
cp1 is a critical conditional power at an interim analysis.
Other arguments for graphics are also included in the functions. The default setting is
xlab = "N: Number of patients at interim analysis"
ylab = "T: Number of responders at interim analysis"
col = "blue"
main = "N-T plot"
lty = 1
If
p0 is set as a vector,
col should be a vector of the same length, i.e.,
col = c("blue", "red", "green"). The default value of
lty is set to 1 and 2 for
plotNT.sup and
plotNT.fut, respectively.
The function
plotNT.sup automatically draws an N-T plot for early stopping for superiority. If the length of
p0 is greater than one, it draws several lines on a plane. Monitoring the total number of patients and the total number of responders with blindness maintained, if the observed \((N, T)\) point is over the drawn line, we expect that the response rate of the new therapy would be significantly higher than the control, with the significance level
alpha1. Otherwise, the result of the test unlikely supports early termination of the trial for superiority of the new treatment, and then the investigators will skip the interim analysis.
When printing the function
plotNT.sup, it gives a matrix with columns named
N,
T,
Z_score and
P_val. The first and second columns correspond to the drawn N-T plot. The third and fourth are the expected Z scores and two-sided p-values at the corresponding (N, T), calculated via
\(Z = \frac{\hat{p}_1 - p_0}{\sqrt{\widehat{Var}(\hat{p}_1)}}\) and p-value \(= 2(1 - \Phi(Z))\),
where \(\hat{p}_1 = (T - N(1 - q) p_0)/(Nq)\), \(\widehat{Var}(\hat{p}_1) = N \hat{r} (1 - \hat{r})/(Nq)^2\), which is a variance estimator of \(\hat{p}_1\), \(\hat{r} = q\hat{p}_1 + (1-q)p_0\), and \(\Phi\) is the cumulative distribution function of the standard normal distribution. If the length of
p0 is greater than one, it gives a list.
NT_s = plotNT.sup(p0, M, q, alpha1)
print(head(NT_s))
## N T Z_score P_val
## [1,] 2 2 Inf 0.0000000000
## [2,] 4 4 Inf 0.0000000000
## [3,] 6 6 Inf 0.0000000000
## [4,] 8 8 Inf 0.0000000000
## [5,] 10 9 3.162278 0.0015654023
## [6,] 12 11 3.968971 0.0000721838
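As a cross-check of the formula above (a standalone sketch in Python, not part of the IAbin package), the first non-degenerate data row of the table, N = 10 and T = 9 with p0 = 0.6 and q = 0.5, reproduces the printed Z score and p-value:

from math import sqrt, erf

def z_and_pvalue(N, T, p0, q):
    # back out the blinded estimate of the new-therapy response rate
    p1_hat = (T - N * (1 - q) * p0) / (N * q)
    r_hat = q * p1_hat + (1 - q) * p0
    var_p1 = N * r_hat * (1 - r_hat) / (N * q) ** 2   # variance estimator of p1_hat
    z = (p1_hat - p0) / sqrt(var_p1)
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # 2 * (1 - Phi(z))
    return z, p_value

print(z_and_pvalue(N=10, T=9, p0=0.6, q=0.5))          # (3.162..., 0.00156...)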
The function
plotNT.fut draws an N-T plot for early stopping for futility. If the observed \((N, T)\) point is under the drawn line, we expect that the conditional power at that time would be less than
cp1, and then unblinded analysis will be recommended for early stopping for futility of the new therapy.
When printing the function
plotNT.fut, it gives a matrix with columns named
N,
T,
Z_score and
CP.
CP is the expected conditional power at the corresponding (N, T).
NT_f = plotNT.fut(p0, M, q, alpha1, cp1)
If the study investigators want to know the possibility of early stopping for both superiority and futility, overlap the two graphs via
par(new = T).
NT_s3 = plotNT.sup(p0 = c(0.2, 0.4, 0.6), M, q, alpha1, col = c("green", "red", "blue"))
par(new = T)
NT_f3 = plotNT.fut(p0 = c(0.2, 0.4, 0.6), M, q, alpha1, cp1, col = c("green", "red", "blue"))
The tool serves as a useful reference when interpreting the summary of the blinded data during the course of the trial. It can be used to determine whether or not the formal unblinded interim analysis should be conducted, without losing the integrity of the study or spending any alpha. This tool will potentially save study resources/budget by avoiding unnecessary interim analyses.
|
I'm studying from Reitz's
Foundations of Electromagnetic Theory and trying to understand how the results obtained under electrostatic conditions are affected when there is a current flowing through material.
I understand that it isn't true anymore that the electric field inside a conductor is zero if there is a current going through the conductor. And I think (please correct me if I'm mistaken) that because of that, now the electric field doesn't need to be perpendicular to the surface of the conductor. So I guess this may affect the boundary conditions, but how are they affected?
Specifically, how do the electric displacement and the polarization behave when there is a current flowing from one material with some permittivity $\epsilon_1$ and conductivity $g_1$ to another one with permittivity $\epsilon_2$ and conductivity $g_2$? Is it still true that a free charge density is accumulated at the surface between both materials?
I mean, is this boundary condition: $$(\vec{D_1}-\vec{D_2}) \hat n =\sigma$$ still valid?
I'm quite confused, because I don't get what it would mean to have a surface charge when all the charge is actually flowing.
|
I was helping my kid study for a precalculus examination and looking at her old tests from the year, and came across a question about vectors. Below is the typed up version of the question and her answer; an image from the actual test can be found at the end.
6.A vector with tail at the origin has a magnitude of $5$ and a direction angle of $\pi/3$. When trying to find the coordinates of the head, Nick says, "That's easy. It's just $r \cos \theta$ and $r \sin \theta$; we've been doing this forever now!" Olathe counters, "No, we should use the idea of components to find out how much our vector goes along each axis."
Who's correct here? Give a thorough, mathematically sound, explanation!
The student writes:
Nick is correct. As long as we have the magnitude and directional angle, and know that the tail is located at the origin, the method he proposed is suitable. (
Crossed out: Olathe is correct. Even though we have the magnitude and directional $\theta$ we can't) Components are unnecessary b/c we know the tail is at the origin, we know how far out the vector goes, and we know the $\theta$ it makes with the $x$-axis.
I'm not sure how thorough this needed to be, but I agree with the kid. Or I agree that both approaches are equivalent - maybe that was the point. Would any high school math educators care to weigh in?
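For what it is worth, a two-line numerical check of the computation both students would end up doing (my own addition, not from the test):

import math

r, theta = 5, math.pi / 3
head = (r * math.cos(theta), r * math.sin(theta))  # x- and y-components of the vector
print(head)                                        # (2.5, 4.3301...), i.e. (5/2, 5*sqrt(3)/2)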
|
I'm having trouble understanding the proof given in Morgan's
The Seiberg-Witten Equations that every 4-manifold $X$ admits a $Spin^c$ structure (Lemma 3.1.2). One can easily see from the exact sequence:\begin{equation*}H^1(X;Spin^c) \to H^1(X; SO(n)) \oplus H^1(X;\mathbb{Z}) \xrightarrow{c_1+w_2} H^2(X;\mathbb{Z}_2)\end{equation*}that a $Spin^c$ structure will exist iff $w_2(TX)$ lifts to an integral class, which we can check using Bockstein homomorphisms. After that, I'm lost; I'm not sure if these are theorems, or whether they are supposed to be obvious: In what sense is every $\mathbb{Z}/2^k \mathbb{Z}$ 3-class represented by a mapping from a smooth $\mathbb{Z}/2^k \mathbb{Z}$-manifold into $X$? Why are integral 2-classes that represent torsion elements necessarily represented by embedded oriented surfaces?
EDIT: Since the proof from Morgan's book is quite short, I may as well write out the whole thing here:
"We need only see that $w_2(X)$ lifts to an integral class $c \in H^2(X;\mathbb{Z})$ in order to prove the existence of a $Spin^c$ lifting. But for any class $x \in H_2(X;\mathbb{Z}/2 \mathbb{Z})$ the value of $w_2(X)$ on $x$ is given as follows: represent $x$ as an embedded (possibly non-orientable) closed surface in $X$ and take the self-intersection of this surface modulo two. To see that $w_2(X)$ lifts to an integral class, we must see that its integral Bockstein $\delta w_2(X)$ is zero. But this torsion integral class is zero if and only if it evaluates trivially on every $\mathbb{Z}/2^k \mathbb{Z}$ class of dimension three. Any such class is represented by a mapping of a smooth $\mathbb{Z}/2^k \mathbb{Z}$-manifold into $X$. The value of $\delta w_2(X)$ on such a class is equal to the value of $w_2(X)$ on the Bockstein of this $\mathbb{Z}/2^k\mathbb{Z}$-manifold. Thus, we need only see that $w_2(X)$ vanishes on integral classes which represent torsion elements in $H_2(X;\mathbb{Z})$. But this is clear, any such class is represented by a smoothly embedded oriented surface with zero self-intersection".
I suppose what I'm really asking is which statements in this proof are non-trivial theorems about the topology of 4-manifolds, and which ones are supposed to be obvious?
Sorry, I don't seem to be able to comment, so I'll just say here: Ryan Budney, I hope this makes the question less vague, and Anton Fetisov, yes, there are other proofs of this fact that I do understand, but I'm specifically trying to understand this proof, because it seems very slick.
|
According to the kinetic theory of gas,
– Gases are composed of very small molecules and their number of molecules is very large.
– These molecules are elastic. – They are of negligible size compared to their container. – Their thermal motions are random.
To begin, let’s visualize a rectangular box with length L and ends of areas \(A_1\) and \(A_2\). There is a single molecule with speed \(v_x\) traveling left and right to the ends of the box, colliding with the end walls.
The time between collisions with the wall is the distance of travel between wall collisions divided by the speed.
1. \(t=\frac{2L}{v_x} \)
The frequency of collisions with the wall in collisions per second is
2. \(f=\frac{1}{t}=\frac{1}{2L/v_x}=\frac{v_x}{2L}\)
According to Newton, force is the time rate of change of the momentum
3. \(F=\frac{dp}{dt}=ma\)
The momentum change is equal to the momentum after collision minus the momentum before collision. Since we consider the momentum after collision to be \(mv_x\), the momentum before collision should be in the opposite direction and therefore equal to \(-mv_x\).
4. \(\Delta{p}=mv_x-(-mv_x)=2mv_x\)
According to equation #3, force is the change in momentum \(\Delta{p}\) divided by the change in time \(\Delta {t}\). To get an equation for the average force \(\overline{F}\) in terms of the particle velocity \(v_x\), we take the change in momentum \(\Delta{p}\) multiplied by the frequency \(f\) from equation #2.
5. \(\overline{F}=\Delta{p}(f)=2mv_x(\frac{v_x}{2L})=\frac{mv_x^2}{L}\)
The pressure, P, exerted by a single molecule is the average force per unit area, A. Also V=AL which is the volume of the rectangular box.
6. \(P_{1\:Molecule}=\frac{\overline{F}}{A}=(\frac{mv_x^2}{L})/A=\frac{mv_x^2}{LA}=\frac{mv_x^2}{V}\)
Let’s say that we have N molecules of gas traveling on the x-axis. The pressure will be
7. \(P_{N\:Molecules}=\frac{m}{V}(v_{x_1}^2+v_{x_2}^2+v_{x_3}^2….+v_{x_N}^2)=\sum_{a=0}^{N}\frac{mv_{x_a}^2}{V}\)
To simplify the situation we will take the
mean square speed of N number of molecules instead of summing up individual molecules. Therefore, equation #7 will become
8. \(P_{N\:Particles}=\frac{Nm\overline{v_x^2}}{V}\)
Earlier we tried to simplify the situation by only considering a molecule with mass m traveling on the x axis. However, the real world is much more complicated than that. To make a more accurate derivation we need to account for all 3 possible components of the particle’s velocity, \(v_x\), \(v_y\) and \(v_z\).
9. \(\overline{v^2}=\overline{v^2_x}+\overline{v^2_y}+\overline{v^2_z}\)
Since there are a large number of molecules we can assume that there are equal numbers of molecules moving in each of co-ordinate directions.
10. \(\overline{v^2_x}=\overline{v^2_y}=\overline{v^2_z}\)
Because the molecules are free to move in three dimensions, they will hit the walls in one of the three dimensions one third as often. Our final pressure equation becomes
11. \(P=\frac{Nm\overline{v^2}}{3V}\)
However, to simplify the equation further, we define the temperature, T, as a measure of the thermal motion of gas particles, because temperature is much easier to measure than the speed of the particle. The only energy involved in this model is kinetic energy, and this kinetic energy is proportional to the temperature T.
12. \(E_{kinetic}=\frac{mv^2}{2}\propto{T}\)
To combine equations #11 and #12 we solve the kinetic energy equation #12 for \(mv^2\).
13. \(mv^2=2E_{kinetic}\Rightarrow\frac{mv^2}{3}=\frac{2E_{kinetic}}{3}\)
Since the temperature can be obtained easily with a simple everyday measurement like a thermometer, we will now replace the result of the kinetic equation #13 with a constant k times the temperature, T. Again, since T is proportional to the kinetic energy, it is logical to relate kT to the kinetic energy E. The constant k, however, currently remains unknown.
14. \(kT=\frac{mv^2}{3}=\frac{2E_{kinetic}}{3}\)
Combining equation #14 with #11, we get:
15. \(P=\frac{N}{V}\frac{m\overline{v^2}}{3}=\frac{N}{V}\frac{2E_{kinetic}}{3}=\frac{N}{V}kT=\frac{NkT}{V}\)
Because a molecule is too small and therefore impractical to count individually, we will take the number of molecules, N, and divide it by Avogadro’s number, \(N_A = 6.0221 \times 10^{23}\)/mol, to get n (the number of moles)
16. \(n=\frac{N}{N_a}\)
Since N is divided by \(N_A\), k must be multiplied by \(N_A\) to preserve the original equation. Therefore, the constant R is created.
17. \(R=N_ak\)
Now we can reach the final equation by replacing N (number of molecules) with n (number of moles) and k with R (that is, Nk with nR).
18. \(P=\frac{nRT}{V}\Rightarrow{PV=nRT}\)
Calculation of R & k
According to numerous tests and observations, one mole of gas in a 22.4 liter vessel at 273 K exerts a pressure of 1.00 atmosphere (atm). From the ideal gas equation above:
A. \(R = \frac{PV}{nT}\)
B. \(R=\frac{(1 atm)(22.4L)}{(1 mol)(273K)}=0.082\frac{Latm}{molK}\)
C. \(k=\frac{R}{N_A}\Rightarrow{k=\frac{0.082\,Latm/molK}{6.0221 \times 10^{23}/mol}}\approx 1.36 \times 10^{-25}\frac{Latm}{K}\)
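A quick numerical restatement of steps A–C (plain arithmetic, using only the values already quoted above):

P, V, n, T = 1.0, 22.4, 1.0, 273.0  # atm, liters, moles, kelvin
N_A = 6.0221e23                     # Avogadro's number, 1/mol

R = P * V / (n * T)                 # ideal gas constant in L*atm/(mol*K)
k = R / N_A                         # Boltzmann's constant in L*atm/K
print(R)                            # ~0.082
print(k)                            # ~1.36e-25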
|
A Gentle Introduction to AI
Get a simple introduction to what Artificial Intelligence is and why it should be important to you, with a specific focus on Machine Learning.
Looking at the latest Google and Apple conventions, it is clear to all: If in the past years the main buzzwords in the information technology field were IoT and Big Data, the catch-all word of this year is without a doubt Machine Learning. What does this word mean exactly? Are we talking about Artificial Intelligence? Is somebody trying to build a Skynet to ruin the world? Will machines steal my job in the future? “Know your enemy,” they say. So let’s try to understand what Machine Learning really means.
Definitions
The task we are going to accomplish is not simple. So, let’s start from the beginning: What does
Machine Learning refer to?
Machine Learning is a discipline of Artificial Intelligence that is responsible for studying and developing algorithms that allow machines to learn information. In detail, the learning process is done using an inductive approach that tries to extract rules and behavioral patterns from huge amounts of data.
The type of information used by the algorithms (learners) to learn identifies the following categorization of Machine Learning algorithms:
Supervised learning. The learner uses a set of given couples (input, output) to learn a function \(f\) that maps input to output. The above couples are called supervisions and, using them, the learner tries to find a function that approximately behaves like \(f\). Supervisions must be available at the beginning of the learning process.
Unsupervised learning. In this kind of learning process, the function \(f\) is learned by the learner using solely the given inputs. There is no a priori information relative to the output of the function \(f\). In this type of learning process, the learner tries to approximate the probability distribution of the given inputs.
Reinforcement learning. Given an environment that an agent can interact with and given a set of positive and negative returns that the environment can give to the agent, the objective, in this case, is to find a policy of action of the agent that maximizes the values of the above returns.
Let's look at some examples. We have some food, i.e. pasta, oranges, apples, and chocolate. Using a supervised approach, we give to the learner the whole set of foods, specifying which of these is a fruit and which is not. Using this information, the learner tries to understand what features fruits have. Given a new food, the algorithm will try to guess if it is a fruit or not.
Using an unsupervised approach, we do not know the type of each food. Our task is to group each piece with something that seems to be similar. The learner will try to build these groups (or clusters) looking at the information it has.
In reinforcement learning, the food is all scattered on the floor of a room. Each type of food smells in a different way and has a caloric intake. An agent that can recognize smells is free to move inside the room. It does not know the caloric intake of each food until it eats it. The algorithm will learn how to move inside the room trying to maximize the caloric intake of the eaten food.
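To make the supervised case concrete, here is a toy sketch (the two numeric features and their values are invented for illustration) in which a learner labels a new food as fruit or not by copying the label of its nearest supervised example:

# hypothetical features: (sweetness, water content)
train = {
    "orange":    (7.0, 0.87),
    "apple":     (6.0, 0.85),
    "pasta":     (1.0, 0.10),
    "chocolate": (9.0, 0.01),
}
labels = {"orange": "fruit", "apple": "fruit", "pasta": "not fruit", "chocolate": "not fruit"}

def predict(x):
    # 1-nearest-neighbour: copy the label of the closest supervised example
    nearest = min(train, key=lambda name: sum((a - b) ** 2 for a, b in zip(train[name], x)))
    return labels[nearest]

print(predict((6.5, 0.90)))  # something sweet and watery -> "fruit"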
Given the above definitions, we can now move on. In this post, I will focus on supervised learning.
Supervised Learning
We said that in this kind of learning we need a set of couples of inputs and outputs that we called
supervisions. Someone has to give us this information; otherwise, it is not possible to learn anything using this approach. We can call this “someone” the oracle.
Let’s try to define the process of learning now. We need a set of items we want to categorize in some way. Let’s call this set
\(X\) and the items belonging to it
\(x \in X\). We define a supervisor
\(\mathcal{S}\) that, given a
\(x \in X\), gives a label
\(\hat{y} \in \mathcal{Y}\). Finally, we call
\(\mathcal{H}\) a set of functions
\(h\) (a.k.a. hypothesis space) that maps an
\(x\) into a \(y\), such that \(y = h(x)\).
Then, the learning process can be viewed as the choice of a
\(h \in \mathcal{H}\) such that the differences between
\(y = h(x)\) and the labels chosen by the supervisor
\(\mathcal{S}\) are minimized. In other words, we want to minimize cases in which
\(y \neq \hat{y}\).
Just to be a little bit formal, supervisions
\(x\) belong to an unknown probability distribution
\(\mathcal{D}(x)\). Also, labels
\(\hat{y}\) given by the Supervisor belong to a conditional probability distribution
\(\mathcal{D}(\hat{y} \mid x)\). If you did not understand these last sentences, it does not matter: there is a world full of white unicorns out there.
Data Representation and Features Selection
So far, so good. We came up with a set of examples
\(x \in X\) that we want to classify in some way. How can we do that? The key concept is how we are going to represent our data, how we are going to explain to the computer how to treat this data.
A computer is simply a machine that is able to do some arithmetical operation over a representation of numbers, isn’t it? Then, we need to transform the examples
\(x \in X\) into something that a computer is able to understand. This phase is called features selection.
The number of features that characterize example
\(X\) is clearly infinite. Think of an apple: it has a color, a volume, a radius, a quality, an age…but also a number of molecules, atoms, and so on. Then, the first operation we need to do on
\(X\) is to choose some of these features, mapping them into a new space
\(\mathcal{X}\), called the features space.
The best choice for
\(\mathcal{X}\) is a space in which each feature is represented by a number. Doing so, an example
\(\mathbf{x} \in \mathcal{X}\) becomes a vector of numbers
\((x_1,\dots, x_n)\): every feature
\(x_i\) is a possible dimension in this space.
Let’s call
\(\phi : X \to \mathcal{X}\) the function that maps examples from the input space to the features space.
As an example, suppose we want to associate each email in a set with its author. This task is called authorship attribution. The first step we need to do is to map each email \(x\) to the corresponding vector \(\mathbf{x}\) in some features space. Obviously, the selection process of the features space is one of the core processes of machine learning: representing the examples with the wrong set of features could mean failing the entire learning process.
For our example, some possible features can be the following:
The number of words used in the email. The number of adjectives. The number of adverbs. The number of occurrences of each single punctuation character. And so on…
An email
\(x\) is represented in the features space with something like this:
\(\mathbf{x} = (34, 7, 10, 3, 6)\). As you can see, it’s a simple vector in
\(\mathbb{N}^5\)! This kind of representation opens up a lot of useful considerations that will complete our introduction to Machine Learning.
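A possible (deliberately naive) implementation of such a mapping \(\phi\) for emails, with toy word lists standing in for a real part-of-speech tagger:

import string

ADJECTIVES = {"good", "great", "bad", "late"}   # toy stand-ins for real POS tagging
ADVERBS = {"very", "really", "quickly"}

def phi(email_text):
    # map an email to (words, adjectives, adverbs, commas, periods)
    words = [w.lower().strip(string.punctuation) for w in email_text.split()]
    return (
        len(words),
        sum(w in ADJECTIVES for w in words),
        sum(w in ADVERBS for w in words),
        email_text.count(","),
        email_text.count("."),
    )

print(phi("The very late report was really bad, sorry."))  # (8, 2, 2, 1, 1)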
Vector Similarity and the Concept of Distance
Given a Supervisor
\(\mathcal{S}\) that gives us supervisions
\((\mathbf{x}_1, \hat{y}_1), \dots, (\mathbf{x}_m, \hat{y}_m)\), the learning process is equivalent to finding the degree of similarity between a new example \(\mathbf{x}_{m+1}\) and all the previous ones.
Speaking of vectors in some numeric features space, the similarity concept is equal to the concept of distance between two vectors. Defining as
\(d: \mathcal{X} \times \mathcal{X} \to \mathbb{R}^+\) , the distance between vectors
\(\mathbf{x}\) and
\(\mathbf{z}\) , a good choice of distance between two vectors is the dot product or inner product.
We have just defined a good way to understand how vectors are related to each other inside the features space.
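Concretely (reusing the email vector from the previous section and a second, made-up one), the inner product — and its normalized cousin, the cosine — can be computed in one line each:

import numpy as np

x = np.array([34, 7, 10, 3, 6], dtype=float)  # the email vector from above
z = np.array([30, 8, 9, 2, 5], dtype=float)   # another, hypothetical, email

dot = x @ z                                             # inner product
cosine = dot / (np.linalg.norm(x) * np.linalg.norm(z))  # scale-invariant similarity
print(dot, cosine)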
Eureka! Let’s try to take a step beyond and to close the first circle of this introduction to Machine Learning.
It's All About Hyperplanes
Using the framework we just built, we can imagine our examples as vectors inside a \(d\)-dimensional features space \(\mathbb{R}^d\). If every vector is associated with a class label \(\hat{y}\), the classification problem is equivalent to finding a hyperplane that divides the vectors into different groups.
Ok, wait. I think I lost many of you in this last step. Let’s take a step back.
In the above image, a set of supervisions is represented in a two-dimensional features space, a.k.a.
\(\mathbb{R}^2\). Each vector
\(\mathbf{x}_i\) is colored either blue or orange. The colors represent the classes.
As you can see, using this features space, supervisions are naturally distributed in two different sets inside
\(\mathbb{R}^2\). So, a learner in such space is represented by a line
\(\mathbf{w}\) that divides the two sets of supervisions — orange-colored and blue-colored. Orange-colored supervisions are above
\(\mathbf{w}\); blue-colored supervisions are below
\(\mathbf{w}\).
For this problem, the hypothesis space is the set of all hyperplanes in
\(\mathbb{R}^2\). The learning process tries to discover the hyperplane
\(h\) that separates the supervisions according to their classes.
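As an illustration of one concrete algorithm for searching this hypothesis space (my own sketch; the article does not commit to a specific method), the classic perceptron rule nudges a candidate hyperplane whenever it misclassifies a supervision.

import numpy as np

def perceptron(X, y, epochs=100, lr=0.1):
    # X: (m, d) supervisions in the features space, y: labels in {-1, +1}.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:      # misclassified: nudge the hyperplane
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Tiny linearly separable example in R^2 (blue = -1, orange = +1).
X = np.array([[1.0, 1.0], [2.0, 1.5], [4.0, 4.5], [5.0, 4.0]])
y = np.array([-1, -1, 1, 1])
w, b = perceptron(X, y)
print(np.sign(X @ w + b))                   # should reproduce y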
Conclusion
We just reduced the problem of supervised learning to a simpler one: the selection of a hyperplane in
\(\mathbb{R}^d\). We can identify different types of supervised learning (e.g. neural networks, support vector machines, and so on) that differ by the algorithm that is used to find the hyperplane.
As we saw, the selection process of the features space is a core process. If we choose a features space in which the supervisions are not separable, the learning process will become harder and an optimal solution cannot exist.
In the next part of this series of posts, we are going to explore some other interesting features of Machine Learning, i.e. training, validation and testing processes, generalization, overfitting, and many others.
Published at DZone with permission of Riccardo Cardin , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
|
Latest revision as of 10:34, 13 September 2016

Siril processing tutorial
* Convert your images in the FITS format Siril uses (image import)
* Work on a sequence of converted images
* Pre-processing images
* Registration (Global star alignment)
* → Stacking

Stacking
The final step to do with Siril is to stack the images. Go to the "stacking" tab, indicate if you want to stack all images, only selected images or the best images regarding the value of FWHM previously computed. Siril proposes several algorithms for stacking computation.
Sum Stacking
This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images. Because of the lack of normalisation, this method should only be used for planetary processing.
Average Stacking With Rejection
* Percentile Clipping: this is a one-step rejection algorithm ideal for small sets of data (up to 6 images).
* Sigma Clipping: this is an iterative algorithm which will reject pixels whose distance from the median is farther than two given values in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]).
* Median Sigma Clipping: this is the same algorithm except that the rejected pixels are replaced by the median value of the stack.
* Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method but it uses an algorithm based on Huber's work [1] [2].
* Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ([math]y=ax+b[/math]) of the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations.
These algorithms are very efficient to remove satellite/plane tracks.
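To make the rejection idea concrete, here is a generic numpy sketch of average stacking with sigma clipping; it is not Siril's implementation (Siril's algorithms are iterative and more careful), just an illustration of why deviant pixels such as satellite trails get rejected.

import numpy as np

def sigma_clip_stack(frames, sigma_low=4.0, sigma_high=3.0):
    # frames: array of shape (n_frames, height, width)
    stack = np.asarray(frames, dtype=float)
    med = np.median(stack, axis=0)
    std = np.std(stack, axis=0)
    keep = (stack >= med - sigma_low * std) & (stack <= med + sigma_high * std)
    # Average only the retained pixels (guard against a fully rejected stack position).
    return np.sum(stack * keep, axis=0) / np.maximum(keep.sum(axis=0), 1)

# Toy example: 12 noisy frames, one of which contains a bright "satellite trail".
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 5.0, size=(12, 64, 64))
frames[3, 30, :] += 500.0          # the trail only exists in frame 3
result = sigma_clip_stack(frames)
print(result[30, :5])              # values stay near 100: the trail was rejected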
Median Stacking
This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math].
Pixel Maximum Stacking
This algorithm is mainly used to construct long exposure star-trails images. Pixels of the image are replaced by pixels at the same coordinates if intensity is greater.
Pixel Minimum Stacking
This algorithm is mainly used for cropping sequence by removing black borders. Pixels of the image are replaced by pixels at the same coordinates if intensity is lower.
In the case of NGC7635 sequence, we first used the "Winsorized Sigma Clipping" algorithm in "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=3[/math]).
The output console thus gives the following result:
14:33:06: Pixel rejection in channel #0: 0.181% - 1.184%
14:33:06: Pixel rejection in channel #1: 0.151% - 1.176%
14:33:06: Pixel rejection in channel #2: 0.111% - 1.118%
14:33:06: Integration of 12 images:
14:33:06: Pixel combination ......... average
14:33:06: Normalization ............. additive + scaling
14:33:06: Pixel rejection ........... Winsorized sigma clipping
14:33:06: Rejection parameters ...... low=4.000 high=3.000
14:33:07: Saving FITS: file NGC7635.fit, 3 layer(s), 4290x2856 pixels
14:33:07: Execution time: 9.98 s.
14:33:07: Background noise value (channel: #0): 9.538 (1.455e-04)
14:33:07: Background noise value (channel: #1): 5.839 (8.909e-05)
14:33:07: Background noise value (channel: #2): 5.552 (8.471e-05)
After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust levels if you want to see it better, or use the different display mode. In our example the file is the stack result of all files, i.e., 12 files.
The images above picture the result in Siril using the Auto-Stretch rendering mode. Note the improvement of the signal-to-noise ratio with respect to the result given for one frame in the previous step (take a look at the sigma value). The measured increase in SNR is [math]21/5.1 = 4.11[/math], to be compared with the theoretical [math]\sqrt{12} = 3.46[/math], and you can try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math].
Now the processing of the image should start, with cropping, background extraction (to remove the gradient), and some other processes to enhance your image. To see the processes available in Siril please visit this page. Here is an example of what you can get with Siril:
[1] Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley
[2] Juan Conejero, ImageIntegration, Pixinsight Tutorial
|
Search
Now showing items 1-10 of 18
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
|
I would like to solve the following exercise but there is something I am not clear about:
Let $A$ be the Banach algebra of $C^1([0,1])$ endowed with the norm $\|f\|=\|f\|_\infty + \|f'\|_\infty$. Show that the Gelfand representation is not surjective.
I believe the Gelfand representation is the map $f \mapsto \widehat{f}$ where $\widehat{f}$ is the map $\Omega (A) \to \mathbb C$ , $\tau \mapsto \tau (f)$.
Is this so?
If it is, then I am not clear how it is possible that not every map $\Omega (A) \to \mathbb C$ is an evaluation map (this is, as I understand, what the exercise is saying).
Or can one use a non zero constant function $\Omega (A) \to \mathbb C$ as a counterexample? For some reason I think maps $\Omega (A) \to \mathbb C$ in this exercise are understood to be linear.
|
On Constructing Functions, Part 1
Given a sequence of real-valued functions $\{f_n\}$, the phrase, "$f_n$ converges to a function $f$" can mean a few things:
$f_n$ converges uniformly $f_n$ converges pointwise $f_n$ converges almost everywhere (a.e.) $f_n$ converges in $L^1$ (set of Lebesgue integrable functions) and so on...
Other factors come into play if the $f_n$ are required to be continuous, defined on a compact set, integrable, etc.. So since I do
not have the memory of an elephant (whatever that phrase means...), I've decided to keep a list of different sequences that converge (or don't converge) to different functions in different ways. With each example I'll also include a little (and hopefully) intuitive explanation for why. Having these sequences close at hand is especially useful when analyzing the behavior of certain functions or constructing counterexamples.
The first sequence we'll look at is one which converges almost everywhere, but does not converge in $L^1$ (the set of Lebesgue integrable functions).
Also in this series: Example 2: converges uniformly but not in $L^1$ Example 3: converges in $L^1$ but not uniformly Example 4: $f_n$ are integrable and converge uniformly to $f$, yet $f$ is not integrable Example 5: converges pointwise but not in $L^1$ Example 6: converges in $L^1$ but does not converge anywhere
Example 1 A sequence of functions $\{f_n:\mathbb{R}\to\mathbb{R}\}$ which converges to 0 almost everywhere but does not converge to 0 in $L^1$.
This works because: Recall that to say $f_n\to 0$ almost everywhere means $f_n\to 0$ pointwise on $\mathbb{R}$ except for a set of measure 0. Here, the set of measure zero is the singleton set $\{0\}$ (at $x=0$, $f_n(x)=n$ and we can't make this less than $\epsilon$ for any $\epsilon >0$). So $f_n$ converges to 0 pointwise on $(0,1]$. This holds because if $x< 0$ or $x>1$ then $f_n(x)=0$ for all $n$. Otherwise, if $x\in(0,1]$, we can choose $n$ appropriately: The details: Let $\epsilon>0$ and $x\in (0,1]$ and choose $N\in \mathbb{N}$ so that $N>\frac{1}{x}$. Then whenever $n>N$, we have $n>\frac{1}{x}$ which implies $x>\frac{1}{n}$ and so $f_n(x)=0$. Hence $|f_n(x)-0|=0< \epsilon$.
Further*, $f_n\not\to 0$ in $L^1$ since $$\int_{\mathbb{R}}|f_n|=\int_{[0,\frac{1}{n}]}n=n\lambda([0,\frac{1}{n}])=1.$$
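As a quick numerical aside (my own sketch; it assumes the sequence pictured above is $f_n = n\chi_{[0,\frac{1}{n}]}$, which is what the computation uses), one can watch the pointwise values die off while the integrals stay pinned at 1:

import numpy as np

def f(n, x):
    # f_n = n on [0, 1/n], 0 elsewhere (the assumed definition of the sequence)
    return np.where((x >= 0) & (x <= 1.0 / n), float(n), 0.0)

x = np.linspace(-1, 2, 300001)
dx = x[1] - x[0]
for n in (10, 100, 1000):
    # pointwise value at x = 0.5 is eventually 0; the Riemann sum stays approximately 1
    print(n, f(n, 0.5), np.sum(f(n, x)) * dx)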
Remark: Notice that Egoroff's theorem applies here! We just proved that $f_n\to 0$ pointwise a.e. on $\mathbb{R}$, but Egoroff says that we can actually get uniform convergence a.e. on a bounded subset of $\mathbb{R}$, say $(0,1]$.
In particular for each $\epsilon >0$ we are guaranteed the existence of a subset $E\subset (0,1]$ such that $f_n\to 0$ uniformly and $\lambda((0,1]\smallsetminus E)<\epsilon$. In fact, it should be clear that that subset must be something like $(\frac{\epsilon}{2},1]$ (the "zero region" in the graph above). Then no matter where $x$ is in $(0,1]$, we can always find $n$ large enough - namely all $n$ which satisfy $\frac{1}{n}<\frac{\epsilon}{2}$ - so that $f_n(x)=0$, i.e. $f_n\to f$ uniformly. And indeed, $\lambda((0,1]\smallsetminus (\frac{\epsilon}{2},1])=\epsilon/2<\epsilon$ as claimed.
On the notation above: For a measurable set $X\subset \mathbb{R}$, denote the set of all Lebesgue integrable functions $f:X\to\mathbb{R}$ by $L^1(X)$. Then a sequence of functions $\{f_n\}$ is said to
converge in $L^1$ to a function $f$ if $\displaystyle{\lim_{n\to\infty}}\int|f_n-f|=0$.
|
De Bruijn-Newman constant
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
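As a quick illustrative aside (not part of the original wiki page), the integral definition above can be evaluated numerically by truncating the series for [math]\Phi[/math] and the range of integration; the term count and cutoff below are ad-hoc choices, not tuned values.

import numpy as np
from scipy.integrate import quad

def Phi(u, n_terms=20):
    # Truncation of the super-exponentially decaying series defining Phi(u).
    n = np.arange(1, n_terms + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u)) * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, z, cutoff=6.0):
    # H_t(z) = int_0^infty e^{t u^2} Phi(u) cos(z u) du, truncated at `cutoff`.
    val, _ = quad(lambda u: np.exp(t * u**2) * Phi(u) * np.cos(z * u), 0.0, cutoff, limit=400)
    return val

# Sanity checks: H_0 is positive at z = 0, while 1/2 + iz/2 is the first nontrivial
# zeta zero when z = 2 * 14.1347..., so H_0 should nearly vanish there.
print(H(0.0, 0.0))
print(H(0.0, 2 * 14.134725141734694))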
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
The
Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients: Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math]. Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-zero whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math]. Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math]. [math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis,
all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
where the dependence on [math]t[/math] has been omitted for brevity.
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k|(t)} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
See asymptotics of H_t for asymptotics of the function [math]H_t[/math].
Threads Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018. Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018. Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018. Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018. Other blog posts and online discussion Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017. The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018. Lehmer pairs and GUE, Terence Tao, Jan 20, 2018. A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018. Code and data Wikipedia and other references Bibliography [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226. [CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129. [G2004] Gourdon, Xavier (2004), The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height [KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306. Citeseer [N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251. [P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449-2467. [P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992. [RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914 [T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986. pdf
|
On Constructing Functions, Part 2
This post is the second example in an ongoing list of various sequences of functions which converge to different things in different ways.
Also in this series:
Example 1: converges almost everywhere but not in $L^1$
Example 3: converges in $L^1$ but not uniformly Example 4: $f_n$ are integrable and converge uniformly to $f$, yet $f$ is not integrable Example 5: converges pointwise but not in $L^1$ Example 6: converges in $L^1$ but does not converge anywhere
Example 2 A sequence of functions $\{f_n:\mathbb{R}\to\mathbb{R}\}$ which converges to 0 uniformly but does not converge to 0 in $L^1$. This works because: The sequence tends to 0 as $n\to \infty$ since the height of each function tends to 0 and the region where $f_n$ is taking on this decreasing height is tending towards all of $\mathbb{R}^+$ ($(0,n)$ as $n\to \infty$) (and it's already 0 on $\mathbb{R}^-\cup\{0\}$). The convergence is uniform because the number of times we have to keep "squishing" the rectangles until their height is less than $\epsilon$ does not depend on $x$. The details: Let $\epsilon>0$ and choose $N\in \mathbb{N}$ so that $N>\frac{1}{\epsilon}$ and let $n>N$. Fix $x\in \mathbb{R}$. Case 1 ($x\leq 0$ or $x\geq n$): Then $f_n(x)=0$ and so $|f_n(x)-0|=0< \epsilon$. Case 2 ($0< x < n$): Then $f_n(x)=\frac{1}{n}$ and so $|f_n(x)-0|=\frac{1}{n}< \frac{1}{N}<\epsilon$
Finally, $f_n\not\to 0$ in $L^1$ since $$\int_{\mathbb{R}}|f_n|=\int_{(0,n)}\frac{1}{n}=\frac{1}{n}\lambda((0,n))=1.$$
Remark: Here's a question you could ask: wouldn't $f_n=n\chi_{(0,\frac{1}{n})}$ work here too? Both are tending to 0 everywhere and both involve rectangles of area 1. The answer is "kinda." The problem is that the convergence of $n\chi_{(0,\frac{1}{n})}$ is pointwise. BUT Egoroff's Theorem gives us a way to actually "make" it uniform! We've seen this before in a previous example.
On the notation above: For a measurable set $X\subset \mathbb{R}$, denote the set of all Lebesgue integrable functions $f:X\to\mathbb{R}$ by $L^1(X)$. Then a sequence of functions $\{f_n\}$ is said to
converge in $L^1$ to a function $f$ if $\displaystyle{\lim_{n\to\infty}}\int|f_n-f|=0$.
|
Next, we tackle the one-dimensional diffusion equation:$$ \frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial x^2} $$
First obvious difference is that — unlike our previous two simple equations — this equation has a second-order derivative. Before continuing we must learn how to discretize it.
Any second order derivative can be understood geometrically as the line tangent to the curve given by a first derivative. To discretize this second order derivative we can use a Central difference scheme, a special combination of the two methods presented earlier Forward Difference and Backward Difference.
Consider the Taylor series expansions of $ u_{i+1} $ and $ u_{i-1}$ around $ u_i $:$$ u_{i+1} = u_i + \Delta x \frac{\partial u}{\partial x}\biggr\rvert_i + \frac{\Delta x^2}{2} \frac{\partial^2 u}{\partial x^2}\biggr\rvert_i + \frac{\Delta x^3}{3!} \frac{\partial^3 u}{\partial x^3}\biggr\rvert_i + O(\Delta x^4) $$$$ u_{i-1} = u_i - \Delta x \frac{\partial u}{\partial x}\biggr\rvert_i + \frac{\Delta x^2}{2} \frac{\partial^2 u}{\partial x^2}\biggr\rvert_i - \frac{\Delta x^3}{3!} \frac{\partial^3 u}{\partial x^3}\biggr\rvert_i + O(\Delta x^4) $$
If we add together both these expansions, the odd ordered derivative terms will cancel out. If we neglect any terms of $O(\Delta x^4) $ or higher because they are so small we can rearrange the sum of these two expansions to solve for our second-derivative.$$ u_{i+1} + u_{i-1} = 2u_i + \Delta x^2 \frac{\partial^2 u}{\partial x^2}\biggr\rvert_i $$$$ \frac{\partial^2 u}{\partial x^2} = \frac{u_{i+1} - 2u_i + u_{i-1}}{\Delta x^2} $$
We now have all the tools necessary to write the discretized version of the diffusion equation in 1D:$$ \frac{u^{n+1}_i - u^n_i}{\Delta t} = \nu \frac{u^{n}_{i+1} -2u^n_i + u^n_{i-1}}{\Delta x^2} $$
With only one unknown $ u^{n+1}_i $ we can now rearrange this equation and obtain our final result.$$ u^{n+1}_i = u^n_i + \frac{\nu \Delta t}{\Delta x^2}(u^{n}_{i+1} - 2u^n_i + u^n_{i-1}) $$
The above equation will now allow us to write a program to advance our solution in time and perform our simulation. As before, we need initial conditions, and we shall continue to use the one we obtained in the previous two steps.
# Adding inline command to make plots appear under comments
import numpy as np
import matplotlib.pyplot as plt
import time, sys
%matplotlib inline

# Same initial conditions as in step 1 with courant number and viscosity added
grid_length = 2
grid_points = 41
dx = grid_length / (grid_points - 1)
nt = 500
nu = 0.3   # viscosity of the system
sig = 0.2  # courant number
dt = sig * dx**2 / nu  # Dynamically scaling dt based on grid size to ensure convergence

# Initializing the shape of the wave to the same one from step 1 and displaying it
u = np.ones(grid_points)
u[int(.5 / dx):int(1 / dx + 1)] = 2
plt.plot(np.linspace(0, grid_length, grid_points), u)
plt.ylim(1, 2)
plt.xlabel('x')
plt.ylabel('u')
plt.title('1D Diffusion t=0');
Now we apply the discretization as outlined above and check out the final results.
un = np.ones(grid_points)
for n in range(nt):  # Runs however many timesteps you set earlier
    un = u.copy()  # copy the u array to not overwrite values
    for i in range(1, grid_points - 1):
        u[i] = un[i] + nu * (dt/dx**2) * (un[i+1] - 2*un[i] + un[i-1])

plt.plot(np.linspace(0, grid_length, grid_points), u)
plt.ylim(1, 2)
plt.xlabel('x')
plt.ylabel('u')
plt.title('1D Diffusion t=10');
# Imports for animation and display within a jupyter notebook
from matplotlib import animation, rc
from IPython.display import HTML

# Generating the figure that will contain the animation
fig, ax = plt.subplots()
fig.set_size_inches(9, 5)
ax.set_xlim((0, grid_length))
ax.set_ylim((1, 2))
line, = ax.plot([], [], lw=2)
plt.xlabel('x')
plt.ylabel('u')
plt.title('1D Diffusion time history from t=0 to t=10');

# Resetting the U wave back to initial conditions
u = np.ones(grid_points)
u[int(.5 / dx):int(1 / dx + 1)] = 2

# Initialization function for funcanimation
def init():
    line.set_data([], [])
    return (line,)

# Main animation function, each frame represents a time step in our calculation
def animate(j):
    x = np.linspace(0, grid_length, grid_points)
    un = u.copy()  # copy the u array to not overwrite values
    for i in range(1, grid_points - 1):
        u[i] = un[i] + nu * (dt/dx**2) * (un[i+1] - 2*un[i] + un[i-1])
    line.set_data(x, u)
    return (line,)

anim = animation.FuncAnimation(fig, animate, init_func=init, frames=nt, interval=20)
# anim.save('../gifs/1dDiff.gif', writer='imagemagick', fps=60)
HTML(anim.to_jshtml())
This PDE simulates the behaviour of a quantity (a gas, energy, temperature, etc.) diffusing uniformly through an environment. Its evolution over a large period of time shows how it reaches an equilibrium where most of the values are spread out.
Next we will take a look at the Burgers' equation with a new initial condition and establish some boundary conditions.
|
Suppose we have the Lagrangian of the QCD axion below the PQ scale:
$$
L_{a} = \frac{1}{2}\partial_{\mu}a\partial^{\mu}a - C_{G}\frac{a}{f_{a}}G \wedge G + C_{\gamma}\frac{a}{f_{a}}F_{EM}\wedge F_{EM} + L_{SM}, $$
where $G$ denotes gluon field strength, $\wedge$ denotes contraction with Levi-Civita tensor, $L_{SM}$ is SM lagrangian, $C_{G/\gamma}$ denote constants which depend on properties of underlying theory.
People say that the axion acquires a mass during the QCD phase transition. To demonstrate this, they redefine the quark fields via a local chiral rotation,
$$
q \to e^{iC_{G}\gamma_{5}\frac{a}{6f_{a}} }q, \qquad (0)
$$
This eliminates the $aG\wedge G$ coupling, but produces "modified" mass and kinetic terms for the quark fields,
$$
L_{SM} \to L_{SM}' \supset L_{aq} = \bar{q}^{i}_{L}e^{iC_{G}\gamma_{5}\frac{a}{3f_{a}}}M_{ij}q^{j}_{R} + h.c. + L_{\text{kinetic}} \qquad (1) $$
At the QCD phase transition, $\bar{q}^{i}_{L}q^{j}_{R} = v\delta^{ij}$, so from Eq. $(1)$ we obtain the axion potential $V_{a} = m_{a}^2f_{a}^2\left( 1 - \cos\left(\frac{a}{f_{a}}\right)\right)$, which contains the axion mass term.
My question is the following. Redefinition $(0)$ also generates an axion interaction with the EW sector. The EW sector also has a spontaneous symmetry breaking scale. So why doesn't an axion mass arise at the EW phase transition? I.e., why is there no term
$$
L_{aHH} = e^{ic\frac{a}{f_{a}}}H^{\dagger}H + h.c., $$
which generates axion mass at EW scale?
|
De Bruijn-Newman constant
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
The
Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients: Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math]. Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-zero whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math]. Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math]. [math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis,
all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]). Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math].
Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis.
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
where the dependence on [math]t[/math] has been omitted for brevity.
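As a small illustration (my own sketch, not from this page or the Polymath15 code), the right-hand side of this ODE can be evaluated for a hypothetical finite list of zeroes; a real computation would of course involve the full zero set of [math]H_t[/math].

import numpy as np

def zero_velocities(zeros):
    # dz_j/dt = - sum_{k != j} 2 / (z_k - z_j), truncated to the given finite list.
    zeros = np.asarray(zeros, dtype=complex)
    vel = np.zeros_like(zeros)
    for j, zj in enumerate(zeros):
        diff = np.delete(zeros, j) - zj
        vel[j] = -np.sum(2.0 / diff)
    return vel

# Toy example: a complex-conjugate pair attracts itself toward the real axis,
# while well-separated real zeros barely move.
print(zero_velocities([10 + 0.1j, 10 - 0.1j, 20.0, 30.0]))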
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k|(t)} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
Threads Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018. Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018. Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018. Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018. Polymath 15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018. Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018. Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018. Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018. Polymath15, eighth thread: going below 0.28, Terence Tao, Apr 17, 2018. Polymath15, ninth thread: going below 0.22?, Terence Tao, May 4, 2018. Polymath15, tenth thread: numerics update, Rudolph Dwars and Kalpesh Muchhal, Sep 6, 2018. Polymath 15, eleventh thread: Writing up the results, and exploring negative t, Terence Tao, Dec 28, 2018. Other blog posts and online discussion Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017. The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018. Lehmer pairs and GUE, Terence Tao, Jan 20, 2018. A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018. Code and data Writeup
Here are the Polymath15 grant acknowledgments.
Test problem Zero-free regions
See Zero-free regions.
Wikipedia and other references Bibliography [A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation, Volume 80, Number 274, April 2011, Pages 995–1009. [B1994] W. G. C. Boyd, Gamma Function Asymptotics by an Extension of the Method of Steepest Descents, Proceedings: Mathematical and Physical Sciences, Vol. 447, No. 1931 (Dec. 8, 1994),pp. 609-630. [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226. [CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129. [G2004] Gourdon, Xavier (2004), The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height [KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306. Citeseer [N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251. [P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449-2467. [P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992. [RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914 [T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986. pdf
|
On Constructing Functions, Part 4
This post is the fourth example in an ongoing list of various sequences of functions which converge to different things in different ways.
Also in this series:
Example 1: converges almost everywhere but not in $L^1$
Example 2: converges uniformly but not in $L^1$ Example 3: converges in $L^1$ but not uniformly Example 5: converges pointwise but not in $L^1$ Example 6: converges in $L^1$ but does not converge anywhere
Example 4 A sequence of (Lebesgue) integrable functions $f_n:\mathbb{R}\to[0,\infty)$ so that $\{f_n\}$ converges to $f:\mathbb{R}\to[0,\infty)$ uniformly, yet $f$ is not (Lebesgue) integrable.
Our first observation is that "$f$ is not (Lebesgue) integrable" can mean one of two things: either $f$ is not measurable or $\int f=\infty$. The latter tends to be easier to think about, so we'll do just that. Now
what function do you know of such that when you "sum it up" you get infinity? How about something that behaves like the divergent harmonic series? Say, its continuous cousin $f(x)=\frac{1}{x}$? That should work since we know $$\int_{\mathbb{R}}\frac{1}{x}=\int_{1}^\infty \frac{1}{x}=\infty.$$Now we need to construct a sequence of integrable functions $\{f_n\}$ whose uniform limit is $\frac{1}{x}$. Let's think simple: think of drawing the graph of $f(x)$ one "integral piece" at a time. In other words, define:
This works because: It makes sense to define the $f_n$ as $f(x)=\frac{1}{x}$ "chunk by chunk" since this way the convergence is guaranteed to be uniform. Why? Because how far out we need to go in the sequence so that the difference $f(x)-f_n(x)$ is less than $\epsilon$ only depends on how small (or large) $\epsilon$ is. The location of $x$ doesn't matter!
Also notice we have to define $f_n(x)=0$ for all $x< 1$ to avoid the trouble spot $\ln(0)$ in the integral $\int f_n$. This also ensures that the area under each $f_n$ is finite, guaranteeing integrability.
The details: Each $f_n$ is integrable since for a fixed $n$,$$\int_{\mathbb{R}}f_n=\int_1^n\frac{1}{x}=\ln(n).$$To see $f_n\to f$ uniformly, let $\epsilon >0$ and choose $N$ so that $N>1/\epsilon$. Let $x\in \mathbb{R}$. If $x\leq 1$, any $n$ will do, so suppose $x>1$ and let $n>N$. If $1< x \leq n$, then we have $|f_n(x)-f(x)|=0< \epsilon$. And if $x> n$, then $$\big|\frac{1}{x}\chi_{[1,\infty)}(x)-\frac{1}{x}\chi_{[1,n]}(x)\big|=\big|\frac{1}{x}-0\big|=\frac{1}{x}<\frac{1}{n}< \frac{1}{N}< \epsilon.$$
Voila!
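For what it's worth, here is a small numerical check (my own sketch, assuming $f_n=\frac{1}{x}\chi_{[1,n]}$ and $f=\frac{1}{x}\chi_{[1,\infty)}$ as above): the integrals of the $f_n$ grow like $\ln(n)$, while the uniform error is only $\frac{1}{n}$.

import numpy as np

for n in (10, 100, 1000):
    x = np.linspace(1, 10 * n, 2_000_000)
    dx = x[1] - x[0]
    fn = np.where(x <= n, 1.0 / x, 0.0)   # f_n
    f = 1.0 / x                            # f (on the plotted window)
    # Riemann sum of f_n (approximately ln(n)), the exact ln(n), and sup|f - f_n| (approximately 1/n)
    print(n, np.sum(fn) * dx, np.log(n), np.max(np.abs(f - fn)))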
|
2019-07-18 17:03
Precision measurement of the $\Lambda_c^+$, $\Xi_c^+$ and $\Xi_c^0$ baryon lifetimes / LHCb Collaboration We report measurements of the lifetimes of the $\Lambda_c^+$, $\Xi_c^+$ and $\Xi_c^0$ charm baryons using proton-proton collision data at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3.0 fb$^{-1}$, collected by the LHCb experiment. The charm baryons are reconstructed through the decays $\Lambda_c^+\to pK^-\pi^+$, $\Xi_c^+\to pK^-\pi^+$ and $\Xi_c^0\to pK^-K^-\pi^+$, and originate from semimuonic decays of beauty baryons. [...] arXiv:1906.08350; LHCb-PAPER-2019-008; CERN-EP-2019-122; LHCB-PAPER-2019-008.- 2019-08-02 - 12 p. - Published in : Phys. Rev. D 100 (2019) 032001 Article from SCOAP3: PDF; Fulltext: PDF; Related data file(s): ZIP; Supplementary information: ZIP; Record dettagliato - Record simili 2019-07-02 10:45
Observation of the $\Lambda_b^0\rightarrow \chi_{c1}(3872)pK^-$ decay / LHCb Collaboration Using proton-proton collision data, collected with the LHCb detector and corresponding to 1.0, 2.0 and 1.9 fb$^{-1}$ of integrated luminosity at the centre-of-mass energies of 7, 8, and 13 TeV, respectively, the decay $\Lambda_b^0\to \chi_{c1}(3872)pK^-$ with $\chi_{c1}\to J/\psi\pi^+\pi^-$ is observed for the first time. The significance of the observed signal is in excess of seven standard deviations. [...] arXiv:1907.00954; CERN-EP-2019-131; LHCb-PAPER-2019-023; LHCB-PAPER-2019-023.- 2019-09-03 - 21 p. - Published in : JHEP 1909 (2019) 028 Article from SCOAP3: PDF; Fulltext: LHCb-PAPER-2019-023 - PDF; 1907.00954 - PDF; Related data file(s): ZIP; Supplementary information: ZIP; Record dettagliato - Record simili 2019-06-21 17:31
Updated measurement of time-dependent CP-violating observables in $B^0_s \to J/\psi K^+K^-$ decays / LHCb Collaboration The decay-time-dependent {\it CP} asymmetry in $B^{0}_{s}\to J/\psi K^{+} K^{-}$ decays is measured using proton-proton collision data, corresponding to an integrated luminosity of $1.9\,\mathrm{fb^{-1}}$, collected with the LHCb detector at a centre-of-mass energy of $13\,\mathrm{TeV}$ in 2015 and 2016. Using a sample of approximately 117\,000 signal decays with an invariant $K^{+} K^{-}$ mass in the vicinity of the $\phi(1020)$ resonance, the {\it CP}-violating phase $\phi_s$ is measured, along with the difference in decay widths of the light and heavy mass eigenstates of the $B^{0}_{s}$-$\overline{B}^{0}_{s}$ system, $\Delta\Gamma_s$. [...] arXiv:1906.08356; LHCb-PAPER-2019-013; CERN-EP-2019-108; LHCB-PAPER-2019-013.- Geneva : CERN, 2019-08-22 - 42 p. - Published in : Eur. Phys. J. C 79 (2019) 706 Article from SCOAP3: PDF; Fulltext: PDF; Related data file(s): ZIP; Supplementary information: ZIP; Record dettagliato - Record simili 2019-06-21 17:07
Measurement of $C\!P$ observables in the process $B^0 \to DK^{*0}$ with two- and four-body $D$ decays / LHCb Collaboration Measurements of $C\!P$ observables in $B^0 \to DK^{*0}$ decays are presented, where $D$ represents a superposition of $D^0$ and $\bar{D}^0$ states. The $D$ meson is reconstructed in the two-body final states $K^+\pi^-$, $\pi^+ K^-$, $K^+K^-$ and $\pi^+\pi^-$, and, for the first time, in the four-body final states $K^+\pi^-\pi^+\pi^-$, $\pi^+ K^-\pi^+\pi^-$ and $\pi^+\pi^-\pi^+\pi^-$. [...] arXiv:1906.08297; LHCb-PAPER-2019-021; CERN-EP-2019-111.- Geneva : CERN, 2019-08-07 - 30 p. - Published in : JHEP 1908 (2019) 041 Article from SCOAP3: PDF; Fulltext: PDF; Related data file(s): ZIP; Supplementary information: ZIP; Record dettagliato - Record simili 2019-05-16 14:53 Record dettagliato - Record simili 2019-05-16 14:31
Measurement of $CP$-violating and mixing-induced observables in $B_s^0 \to \phi\gamma$ decays / LHCb Collaboration A time-dependent analysis of the $B_s^0 \to \phi\gamma$ decay rate is performed to determine the $CP$-violating observables $S_{\phi\gamma}$ and $C_{\phi\gamma}$, and the mixing-induced observable $\mathcal{A}^{\Delta}_{\phi\gamma}$. The measurement is based on a sample of $pp$ collision data recorded with the LHCb detector, corresponding to an integrated luminosity of 3 fb$^{-1}$ at center-of-mass energies of 7 and 8 TeV. [...] arXiv:1905.06284; LHCb-PAPER-2019-015; CERN-EP-2019-077; LHCb-PAPER-2019-015; CERN-EP-2019-077; LHCB-PAPER-2019-015.- 2019-08-28 - 10 p. - Published in : Phys. Rev. Lett. 123 (2019) 081802 Article from SCOAP3: PDF; Fulltext: PDF; Related data file(s): ZIP; Record dettagliato - Record simili 2019-04-10 11:16
Observation of a narrow pentaquark state, $P_c(4312)^+$, and of two-peak structure of the $P_c(4450)^+$ / LHCb Collaboration A narrow pentaquark state, $P_c(4312)^+$, decaying to $J/\psi p$ is discovered with a statistical significance of $7.3\sigma$ in a data sample of $\Lambda_b^0\to J/\psi p K^-$ decays which is an order of magnitude larger than that previously analyzed by the LHCb collaboration. The $P_c(4450)^+$ pentaquark structure formerly reported by LHCb is confirmed and observed to consist of two narrow overlapping peaks, $P_c(4440)^+$ and $P_c(4457)^+$, where the statistical significance of this two-peak interpretation is $5.4\sigma$. [...] arXiv:1904.03947; LHCb-PAPER-2019-014 CERN-EP-2019-058; LHCB-PAPER-2019-014.- Geneva : CERN, 2019-06-06 - 11 p. - Published in : Phys. Rev. Lett. 122 (2019) 222001 Article from SCOAP3: PDF; Fulltext: PDF; Fulltext from Publisher: PDF; Related data file(s): ZIP; Supplementary information: ZIP; External link: SYMMETRY Record dettagliato - Record simili 2019-04-03 11:16
Measurements of $CP$ asymmetries in charmless four-body $\Lambda^0_b$ and $\Xi_b^0$ decays / LHCb Collaboration A search for $CP$ violation in charmless four-body decays of $\Lambda^0_b$ and $\Xi_b^0$ baryons with a proton and three charged mesons in the final state is performed. To cancel out production and detection charge-asymmetry effects, the search is carried out by measuring the difference between the $CP$ asymmetries in a charmless decay and in a decay with an intermediate charmed baryon with the same particles in the final state. [...] arXiv:1903.06792; LHCb-PAPER-2018-044; CERN-EP-2019-13; LHCb-PAPER-2018-044 and CERN-EP-2019-13; LHCB-PAPER-2018-044.- 2019-09-07 - 30 p. - Published in : Eur. Phys. J. C 79 (2019) 745 Article from SCOAP3: PDF; Fulltext: PDF; Related data file(s): ZIP; Record dettagliato - Record simili 2019-04-01 11:42
Observation of an excited $B_c^+$ state / LHCb Collaboration Using $pp$ collision data corresponding to an integrated luminosity of $8.5\,\mathrm{fb}^{-1}$ recorded by the LHCb experiment at centre-of-mass energies of $\sqrt{s} = 7$, $8$ and $13\mathrm{\,Te\kern -0.1em V}$, the observation of an excited $B_c^+$ state in the $B_c^+\pi^+\pi^-$ invariant-mass spectrum is reported. The state has a mass of $6841.2 \pm 0.6 {\,\rm (stat)\,} \pm 0.1 {\,\rm (syst)\,} \pm 0.8\,(B_c^+) \mathrm{\,MeV}/c^2$, where the last uncertainty is due to the limited knowledge of the $B_c^+$ mass. [...] arXiv:1904.00081; CERN-EP-2019-050; LHCb-PAPER-2019-007.- Geneva : CERN, 2019-06-11 - 10 p. - Published in : Phys. Rev. Lett. 122 (2019) 232001 Article from SCOAP3: PDF; Fulltext: PDF; Fulltext from Publisher: PDF; Related data file(s): ZIP; Supplementary information: ZIP; Record dettagliato - Record simili 2019-04-01 09:46
Near-threshold $D\bar{D}$ spectroscopy and observation of a new charmonium state / LHCb Collaboration Using proton-proton collisiondata, corresponding to an integrated luminosity of 9 fb$^{-1}$, collected with the~LHCb detector between 2011 and 2018, a new narrow charmonium state, the $X(3842)$ resonance, is observed in the decay modes $X(3842)\rightarrow D^0\bar{D}^0$ and $X(3842)\rightarrow D^+D^-$. The mass and the natural width of this state are measured to be \begin{eqnarray*} m_{X(3842)} & = & 3842.71 \pm 0.16 \pm 0.12~ \text {MeV}/c^2\,, \\ \Gamma_{X(3842)} & = & 2.79 \pm 0.51 \pm 0.35 ~ \text {MeV}\,, \end{eqnarray*} where the first uncertainty is statistical and the second is systematic. [...] arXiv:1903.12240; CERN-EP-2019-047; LHCb-PAPER-2019-005; LHCB-PAPER-2019-005.- Geneva : CERN, 2019-07-08 - 23 p. - Published in : JHEP 1907 (2019) 035 Article from SCOAP3: PDF; Fulltext: PDF; Related data file(s): ZIP; Supplementary information: ZIP; Record dettagliato - Record simili
|
Defining parameters
Level: \( N \) = \( 6 = 2 \cdot 3 \)
Weight: \( k \) = \( 2 \)
Character orbit: \([\chi]\) = 6.a (trivial)
Character field: \(\Q\)
Newforms: \( 0 \)
Sturm bound: \(2\)
Trace bound: \(0\)
The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_0(6))\).
                    Total   New   Old
Modular forms         3      0     3
Cusp forms            0      0     0
Eisenstein series     3      0     3
|
If $X$ is a complex space, not necessarily a manifold nor even reduced,
any cover $\mathcal U=(U_i)_{i\in I}$ by open subsets $U$ which are Stein is acyclic for any coherent sheaf. Indeed any finite intersection $U=U_1\cap\dots\cap U_n $ of sets $U\in \mathcal U$ is Stein : this is trivial if you define "Stein" as holomorphically convex and holomorphically separable. But is it legitimate to define "Stein" that way? Sure! That's what the brothers Ludger and Burchard Kaup do in their extremely pedagogical book Holomorphic functions of Several Variables One may say that the basic theory of Stein spaces consists of proving (in about 100 pages!) the equivalence of several definitions of Stein spaces, in particular the one that says that a Stein space is a complex space for which theorem B holds!
Reminders/complements 1) A complex space $Y$ is said to be holomorphically convex if, given a compact subset $K\subset Y$, its holomorphic hull $h(K)$ is also compact. That hull is defined as the set of $y\in Y$ such that for all $f\in \mathcal O(Y)$ we have $\mid f(y)\mid\leq \sup_{k\in K} \mid f(k)\mid$ A (non-trivially!) equivalent definition of holomorphically convex is: Given a discrete closed subset $D\subset Y$, there exists a holomorphic function $h\in \mathcal O(Y)$ such that $\sup_{d\in D} \mid h(d)\mid=\infty$.
2) A complex space $Y$ is said to be holomorphically separable if, given $y\neq y'\in Y$, there exists $g \in \mathcal O(Y)$ with $g(y)\neq g(y')$ .
3) Very roughly one might divide complex analysis into two parts:
$\bullet$ The kingdom of partial differential equations with its emphasis on plurisubharmonicity, pseudoconvexity, $L^2$-estimates, $\bar {\partial}$-operator, $\dots$ The basic book here is Hörmander's. $\bullet \bullet$ The kingdom of geometry and algebra with its emphasis on sheaves, homological and commutative algebra, non-reduced spaces, algebraic topology, $\dots$ The Kaups's book and the above answer are written in the spirit of this second point of view (I used holomorphic convexity instead of pseudoconvexity) , which is more adapted to the study of the general non reduced complex spaces introduced by Grauert under the influence of Grothendieck. The definitive reference on Stein spaces written by Grauert-Remmert, the masters themselves, Theory of Stein spaces doesn't even mention plurisubharmonicity and pseudoconvexity (although, needless to say, these are extremely useful tools in many contexts).
4) How do I know this? Because in my youth I betrayed the first Kingdom for the second and then I betrayed the second for the Scheming Empire of Algebraic Geometry...
|
A vector field defines a situation where the magnitude and direction of vectors are a function of location only. We can best understand this on a rotating rigid body where the linear velocity of each particle $\boldsymbol{v}$ depends on its location $\boldsymbol{r}$.
$$ \boldsymbol{v} = \boldsymbol{\omega} \times \boldsymbol{r} $$
To describe such vector field we define an "axial vector" $\boldsymbol{\omega}$ which is placed on the origin to describe the rotation of the object. This rotation has the following properties
Magnitude, $\omega = \| \boldsymbol{\omega} \|$ Direction, $\boldsymbol{k} = \frac{ \boldsymbol{\omega}}{\omega} $ Location, the origin.
Now, let's flip our perspective around in order to draw a parallel with forces later. Consider a rigid body rotating with a vector $\boldsymbol{\omega}$ about a specified location $\boldsymbol{r}$, and measure the linear velocity $\boldsymbol{v}$
at the origin.
$$ \boldsymbol{v} = \boldsymbol{r} \times \boldsymbol{\omega} $$
Does this describe a vector field? Yes, since we have not changed the nature of the problem, but shifted our perspective. Even though we are moving around the axial vector $\boldsymbol{\omega}$ at different locations we measure the effect at the origin. Here $\boldsymbol{v}$ still represents a vector field, and $\boldsymbol{\omega}$ is the axial vector. The properties are now
Magnitude of rotation, $\omega = \| \boldsymbol{\omega} \|$ Direction of rotation, $\boldsymbol{k} = \frac{ \boldsymbol{\omega}}{\omega} $ Location of axis (recovered from velocity), $\boldsymbol{r} = \frac{ \boldsymbol{\omega} \times \boldsymbol{v}}{ \omega^2 }$
Proof: $\require{cancel} \frac{ \boldsymbol{\omega} \times \boldsymbol{v}}{ \omega^2 } = \frac{ \boldsymbol{\omega} \times (\boldsymbol{r} \times \boldsymbol{\omega} )}{ \omega^2 } = \frac{ \boldsymbol{r} \omega^2 - \boldsymbol{\omega} \cancel{(\boldsymbol{r}\cdot \boldsymbol{\omega})}}{\omega^2} = \boldsymbol{r} $ with the rule that $\boldsymbol{r}$ is the location on the axis closest to the origin.
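Here is a quick numerical check of that identity (a sketch of my own, not part of the original derivation); the only assumption is that $\boldsymbol{r}$ is taken as the point on the axis closest to the origin, i.e. $\boldsymbol{r}\cdot\boldsymbol{\omega}=0$:

import numpy as np

w = np.array([0.3, -1.2, 2.0])        # axial vector (angular velocity)
r = np.array([1.0, 1.0, 0.45])        # some point; project it onto the plane r . w = 0
r = r - (r @ w) / (w @ w) * w         # now r is the closest point on the axis to the origin
v = np.cross(r, w)                    # velocity measured at the origin
r_recovered = np.cross(w, v) / (w @ w)
print(r, r_recovered)                 # the two should agree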
Now let us look at an entirely similar situation: consider a rigid body with a force $\boldsymbol{F}$ applied through a location $\boldsymbol{r}$, and measure the equipollent torque $\boldsymbol{\tau}$
at the origin.
$$ \boldsymbol{\tau} = \boldsymbol{r} \times \boldsymbol{F} $$
Does this describe a vector field? Yes, for the same reason(s) as above. Here $\boldsymbol{\tau}$ still represents a vector field, and $\boldsymbol{F}$ is the axial vector. The properties are now
Magnitude of force, $F = \| \boldsymbol{F} \|$ Direction of force, $\boldsymbol{e} = \frac{ \boldsymbol{F}}{F} $ Location of axis (recovered from torque), $\boldsymbol{r} = \frac{ \boldsymbol{F} \times \boldsymbol{\tau}}{ F^2 }$
So the geometry of mechanics dictates the following definitions.
$$\begin{array}{l|c|c}\mbox{quantity} & \mbox{axial vector} & \mbox{moment vector} \\\hline \mbox{motion} & \boldsymbol{\omega} & \boldsymbol{v} = \boldsymbol{r}\times \boldsymbol{\omega} \\\hline \mbox{momentum} & \boldsymbol{p} & \boldsymbol{L} = \boldsymbol{r}\times \boldsymbol{p} \\\hline \mbox{loading} & \boldsymbol{F} & \boldsymbol{\tau} = \boldsymbol{r}\times \boldsymbol{F}\end{array}$$
Note: I prefer the term moment vector over vector field because it is more descriptive of the specific situation.
There is some room for confusion here because of how linear momentum is defined.
Linear momentum (the axial vector) is defined from the velocity (the vector field) at a specific point (the center of mass).
$$ \boldsymbol{p} = m ( \boldsymbol{\omega} \times \boldsymbol{r}_{\rm com} ) $$
The equations of motion relate the force (axial vector $\boldsymbol{F}$) with the rate of change of momentum (axial vector $\boldsymbol{p}$). The rotational equations relate torque at the center of mass (vector field $\boldsymbol{\tau}_{\rm com}$) to rate of change of angular momentum at the center of mass (vector field $\boldsymbol{L}_{\rm com}$).
As you can see the equations of motion are consistent with the geometrical interpretation of mechanics.
$$ \begin{aligned} \boldsymbol{F} & = m (\boldsymbol{\alpha} \times \boldsymbol{r}_{\rm com}) + \boldsymbol{\omega} \times \boldsymbol{p} = m \boldsymbol{a}_{\rm com} \\ \boldsymbol{\tau}_{\rm com} & = \mathtt{I}_{\rm com} \boldsymbol{\alpha} + \boldsymbol{\omega} \times \boldsymbol{L}_{\rm com} = \mathtt{I}_{\rm com} \boldsymbol{\alpha} + \boldsymbol{\omega} \times \mathtt{I}_{\rm com} \boldsymbol{\omega}\end{aligned} $$
|
On Constructing Functions, Part 5
This post is the fifth example in an ongoing list of various sequences of functions which converge to different things in different ways.
Example 1: converges almost everywhere but not in $L^1$
Example 2: converges uniformly but not in $L^1$ Example 3: converges in $L^1$ but not uniformly Example 4: converges uniformly, but limit function is not integrable Example 6: converges in $L^1$ but does not converge anywhere
Example 5 A sequence of functions $\{f_n:\mathbb{R}\to\mathbb{R}\}$ which converges to 0 pointwise but does not converge to 0 in $L^1$: take $f_n=\chi_{(n,n+1)}$, the characteristic function of the interval $(n,n+1)$. This works because: The sequence tends to 0 pointwise since for a fixed $x\in \mathbb{R}$, you can always find $N\in \mathbb{N}$ so that $f_n(x)=0$ for all $n$ bigger than $N$. (Just choose $N>x$!)
The details: Let $x\in \mathbb{R}$ and fix $\epsilon >0$ and choose $N\in \mathbb{N}$ so that $N>x$. Then whenever $n>N$, we have $|f_n(x)-0|=0<\epsilon$.
Of course, $f_n\not \to 0$ in $L^1$ since $$\int_{\mathbb{R}}|f_n|=\int_{(n,n+1)}f_n=1\cdot\lambda((n,n+1))=1.$$
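A quick numerical sanity check of both claims (a sketch, assuming, as the integral above indicates, that $f_n$ is the indicator of $(n,n+1)$):

import numpy as np

def f(n, x):
    # indicator function of the open interval (n, n+1)
    return ((x > n) & (x < n + 1)).astype(float)

# Pointwise convergence: at a fixed x, f_n(x) is eventually 0.
x0 = 2.5
print([float(f(n, np.array([x0]))[0]) for n in range(3, 10)])            # all 0.0 once n > x0

# No L^1 convergence: the integral of |f_n| stays equal to 1 for every n.
grid = np.linspace(0, 100, 1_000_001)
dx = grid[1] - grid[0]
print([round(float((f(n, grid) * dx).sum()), 3) for n in (3, 17, 60)])   # each approximately 1.0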
|
At the one side, you can always calculate the percentages of the acid or it's ions in solution:$$\ce{\alpha(H_{n-m}A^{m-})} = \frac{x^{n-m} \prod_{j=1}^{m}k_{a,j}}{\sum_{i=0}^{n}x^{n-i} \prod_{j=1}^i k_{a,j}}$$with $x = [\ce{H+}]$, $n>0$ as the number of protons and $0 \le m \le n$ as the number of dissociated protons.
But on the other hand there are some special points during titrations. You chose one of those points, namely where $\mathrm{pH} = \mathrm{pK}_{a,i}$.
For $\ce{H3PO4}$ there exist three different acidity constants, which describe the three possible dissociation steps. For the first step, the reaction is as you described: $$\ce{H3PO4 + H2O \xrightleftharpoons{k_{a,1}} H2PO4- + H3O+}$$
A little work on it gives you the Henderson-Hasselbalch equation, which usually is a good approximation in buffer regions: $$\mathrm{pH} = \mathrm{pK}_a - \lg\left(\frac{[HA]}{[A^-]}\right)$$
If you enter the same values for pH and pKa you have to solve $$-\lg(x)=0$$ which is true for $x=1$. So you know that you have a 1:1 ratio at this point.
This means for phosphoric acid:
At $\mathrm{pH=pK}_{a1}$, you've got exactly $50~\%~\ce{H3PO4}$ and $50~\%~\ce{H2PO4-}$ (blue, yellow). The same procedure can be applied to the other dissociation steps. Therefore at $\mathrm{pH=pK}_{a2}$ you've got $50~\%~\ce{H2PO4-}$ and $50~\%~\ce{HPO4^2-}$ (yellow, green), and at the last possible point, $\mathrm{pH=pK}_{a3}$, there are $50~\%~\ce{HPO4^2-}$ and $50~\%~\ce{PO4^3-}$ (green, red).
This can be applied to most acid-base dissociation steps, but you have to pay attention to closely spaced successive $\mathrm{pK}_a$ values: if the gap gets too small, the simple 1:1 picture no longer holds.
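To see both points at once, here is a small sketch that evaluates the fraction formula above for phosphoric acid (Python; the $\mathrm{pK}_a$ values are standard textbook figures and only illustrative):

import numpy as np

pKa = np.array([2.15, 7.20, 12.35])   # approximate pKa1..pKa3 of H3PO4 (illustrative values)
Ka = 10.0**(-pKa)

def fractions(pH):
    # alpha(H_{3-m}PO4^{m-}) for m = 0..3, using x = [H+] and the formula above with n = 3
    x = 10.0**(-pH)
    terms = np.array([x**3, x**2*Ka[0], x*Ka[0]*Ka[1], Ka[0]*Ka[1]*Ka[2]])
    return terms / terms.sum()

print(fractions(2.15))   # roughly [0.5, 0.5, 0, 0]: the 1:1 point at pH = pKa1
print(fractions(7.20))   # roughly [0, 0.5, 0.5, 0]: the 1:1 point at pH = pKa2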
|
I met this question: how do we price an option $C(S_t,t,T,K)$ which pays the inverse of a stock, $V_{T} = \frac{1}{S_{T}}$, at maturity, if the stock price follows a geometric Brownian motion $dS_{t}=\mu S_{t}dt+\sigma S_{t}dB_{t}$? I tried to use the risk-neutral measure approach; however, I cannot prove that the option discounted by a risk-free bond becomes a martingale, i.e. that $\frac{V_{t}}{B_{0}e^{rt}}$ does not have a drift term. Is this a correct change of numeraire?
Let $dB_t = rB_t dt$. Now
\begin{equation} d\Big(\frac{1}{B_t S_t}\Big) = -\frac{dS_t}{B_t S_t^2} -\frac{dB_t}{B_t^2S_t} +\frac{2}{2}\frac{(dS_t)^2}{B_t S_t^3} = (-\mu-r+\sigma^2)\frac{1}{B_tS_t}dt-\sigma\frac{1}{B_tS_t} dW_t \end{equation}
Using the EMM given by $dW_t = \frac{r-\mu}{\sigma}dt +dW_t^\mathbb{Q}$ we get the $\mathbb{Q}$-dynamics
\begin{equation} d\Big(\frac{1}{B_t S_t}\Big) = (\sigma^2-2r)\frac{1}{B_tS_t}dt-\sigma\frac{1}{B_tS_t} dW_t^\mathbb{Q} \end{equation}
This is only a martingale in the special case $2r = \sigma^2$; hence, unless that holds, $V_t = \frac{1}{S_t}$ cannot be the price of a traded asset. But the price of a contingent claim paying $V_T = \frac{1}{S_T}$ at some maturity date $T$ is still $e^{-r(T-t)}E^\mathbb{Q}\Big[\frac{1}{S_T}\Big|\mathcal{F_t}\Big]$, and this discounted price process is a martingale by construction.
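Since $S_T$ is lognormal under $\mathbb{Q}$, the conditional expectation has a closed form, $E^\mathbb{Q}\big[\tfrac{1}{S_T}\big|\mathcal{F}_t\big]=\tfrac{1}{S_t}e^{(\sigma^2-r)(T-t)}$, so the time-$0$ price is $\tfrac{1}{S_0}e^{(\sigma^2-2r)T}$. A minimal Monte Carlo sketch (Python, with made-up parameter values) to check this:

import numpy as np

S0, r, sigma, T, n = 100.0, 0.03, 0.2, 1.0, 10**6    # illustrative parameters
rng = np.random.default_rng(0)
Z = rng.standard_normal(n)

# Terminal stock values under the risk-neutral dynamics dS = r S dt + sigma S dW^Q
ST = S0 * np.exp((r - 0.5*sigma**2)*T + sigma*np.sqrt(T)*Z)

mc_price = np.exp(-r*T) * np.mean(1.0/ST)             # e^{-rT} E^Q[1/S_T]
closed_form = (1.0/S0) * np.exp((sigma**2 - 2*r)*T)   # (1/S_0) e^{(sigma^2 - 2r) T}
print(mc_price, closed_form)                          # the two agree up to Monte Carlo error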
|
A photon is a tiny particle that makes up waves of electromagnetic radiation. As shown by Maxwell, light consists of electric and magnetic fields traveling through space. Photons have no charge, no rest mass, and travel at the speed of light. Photons are emitted by the action of charged particles, although they can also be emitted by other processes such as radioactive decay. Since they are extremely small particles, the contribution of wavelike characteristics to the behavior of photons is significant. In diagrams, individual photons are represented by a squiggly arrow.
Description
Photons are often described as energy packets. This is a very fitting analogy, as a photon contains energy that cannot be divided. This energy is stored as an oscillating electric field. These fields may oscillate at almost any frequency. Although they have never been observed, the longest theoretical wavelength of light is the size of the universe, and some theories predict the shortest possible at the Planck length. These packets of energy can be transmitted over vast distances with no decay in energy or speed. Photons travel at the speed of light, $2.998 \times 10^{8}$ m/s in empty space. The speed of a photon through space can be directly derived from the speed of an electric field through free space. Maxwell unveiled this proof in 1864. Even though photons have no mass, they have an observable momentum which follows the de Broglie equation. The momentum of photons leads to interesting practical applications such as optical tweezers.
Generally speaking, photons have similar properties to electromagnetic waves. Each photon has a wavelength and a frequency. The wavelength is defined as the distance between two peaks of the electric field with the same vector. The frequency of a photon is defined as how many wavelengths a photon propagates each second.
Unlike an electromagnetic wave, a photon cannot actually be of a color. Instead, a photon will correspond to light of a given color. As color is defined by the capabilities of the human eye, a single photon cannot have color because it cannot be detected by the human eye. In order for the retina to detect and register light of a given color, several photons must act on it. Only when many photons act in unison on the retina, as an electromagnetic wave, can color be perceived.
As Described by Maxwell's Equations
The most accurate descriptions we have about the nature of photons are given by Maxwell's equations. Maxwell's equations mathematically predict how photons move through space. Fundamentally, an electric field undergoing flux will create an orthogonal magnetic field. The flux of the magnetic field then recreates the electric field. The creation and destruction of each corresponding wave allows the wave pair to move through space at the speed of light. Maxwell's equations correctly describe the nature of individual photons within the framework of quantum dynamics.
Creation of Photons
Photons can be generated in many different ways. This section will discuss some of the ways photons may be emitted. As photons are electric fields propagating through space, the emission of photons requires the movement of charged particles.
Blackbody Radiation
As a substance is heated, the atoms within it vibrate at higher energies. These vibrations rapidly change the shape and energies of electron orbitals. As the energy of the electrons changes, photons are emitted and absorbed at energies corresponding to the energy of the change. Blackbody radiation is what causes light bulbs to glow, and the heat of an object to be felt from a great distance. The simplification of objects as blackbodies allows indirect temperature calculation of distant objects. Astronomers and kitchen infrared thermometers use this principle every day.
Spontaneous Emission
Photons may be spontaneously emitted when electrons fall from an excited state to a lower energy state (usually the ground state). The technical term for this drop in energy is a relaxation. Electrons undergoing this type of emission will produce a very distinctive set of photons based on the available energy levels of their environment. This set of possible photons is the basis for an emission spectrum.
Fluorescence
Fluorescence is a special case of spontaneous emission. In fluorescence, the energy of the photon emitted does not match the energy used to excite the electron. An electron will fluoresce when it loses a considerable amount of energy to its surroundings before undergoing a relaxation. Generally fluorescence is employed in a laboratory setting to visualize the presence of target molecules. UV light is used to excite electrons, which then emit light at visible wavelengths that researchers can see.
Stimulated Emission
An excited electron can be artificially caused to relax to a lower energy state by a photon matching the difference between these energy states. The electric field's phase and orientation of the resultant photon, as well as its energy and direction will be identical to that of the incident photon. The light produced by stimulated emission is said to be coherent as it is similar in every way to the photon that caused it. Lasers produce coherent electromagnetic radiation by stimulated emission.
Synchrotrons (electron bending)
Electrons with extremely high kinetic energy, such as those in particle accelerators, will produce high energy photons when their path is altered. This alteration is accomplished by a strong magnetic field. All free electrons will emit light in this manner, but synchrotron radiation has special practical implications. Synchrotron radiation is currently the best technology available for producing directional x-ray radiation at precise frequencies. Synchrotrons, such as the Advanced Light Source (ALS) at Lawrence Berkeley Labs and Stanford Synchrotron Radiation Light Source (SSRL) are hotbeds of x-ray spectroscopy due to the excellent quality of x-rays produced.
Nuclear Decay
Certain types of radioactive decay can involve the release of high energy photons. One such type of decay is a nuclear isomerization. In an isomerization, a nucleus rearranges itself to a more stable configuration and emits a gamma ray. While it is only theorized to occur, proton decay will also emit extremely high energy photons.
The Photoelectric Effect
Light incident on a metal plate may cause electrons to break loose from the plate surface (Fig. 1). This interaction between light and electrons is called the photoelectric effect. The photoelectric effect provided the first conclusive evidence that beams of light are made of quantized particles. The energy required to eject an electron from the surface of the metal is usually on the same order of magnitude as the ionization energy. As metals generally have ionization energies of several electron-volts, the photoelectric effect is generally observed using visible light or light of even higher energy.
Fig. 1, The photoelectric effect.
At the time this phenomenon was studied, light was thought to travel in waves. Contrary to what the wave model of light predicted, an increase in the intensity of light resulted in an increase in current, not an increase in the kinetic energy of the emitted electrons. Einstein later explained this difference by showing that light is composed of quantized packets of energy called photons. His work on the photoelectric effect earned him the Nobel Prize.
The photoelectric effect has many practical applications, as current may be generated from a light source. Generally, the photoelectric effect is used as a component in switches that respond to light. Some examples are nightlights and photomultipliers. Usually the current is so small that it must be amplified in order to be an effective switch
Energy of a Photon
The energy of a photon is a discrete quantity determined by its frequency. This result can be determined experimentally by studying the photoelectric effect. The kinetic energy of an emitted electron varies directly with the frequency of the incident light. If the experimental values of these energies are fitted to a line, the slope of that line is Planck's constant. The point at which electrons begin to be emitted from the surface is called the threshold frequency, and is denoted by \(\nu_0\). The principle of conservation of energy dictates that the energy of a photon must all go somewhere. Assuming that the energy \(h\nu_0\) is the initial energy requirement to pry an electron from its orbital, the energy of the photon is equal to the kinetic energy of the emitted electron plus this threshold energy. Therefore the energy of a free photon is \(E = h\nu\), where \(\nu\) is the frequency of the photon and \(h\) is Planck's constant.
Fig. 2, Photoelectric effect results
The results from a photoelectric experiment are shown in Figure 2. \(\nu_0\) is the minimum frequency at which electrons start to be detected. The solid lines represent the actual observed kinetic energies of released electrons. The dotted red line shows how a linear result can be obtained by tracing back to the y axis. Electrons cannot actually have negative kinetic energies.
Photon Interference
Whereas the double slit experiment initially indicated that light was a wave, more advanced experiments confirm the photon as a particle with wavelike properties. A beam of light passing through a double slit is observed to diffract, producing constructive and destructive interference. Modern technology allows the emission and detection of single photons. In an experiment conducted by Philippe Grangier, a single photon is passed through a double slit. The photon is then detected on the other side of the slits. Across a large sample size, a trend in the final positions of the photons can be determined. Under the wave model of light, an interference pattern would be observed as each photon splits over and over to produce the pattern. However, the results disagree with the wave model of light. Each photon emitted corresponds with a single detection on the other side of the slits (Fig. 3). With a certain probability, each photon is detected at 100% strength. Over a series of measurements, the photons produce the same interference pattern expected of a beam of light. When one slit is closed, no interference pattern is observed and each photon travels in a linear path through the open slit.
Fig 3, Proof for the particle-nature of photons. One possible result is shown.
This interference has a profound implication: photons do not necessarily interact with each other to produce an interference pattern. Instead, they interact and interfere with themselves. Furthermore, this shows that the photon does not pass through one slit or the other, but rather passes through both slits simultaneously. Richard Feynman's theory of quantum electrodynamics explains this phenomenon by asserting that a photon travels not along a single path, but along all possible paths in the universe. The interference between these paths gives the probability of the photon taking any given path, as the majority of the paths cancel with each other. He used this theory to explain the nature of a wide range of the actions of photons, such as reflection and refraction, with great precision.
References
Feynman, R. P. (1988). QED: The Strange Theory of Light and Matter, Princeton University Press.
Einstein, A. "Über die Entwicklung unserer Anschauungen über das Wesen und die Konstitution der Strahlung." Physikalische Zeitschrift (10): 817-825 (1909).
P. Grangier, G. Roger, and A. Aspect, "Experimental evidence for a photon anticorrelation effect on a beam splitter: A new light on single-photon interferences," Europhys. Lett. 1, 173-179 (1986).
J. J. Thorn, M. S. Neel, V. W. Donato, G. S. Bergreen, R. E. Davies, and M. Beck, "Observing the quantum behavior of light in an undergraduate laboratory," Am. J. Phys. 72, 1210-1219 (2004).
Maxwell, James (1865). "A dynamical theory of the electromagnetic field." Philosophical Transactions of the Royal Society of London 155: 459–512.
Problems
1) The peak wavelength of a light bulb is 500 nm. Calculate the energy of a single photon at this wavelength.
Solution
\(E = h\nu\)
\(\nu = \dfrac{c}{\lambda}\)
\(E = h*\dfrac{c}{\lambda}\)
\( = 6.626x10^{-34}m^2kg/s*\dfrac{3.00x10^{8}m/s}{500x10^{-9}m}\)
\( = 3.97x10^{-19}J\)
2) The work function of a metal surface is 9.4eV. What is the frequency of a photon which ejects an electron from this surface at 420km/s?
Solution
\(h\nu_0 = 9.4eV x 1.6x10^{-19}J/eV = 1.51x10^{-18} J\)
\(KE = \dfrac{1}{2}mv^2 = h\nu-h\nu_0\)
\(\dfrac{1}{2}(9.11x10^{-31}kg)(4.2x10^{5}m/s)^2 = 6.626x10^{-34}J{\cdot}s\times\nu - 1.51x10^{-18}J\)
\(\nu = 2.40x10^{15}Hz\)
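As a quick numerical cross-check of Problems 1 and 2, here is a short sketch with rounded physical constants:

h = 6.626e-34      # Planck's constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt
m_e = 9.11e-31     # electron mass, kg

# Problem 1: photon energy at 500 nm
print(h*c/500e-9)                      # about 3.97e-19 J

# Problem 2: frequency needed to eject an electron at 4.2e5 m/s from a 9.4 eV work function
KE = 0.5*m_e*(4.2e5)**2
print((KE + 9.4*eV)/h)                 # about 2.4e15 Hz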
3) A single photon passes through a double slit 20nm apart. A photomultiplier detects at least one particle in the 20 nm directly behind the slit. What fraction of the photon is detected here?
Solution
The entire photon is detected. Photons are quantized particles. Although a photon can pass through both slits, it is still a single particle and will be detected accordingly.
4) A photon removes an electron from an atom. The kinetic energy of the exiting electron is found to be less than that of the photon that removed it. Why isn't the energy the same?
Solution
Recall the photoelectric equation : \(KE = h\nu-h\nu_0\). This equation relates the energies of photons and electrons from an ejection. The second term of the equation, \(-h\nu_0\) is the amount of energy required to remove an electron from its orbital. The extra energy goes into breaking the association of an electron with a nucleus. Keep in mind that for a metal this is not the ionization energy due to the delocalization of electrons involved in metallic bonding.
5) Keeping in mind the relationship between the energy and frequency of light, design an experiment to determine if photons lose energy as they travel through space.
Solution
One possible experiment utilizes the photoelectric effect. A light source is shone on a piece of metal, and the kinetic energy of ejected electrons is calculated. By shining the light at different distances from the metal plate, individual photons may be shown to undergo lossless transmission. The experiment will show that while the number of electrons ejected may decrease as a function of distance, their kinetic energy will remain the same.
Contributors Michael Kennedy
|
MathRevolution wrote:
[Math Revolution
GMAT math practice question]
If the \(2\) roots of the equation \(x^2+px+q=0\) are \(-3\) and \(2\), where \(p\) and \(q\) are constants, what is the value of \(p + q?\)
\(A. -5\)
\(B. -3\)
\(C. -1\)
\(D. 0\)
\(E. 1\)
\(? = p + q\)
\(\left\{ \matrix{
\, - 1 = - 3 + 2 = {\rm{sum}}\,\,\,\mathop = \limits^{\left( * \right)} \,\,\, - p \hfill \cr
\, - 6 = - 3 \cdot 2 = {\rm{product}}\,\,\,\mathop = \limits^{\left( * \right)} \,\,\,q \hfill \cr} \right.\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,? = p + q = 1 + \left( { - 6} \right) = - 5\)
\(\left( * \right)\,\,\,\left\{ \matrix{
a{x^2} + bx + c = 0\,\,,\,\,\,a \ne {\rm{0}} \hfill \cr
\Delta \ge 0\,\,\,,\,\,\,{\rm{roots}}\,\,{x_1}\,\,{\rm{and}}\,\,{x_2} \hfill \cr} \right.\,\,\,\,\, \Rightarrow \,\,\,\,\left\{ \matrix{
\,{x_1} + {x_2} = - {b \over a} \hfill \cr
\,{x_1} \cdot {x_2} = {c \over a} \hfill \cr} \right.\)
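As a quick independent check of the sum/product relations above, a small SymPy sketch (the variable names here are ours, not part of the problem):

from sympy import symbols, Eq, solve

p, q = symbols('p q')
# impose that -3 and 2 are roots of x^2 + p*x + q = 0
sol = solve([Eq((-3)**2 + p*(-3) + q, 0), Eq(2**2 + p*2 + q, 0)], [p, q])
print(sol, sol[p] + sol[q])   # {p: 1, q: -6}, and p + q = -5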
We follow the notations and rationale taught in the GMATH method.
Regards,
Fabio.
_________________
Fabio Skilnik :: GMATH
method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
|
Search
Now showing items 1-1 of 1
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
|
I wanted to better understand dfa. I wanted to build upon a previous question:Creating a DFA that only accepts number of a's that are multiples of 3But I wanted to go a bit further. Is there any way we can have a DFA that accepts number of a's that are multiples of 3 but does NOT have the sub...
Let $X$ be a measurable space and $Y$ a topological space. I am trying to show that if $f_n : X \to Y$ is measurable for each $n$, and the pointwise limit of $\{f_n\}$ exists, then $f(x) = \lim_{n \to \infty} f_n(x)$ is a measurable function. Let $V$ be some open set in $Y$. I was able to show th...
I was wondering If it is easier to factor in a non-ufd then it is to factor in a ufd.I can come up with arguments for that , but I also have arguments in the opposite direction.For instance : It should be easier to factor When there are more possibilities ( multiple factorizations in a non-ufd...
Consider a non-UFD that only has 2 units ( $-1,1$ ) and the min difference between 2 elements is $1$. Also there are only a finite amount of elements for any given fixed norm. ( Maybe that follows from the other 2 conditions ? )I wonder about counting the irreducible elements bounded by a lower...
How would you make a regex for this? L = {w $\in$ {0, 1}* : w is 0-alternating}, where 0-alternating is either all the symbols in odd positions within w are 0's, or all the symbols in even positions within w are 0's, or both.
I want to construct a nfa from this, but I'm struggling with the regex part
|
Motivated by the public's desire to learn as much as possible following the occurrence of damaging events, we have developed a methodology to spatially map the probability of earthquake occurrence in the next 24 hours. We start with the simple aftershock model of Reasenberg and Jones (1989, 1990, 1994):
\[{\lambda}(t,M)=\frac{10^{a^{\prime}+b(M_{m}-M)}}{(t+c)^{p}},\qquad M\geq M_{c}\]
(1)
where $\lambda(t, M)$ is the rate of aftershocks larger than a magnitude threshold, $M_c$, occurring at time $t$. The constants $a^{\prime}$ and $b$ are derived from the Gutenberg-Richter relationship (Gutenberg and Richter 1944), and $p$...
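The formula is the modified Omori law scaled by the Gutenberg-Richter magnitude distribution. A small sketch of how one might evaluate it (Python; the parameter values below are placeholders for illustration, not the ones used in the study):

import numpy as np

# Placeholder Reasenberg-Jones parameters (illustrative only)
a_prime, b, p, c = -1.67, 0.91, 1.08, 0.05
M_m, M_c = 6.0, 3.0          # mainshock magnitude and magnitude threshold

def aftershock_rate(t, M=M_c):
    # rate of aftershocks with magnitude >= M at time t (days) after the mainshock
    return 10.0**(a_prime + b*(M_m - M)) / (t + c)**p

print(aftershock_rate(np.array([0.1, 1.0, 10.0])))   # the rate decays roughly like 1/t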
|
I'm currently discussing the pigeonhole principle (simple and generalised) in class. However, when solving problems, I get the idea, but I never know the proper way to write it up; I always feel like I'm either writing too much or not enough. Here is an example of a very simple problem and 2 ways I wrote the proof; if you could comment and give me tips it would be very appreciated!!
Problem
A man in a dark room has a box with 12 brown and 12 black socks. How many times, if picking one at a time, does the man have to pick a sock to have 2 of the same color?
(Here I think of the two colors as the "boxes" and the picks as the "objects". Since there are k+1 objects (or picks) and k boxes (or types of socks), the pigeonhole principle applies.)
Solution 1
There are $2$ different types of socks the man can pick. By the pigeonhole principle, if the man picks $3$ socks, it is guaranteed that he will have $2$ socks of the same color $\blacksquare$.
Solution 2
The generalised pigeonhole principle says that, for $N,k \in \Bbb{Z}$, if $N$ objects are placed in $k$ boxes, then at least $1$ box will have $\lceil{\frac{N}{k}}\rceil $ objects.
Let $k = 2$ be the number of colors of the socks and $N$ be the number of picks.
The minimum number of picks the man has to make to guarantee he will pick $2$ socks of the same color is the smallest integer such that $\lceil{\frac{N}{2}}\rceil = 2$.
Therefore, we need the minimum integer $N$ such that $1\lt \lceil{\frac{N}{2}}\rceil \le 2 \Rightarrow N = 3$, since $3 = 1*2 + 1$.
So, by the generalised pigeon principle, to guarantee the man picks 2 socks of the same color, he has to pick 3 socks$\blacksquare$.
Conclusion
I know the problem is very simple, but I really need to know how to write this kind of proof in a way such that I say enough but not too much. I mean, I feel like in the first solution I'm not convincing, and in the second one I ramble on for no reason.
As usual, thank you everyone for your help!!!!
|
I took a shot in the dark and assumed that this is similar to solving $\int e^{x}\sin{x}\ dx$, but wolfram is giving me a different answer than what I got, and on top of that, I tried to differentiate my result and am not getting back what I started with. It's putting into question whether I was doing previous questions right or not..
First step of my attempt:
let $u=\cos(2x),\ du=-2\sin(2x)\ dx$ let $dv=\cos(3x)\ dx,\ v=\frac{\sin(3x)}{3}$
$$\int\cos(2x)\cos(3x)\ dx=\frac{\cos(2x)\sin(3x)}{3}+\frac{2}{3}\int\sin(2x)\sin(3x)\ dx $$
Then I did IBP again:
let $u=\sin(2x),\ du=2\cos(2x)\ dx$ let $dv=\sin(3x)\ dx, v=-\frac{\cos(3x)}{3}$
$$=\frac{\cos(2x)\sin(3x)}{3}+\frac{2}{3}\left[-\frac{\cos(3x)\sin(2x)}{3}+\frac{2}{3}\int\cos(2x)\cos(3x)\ dx\right]$$
From there, I simplify and re-arrange to get
$$\frac{1}{3}\int\cos(2x)\cos(3x)\ dx=\frac{3\cos(2x)\sin(3x)-2\cos(3x)\sin(2x)}{9}$$ $$\int\cos(2x)\cos(3x)\ dx=\frac{3\cos(2x)\sin(3x)-2\cos(3x)\sin(2x)}{3}+C$$
So where did I go wrong? Wolfram says the answer should be
$$\int\cos(2x)\cos(3x)\ dx=\frac{1}{10}\left(5\sin(x)+\sin(5x)\right)+C$$
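For reference, note that when the $\frac{4}{9}\int\cos(2x)\cos(3x)\,dx$ term is moved to the left, the coefficient is $1-\frac{4}{9}=\frac{5}{9}$, so the final denominator should be $5$ rather than $3$. A quick SymPy check of the two candidate antiderivatives:

import sympy as sp

x = sp.symbols('x')
F = sp.integrate(sp.cos(2*x)*sp.cos(3*x), x)

# Wolfram's answer, (1/10)(5 sin x + sin 5x), agrees with F (both vanish at x = 0):
print(sp.simplify(F - (5*sp.sin(x) + sp.sin(5*x))/10))                 # 0

# The attempted answer with denominator 3 does not differentiate back to the integrand:
attempt = (3*sp.cos(2*x)*sp.sin(3*x) - 2*sp.cos(3*x)*sp.sin(2*x))/3
print(sp.simplify(sp.diff(attempt, x) - sp.cos(2*x)*sp.cos(3*x)))      # not 0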
|
I'm currently analysing some spatial point patterns that come from some fluid dynamics simulations and I'm having some difficulty computing the structure factor, $S(\pmb{k})$, from
both the positions of the points and the radial distribution function, $g(\pmb{r})$. I have been following the structure factor Wikipedia page and also this paper on arxiv but I seem to be getting results that don't make sense.
Below is my class object to compute the radial distribution function for said spatial point patterns (I've omitted the help data and assertions from the code for clarity). The class takes in a numpy array or list of numpy arrays, finds the locations of the nonzero values, computes the distances between each nonzero value and then computes the radial distribution. Note that here, we consider an array cell a point if it has a nonzero value
import numpy as np
import bisect

class pair_correlation_function():
    def __init__(self, data, annulus_width, boundary=None):
        self.data = [data] if type(data) is np.ndarray else data
        self.dr = annulus_width
        self.boundary = boundary

    def get_positions(self):
        positions = list()
        for item in self.data:
            rows, columns = np.nonzero(item)
            positions.append(np.array(list(zip(rows, columns))))
        return positions

    def RDF(self):
        positions = self.get_positions()
        Lx, Ly = np.shape(self.data[0])
        area = Lx*Ly
        radii = list(x for x in range(int(Lx/(2*self.dr))))
        all_radial_distributions = list()
        for item in positions:
            if self.boundary == 'periodic':
                item_new = np.vstack(np.array([np.abs(item[k]-item[k+1:]) for k in range(len(item)-1)]))
                item_new[item_new > Lx/2] = Lx - item_new[item_new > Lx/2]
                norms = [np.sqrt((position**2).sum(axis=0)) for position in item_new]
                norms = sorted(norms)
            else:
                norms = [np.sqrt(((item[k]-item[k+1:])**2).sum(axis=1)) for k in range(len(item)-1)]
                norms = sorted(np.hstack(norms))
            number_particles = len(item)
            item_rdf = list()
            for r in radii:
                i = bisect.bisect_left(norms, r*self.dr)
                j = bisect.bisect_left(norms, (r+1)*self.dr)
                if i != len(norms) and j != len(norms):
                    particle_count = len(norms[i:j])
                    normalisation = (2*r+self.dr)*np.pi*self.dr*number_particles**2
                    bin_value = 2*area*particle_count/normalisation
                    item_rdf.append(bin_value)
            all_radial_distributions.append(item_rdf)
        radii = np.array(radii)*self.dr
        return radii, all_radial_distributions
Below is a sample array (a triangular lattice) with which to play around (we are assuming periodic boundary conditions here because my actual data comes from a simulation with periodic boundaries)
tri_period = np.array([[0,0,0,0],[0,1,0,1],[0,0,0,0],[1,0,1,0]])
triangular_lattice = np.tile(tri_period, (16,16))
dr = .025
radii, RDF = pair_correlation_function(triangular_lattice, dr, 'periodic').RDF()
and plotting gives
Here, we see a series of Bragg peaks as expected (though I'm not sure why the peaks segment into smooth, monotone decreasing bands? My actual data doesn't do this though and looks correct.) Now, if I try to compute the structure factor given the positions (see arxiv paper above, equation $(2)$)
def sf_from_positions(positions, box_dimension):
    sf = list()
    modes = list(x for x in range(1, int(box_dimension)))
    dk = 2*np.pi/box_dimension
    for h in modes:
        k_vec = np.array([1,1])*dk*h
        summation = 0
        for position in positions:
            summation += np.exp(-1j*k_vec.dot(position))
        sf.append(abs(summation)**2/len(positions))
    return sf, modes

rows, columns = np.nonzero(triangular_lattice)
positions = list(zip(rows, columns))
dimension = np.shape(triangular_lattice)[0]
factor, wave_modes = sf_from_positions(positions, dimension)
and plotting gives
which looks off, because the Fourier transform of Bragg peaks should also be Bragg peaks (see this arxiv paper, page 12).
So, my first question is, what is going wrong in my computation of the structure factor? Can anyone see what I have done wrong? And secondly, I was wondering how I would integrate the radial distribution function to get the structure factor, through the formula $$S(\pmb{k}) = 1 + 4 \pi \rho \int_{\mathbb{R^{n}}} [g(\pmb{r})-1] e^{-i \pmb{k} \cdot \pmb{r}} d \pmb{r} $$ I tried to use the cumtrapz module in scipy but it gave me a structure factor that was negative (impossible) and that oscillated with an increasing wave packet.
Sorry if the question is overly involved. Thanks for your help.
|
Answer
The angles of the triangle are as follows: $A = 64.6^{\circ}, B = 74.8^{\circ},$ and $C = 40.6^{\circ}$ The lengths of the sides are as follows: $a = 8.92~in, b = 9.53~in,$ and $c = 6.43~in$
Work Step by Step
We can use the law of cosines to find $b$:
$b^2 = a^2+c^2-2ac~cos~B$
$b = \sqrt{a^2+c^2-2ac~cos~B}$
$b = \sqrt{(8.92~in)^2+(6.43~in)^2-(2)(8.92~in)(6.43~in)~cos~74.8^{\circ}}$
$b = \sqrt{90.835~in^2}$
$b = 9.53~in$
We can use the law of cosines to find $C$:
$c^2 = a^2+b^2-2ab~cos~C$
$2ab~cos~C = a^2+b^2-c^2$
$cos~C = \frac{a^2+b^2-c^2}{2ab}$
$C = arccos(\frac{a^2+b^2-c^2}{2ab})$
$C = arccos(\frac{8.92^2+9.53^2-6.43^2}{(2)(8.92)(9.53)})$
$C = arccos(0.759)$
$C = 40.6^{\circ}$
We can find angle $A$:
$A+B+C = 180^{\circ}$
$A = 180^{\circ}-B-C$
$A = 180^{\circ}-74.8^{\circ}-40.6^{\circ}$
$A = 64.6^{\circ}$
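A short numerical check of these steps (a sketch in Python with NumPy):

import numpy as np

a, c = 8.92, 6.43                      # given sides, inches
B = np.radians(74.8)                   # given included angle

b = np.sqrt(a**2 + c**2 - 2*a*c*np.cos(B))                    # law of cosines
C = np.degrees(np.arccos((a**2 + b**2 - c**2)/(2*a*b)))
A = 180.0 - 74.8 - C
print(round(b, 2), round(A, 1), round(C, 1))                  # 9.53, 64.6, 40.6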
|
We consider how the GDP or utility output of a city depends on the number of people living within it. From this, we derive some interesting consequences that can inform both government and individual attitudes towards newcomers.
The utility function and benefit per person
In this post, we will consider an idealized town whose net output $U$ (the GDP) scales as a power law with the number of people $N$ living within it. That is, we’ll assume,
\begin{eqnarray} \tag{1} \label{1} U(N) = a N^{\gamma}. \end{eqnarray} We’ll assume that the average benefit captured per person is their share of this utility, \begin{eqnarray} \tag{2} \label{2} BPP(N) = U(N) / N = a N^{\gamma -1}. \end{eqnarray} What can we say about the above $a$ and $\gamma$? Well, we must have $a> 0$ if the society is productive. Further, because we know that cities allow for more complex economies as the number of occupants grow, we must have $\gamma > 1$. These are the only assumptions we will make here. Below, we’ll see that these assumptions imply some interesting consequences. Marginal benefits
When a new person immigrates to a city, its $N$ value goes up by one. Here, we consider how the utility and benefit per person changes when this occurs. The increase in net utility is simply
\begin{eqnarray}\tag{3} \label{3} \partial_N U(N) = a \gamma N^{\gamma -1}. \end{eqnarray} Notice that because we have $\gamma > 1$, (\ref{3}) is a function that increases with $N$. That is, cities with larger populations benefit more (as a collective) per immigrant newcomer than cities with smaller $N$ would. This implies that the governments of large cities should be more enthusiastic about welcoming newcomers than those of smaller cities.
Now consider the marginal benefit per person when one new person moves to this city. This is simply
\begin{eqnarray}\tag{4} \label{4} \partial_N BPP(N) = a (\gamma - 1) N^{\gamma -2}. \end{eqnarray} Notice that this is different from the form (\ref{3}) that describes the marginal increase in total city utility. In particular, while (\ref{4}) is positive, it is not necessarily increasing with $N$: If $\gamma < 2$, (\ref{4}) decreases with $N$. Cities having $\gamma$ values like this are such that the net new wealth captured per existing citizen -- thanks to each new immigrant -- quickly decays to zero. The consequence is that city governments and existing citizens can have a conflict of interest when it comes to immigration.
Equilibration
In a local population that has freedom of movement, we can expect the migration of people to push the benefit per person to be equal across cities. In cases like this, we should then have
\begin{eqnarray}\tag{5} \label{5} a_i N^{\gamma_i -1} \approx a_j N^{\gamma_j -1}, \end{eqnarray} for each city $i$ and $j$ for which there is low mutual migration costs. We point out that this is not the same result required to maximize the net, global output. This latter score is likely that which an authoritarian government might try to maximize. To maximize net utility, we need to have the marginal utility per city equal across cities, which means \begin{eqnarray}\tag{6} \label{6} \partial_N U_i(N) = \partial_N U_j(N) \end{eqnarray} or, \begin{eqnarray}\tag{7} \label{7} a_i \gamma_i N^{\gamma_i -1} = a_j \gamma_j N^{\gamma_j -1}. \end{eqnarray} We see that (\ref{5}) and (\ref{7}) differ in that there are $\gamma$ factors in (\ref{7}) that are not present in (\ref{5}). This implies that as long as the $\gamma$ values differ across cities, there will be a conflict of interest between the migrants and the government.
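A small sketch that makes the contrast concrete (Python; the $a$, $\gamma$, and $N$ values are invented purely for illustration):

import numpy as np

a = np.array([1.0, 1.2])        # productivity prefactors for two hypothetical cities
gamma = np.array([1.15, 1.10])  # returns-to-scale exponents (both > 1)
N = np.array([5e6, 1e6])        # current populations

bpp = a * N**(gamma - 1.0)                   # benefit per person, eq. (2): free migration equalizes this
marginal_u = a * gamma * N**(gamma - 1.0)    # marginal utility of a newcomer, eq. (3): a planner equalizes this
print(bpp, marginal_u)          # the two criteria differ by the factor gamma, as in eqs. (5) vs (7)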
|
Author Message Topic: small caps zedler Replies: 6 Views: 30036 Forum: MathTime Pro II Beta Posted: Sun Apr 23, 2006 11:37 pm Subject: Re: small caps But will this package now break Y&Y? There are issues of font naming and font encoding involved, which of course are handled differently by Y&Y vs.
You asked for a solution to use the Adobe ... Topic: small caps zedler Replies: 6 Views: 30036 Forum: MathTime Pro II Beta Posted: Sun Apr 23, 2006 12:44 pm Subject: Re: small caps http://home.vr-web.de/was/metrics/ptms.txt
http://home.vr-web.de/was/metrics/ptms.zip
Michael Topic: spacing zedler Replies: 3 Views: 20174 Forum: MathTime Pro II Beta Posted: Tue Apr 18, 2006 7:44 am Subject: Re: spacing Hello,
what do you think about the following, is it worth to introduce kerning pairs with the comma?
\documentclass{minimal}
\usepackage{mtpro2}
%\usepackage{MinionPro}
%\usepackage{lucimatx} ... Topic: bad \sim in subscript, mtp2 beta 0.991 zedler Replies: 10 Views: 41767 Forum: MathTime Pro II Beta Posted: Fri Apr 07, 2006 5:18 am Subject: Re: bad subscript \sim persists in MiKTeX Still a _subscript_ \sim does not preview correctly in Yap nor print properly from Yap: The left and right ends of the character are very thick.
You need to delete the old mtpro2-related pk files.
... Topic: bad \sim in subscript, mtp2 beta 0.991 zedler Replies: 10 Views: 41767 Forum: MathTime Pro II Beta Posted: Mon Apr 03, 2006 7:07 am Subject: Re: bad \sim in subscript, mtp2 beta 0.991 With the first beta version of MTPro2 I can reproduce the error with Miktex (Fontlab also showed an erraneous glyph), with the current beta everything is fine.
Michael Topic: \bulletdashcirc and \circdashbullet zedler Replies: 1 Views: 15762 Forum: MathTime Pro II Beta Posted: Fri Mar 31, 2006 2:08 am Subject: \bulletdashcirc and \circdashbullet Hi,
the new symbols \bulletdashcirc and \circdashbullet are too long, in addition the dash penetrates the inside of the circle.
I'd also advocate to scale down the circle and bullet slightly.
M ... Topic: spacing of mathrm zedler Replies: 7 Views: 34333 Forum: MathTime Pro II Beta Posted: Wed Mar 29, 2006 3:25 pm Subject: Re: spacing of mathrm (Is this Lucida or Lucida-Bright?)
Pctex's Lucida fonts.
By the way, what is
Excerpt from a paper (Deadline tomorrow Mar 30, Hawai time ;-))
\begin{document}\let\mathbf\mbf
... Topic: spacing of mathrm zedler Replies: 7 Views: 34333 Forum: MathTime Pro II Beta Posted: Wed Mar 29, 2006 10:00 am Subject: Re: spacing of mathrm Sorry that
I can apply manual spacings, the "\mathrm j" is stored in a macro anyway and the empty brackets I need only once. Perhaps you're interested, I've put together a collection showing how d ... Topic: spacing of mathrm zedler Replies: 7 Views: 34333 Forum: MathTime Pro II Beta Posted: Wed Mar 29, 2006 5:23 am Subject: Re: spacing of mathrm There's not much I can do about that---if you are using Times as the
text font, then in text (j also touches!
Yes, I really want to typeset $\exp(\mathrm j\omega\tau=$ ;-)
I suppose this can o ... Topic: spacing of mathrm zedler Replies: 7 Views: 34333 Forum: MathTime Pro II Beta Posted: Mon Mar 27, 2006 11:53 am Subject: spacing of mathrm Hello,
\documentclass{book}
\usepackage{times,mtpro2}
\begin{document}
$(\mathrm j$
\end{document}
gives touching glyphs.
Michael Topic: subscriptcorrection zedler Replies: 14 Views: 54570 Forum: MathTime Pro II Beta Posted: Fri Mar 24, 2006 9:56 am Subject: Re: subscriptcorrection No, and this shouldn't have anything to do with using the subscriptcorrection option. Whether or not it is used, _ will apply to the next token or group, and $a_\text$ should give an error
Omittin ... Topic: subscriptcorrection zedler Replies: 14 Views: 54570 Forum: MathTime Pro II Beta Posted: Fri Mar 24, 2006 6:38 am Subject: subscriptcorrection Hello,
\documentclass{book}
\usepackage{times,amsmath}
\usepackage[subscriptcorrection]{mtpro2}
\begin{document}
$a_\text{eff}$
\end{document}
The above example gives an error, can this be ... Topic: fine tuning kerns zedler Replies: 2 Views: 16738 Forum: MathTime Pro II Beta Posted: Mon Mar 20, 2006 5:48 am Subject: Re: fine tuning kerns I find the pairs involving \Theta, \Omega, \Phi quite tight. On the other hand, comparing mtpro2 with Y&Y's mathtime the equations take quite a bit of extra space.
Michael Topic: first impressions zedler Replies: 15 Views: 61632 Forum: MathTime Pro II Beta Posted: Mon Mar 20, 2006 5:09 am Subject: first impressions I don't think it is necessary to break compatibility. If the hypothetical option rmdefault is not used, the behaviour could be the same as it is now.
How about this one?
\documentclass{book}
... Topic: spacing zedler Replies: 3 Views: 20174 Forum: MathTime Pro II Beta Posted: Thu Mar 09, 2006 9:20 am Subject: spacing Hello,
Please have a look at the spacing of these equations
- $[]_{\langle n\times m\rangle}$ (the brackets touch)
- $Z_{F1}$ (F and 1 spacing)
- $\frac{L_L}{L_R}$ (the subscript L too close ...
|
In Proposition 7.6 of his paper "Higher Direct Images of Dualizing Sheaves", Kollár shows that if $X,Y$ are smooth complex projective varieties and $f:X\rightarrow Y$ is a proper surjective morphism with connected fibers, then there is an isomorphism $R^df_{\ast}\omega_X \simeq \omega_Y$, where $d=\dim X-\dim Y$.
I was wondering whether this is true in positive characteristic. By Grothendieck duality one has $$Rf_{\ast}\mathcal{O}_X \simeq Rf_{\ast}R\mathcal{H}om(\omega_X^{\bullet},\omega_X^{\bullet}) \simeq R\mathcal{H}om(Rf_{\ast}\omega_X^{\bullet}, \omega_Y^{\bullet})[-d]$$ so taking 0-th cohomology we get $$f_{\ast}\mathcal{O}_X \simeq \mathcal{E}xt^{-d}(Rf_{\ast}\omega_X^{\bullet}, \omega_Y^{\bullet})$$
From the spectral sequence $$\mathcal{E}xt^p(\mathcal{H}^{-q}(\mathcal{F}^{\bullet}), \mathcal{G}^{\bullet}) \Longrightarrow \mathcal{E}xt^{p+q}(\mathcal{F}^{\bullet},\mathcal{G}^{\bullet})$$ it follows that the RHS is isomorphic to $\mathcal{H}om(R^df_{\ast}\omega_X,\omega_Y)$ and the LHS is just $\mathcal{O}_Y$ by the connected fibers assumption so we have $$\mathcal{O}_Y \simeq \mathcal{H}om(R^df_{\ast}\omega_X,\omega_Y)$$
Finally since $R^df_{\ast}\omega_X$ is locally free we conclude that $R^df_{\ast}\omega_X \simeq \omega_Y$.
|
There is a remark (Remark IV.4.3.2) in Shrawan Kumar's book* that says it is unknown to the author that the set of smooth points of an ind-variety is open.I was wondering if this has been answered ...
Let $\mathbb{k}$ be an algebraically closed field and let $A$ be a $\mathbb{k}$-algebra which is a free module of rank $r$ over some central subalgebra $Z_0$. If $Z_0$ is affine and $r$ is a finite ...
Let $R$ be an affine $\mathbb C-$algebra with a linear involution $x\rightarrow \bar x=\iota(x)$, let $S=R/\iota$ and $\psi:R^n\times R^n\rightarrow R$ be an $R/S-$hermitian form. Finaly let $$SU_n(R)=...
Let $(X_n)_{n \geq 0}$ be a family of schemes. Let $$X_0 \to X_1 \to X_2 \to \dotsc$$ be a sequence of closed immersions (which therefore gives rise to an ind-scheme). Under which (necessarly and/or ...
What is the correct definition of an ind-scheme?I ask this because there are (at least) two definitions in the literature, and they really differ.Definition 1. An ind-scheme is a directed colimit ...
Let $G$ be either a compact, simple, simply-connected Lie group, or a simply-connected complex reductive group (so either $SU(n)$ or $SL(n,\mathbb{C})$, for instance), or even complex affine. I don't ...
(The question of the type "how to define?")Let $(R,\mathfrak{m})$ be a local ring over a field $k$ of zero characteristic. Consider the matrices over this ring, $Mat(m,R)$. I think of $Mat(m,R)$ as ...
Let $X$ a ind-scheme of ind-finite type and ind-affine. (e.g, take a k- smooth, affine scheme of finte type $T$, $C$ a smooth projective curve over $k$ and $x$ a closed point, then $X=T(C-x)$ verifies ...
Is I consider an ind scheme such as $G(k((t)))$ for a reductive connected group over $k=\bar{k}$I have the conjugacy action of $G(k[[t]])$.In what category can I make the quotient $[G(k((t))/ad(G(...
Say I have an ind scheme $X = \cup_i X_i$ over a field $k$. I have its tangent bundle $\hom_k(k[\epsilon], X)$ which I can think of as ind scheme via $\cup_i \hom_k(k[\epsilon],X_i)$. The problem is ...
I've learned when you have a integral smooth scheme line bundles are the same as Cartier divisors are the same as Weil divisors. My question is to what extent does this continue to hold (if at all) ...
It is well known that Serre [FAC] gave us a nice categorical description for quasi coherent sheaves on projective scheme, it is a proj-category.(graded modules category localized by Serre subcategory)...
|
Seminar
Parent Program: Location: MSRI: Baker Board Room
Thomas McConville, Preparation for “Circuits and Hurwitz action in finite root systems”
Joseph Doolittle, Nerves of Simplicial Complexes and Reconstruction
The standard nerve of a covering $U=\{U_1, \ldots U_j\}$ of some topological space $X$ is the simplicial complex whose faces are subsets $\sigma$ of $U$ such that $\bigcap_{U_i \in \sigma} U_i \neq \emptyset$. In this talk, we introduce a generalization of this definition, and investigate some of its properties when we restrict $X$ to be a simplicial complex, and let $U$ be the set of its facets. We then follow a generalization of a construction given by Grunbaum for the standard nerve to obtain a simplicial complex whose nerve is some desired complex. This generalized construction will give a reconstruction result for simplicial complexes. The generalized nerve is a very powerful tool, and many results have already been discovered. It will have applications in many problems across topological combinatorics.
|
Vopenka
Vopěnka's principle is a large cardinal axiom at the upper end of the large cardinal hierarchy that is particularly notable for its applications to category theory. In a set theoretic setting, the most common definition is the following:
For any language $\mathcal{L}$ and any proper class $C$ of $\mathcal{L}$-structures, there are distinct structures $M, N\in C$ and an elementary embedding $j:M\to N$.
For example, taking $\mathcal{L}$ to be the language with one unary and one binary predicate, we can consider for any ordinal $\eta$ the class of structures $\langle V_{\alpha+\eta},\{\alpha\},\in\rangle$, and conclude from Vopěnka's principle that a cardinal that is at least $\eta$-extendible exists. In fact if Vopěnka's principle holds then there is a stationary proper class of extendible cardinals; bounding the strength of the axiom from above, we have that if $\kappa$ is almost huge, or even almost-high-jump, then $V_\kappa$ satisfies Vopěnka's principle.
Formalizations
As stated above and from the point of view of ZFC, this is actually an axiom schema, as we quantify over proper classes, which from a purely ZFC perspective means definable proper classes. A somewhat stronger alternative is to view Vopěnka's principle as an axiom in second-order set theory capable to dealing with proper classes, such as von Neumann-Gödel-Bernays set theory. This is a strictly stronger assertion. [1] Finally, one may relativize the principle to a particular cardinal, leading to the concept of a Vopěnka cardinal.
Vopěnka's principle can be formalized in first-order set theory as a schema, where for each natural number $n$ in the meta-theory there is a formula expressing that Vopěnka’s Principle holds for all $Σ_n$-definable (with parameters) classes.[1]
Vopěnka cardinals
An inaccessible cardinal $\kappa$ is a Vopěnka cardinal if and only if $V_\kappa$ satisfies Vopěnka's principle, where we interpret the proper classes of $V_\kappa$ as the subsets of $V_\kappa$ of cardinality $\kappa$. Because of a characterization of Vopěnka's principle in terms of graphs, a cardinal $\kappa$ is Vopěnka if and only if $\kappa$ is inaccessible and any $\kappa$-sized set $G$ of $<\kappa$-sized nonisomorphic graphs has some $g_0$ and $g_1$ with $g_0$ a proper subgraph of $g_1$. (Need to cite sources)
As we mentioned above, every almost huge cardinal is a Vopěnka cardinal.
Equivalent statements $C^{(n)}$-extendible cardinals
The schema form of Vopěnka's principle is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals for every $n$; indeed there is a level-by-level stratification of Vopěnka's principle, with Vopěnka's principle for $\Sigma_{n+2}$-definable classes corresponding to the existence of a $C^{(n)}$-extendible cardinal greater than the ranks of the parameters (see the section "Variants"). [3]
Strong Compactness of Logics
Vopěnka's principle is equivalent to the following statement about logics as well:
For every logic $\mathcal{L}$, there is a cardinal $\mu_{\mathcal{L}}$ such that for any language $\tau$ and any $\mathcal{L}(\tau)$-theory $T$, $T$ is satisfiable if and only if every $t\subseteq T$ such that $|t|<\mu_{\mathcal{L}}$ is satisfiable. [4]
This $\mu_{\mathcal{L}}$ is called the strong compactness cardinal of $\mathcal{L}$. Vopěnka's principle therefore is equivalent to every logic having a strong compactness cardinal. This is very similar in definition to the Löwenheim–Skolem number of $\mathcal{L}$, although it is not guaranteed to exist.
Here are some examples of strong compactness cardinals of specific logics:
If $\kappa\leq\lambda$ and $\lambda$ is strongly compact or $\aleph_0$, then the strong compactness cardinal of $\mathcal{L}_{\kappa,\kappa}$ is at most $\lambda$. Similarly, if $\kappa\leq\lambda$ and $\lambda$ is extendible, then for any natural number $n$, the strong compactness cardinal of $\mathcal{L}^n_{\kappa,\kappa}$ ($\mathcal{L}_{\kappa,\kappa}$ with $n+1$-th order logic) is at most $\lambda$. Therefore for any natural number $n$, the strong compactness cardinal of $n+1$-th order finitary logic is at most the least extendible cardinal. Locally Presentable Categories
Vopěnka's principle is equivalent to the axiom stating "no large full subcategory $C$ of any locally presentable category is discrete." (Sources needed). Equivalently, no large full subcategory of Graph (the category of all graphs) is discrete; that is, for any proper class of simple directed graphs, there is at least one pair of nonequal graphs $G$ and $H$ in the class such that $G$ is a subgraph of $H$. This is a $\Pi^1_1$ statement, so the least Vopěnka cardinals are not even weakly compact (although the least weakly compact cardinal is much, much, much smaller than the least Vopěnka cardinal, if it exists).
Intuitively, a "category" is just a class of mathematical objects with some notion of "morphism", "homomorphism", "isomorphism", (etc.). For example, in Set, the category of all sets, homomorphisms are just injections, and isomorphisms are bijections. In categories of groups and models, homomorphisms and isomorphisms share their actual names.
A "locally small category" $C$ is one with only set-many morphisms between any two objects of $C$. This is one where the objects of $C$ behave "set-like" in the sense that, usually, the number of morphisms between two set-sized objects is at most the number of functions between their universes (like in groups and in graphs). A "locally presentable category" is a locally small category with a couple more really nice properties; you can "generate" all of the objects from set-many objects in the category.
Vopěnka's principle intuitively states that if you have a locally presentable category $C$, then any proper class of objects of $C$ has some nonisomorphic objects $c$ and $d$ where $c$ has a morphism into $d$.
Woodin cardinals
There is a strange connection between the Woodin cardinals and the Vopěnka cardinals. In particular, Vopěnkaness is equivalent to two strengthening variants of Woodinness, namely the Woodin for Supercompactness cardinals and the $2$-fold Woodin cardinals. As a result, every Vopěnka cardinal is Woodin.
Elementary Embeddings Between Ranks
An equivalent statement to Vopěnka's principle is that for any proper class $C\subseteq ORD$, there are $\alpha\in C$, $\beta\in C$, and a nontrivial elementary embedding $j:\langle V_\alpha;\in,P\rangle\rightarrow\langle V_\beta;\in,P\rangle$. Vopěnka's principle quite obviously implies this. The reason the converse holds is because every elementary embedding can be "encoded" (in a sense) into one of these. For more information, see [5].
Other points to note
Whilst Vopěnka cardinals are very strong in terms of consistency strength, a Vopěnka cardinal need not even be weakly compact. Indeed, the definition of a Vopěnka cardinal is a $\Pi^1_1$ statement over $V_\kappa$ (Vopěnka's principle itself is $\Pi^1_1$), and $\Pi^1_1$-indescribability is one of the equivalent definitions of weak compactness. Thus, the least weakly compact Vopěnka cardinal must have (many) other Vopěnka cardinals less than it.
Variants
(Boldface) $VP(\mathbf{Σ_n})$ denotes the fragment of Vopěnka’s Principle for $Σ_n$-definable classes and (lightface) $VP(Σ_n)$ is the weaker principle, where parameters are not allowed in the definition of the class (with analogous definitions for $Π_n$ and $∆_n$).
Vopěnka-like principles $VP(κ, \mathbf{Σ_n})$ for cardinal $κ$ state that for every proper class $\mathcal{C}$ of structures of the same type that is $Σ_n$-definable with parameters in $H_κ$ (the collection of all sets of hereditary size less than $κ$), $\mathcal{C}$ reflects below $κ$, namely for every $A ∈ C$ there is $B ∈ H_κ ∩ C$ that elementarily embeds into $A$.
Results:
* For every $Γ$, $VP(κ, Γ)$ for some $κ$ implies $VP(Γ)$.
* $VP(κ, \mathbf{Σ_1})$ holds for every uncountable cardinal $κ$.
* $VP(Π_1) \iff VP(κ, Σ_2)$ for some $κ \iff$ There exists a supercompact cardinal.
* $VP(\mathbf{Π_1}) \iff VP(κ, \mathbf{Σ_2})$ for a proper class of cardinals $κ \iff$ There is a proper class of supercompact cardinals.
* For $n ≥ 1$, the following are equivalent:
** $VP(Π_{n+1})$
** $VP(κ, \mathbf{Σ_{n+2}})$ for some $κ$
** There exists a $C(n)$-extendible cardinal.
* The following are equivalent:
** $VP(Π_n)$ for every $n$.
** $VP(κ, \mathbf{Σ_n})$ for a proper class of cardinals $κ$ and for every $n$.
** $VP$
** For every $n$, there exists a $C(n)$-extendible cardinal.
Generic
(Information in this section from [6])
The Generic Vopěnka’s Principle states that for every proper class $\mathcal{C}$ of structures of the same type there are $B ≠ A$, both in $\mathcal{C}$, such that $B$ elementarily embeds into $A$ in some set-forcing extension.
(Boldface) $gVP(\mathbf{Σ_n})$ denotes the fragment of Generic Vopěnka’s Principle for $Σ_n$-definable classes in some set-forcing extension and (lightface) $gVP(Σ_n)$ is the weaker principle, where parameters are not allowed in the definition of the class (with analogous definitions for $Π_n$ and $∆_n$). (?) ......
References
* Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3–4):213–240, 2012.
* Perlmutter, Norman. The large cardinals between supercompact and almost-huge. 2010.
* Bagaria, Joan; Casacuberta, Carles; Mathias, A. R. D.; Rosický, Jiří. Definable orthogonality classes in accessible categories are small. Journal of the European Mathematical Society 17(3):549–589.
* Makowsky, Johann. Vopěnka's Principle and Compact Logics. Journal of Symbolic Logic.
* Kanamori, Akihiro. The Higher Infinite. Second edition, Springer-Verlag, Berlin, 2009. (Large cardinals in set theory from their beginnings; paperback reprint of the 2003 edition.)
* Bagaria, Joan; Gitman, Victoria; Schindler, Ralf. Generic Vopěnka's Principle, remarkable cardinals, and the weak Proper Forcing Axiom. Arch. Math. Logic 56(1–2):1–20, 2017.
|
Let us construct an axiom schema that declares that every set is definable from previous sets. For any formulas $\phi$, $\pi$, $\tau$, the following is instance of this axiom schema:
$$\forall x\forall y.\left( \begin{array} {rl} & \phi(\emptyset) \\ \land & \phi(\mathbb N) \\ \land & (\phi(x) \land \phi(y) \implies \phi(\{x,y\})) \\ \land & (\phi(x) \implies \phi (\bigcup x)) \\ \land & (\phi(x) \implies \phi (P(x))) \\ \land & (\phi(x) \implies \phi(\{z \in x: \pi(z)\})) \\ \land & (\text{$\tau$ is a function formula} \land \phi(x) \implies \phi(\tau[x]))\end{array} \right) \implies \forall x. \phi(x)$$
It essentially goes through each of the ZF axioms, constructing a statement saying that $\phi$ is satisfied by the set that axiom defines. For example, $(\text{$\tau$ is a function formula} \land \phi(x) \implies \phi(\tau[x]))$ represents the axiom schema of replacement. The axiom then states that this statement implies that
all sets satisfy $\phi$.
Note in particular that this is more restrictive than $V = L$, since it asserts that ordinals need to be definable as well.
In particular, it implies there are no inaccessible cardinals.
Is this axiom schema consistent with ZF? (Also, does it have a name?)
EDIT: The motivation for this axiom is to be comparable to the induction schema of Peano arithmetic. Essentially, it is asking: if we make ZF more like Peano arithmetic, is it still consistent? We could even replace this with a second-order axiom, and then ask whether, like Peano arithmetic, it becomes categorical! Of course, this would be a different set theory than the one normally envisioned, but it is still of interest.
|
I am currently reading through a proof of Proposition 6 in
Chernoff's theorem and discrete time approximations of Brownian motion on manifolds OG Smolyanov, H Weizsäcker, O Wittich - Potential Analysis
which is relating the geodesic distance in a Riemannian manifold $L$ to the geodesic distance in a Riemannian manifold $M$ with $L$ embedded in $M$ via $\phi:L\rightarrow M$.
Let $\xi$ denote Riemannian normal coordinates in $L$ and $\eta$ denote Riemannian normal coordinates in $M$. Throughout, all indexing variables using latin characters (e.g. $a,b,u,v$) will range from $1$ to $\dim(L)$ and all greek characters (e.g. $\alpha,\beta, \rho,\mu$) range from $1$ to $\dim(M)$.
In one of the final lines of the proof, we are trying to show that $$\left(\frac{\partial^2g_{ab}^L}{\partial\xi^u\partial\xi^v}-\frac{\partial^2g_{\alpha\beta}^M}{\partial\xi^\rho\partial\xi^\mu}\frac{\partial\phi^\rho}{\partial\xi^u} \frac{\partial\phi^\mu}{\partial\xi^v} \frac{\partial\phi^\alpha}{\partial\xi^a} \frac{\partial\phi^\beta}{\partial\xi^b}\right)(0)\xi^a\xi^b \xi^u\xi^v=0$$.
It is stated that by relating the partial derivatives of the metric tensor to curvature using the Taylor Expansion: $$ g_{ab}(\xi)=\delta_{ab}+\frac{1}{3} R_{auvb}(0)\xi^u\xi^v +O(|\xi|^3)$$
we obtain $$\left(\frac{\partial^2g_{ab}^L}{\partial\xi^u\partial\xi^v}-\frac{\partial^2g_{\alpha\beta}^M}{\partial\xi^\rho\partial\xi^\mu}\frac{\partial\phi^\rho}{\partial\xi^u} \frac{\partial\phi^\mu}{\partial\xi^v} \frac{\partial\phi^\alpha}{\partial\xi^a} \frac{\partial\phi^\beta}{\partial\xi^b}\right)(0)\xi^a\xi^b \xi^u\xi^v=2\left(R_{auvb}-R_{\alpha\rho\mu\beta}\frac{\partial\phi^\rho}{\partial\xi^u} \frac{\partial\phi^\mu}{\partial\xi^v} \frac{\partial\phi^\alpha}{\partial\xi^a} \frac{\partial\phi^\beta}{\partial\xi^b}\right)(0)\xi^a\xi^b \xi^u\xi^v.$$
I see that $2R_{auvb}$ is the second order term in the Taylor expansion of $g^L_{ab}$ and thus should be the same as $\frac{\partial^2g_{ab}^L}{\partial\xi^u\partial\xi^v}(0)$.
However, this seems strange to me, given the classical formula for the components of the curvature tensor (where Christoffel symbols vanish): $$ R_{auvb}=\frac{1}{2}\left(\frac{\partial^2g_{ab}}{\partial\xi^u\partial\xi^v}+ \frac{\partial^2g_{uv}}{\partial\xi^a\partial\xi^b}- \frac{\partial^2g_{av}}{\partial\xi^b\partial\xi^u}- \frac{\partial^2g_{ub}}{\partial\xi^a\partial\xi^v}\right)$$
This should suggest that the final three terms cancel each other out, but I can't see a reason why that would occur.
|
I currently try to design my first transistor circuit to switch an LED on and off. So I came up with this circuit diagram:
I have to use the following components:
What I need to figure out are the values of R1 and R2. So first I picked a desired current of 70mA from the 5V source to GND. According to the LED datasheet, 70mA of current induces a voltage drop of 1.3V. Figure 11 of the transistor datasheet shows that the saturation voltage at 70mA is roughly 0.06V. Now we can calculate R2 using Ohm's Law:
$$R2 = \frac{5V - V_{D1} - V_{CE(sat)}}{I_{C}} = \frac{5V - 1.3V - 0.06V}{0.07A} = 52\Omega$$
In order to get the base current \$I_{B}\$, I looked for DC current gain in the datasheet. The lowest value is \$\beta = 10\$ as shown in Figure 11. Therefore
$$I_{B} = \frac{I_{C}}{\beta} = \frac{0.07A}{10} = 7mA$$
Which is definitely a problem. I don't want to draw more than 2mA from the logic source as it can possibly be damaged. To stay on the safe side, I need to find a way to increase the lowest possible value of \$\beta\$ to reduce the base current.
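For reference, a minimal Python sketch of the arithmetic above (all values are the ones assumed in the question, read from the datasheets):

V_SUPPLY = 5.0      # V, supply rail
V_LED = 1.3         # V, LED forward drop at ~70 mA (from the LED datasheet)
V_CE_SAT = 0.06     # V, collector-emitter saturation voltage (from the transistor datasheet)
I_C = 0.070         # A, desired collector current
BETA_MIN = 10       # worst-case DC current gain read from the datasheet

R2 = (V_SUPPLY - V_LED - V_CE_SAT) / I_C   # current-limiting resistor
I_B = I_C / BETA_MIN                       # base current needed for hard saturation

print(f"R2 = {R2:.0f} ohm, required I_B = {I_B*1e3:.1f} mA")  # 52 ohm and 7 mA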
Is this the way to go? And if yes, how would I accomplish an increase of \$\beta\$ ?
Please don't tell me to buy other transistors with higher DC current gain, although I will definitely do that later.
|
A topological space $X$ is said to be irreducible if for any decomposition of $X$ with $X=A \cup B$ where $A,B$ are closed subsets of $X$ we either have $X=A$ or $X=B.$
Theorem$:$
Let $X$ be a topological space such that any non-empty open subset of $X$ is dense in $X.$ Then show that $X$ is irreducible.
My attempt $:$
Suppose $X=A \cup B$ with $A$ and $B$ closed in $X.$ If either $X=A$ or $X=B$ we are through. So WLOG let us assume that $A \subsetneq X$ and $B \subsetneq X.$ Then $X \setminus A$ and $X \setminus B$ are non-empty open subsets of $X.$ So by the given hypothesis they are both dense in $X.$ Therefore $$\begin{align} \overline {X \setminus A} & = \overline {X \setminus B} = X. \\ \implies X \setminus \operatorname {int} (A) & = X \setminus \operatorname {int} (B) = X. \\ \implies \operatorname {int} (A) & = \operatorname {int}(B) = \emptyset. \end{align}$$
So $A$ and $B$ are nowhere dense subsets of $X.$ Therefore $X$ is meagre or a set of first category.
Now I got stuck at this stage. Would anybody please help me to proceed further?
Thank you very much.
|
I was watching Geoff Hinton's lecture from May 2013 about the history of deep learning and his comments on the rectified linear units (ReLU's) made more sense that my previous reading on them had. Essentially he noted that these units were just a way of approximating the activity of a large number of sigmoid units with varying biases.
"Is that true?" I wondered. "Let's try it and see ... "
%pylab inline
Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline]. For more information, type 'help(pylab)'.
Let's first define a logistic sigmoid unit. This looks like
\[ f(x;\alpha) = \frac{1}{1+ \exp(-x + \alpha)} \]
where $ \alpha $ is the offset parameter that sets the value of $x$ at which the logistic evaluates to 0.5. Programmatically this looks like:
def logistic(x, offset):
    # x is an array of numbers at which to evaluate the logistic unit, offset is the offset value
    return 1 / (1 + np.exp(-x + offset))
When evaluated for a number of values we see a distinctive 's'-shape. As the limits of the function evaluation expand this shape becomes much more 'squashed.' This is one of the difficulties of using such a function to limit the input values to a learning system. If you're unsure of the range of values to be inputted then the output can easily saturate for very large or very small values. Input normalization can help this, but sometimes this isn't practical (e.g. if you have a bimodal input distribution with heavy tails.) The $ \alpha $ parameter lets you adjust for this.
x = np.linspace(-10, 10, 200)
y = logistic(x, 0)
fig = plot(x, y)
xlabel('input x')
ylabel('output value of logistic')
gcf().set_size_inches(6, 5)
So what happens if you were to sum many of these functions, all with a different bias?
N_sum = 10  # Normalization constant for the sum of logistics
offsets = np.linspace(0, np.max(x), 100)
y_sum = np.zeros(shape(x))
for offset in offsets:
    y_sum += logistic(x, offset)
y_sum = y_sum / N_sum
plot(x, y_sum)
xlabel('Input value')
ylabel('Logistic Unit Summation output')
Yep, that definitely is starting to look like the ReLU's.
So it turns out that you can approximate this summation using the equation
\[ f(x) = \log(1+\exp(x)) \]
x = np.linspace(-10, 10, 200)
inp = np.linspace(-10, 10, 200)
inp[inp < 0] = 0
f2 = plot(x, inp)
xlabel('Input value')
ylabel('ReLU output')
relu_approx = np.log(1 + np.exp(x))
f3 = plot(x, relu_approx)
From here Hinton said, "do you really need the 'log' and the 'exp', or could I just take $ max(0,input)$? And that works fine", thus giving you the ReLU.
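As a quick numerical check (my addition, reusing the x array defined above), the $\max(0, x)$ unit and the softplus curve differ by at most $\log 2$, and only near zero:

relu = np.maximum(0, x)                 # Hinton's simplification: max(0, input)
softplus = np.log(1 + np.exp(x))        # the "smooth" version from above
print(np.max(np.abs(relu - softplus)))  # gap is largest near x = 0, where it approaches log(2) ~ 0.69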
Hinton's discussion is embedded below. He starts talking about different learning units at 27 minutes, 10 seconds.
from IPython.display import YouTubeVideo
YouTubeVideo('vShMxxqtDDs?t=27m10s')
|
Filtering time-series data can be a tricky business. Many times what seems obvious to our eyes is difficult to mathematically put into practice. A recent technique I've been reading about is called "total variation minimization" filtering; a close relative is "$ \ell_1 $ trend filtering." Another member of this family is the Hodrick-Prescott (HP) trend filter. In all three of these types of filters, we're presented with a set of noisy data, $ y = x + z $ (where $ x $ is the real signal and $ z $ is some noise), and we're trying to fit a simpler model or set of lines to this data. For the TV filter, the type of lines we're fitting to the data are lines that don't have many large "jumps", or mathematically, $ \vert x_{n-1} - x_{n} \vert $, the difference between two of the fitted values, is small. In HP and $ \ell_1 $ trend filtering, the goal is to make sure that the 2nd derivative is small. Mathematically this looks like minimizing the term$$ (x_{n-2} - x_{n-1}) - (x_{n-1} - x_{n}) . $$
This makes sense, because the only time that the term above goes to zero is if the data is in a straight line - no "kinks" in the trendline. In HP filtering this term is squared and in $ \ell_1 $ filtering this term is just the absolute value (the $ l_1 $ norm). HP filtering fits a kind of spline to the data, while the $ \ell_1 $ filtering fits a piecewise linear function (straight lines that join at the knots).
The actual cost function looks a bit like this:$$ c(x) = \frac{1}{2}\sum_{i=1}^{n} (y_i - x_i)^2 + \lambda P(x) $$
where $ P(x) $ is one of the penalty terms we talked about above and $ \lambda $ controls the strength of the denoising. Again, remember that $ y $ is our observed signal and $ x $ is our projected or estimated signal - the trend. Setting $ \lambda $ to infinity will cause the fitted line to be a straight line through the data, and setting $ \lambda $ to zero will fit a line that looks identical to the original data. In HP filtering the whole function we're minimizing looks like this$$ c(x) = \frac{1}{2}\sum_{i=1}^{n} (y_i - x_i)^2 + \lambda\sum_{i = 2}^{n-1} ((x_{i-1} - x_{i}) - (x_{i} - x_{i+1}))^2 $$
and so $ \hat{x} = \arg \min_x c(x) $.
In order to find the minimum value of $ \hat{x} $ we can take the derivative of $ c(x) $ with respect to $ x $ and solve for zero. This isn't too difficult, and having both the terms in the HP filter squared makes this a bit easier - $ \ell_1 $ filtering is trickier.
Before we compute the derivative, let's rewrite the ending penalty term as a matrix multiplication. In order to compute $ ((x_{i-1} - x_{i}) - (x_{i} - x_{i+1}))^2 $ for each value of $ i $, we can multiply our array of $ x $ values, $ x = (x_1, x_2, ..., x_n)^T \in \mathbb R^{n \times 1} $, by a matrix that takes the second difference. It will look like this$$ D = \begin{bmatrix} 1 & -2 & 1 & & & & \\ & 1 & -2 & 1 & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & 1 & -2 & 1 & \\ & & & & 1 & -2 & 1 \\ \end{bmatrix} $$
So now, $ D \in \mathbb R^{N-2 \times N} $ and $ Dx $ will now be a $ \mathbb R^{N-2 \times 1} $ vector describing the difference penalty. The cost function can now be rewritten as$$ c(x) = \frac{1}{2}\Vert y - x \Vert^2 + \lambda\Vert Dx\Vert^2 $$
Notice that both $ \Vert y - x \Vert^2 $ and $ \Vert Dx \Vert^2 $ are single values, since both of the inner quantities are vectors ($ N \times 1 $ and $ N-2 \times 1 $) and so taking the $ \ell_2 $ norm sums the square of the elements, yielding one value.
If we take the derivative of $ c(x) $ w.r.t $ x $, we have$$ \frac{\partial c(x)}{\partial x} = -y + x + 2\lambda D^TDx = 0 $$
The first part is taking the derivative of $ (y - x)^T(y - x) $ (which is a different way of writing the $ \ell_2 $ norm), and the latter part comes out of the tedious computation of the derivative of the $ \ell_2 $ norm of $ Dx $.
Rearranging this equation yields$$ (x + 2\lambda D^TDx) = y $$
Solving for $ x $ gives$$ (I + 2\lambda D^TD)^{-1}y = \hat x $$
This is super handy because it gives us an analytical way of solving for $ \hat x $ by just multiplying our $ y $ vector by a precomputed transformation matrix - $ (I + 2\lambda D^TD)^{-1} $ is a fixed matrix of size $ N \times N $.
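Not in the original post, but here is a minimal NumPy sketch of that closed-form solve, following the derivation above (because of the 1/2 in the cost, this lam corresponds to half the lamb passed to statsmodels below; the noisy sine is just made-up example data):

import numpy as np

def hp_trend(y, lam):
    """Return the HP trend of y by solving (I + 2*lam*D^T D) x = y."""
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]   # second-difference rows, as in the D matrix above
    A = np.eye(n) + 2.0 * lam * D.T @ D
    return np.linalg.solve(A, y)           # solve the linear system instead of explicitly inverting

# Example: smooth a noisy sine wave
t = np.linspace(0, 4 * np.pi, 300)
y = np.sin(t) + 0.3 * np.random.randn(t.size)
trend = hp_trend(y, lam=50.0)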
An Example
Let's take the stock price of Apple over time. We can import this via the matplotlib (!) module:
from matplotlib import finance
%pylab inline
pylab.rcParams['figure.figsize'] = 12, 8  # that's default image size for this interactive session
import statsmodels.api as sm
Populating the interactive namespace from numpy and matplotlib
import datetime
d1 = datetime.datetime(2011, 1, 1)
d2 = datetime.datetime(2012, 12, 1)
sp = finance.quotes_historical_yahoo('AAPL', d1, d2, asobject=None)
plot(sp[:,2])
title('Stock price of AAPL from Jan. 2011 to Dec. 2012')
Now that we have the historic data we want to fit a trend line to it. The Statsmodels package has this filter already built in. It comes installed with an Anaconda installation. You can give the function a series of data and the $ \lambda $ parameter and it will return the fitted line.
xhat = sm.tsa.filters.hpfilter(sp[:,2], lamb=100000)[1]
plot(sp[:,2])
hold(True)
plot(xhat, linewidth=4.)
The green line nicely flows through our data. What happens when we adjust the regularization? Since we don't really know what value to pick for $ \lambda $ we'll try a range of values and plot them.
lambdas = numpy.logspace(3, 6, 5)  # Generate a logarithmically spaced set of lambdas from 1,000 to 1 Million
xhat = []
for i in range(lambdas.size):
    xhat.append(sm.tsa.filters.hpfilter(sp[:,2], lambdas[i])[1])  # Keep the 2nd element (the trend) and append
plot(sp[:,2])
hold(True)
plot(transpose(xhat), linewidth=2.)
You can see we move through a continuum from highly fitted to loosely fitted data. Pretty nifty, huh? You'll also notice that the trend line doesn't have any sharp transitions. This is due to the squared penalty term. In $ \ell_1 $ trend filtering you'll end up with sharp transitions. Maybe this is good, maybe not. For filtering data where there's no guarantee that the data will be semi-continuous (maybe some sensor reading), perhaps the TV or $ \ell_1 $ filter is better. The difficulty is that by adding the $ \ell_1 $ penalty the function becomes non-differentiable and makes the solution a little more difficult - generally an iterative process.
|
Learning Objectives
Define density. Use density as a conversion factor.
Density (\(\rho\)) is a physical property found by dividing the mass of an object by its volume. Regardless of the sample size, density is always constant. For example, the density of a pure sample of tungsten is always 19.25 grams per cubic centimeter. This means that whether you have one gram or one kilogram of the sample, the density will never vary. The equation, as we already know, is as follows:
\[\text{Density}=\dfrac{\text{Mass}}{\text{Volume}}\]
or just
\[\rho =\dfrac{m}{V}\]
Based on this equation, it's clear that density can, and does, vary from element to element and substance to substance due to differences in the relationship of mass and volume. Pure water, for example, has a density of 0.998 g/cm\(^3\) at 25 °C. The average densities of some common substances are in Table \(\PageIndex{1}\). Notice that corn oil has a lower mass to volume ratio than water. This means that when added to water, corn oil will “float.”
Substance      Density at 25°C (g/cm\(^3\))
blood          1.035
body fat       0.918
whole milk     1.030
corn oil       0.922
mayonnaise     0.910
honey          1.420
Density can be measured for all substances: solids, liquids, and gases. For solids and liquids, density is often reported using the units of g/cm\(^3\). Densities of gases, which are significantly lower than the densities of solids and liquids, are often given using units of g/L.
Example \(\PageIndex{1}\): Ethyl Alcohol
Calculate the density of a 30.2 mL sample of ethyl alcohol with a mass of 23.71002 g
SOLUTION
\[\rho = \dfrac{23.71002\,g}{30.2\,mL} = 0.785\, g/mL \]
Exercise \(\PageIndex{1}\)
1. Find the density (in kg/L) of a sample that has a volume of 36.5 L and a mass of 10.0 kg.
2. If you have a 2.130 mL sample of acetic acid with mass 0.002234 kg, what is the density in kg/L?

Answer a: \(0.274 \,kg/L\)
Answer b: \(1.049 \,kg/L\)

Density as a Conversion Factor
Conversion factors can also be constructed for converting between different kinds of units. For example, density can be used to convert between the mass and the volume of a substance. Consider mercury, which is a liquid at room temperature and has a density of 13.6 g/mL. The density tells us that 13.6 g of mercury have a volume of 1 mL. We can write that relationship as follows:
13.6 g mercury = 1 mL mercury
This relationship can be used to construct two conversion factors:
\[\dfrac{13.6\:g}{1\:mL} = 1\]
and
\[\dfrac{1\:mL}{13.6\:g} = 1\]
Which one do we use? It depends, as usual, on the units we need to cancel and introduce. For example, suppose we want to know the mass of 2.0 mL of mercury. We would use the conversion factor that has milliliters on the bottom (so that the milliliter unit cancels) and grams on top so that our final answer has a unit of mass:
\[\mathrm{2.0\:\cancel{mL}\times\dfrac{13.6\:g}{1\:\cancel{mL}}=27.2\:g=27\:g}\]
In the last step, we limit our final answer to two significant figures because the volume quantity has only two significant figures; the 1 in the volume unit is considered an exact number, so it does not affect the number of significant figures. The other conversion factor would be useful if we were given a mass and asked to find volume, as the following example illustrates.
Density can be used as a conversion factor between mass and volume.
Example \(\PageIndex{2}\): Mercury Thermometer
A mercury thermometer for measuring a patient’s temperature contains 0.750 g of mercury. What is the volume of this mass of mercury?
SOLUTION
Steps for Problem Solving: Unit Conversion
Identify the "given" information and what the problem is asking you to "find."
Given: 0.750 g
Find: mL
List other known quantities
13.6 g/mL (density of mercury)
Prepare a concept map
Calculate
\[ 0.750 \; \cancel{\rm{g}} \times \dfrac{1\; \rm{mL}}{13.6 \; \cancel{\rm{g}}} = 0.055147 ... \; \rm{mL} \approx 0.0551\; \rm{mL} \nonumber\]
We have limited the final answer to three significant figures.
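Purely as an illustration (my addition, not part of the original text), the two conversions worked above can be scripted; the density value is the mercury figure quoted earlier:

DENSITY_HG = 13.6  # g/mL, density of mercury at room temperature (from the text)

def mass_from_volume(volume_ml, density=DENSITY_HG):
    """Convert a volume in mL to a mass in g, using density as the conversion factor."""
    return volume_ml * density

def volume_from_mass(mass_g, density=DENSITY_HG):
    """Convert a mass in g to a volume in mL, using density as the conversion factor."""
    return mass_g / density

print(mass_from_volume(2.0))    # 27.2 g (rounds to 27 g at two significant figures)
print(volume_from_mass(0.750))  # 0.0551... mL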
Exercise \(\PageIndex{2}\)
What is the volume of 100.0 g of air if its density is 1.3 g/L?
Answer: \(77 \, L\)

Summary

Density is defined as the mass of an object divided by its volume. Density can be used as a conversion factor between mass and volume.
|
Suppose $p$ is a prime number and $G$ is a finite group such that $\Phi(G) = D_4 = \langle a \rangle_4 \rtimes \langle b \rangle_2$, where $\Phi$ denotes the Frattini subgroup. Is it always true that $32$ divides $|G|$?
To solve that problem I tried to apply the method from the answer to the following question: A question about Frattini subgroup of specific form
That is, what I got:
Let $\Phi(G) = D_4$, and $\Phi(G) \le P \in {\rm Syl}_2(G)$.
Now $\Phi(G)$ cannot have a complement in $G$, since otherwise that complement would be contained in a maximal subgroup that did not contain $\Phi(G)$. So by the theorem of Gaschütz, $\Phi(G)$ does not have a complement in $P$. So $\Phi(G) < P$, and we only have to consider the case when $|P|=16$. Then, elements $g \in P \setminus \Phi(G)$ must have order dividing $8$, with $g^2 \in \Phi(G)$.
Now the conjugation action of $G$ on $\Phi(G)$ induces a subgroup $\bar{G} = \frac{G}{C_G(\Phi(G))}$ of ${\rm Aut}(\Phi(G)) \cong D_4$. If the image $\bar{P}$ of $P$ in $\bar{G}$ is not normal in $\bar{G}$, then $\bar{G}$ has more than one Sylow $2$-subgroup. But any subgroup of $D_4$ is a $2$-group, and thus $\bar{G}$ has a unique Sylow $2$-subgroup.
So $\bar{P} \unlhd \bar{G}$. Suppose $M = \langle g^2 \mid g \in P \rangle$. $M$ is a characteristic subgroup of $G$ and is contained in $\Phi(G)$. If $|P| \leq 16$, then there are three cases:
First one is, when $M$ is the proper characteristic subgroup of $\Phi(G)$ of order $4$. Then the image $\frac{\Phi(G)}{M}$ of $\Phi(G)$ has a complement in $\frac{P}{M}$, and hence, by Gaschütz's theorem again, $\frac{\Phi(G)}{M}$ has a complement $\frac{H}{M}$ in $\frac{G}{M}$. Then $|G:H| = 2$. So $H$ is a maximal subgroup of $G$ not containing $\Phi(G)$, contradiction.
Second one is, when $M$ is the proper characteristic subgroup of $\Phi(G)$ of order $2$.
Third one is, when $M=\Phi(G)$.
And here I am stuck, not knowing, what to do with the second and the third cases.
|
I would appreciate help with a valuation of a fixed income derivative, with an embedded exit option.
Summary: The goal is to provide a valuation of a fixed schedule of quarterly cash flows with an option to exit at any quarter (3 years from today) for a lump-sum exit payment (to dispose by refinancing). The lump-sum payment is calculated for each quarter based on the spot level of a basket of spreads, and there are no cash flows after the option is exercised. INPUTS:
$CF_T$ - array of quarterly cash flows assuming no exit,
$r_{k \times 12}$ - matrix containing 12 month history of $k$ discount margins,
$\operatorname{ExitCF}(r[1..k][\![t]\!],t)$ - function for calculation of the exit cash flow which depends on k spot-spreads at time t when the exit option is executed,
$\operatorname{DiscRate}$ - being the discount rate for the valuation.
Main Questions
Assuming there is only one spread in the basket, how to model the value of the option?
Does it make sense to use Vasicek model for modelling spreads as interest rates?
Does it make sense to normalize all spreads and model their Z-score through Vasicek model?
How to calibrate Vasicek model on normalized Z-scores of a basket of spreads?
Spreads: The basket of spreads is dominated by the first item, which makes up some 75%; the first two make up 85%. The basket is also highly correlated: the last item has roughly 60% correlation to the first, and the second roughly 82% correlation to the first. Having said that, I would normalize all spreads and look at their Z-scores:
$$ z_{k,t+1} = \frac{r_{k,t+1}-\operatorname{Mean}(r_{k,1 \cdots t+1})}{\operatorname{StDev}(r_{k,1 \cdots t+1}) \cdot \sqrt{4} /\sqrt{t+1}}, $$ where standard deviation is annualized by multiplying by $\sqrt{4}$ and divided to adjust for sample size, as the sample grows in the future.
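For concreteness only (my addition), a minimal NumPy sketch of this expanding-window Z-score exactly as written above, for one spread:

import numpy as np

def expanding_zscore(r):
    """Z-score of the latest observation against the expanding history, per the formula above."""
    z = np.full(len(r), np.nan)
    for t in range(1, len(r)):
        hist = r[: t + 1]
        mean = hist.mean()
        std = hist.std(ddof=1) * np.sqrt(4) / np.sqrt(t + 1)  # annualized, adjusted for sample size
        z[t] = (r[t] - mean) / std
    return z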
Modelling Spreads: Given that the first and most important spread in the basket is not a measure of credit risk (it is a very senior credit instrument), I would like to model it as an interest rate. I also don't want to apply any credit migration provisions. My long-term reversion rate should be the present rate at $(t+1)$. I would like to model the $z$ variable and derive all rates from there. I initially thought about the Cox-Ingersoll-Ross model, but given that the volatility term (containing $\sqrt{r_t}$) was causing issues, I think it is more appropriate to use the Vasicek model instead.
Assume $z$ is a Vasicek process ($k$ assumed to be $1$, and omitted):
$$ d z_t = a(b - z_t) d t + \sigma_z d W_t, $$
then by replacing $z_t=\frac{r_t-\bar{r_t}}{\sigma_t}$ and $\sigma_z=1$, this is equivalent to $r_t$ being also a Vasicek process:
$$ d r_t = a([b \sigma_r + \bar{r} ( 1 + \frac{1}{\sigma_r}) ] - r_t ) d t + \sigma_r d W_t . $$
I would then fit the Vasicek model on spread $k=1$ (the most important one), and solve for $a$ and $b$, based on the terms for $d r_t$ expression.
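Not part of the question, but one common route, sketched under my own naming: the Vasicek SDE above has an exact AR(1) discretization, so $a$, $b$ and the volatility can be backed out of an ordinary least-squares regression of $z_{t+1}$ on $z_t$, and paths can then be simulated with Euler-Maruyama:

import numpy as np

def fit_vasicek(z, dt):
    """Estimate (a, b, sigma) of dz = a(b - z)dt + sigma dW from the AR(1) fit z_{t+1} = c0 + c1*z_t + eps."""
    x, y = z[:-1], z[1:]
    c1, c0 = np.polyfit(x, y, 1)               # slope and intercept of the OLS fit
    a = -np.log(c1) / dt                       # mean-reversion speed (requires 0 < c1 < 1)
    b = c0 / (1.0 - c1)                        # long-run level
    resid = y - (c0 + c1 * x)
    sigma = resid.std(ddof=2) * np.sqrt(2 * a / (1 - c1**2))
    return a, b, sigma

def simulate_vasicek(z0, a, b, sigma, dt, n_steps, rng=np.random.default_rng()):
    """Euler-Maruyama simulation of one Vasicek path."""
    z = np.empty(n_steps + 1)
    z[0] = z0
    for i in range(n_steps):
        z[i + 1] = z[i] + a * (b - z[i]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return z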
|
Definition 1:Let $X$ be a countably infinite set and let $B$ be the set of one-to-one maps $\alpha: X\to X$ with the property that $X\setminus X\alpha$ is infinite. Then $B$ with composition is the Baer-Levi semigroup.
Definition 2:We call $S$ right simpleif $\mathcal R=S\times S$, and left simpleif $\mathcal L=S\times S$, where $\mathcal R$ and $\mathcal L$ are Green's $R$- and $L$-relation, respectively.
Definition 3:We call $S$ right cancellativeif $(\forall a, b, c\in S) ac=bc\implies a=b$, and left cancellativeif $(\forall a, b, c\in S) ca=cb\implies a=b$. The Question:
Show that the Baer-Levi semigroup $B$ is right simple and right cancellative, but is neither left simple nor left cancellative.
Let $\alpha, \beta \in B$. Then $\alpha \mathcal R \beta$ iff there exist $\gamma, \delta \in B$ such that $$\alpha = \beta \gamma \; , \; \beta = \alpha \delta$$
we need to show that $ \mathcal R = B \times B$. Let $\alpha, \beta \in B$, then define $\gamma : X \rightarrow X$ such that
$$ y \gamma = \begin{cases} x \alpha & \text{if} \; y\; \in \text{im}\beta ; x\beta = y \\ a_y & \text{otherwise} \end{cases}$$ where $a_y \notin$ im$\beta$ is fixed such that $a_y = a_{y'}$ iff $y = y'$.
Clearly $\gamma$ is a one-to-one mapping. How do I prove that $X\backslash X\gamma$ is infinite?
|
What is a Category? Definition and Examples
As promised, here is the first in our triad of posts on basic category theory definitions: categories, functors, and natural transformations. If you're just now tuning in and are wondering
what is category theory, anyway? be sure to start with the earlier introductory post.

"Category theory takes a bird's eye view of mathematics. From high in the sky, details become invisible, but we can spot patterns that were impossible to detect from ground level." - Tom Leinster
What is a category?
A
category $\mathsf{C}$ consists of some data that satisfy certain properties:

The Data
* a class of objects, $x,y,z,\ldots$
* a set* of morphisms between pairs of objects; $x\overset{f}{\longrightarrow} y$ means "$f$ is a morphism from $x$ to $y$," and the set of all such morphisms is denoted $\text{hom}_{\mathsf{C}}(x,y)$
* a composition rule: whenever the codomain of one morphism matches the domain of another, there is a morphism that is their composition, i.e. given $x\overset{f}{\longrightarrow} y$ and $y\overset{g}{\longrightarrow} z$ there is a morphism $x\overset{g\:\circ f}{\longrightarrow} z$.

The Properties
* Each object $x$ has an identity morphism $x\overset{\text{id}_x}{\longrightarrow} x$ which satisfies $\text{id}_y\circ f=f=f\circ\text{id}_x$ for any $x\overset{f}{\longrightarrow} y$.
* The composition is associative: $(h\circ g)\circ f = h\circ(g\circ f)$ whenever $$x\overset{f}{\longrightarrow}y\overset{g}{\longrightarrow}z\overset{h}{\longrightarrow}w.$$
Here's a picture of a category with four objects--depicted as bold dots instead of letters--and some morphisms between them. The gray arrow at each object indicates its identity morphism.
Last time I listed a few examples of categories but never provided the proofs. To make sure you're comfortable with the definition, try proving a few of them. For example, to form the category of groups, we declare the objects to be groups and the morphisms to be group homomorphisms. But we should check: is the composition of two group homomorphisms another group homomorphism? (Or is it merely a function?) Is this composition associative? And is the identity function from a group to itself a group homomorphism? (Or, again, is it merely a function?)
I also mentioned briefly that every poset (a set with a binary relation that is reflexive, transitive, and antisymmetric) is a category. This turns out to be a useful example, so let's be explicit.
Example #1: a poset
Every poset $P$ forms a category. The objects are the elements of $P$ and there is a morphism $x\to y$ whenever $x\leq y$.
Composition holds because of transitivity: if $x\leq y$ and $y\leq z$ then $x\leq z$.
The identity-morphism property holds because of reflexivity: $x\leq x$ is always true. Therefore there is an arrow $x\overset{\text{id}_x}{\longrightarrow} x$ satisfying $f\circ\text{id}_x=f$ for any morphism $f$,
and similarly $\text{id}_y\circ f=f$.
Associativity holds too. (What would the picture for this look like?) In particular, we've just shown that $\mathbb{Z},\mathbb{Q}$, and $\mathbb{R}$ are all categories! We could have deduced this from the next example as well.
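As an illustration only (not from the original post), here is a minimal Python sketch of a finite poset viewed as a category: objects are its elements, there is (at most) one morphism $x\to y$ exactly when $x\leq y$, identities come from reflexivity, and composition from transitivity. The particular poset (divisibility on {1, 2, 3, 6}) is just my own example choice:

from itertools import product

# A finite poset: divisibility on {1, 2, 3, 6}
objects = [1, 2, 3, 6]
leq = lambda x, y: y % x == 0           # x <= y  iff  x divides y

# Morphisms are the pairs (x, y) with x <= y; hom(x, y) has at most one element.
morphisms = [(x, y) for x, y in product(objects, repeat=2) if leq(x, y)]

def compose(g, f):
    """Compose x --f--> y with y --g--> z, giving x --(g o f)--> z."""
    assert f[1] == g[0], "codomain of f must equal domain of g"
    return (f[0], g[1])

# Identities exist (reflexivity) and composition stays in the category (transitivity):
assert all((x, x) in morphisms for x in objects)
assert all(compose(g, f) in morphisms
           for f, g in product(morphisms, repeat=2) if f[1] == g[0])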
Example #2: a group
Every group $G$ can be viewed as a category---called $\mathsf{B}G$ (for cool reasons)---with a single object which we'll denote by $\bullet$. There is a morphism $\bullet\overset{g}{\longrightarrow}\bullet$ for each element $g\in G$, and composition holds since $G$ is closed under the group operation. That is, if $g,h\in G$ so that $\bullet\overset{g}{\longrightarrow}\bullet\overset{h}{\longrightarrow}\bullet$ are two composable morphisms, then there is a third morphism $\bullet\overset{hg}{\longrightarrow}\bullet$ that is their composition, namely the one corresponding to the element $hg\in G$. Finally, the group identity $e\in G$ serves as the identity morphism for $\bullet$, and associativity holds because the group operation is associative.
Example #3: new categories from old ones
A very natural question that you might ask upon discovering new mathematical ideas is
How can I construct new objects from given ones?** In particular one could ask, How can I make a new category from old ones? There are several ways to go about this. Given two categories $\mathsf{C}$ and $\mathsf{D}$, you might try defining their product $\mathsf{C}\times\mathsf{D}$. (This paves the way for monoidal categories.) What do you think the objects in $\mathsf{C}\times\mathsf{D}$ should be? The morphisms?
Additionally, every category $\mathsf{C}$ has an
opposite category $\mathsf{C}^{op}$. The only difference between a category and its opposite is the direction in which the arrows/morphisms point. That is, the objects in $\mathsf{C}^{op}$ are exactly the same as objects in $\mathsf{C}$, but there is a morphism $x\to y$ in $\mathsf{C}^{op}$ whenever there is morphism $y\to x$ in $\mathsf{C}$. (Check that $\mathsf{C}^{op}$ really does satisfy the definition of a category.) In other words,$$\text{hom}_{\mathsf{C}^{op}}(x,y)=\text{hom}_{\mathsf{C}}(y,x).$$
Another example of a category formed from existing ones is $\mathsf{Cat}$. Here, the objects are categories*** $\mathsf{C},\mathsf{D},\mathsf{E}, \mathsf{F},\ldots,$ and the morphisms between them are functors. I'm jumping the gun a bit on this one since I haven't defined functors on the blog yet, but I think it's worth a mention. As we discussed last time, it's helpful to think of a morphism/arrow as a relationship between its domain and codomain. So naturally, we might take interest in relationships, i.e.
functors, between categories.
You can also construct a new category $\mathsf{D}^\mathsf{C}$ from given categories $\mathsf{C}$ and $\mathsf{D}$ by declaring the objects to be functors $F:\mathsf{C}\to\mathsf{D}$. (The notation $\mathsf{D}^\mathsf{C}$ is quite suggestive.) Here a morphism between two functors $F:\mathsf{C}\to\mathsf{D}$ and $G:\mathsf{C}\to\mathsf{D}$ is a natural transformation, often denoted by a double arrow $F\Longrightarrow G$. In the special case when $\mathsf{D}=\mathsf{Set}$, the objects of $\mathsf{Set}^{\mathsf{C}^{op}}$ are given the name
presheaves. Again, I'm jumping the gun here, but I expect these examples to resurface later on the blog, so now is a good time to mention them.
We'll chat about functors next week and natural transformations the following week. I encourage you to revisit Example #3 once you're familiar with these notions.
Until next week!
*A category $\mathsf{C}$ where for all objects $x,y$, $\text{hom}_{\mathsf{C}}(x,y)$ is small enough to be an honest-to-goodness
set is called a locally small category. In other words, a category is locally small if there aren't too many morphisms between the objects. Most of the categories you're familiar with--$\mathsf{Set},\mathsf{Top},\mathsf{Group}$, etc.--are locally small. In future posts, let's always assume our categories have this property, unless otherwise stated.
**For instance, the intersection of sets is another set, the direct product of groups is another group, the tensor product of vector spaces is another vector space, the product of compact topological spaces is another compact topological space, and so on.
***For size considerations, the categories in $\mathsf{Cat}$ are assumed to be small. A category $\mathsf{C}$ is called
small if it doesn't contain too many objects and if there aren't too many morphisms between those objects, i.e. if both $\text{ob}(\mathsf{C})$ (all the objects in $\mathsf{C}$) and $\text{hom}_{\mathsf{C}}(x,y)$ are honest-to-goodness sets.
|
Show that $C_{c}^{\infty}(\Omega)$ is dense in $L^{\infty}(\Omega)$ with respect to the topology $\sigma(L^{\infty},L^{1})$, where $\Omega$ is an open subset of $\mathbb{R^n}$.
My try: Let $$\Omega_n=\{x \in \Omega: d(x,\partial \Omega) \gt \frac{2}{n},||x|| \lt n\}$$
For $u \in L^{\infty}(\Omega)$, define $\bar{u}$ by extending $u$ by $0$ outside $\Omega$. Let $\xi_n=\chi_{\Omega_n}, \xi=\chi_{\Omega}$, where $\chi$ is the usual characteristic function. Then $\Omega_{n} \subset \Omega_{n+1},\forall n$ and $\xi_n \to \xi$ in $\mathbb{R^n}$. Let $\rho_n$ be a sequence of mollifiers with $\rho_n \ge 0$ and $\int \rho_n=1$.
Define $v_n=\rho_n *(\xi_n\bar{u}), v=\xi \bar{u}$. It is then clear that $v_n \in C_c^{\infty}(\Omega), v_n \rightharpoonup v$ in $L^{\infty}(\Omega)$ weak$^* \sigma(L^{\infty},L^1)$ and $\int_{B}|v_n-v| \to 0$ for every ball $B$. From here I can extract a subsequence of $v_n$ which converges to $v$ almost everywhere in $B$.
I need to do a diagonalization argument to get hold of a subsequence which converges to $v$ a.e. in $\Omega$, which I am unable to do. Choosing the open sets $\Omega_n$ should do the job, but I am a little in doubt.
Thanks for the help!!
|
Rank into rank
A nontrivial elementary embedding $j:V_\lambda\to V_\lambda$ for some infinite ordinal $\lambda$ is known as a
rank into rank embedding and the axiom asserting that such an embedding exists is usually denoted by $\text{I3}$, $\text{I2}$, $\text{I1}$, $\mathcal{E}(V_\lambda)\neq \emptyset$ or some variant thereof. The term applies to a hierarchy of such embeddings increasing in large cardinal strength reaching toward the Kunen inconsistency. The axioms in this section are in some sense a technical restriction falling out of Kunen's proof that there can be no nontrivial elementary embedding $j:V\to V$ (in $\text{ZFC}$). An analysis of the proof shows that there can be no nontrivial $j:V_{\lambda+2}\to V_{\lambda+2}$ and that if there is some ordinal $\delta$ and nontrivial rank to rank embedding $j:V_\delta\to V_\delta$ then necessarily $\delta$ must be a strong limit cardinal of cofinality $\omega$ or the successor of one. By standing convention, it is assumed that rank into rank embeddings are not the identity on their domains.
There are really two cardinals relevant to such embeddings: The large cardinal is the critical point of $j$, often denoted $\mathrm{crit}(j)$ or sometimes $\kappa_0$, and the other (not quite so large) cardinal is $\lambda$. In order to emphasize the two cardinals, the axiom is sometimes written as $E(\kappa,\lambda)$ (or $\text{I3}(\kappa,\lambda)$, etc.) as in [1]. The cardinal $\lambda$ is determined by defining the
critical sequence of $j$. Set $\kappa_0 = \mathrm{crit}(j)$ and $\kappa_{n+1}=j(\kappa_n)$. Then $\lambda = \sup \langle \kappa_n : n <\omega\rangle$ and is the first fixed point of $j$ that occurs above $\kappa_0$. Note that, unlike many of the other large cardinals appearing in the literature, the ordinal $\lambda$ is not the target of the critical point; it is the $\omega^{th}$ $j$-iterate of the critical point.
As a result of the strong closure properties of rank into rank embeddings, their critical points are huge and in fact $n$-huge for every $n$. This aspect of the large cardinal property is often called $\omega$-hugeness and the term
$\omega$-huge cardinal is sometimes used to refer to the critical point of some rank into rank embedding.

The $\text{I3}$ Axiom and Natural Strengthenings
The $\text{I3}$ axiom asserts, generally, that there is some embedding $j:V_\lambda\to V_\lambda$. $\text{I3}$ is also denoted as $\mathcal{E}(V_\lambda)\neq\emptyset$ where $\mathcal{E}(V_\lambda)$ is the set of all elementary embeddings from $V_\lambda$ to $V_\lambda$, or sometimes even $\text{I3}(\kappa,\lambda)$ when mention of the relevant cardinals is necessary. In its general form, the axiom asserts that the embedding preserves all first-order structure but fails to specify how much second-order structure is preserved by the embedding. The case that
no second-order structure is preserved is also sometimes denoted by $\text{I3}$. In this specific case $\text{I3}$ denotes the weakest kind of rank into rank embedding and so the $\text{I3}$ notation for the axiom is somewhat ambiguous. To eliminate this ambiguity we say $j$ is $E_0(\lambda)$ when $j$ preserves only first-order structure.
The axiom can be strengthened and refined in a natural way by asserting that various degrees of second-order correctness are preserved by the embeddings. A rank into rank embedding $j$ is said to be $\Sigma^1_n$ or
$\Sigma^1_n$ correct if, for every $\Sigma^1_n$ formula $\Phi$ and $A\subseteq V_\lambda$ the elementary schema holds for $j,\Phi$, and $A$: $$V_\lambda\models\Phi(A) \Leftrightarrow V_\lambda\models\Phi(j(A)).$$The more specific axiom $E_n(\lambda)$ asserts that some $j\in\mathcal{E}(V_\lambda)$ is $\Sigma^1_{2n}$.
The ``$2n$" subscript in the axiom $E_n(\lambda)$ is incorporated so that the axioms $E_m(\lambda)$ and $E_n(\lambda)$ where $m<n$ are strictly increasing in strength. This is somewhat subtle. For $n$ odd, $j$ is $\Sigma^1_n$ if and only if $j$ is $\Sigma^1_{n+1}$. However, for $n$ even, $j$ being $\Sigma^1_{n+1}$ is
significantly stronger than a $j$ being $\Sigma^1_n$ [2].

The $\text{I2}$ Axiom
Any $j:V_\lambda\to V_\lambda$ can be extended to a $j^+:V_{\lambda+1}\to V_{\lambda+1}$ but in only one way: Define for each $A\subseteq V_\lambda$ $$j^+(A)=\bigcup_{\alpha < \lambda}(j(V_\alpha\cap A)).$$ $j^+$ is not necessarily elementary. The $\text{I2}$ axiom asserts the existence of some elementary embedding $j:V\to M$ with $V_\lambda\subseteq M$ where $\lambda$ is defined as the $\omega^{th}$ $j$-iterate of the critical point. Although this axiom asserts the existence of a
class embedding with a very strong closure property, it is in fact equivalent to an embedding $j:V_\lambda\to V_\lambda$ with $j^+$ preserving well-founded relations on $V_\lambda$. So this axioms preserves some second-order structure of $V_\lambda$ and is in fact equivalent to $E_1(\lambda)$ in the hierarchy defined above. A specific property of $\text{I2}$ embeddings is that they are iterable (i.e. the direct limit of directed system of embeddings is well-founded). In the literature, $IE(\lambda)$ asserts that $j:V_\lambda\to V_\lambda$ is iterable and $IE(\lambda)$ falls strictly between $E_0(\lambda)$ and $E_1(\lambda)$.
As a result of the strong closure property of $\text{I2}$, the equivalence mentioned above cannot be through an analysis of some ultrapower embedding. Instead, the equivalence is established by constructing a directed system of embeddings of various ultrapowers and using reflection properties of the critical points of the embeddings. The direct limit is well-founded since well-founded relations are preserved by $j^+$. The use of both direct and indirect limits, in conjunction with reflection arguments, is typical for establishing the properties of rank into rank embeddings.
The $\text{I1}$ Axiom
$\text{I1}$ asserts the existence of a nontrivial elementary embedding $j:V_{\lambda+1}\to V_{\lambda+1}$. This axiom is sometimes denoted $\mathcal{E}(V_{\lambda+1})\neq\emptyset$. Any such embedding preserves all second-order properties of $V_\lambda$ and so is $\Sigma^1_n$ for all $n$. To emphasize the preservation of second-order properties, the axiom is also sometimes written as $E_\omega(\lambda)$. In this case, restricting the embedding to $V_\lambda$ and forming $j^+$ as above yields the original embedding.
Strengthening this axiom in a natural way leads the $\text{I0}$ axiom, i.e. asserting that embeddings of the form $j:L(V_{\lambda+1})\to L(V_{\lambda+1})$ exist. There are also other natural strengthenings of $\text{I1}$, though it is not entirely clear how they relate to the $\text{I0}$ axiom. For example, one can assume the existence of elementary embeddings satisfying $\text{I1}$ which extend to embeddings $j:M\to M$ where $M$ is a transitive class inner model and add various requirements to $M$. These requirements must not entail that $M$ satisfies the axiom of choice by the Kunen inconsistency. Requirements that have been considered include assuming $M$ contains $V_{\lambda+1}$, $M$ satisfies $DC_\lambda$, $M$ satisfies replacement for formulas containing $j$ as a parameter, $j(\mathrm{crit}(j))$ is arbitrarily large in $M$, etc.
Virtually rank-into-rank
(Information in this subsection from [4])
A cardinal $κ$ is
virtually rank-into-rank iff in a set-forcing extension it is the critical point of an elementary embedding $j : V_λ → V_λ$ for some $λ > κ$.
This notion does not require stratification, because Kunen’s Inconsistency does not hold for virtual embeddings.
* Every virtually rank-into-rank cardinal is a virtually $n$-huge* limit of virtually $n$-huge* cardinals for every $n < ω$.
* The least $ω$-Erdős cardinal $η_ω$ is a limit of virtually rank-into-rank cardinals.
* Every virtually rank-into-rank cardinal is an $ω$-iterable limit of $ω$-iterable cardinals.
* Every element of a club $C$ witnessing that $κ$ is a Silver cardinal is virtually rank-into-rank.

Large Cardinal Properties of Critical Points
The critical points of rank into rank embeddings have many strong reflection properties. They are measurable, $n$-huge for all $n$ (hence the terminology $\omega$-huge mentioned in the introduction) and partially supercompact.
Using $\kappa_0$ as a seed, one can form the ultrafilter $$U=\{X\subseteq\mathcal{P}(\kappa_0): j``\kappa_0\in j(X)\}.$$ Thus, $\kappa_0$ is a measurable cardinal.
In fact, for any $n$, $\kappa_0$ is also $n$-huge as witnessed by the ultrafilter $$U=\{X\subseteq\mathcal{P}(\kappa_n): j``\kappa_n\in j(X)\}.$$ This motivates the term $\omega$-huge cardinal mentioned in the introduction.
Letting $j^n$ denote the $n^{th}$ iteration of $j$, then $$V_\lambda\models ``\kappa_0\text{ is supercompact"}.$$ To see this, suppose $\kappa_0\leq \theta <\kappa_n$, then $$U=\{X\subseteq\mathcal{P}_{\kappa_0}(\theta): j^n``\theta\in j^n(X)\}$$ witnesses the $\theta$-compactness of $\kappa_0$ (in $V_\lambda$). For this last claim, it is enough that $\kappa_0(j)$ is $<\lambda$-supercompact, i.e. not *fully* supercompact in $V$. In this case, however, $\kappa_0$ *could* be fully supercompact.
Critical points of rank-into-rank embeddings also exhibit some *upward* reflection properties. For example, if $\kappa$ is a critical point of some embedding witnessing $\text{I3}(\kappa,\lambda)$, then there must exist another embedding witnessing $\text{I3}(\kappa',\lambda)$ with critical point
above $\kappa$! This upward type of reflection is not exhibited by large cardinals below extendible cardinals in the large cardinal hierarchy.

Algebras of elementary embeddings
If $j,k\in\mathcal{E}_{\lambda}$, then $j^+(k)\in\mathcal{E}_{\lambda}$ as well. We therefore define a binary operation $*$ on $\mathcal{E}_{\lambda}$ called application defined by $j*k=j^{+}(k)$. The binary operation $*$ together with composition $\circ$ satisfies the following identities:
1. $(j\circ k)\circ l=j\circ(k\circ l),\,j\circ k=(j*k)\circ j,\,j*(k*l)=(j\circ k)*l,\,j*(k\circ l)=(j*k)\circ(j*l)$
2. $j*(k*l)=(j*k)*(j*l)$ (self-distributivity).
Identity 2 is an algebraic consequence of the identities in 1.
If $j\in\mathcal{E}_{\lambda}$ is a nontrivial elementary embedding, then $j$ freely generates a subalgebra of $(\mathcal{E}_{\lambda},*,\circ)$ with respect to the identities in 1, and $j$ freely generates a subalgebra of $(\mathcal{E}_{\lambda},*)$ with respect to the identity 2.
If $j_{n}\in\mathcal{E}_{\lambda}$ for all $n\in\omega$, then $\sup\{\textrm{crit}(j_{0}*\dots*j_{n})\mid n\in\omega\}=\lambda$ where the implied parentheses are grouped on the left (for example, $j*k*l=(j*k)*l$).
Suppose now that $\gamma$ is a limit ordinal with $\gamma<\lambda$. Then define an equivalence relation $\equiv^{\gamma}$ on $\mathcal{E}_{\lambda}$ where $j\equiv^{\gamma}k$ if and only if $j(x)\cap V_{\gamma}=k(x)\cap V_{\gamma}$ for each $x\in V_{\gamma}$. Then the equivalence relation $\equiv^{\gamma}$ is a congruence on the algebra $(\mathcal{E}_{\lambda},*,\circ)$. In other words, if $j_{1},j_{2},k\in \mathcal{E}_{\lambda}$ and $j_{1}\equiv^{\gamma}j_{2}$ then $j_{1}\circ k\equiv^{\gamma} j_{2}\circ k$ and $j_{1}*k\equiv^{\gamma}j_{2}*k$, and if $j,k_{1},k_{2}\in\mathcal{E}_{\lambda}$ and $k_{1}\equiv^{\gamma}k_{2}$ then $j\circ k_{1}\equiv^{\gamma}j\circ k_{2}$ and $j*k_{1}\equiv^{j(\gamma)}j*k_{2}$.
If $\gamma<\lambda$, then every finitely generated subalgebra of $(\mathcal{E}_{\lambda}/\equiv^{\gamma},*,\circ)$ is finite.
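A concrete illustration (my addition, not part of the original article): when $\gamma$ is a member of the critical sequence of $j$, the subalgebra of the quotient $(\mathcal{E}_{\lambda}/\equiv^{\gamma},*)$ generated by $j$ is known, by a theorem of Laver, to be one of the finite Laver tables $A_n$ on $2^n$ elements, and these can be computed purely combinatorially. A minimal Python sketch (naming and interface are my own):

from functools import lru_cache

def laver_table(n):
    """Return the Laver table A_n: the unique self-distributive operation on {1,...,2**n}
    with a * 1 = a + 1 (mod 2**n)."""
    size = 2 ** n

    @lru_cache(maxsize=None)
    def star(a, b):
        if b == 1:
            return a % size + 1                      # a * 1 = a + 1, wrapping 2**n to 1
        return star(star(a, b - 1), a % size + 1)    # a * b = (a * (b-1)) * (a * 1)

    return {(a, b): star(a, b) for a in range(1, size + 1) for b in range(1, size + 1)}

A2 = laver_table(2)
# Self-distributivity check: a * (b * c) == (a * b) * (a * c) for all a, b, c
assert all(A2[a, A2[b, c]] == A2[A2[a, b], A2[a, c]]
           for a in range(1, 5) for b in range(1, 5) for c in range(1, 5))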
$C^{(n)}$ variants
(section from [5])
$\mathrm{I3}$ and other $C^{(n)}$ variants:
Assumming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal – Kunen’s Theorem excludes other cases), it is equal to $sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-superstrong, $C^{(n)}$-extendible and $C^{(n)}$-$m$-huge in $V_δ$, for all $n$ and $m$.
Definitions of $C^{(n)}$ variants of rank-into-rank cardinals:
* $κ$ is called $C^{(n)}$-$\mathrm{I3}$ cardinal if it is an $\mathrm{I3}$ cardinal, witnessed by some embedding $j : V_δ → V_δ$, with $j(κ) ∈ C^{(n)}$.
* $κ$ is called $C^{(n)+}$-$\mathrm{I3}$ cardinal if it is an $\mathrm{I3}$ cardinal, witnessed by some embedding $j : V_δ → V_δ$, with $δ ∈ C^{(n)}$.
* $κ$ is called $C^{(n)}$-$\mathrm{I1}$ cardinal if it is an $\mathrm{I1}$ cardinal, witnessed by some embedding $j : V_{δ+1} → V_{δ+1}$, with $j(κ) ∈ C^{(n)}$.
Results:
* If $κ$ is $C^{(n)}$-$\mathrm{I3}$, then $κ ∈ C^{(n)}$.
* Every $\mathrm{I3}$-cardinal is $C^{(1)}$-$\mathrm{I3}$ and $C^{(1)+}$-$\mathrm{I3}$.
* By simple reflection arguments:
** The least $C^{(n)}$-$\mathrm{I3}$ cardinal is not $C^{(n+1)}$-$\mathrm{I1}$, for $n ≥ 1$.
** The least $C^{(n)}$-$\mathrm{I1}$ cardinal is not $C^{(n+1)}$-$\mathrm{I1}$, for $n ≥ 1$.
* Every $C^{(n)}$-$\mathrm{I1}$ cardinal is also $C^{(n)}$-$\mathrm{I3}$.
* For every $n ≥ 1$, if $δ$ is a limit ordinal and $j : V_δ → V_δ$ witnesses that $κ$ is $\mathrm{I3}$, then $(\forall_{m < ω}\, j^m (κ) ∈ C^{(n)}) \iff δ ∈ C^{(n)}$.
* If $κ$ is $C^{(n)}$-$\mathrm{I3}$, then it is $C^{(n)}$-$m$-huge, for all $m$, and there is a normal ultrafilter $\mathcal{U}$ over $κ$ such that $\{α < κ : α$ is $C^{(n)}$-$m$-huge for every $m\} ∈ \mathcal{U}$.
* If $κ$ is $C^{(n)}$-$\mathrm{I1}$, then the least $δ$ s.t. there is an elementary embedding $j : V_{δ+1} → V_{δ+1}$ with $crit(j) = κ$ and $j(κ) ∈ C^{(n)}$ is smaller than the first ordinal in $C^{(n+1)}$ greater than $κ$. Moreover, the least $C^{(n)}$-$\mathrm{I1}$ cardinal, if it exists, is smaller than the first ordinal in $C^{(n+1)}$, for all $n ≥ 1$.
References

1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, paperback reprint of the 2003 edition, Springer-Verlag, Berlin, 2009.
2. Laver, Richard. Implications between strong large cardinal axioms. Ann. Math. Logic 90(1--3):79--90, 1997.
3. Corazza, Paul. The gap between ${\rm I}_3$ and the wholeness axiom. Fund. Math. 179(1):43--60, 2003.
4. Gitman, Victoria and Schindler, Ralf. Virtual large cardinals.
5. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3--4):213--240, 2012.
|
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box..
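Read as the classical Euclidean algorithm (my interpretation of the boxes: keep replacing the pair by divisor and remainder until the remainder is zero), a minimal sketch:

def gcd(a, b):
    """Euclidean algorithm: repeatedly divide, keeping (divisor, remainder), until the remainder is 0."""
    while b != 0:
        q, r = divmod(a, b)   # a = b*q + r with 0 <= r < b
        a, b = b, r
    return a

print(gcd(252, 105))  # 21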
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
|
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"?Alternatively, where could I go in order to have such a question answered?
@tpg2114 To reduce the number of data points needed for calculating the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
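A minimal sketch of how the output of scipy.signal.correlate is indexed in time (not the actual simulation data: the array names, the 5-sample shift and the noise level below are made up purely for illustration):

import numpy as np
from scipy import signal   # signal.correlation_lags needs SciPy >= 1.6

dt = 1e-9                                    # sample spacing, seconds
rng = np.random.default_rng(0)
strain = rng.standard_normal(200)            # stand-in for |shear strain|
dTdt = np.roll(strain, 5) + 0.1 * rng.standard_normal(200)   # stand-in for dT/dt, trailing by 5 samples

# Subtract the means first: a large offset can dominate the raw correlation
# and even make visually "positively correlated" signals come out negative.
a = strain - strain.mean()
b = dTdt - dTdt.mean()

corr = signal.correlate(b, a, mode="full")                    # length len(a) + len(b) - 1
lags = signal.correlation_lags(len(b), len(a), mode="full")   # lag (in samples) for each entry of corr

k = lags[np.argmax(corr)]
print(f"dT/dt trails the strain by {k} samples = {k * dt:.1e} s")

With the arguments in this order, a positive lag at the peak means the first series (here the temperature derivative) trails the second (the strain).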
Related:Why don't we just ban homework altogether?Banning homework: vote and documentationWe're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think we need suggestions for things that need to be clarified -- what is homework and what is "conceptual." I.e., is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's approximation is?
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because it's really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
|
Hi, can someone provide me with some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However, I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible the solution strategy has some relation with the class of spacetime that is under consideration, thus that might help heavily reduce the parameters need to consider to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
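For reference, the equation and the identities being counted in the previous message, in standard notation (the $\Lambda$ term is included only for completeness):
$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}, \qquad \nabla^{\mu}G_{\mu\nu} = 0 .$$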
@ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) for 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurrence dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I still have only a poor knowledge.
So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment, for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes GWs would interfere just like light wave.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like with water waves and light, will we see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, space-time would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like.
if**
Pardon, I just spent some naive-philosophy time here with these discussions**
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of the h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of code, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-spacebar indentation convention
@JohnRennie I wish I can just tab because I am also lazy, but sometimes tab insert 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write Matlab code online (for free)? Apparently another one of my institution's great inspirations is to have a Matlab-oriented computational physics course without having Matlab on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
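Working that hint out explicitly, assuming (as the hint does) a displacement of $\frac12$ in $x$ and $\frac{\sqrt3}{2}$ in $y$ per unit time:
$$\tan\theta = \frac{\sqrt3/2}{1/2} = \sqrt3 \;\Rightarrow\; \theta = 60^\circ \text{ from the } x\text{-axis}, \qquad \text{speed} = \sqrt{\left(\tfrac12\right)^2 + \left(\tfrac{\sqrt3}{2}\right)^2} = 1 .$$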
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the university's server, which means remotely running another environment, I found an older version of Matlab). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
|
If you know nothing about CMOS, you need ohms law, a data sheet, and some common sense.
If you know something about CMOS, then just the common sense will do.
Per the data sheet, the input leakage current for a 74HCT08 is \$1 \mathrm{\mu A}\$. Per ohms law, this means we need \$R < \mathrm{\frac{0.9V}{1\mu A} = 900k\Omega}\$.
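A quick sanity check of that bound in Python (a throwaway sketch, not part of the original answer; the 0.9 V threshold and 1 µA leakage are the data-sheet figures quoted above, the rest is just Ohm's law):

# Maximum pull-down resistance that still holds the input below V_IL,
# given the worst-case input leakage current of the 74HCT08.
V_IL = 0.9        # volts, worst-case low-level input threshold
I_LEAK = 1e-6     # amps, worst-case input leakage current
R_MAX = V_IL / I_LEAK
print(f"R must stay below {R_MAX / 1e3:.0f} kOhm")   # -> 900 kOhm
print("Convenient practical choice: 10 kOhm")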
Common sense says that unless we're designing something for insanely low current consumption, and for which we can guarantee an electrically quiet environment, using the maximum resistance here is silly. So choose something convenient, like \$\mathrm{10k\Omega}\$.
A basic knowledge of CMOS says that for all practical purposes a CMOS chip's input current is nothing -- so you fall back on common sense, and you're done.
Note that this is not the case for a lot of modern microcontrollers -- they are often designed so that, without the pins being programmed, they have built-in weak pull-ups, to reduce current and for historical reasons (see my final note, below). But sometimes they have built-in weak pull-downs on some pins, for practical reasons. So always read the data sheet.
Final note: If you were designing circuits 30 years ago, then the gate in question may well have been TTL. In that case, you would want a pull-up resistor (because TTL inputs source a bit of current, but sink less). Then you'd use the same \$\mathrm{10k\Omega}\$, but it would be to VCC. There is no reason not to do this for CMOS, and it's oh-so-slightly more comfortable for old circuit designers when they see it.
|
When was the word "bias" coined to mean $\mathbb{E}[\hat{\theta}-\theta]$?
The reason why I'm thinking about this right now is because I seem to recall Jaynes, in his Probability Theory text, criticizing the use of the word "bias" to describe this formula, and suggesting an alternative.
From Jaynes' Probability Theory, section 17.2, "Unbiased Estimators":
Why do orthodoxians put such exaggerated emphasis on bias? We suspect that the main reason is simply that they are caught in a psychosemantic trap of their own making. When we call the quantity $(\langle\beta\rangle-\alpha)$ the 'bias', that makes it sound like something awfully reprehensible, which we must get rid of at all costs. If it had been called instead the 'component of error orthogonal to the variance', as suggested by the Pythagorean form of (17.2), it would have been clear that these two contributions to the error are on an equal footing; it is folly to decrease one at the expense of increasing the other. This is just the price one pays for choosing a technical terminology that carries an emotional load, implying value judgments; orthodoxy falls constantly into this tactical error.
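The 'Pythagorean form' alluded to in the quote is presumably the usual decomposition of the mean squared error of an estimator $\hat\theta$ of $\theta$ (stated here in the notation of the question):
$$\mathbb{E}\big[(\hat\theta - \theta)^2\big] = \operatorname{Var}(\hat\theta) + \big(\mathbb{E}[\hat\theta] - \theta\big)^2 ,$$
so the squared bias and the variance enter the error as two orthogonal components.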
|
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
|
We want to show that for $\succcurlyeq$ on $X$,
Def 1 $\iff$ Def 2
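For readability, here are the two notions of continuity as the argument below uses them (a reconstruction from the proof itself, since the original problem statement is not quoted):
Def 1 (sequential): if $x^n \succcurlyeq y^n$ for all $n$, $x^n \rightarrow x$ and $y^n \rightarrow y$, then $x \succcurlyeq y$.
Def 2 (open): if $x \succ y$, then there exist open balls $B_x \ni x$ and $B_y \ni y$ such that $x' \succ y'$ for every $x' \in B_x$ and $y' \in B_y$.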
$\boxed{\Longrightarrow}$
Assume that $\succcurlyeq$ is continuous by Def 1.
Let us say $x \succ y$. Denote our open balls as $B(x, r)$, an open ball around $x$ of radius $r$. Suppose $\forall n, \ \exists \ x^n \in B(x, \frac{1}{n}), \ y^n \in B(y, \frac{1}{n})$ such that $y^n \succcurlyeq x^n$. But then we have constructed $\{x^n\} \rightarrow x$ and $\{y^n\} \rightarrow y$, and by Def 1, $y \succcurlyeq x$, which is a contradiction.
$\boxed{\Longleftarrow}$
Assume that $\succcurlyeq$ is continuous by Def 2.
Let us take sequences $\{x^n\} \subset X$ and $\{y^n\} \subset X$ where $\forall n, \quad x^n \succcurlyeq y^n$ and $\lim_{n \rightarrow \infty} x^n = x, \quad \lim_{n \rightarrow \infty} y^n = y \quad (\text{where } x, y \in X)$ for $n \in \mathbb{N}$, BUT $x \succcurlyeq y$ is false instead of true. We want to show this leads to a contradiction.
If $x \succcurlyeq y$ is false (so $y \succ x$), then $\exists B_x, B_y$ such that $\forall y' \in B_y, x' \in B_x$, we have $y' \succ x'$. Because $\{x^n\} \rightarrow x, \{y^n\} \rightarrow y$, there exists $N$ large enough such that $\forall n>N$, we have $y^n \in B_y, x^n \in B_x$.
Thus $\forall n > N$, we have $y^n \succ x^n$, which contradicts $\forall n, \ x^n \succcurlyeq y^n$.
|
From: Joe Corneli
Subject: need double save-excursion -- why?
Date: Thu, 25 Dec 2003 15:30:03 -0600
When I run the function below on a file containing things like this:
####################################################################
\section{{ lefschetz}}
(lefschetz number of a map) \defn{
\item $X$ smooth compact manifold
\item $g:X\rightarrow X$ continuous
\item $G: X\rightarrow X\times X$
\item $G(x)=(x,g(x))$
\item $\#(G,\text{diagonal}(M\times M))$}
(global lefschetz number of a map: see ``lefschetz number of a map'')
(global lefschetz number: fact: smooth lefschetz fixed point theorem) \defn{
\item $X$ compact orientable manifold
\item $f:X\rightarrow X$ smooth
\item $L(f)\neq 0$
\item $f$ has a fixed point}
(lefschetz fixed point) \defn{
\item $f:X\rightarrow X$
\item $f(x)=x$
\item $+1$ is not an eigenvalue of $df_x:T_x(X)\rightarrow T_x(X)$}
####################################################################
I don't get good results: the outermost save-excursion is ignored,
and the cursor winds up near the bottom of the document.
However, if I wrap another save-excursion around the currently
outermost one, the second one is heeded and point is restored
properly.
Can anyone tell me why this is necessary? Something like this is
documented in the Emacs Lisp Intro info file, but I don't understand
why a double layer of save-excursions is needed here.
(defun xi-grab-matching-defn-by-tagline (regexp)
"Grab the defns whose taglines contain a match for this regexp."
(interactive "MRegex: ")
(save-excursion
(beginning-of-buffer) ;; start at the beginning
(while (not (eobp))
(let ((found (search-forward-regexp (concat "^(.*" regexp ".*)") nil t)))
(if found
(let* ((beg (save-excursion (beginning-of-line)
(point)))
(end (save-excursion (end-of-line)
(search-forward-regexp "^$" nil t)
(beginning-of-line)
(point))) ; cut right before that
(entry (buffer-substring beg end)))
(save-excursion (set-buffer (get-buffer-create "CONCAT.tex"))
(insert entry)))
(end-of-buffer))))))
|
First of all, $\sqrt{2}+\sqrt{3}$ most certainly is a number. It is a real number, approximately equal to $3.14626$. Perhaps what you're asking is why the sum of two simple radicals isn't also a simple radical, when the sum of two integers is an integer, and the sum of two fractions is a fraction.
Roots of integers are examples of algebraic numbers - numbers that are the roots of polynomial equations with integer coefficients. The sum of two algebraic numbers is an algebraic number, but it doesn't have to be as simple as the addends are.
Something like this is already true with fractions: $\frac12$ and $\frac13$ are both unit fractions, but their sum is a more complicated fraction, with larger numbers in both the numerator and denominator than the fractions we started with.
Similarly, the numbers $\sqrt2$ and $\sqrt3$ are roots of the polynomials $x^2-2$ and $x^2-3$, respectively. Their sum, which is most simply expressed as $\sqrt2+\sqrt3$, is a root of the polynomial $x^4-10x^2+1$.
This particular algebraic number can also be written as $\sqrt{5+2\sqrt{6}}$, where you can see both $2+3$ and $2\times 3$ playing a role. (To see this, write $x=\sqrt2+\sqrt3$, square both sides, and combine integers.)
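Carrying that parenthetical suggestion through to the quartic, the same squaring written out:
$$x = \sqrt2 + \sqrt3 \;\Rightarrow\; x^2 = 5 + 2\sqrt6 \;\Rightarrow\; (x^2 - 5)^2 = 24 \;\Rightarrow\; x^4 - 10x^2 + 1 = 0 .$$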
Just like you can't expect reciprocals of integers to be closed under addition, you can't expect roots of integers to be closed under addition. That's because we're involving more complicated operations than addition, namely, division and taking of roots. On the other hand, you can expect the sum of two rational numbers (fractions) to be rational, and you can expect the sum of two algebraic numbers (roots of integer polynomials) to be an algebraic number.
Does this help at all?
As requested in the comments, I'll put this in the voice I would use to address a high school or middle school student asking this question:
"First of all, $\sqrt2 + \sqrt3$ is definitely a number. Here, let's calculate its value... [calculator].... as you can see, it's not a very pretty one, but we can see a few decimal places: $3.14626\ldots$. Huh, it's kind of close to $\pi$, but a little bigger.
"Anyway, let's see if we can express this number in a nicer form:
$$\begin{align}x &= \sqrt2 + \sqrt3\\x^2 &= (\sqrt2 + \sqrt3)^2\\x^2 &= 2 + 2\sqrt2\sqrt3 + 3\\x^2 &= 5 + 2\sqrt6\\x &= \sqrt{5+2\sqrt6}\end{align}$$
(I'd talk through the steps of that algebra, making sure it's clear after each line.)
"Ok, so it's a square root, but it's a square root of something more complicated that what we started with. I guess that's fair. After all, when you add two fractions like $\frac12$ and $\frac13$, which are pretty simple, you end up with $\frac56$, which is more complicated - it's not just a $1$ on top, and both numerator and denominator are bigger than what we started with.
"Actually, it's pretty interesting, that the numbers $5$ and $6$, which are $2+3$ and $2\times 3$, both show up in the fraction $\frac56$ and in the radical $\sqrt{5+2\sqrt{6}}$
"The reason adding fractions is more complicated than adding integers, and adding radicals more complicated still, is that fractions are made of division, and radicals are made of roots, both of which are more complicated than addition and subtraction in the first place."
Second edit:
One more run at this, just to see how succinctly I can get the main point.
"To see what something is the square root of, square it:
$$(\sqrt2 + \sqrt3)^2 = 2 + 2\sqrt6 + 3 = 5 + 2\sqrt6$$
"As you can see, we don't get a whole number, because in FOIL*, we have middle terms giving us the $2\sqrt6$ part.
"It's different from a fraction, because if you look at the sum $\frac12 + \frac13$, there is a common denominator $6$ you can multiply by that makes it the sum of two whole numbers: another whole number. No cross terms arise, because there's no FOIL going on."
(* FOIL = distributive rule applied to binomials; mnemonic for "First terms, Outside terms, Inside terms, Last terms")
Also instructive are the cases where it does work. For example, $\sqrt2 + \sqrt8 = \sqrt{18}$. You can "FOIL" it out and see why.
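Writing that last example out:
$$(\sqrt2 + \sqrt8)^2 = 2 + 2\sqrt{2\cdot 8} + 8 = 2 + 8 + 8 = 18, \qquad \text{so } \sqrt2 + \sqrt8 = \sqrt{18} = 3\sqrt2 ;$$
here the cross term $2\sqrt{16}$ is a whole number, which is why this particular sum stays a single radical.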
|
Superstrong
Revision as of 11:07, 17 September 2019
Superstrong cardinals were first utilized by Hugh Woodin in 1981 as an upper bound of consistency strength for the axiom of determinacy. However, Shelah had then discovered that Shelah cardinals were a weaker bound that still sufficed to imply the consistency strength of $\text{(ZF+)AD}$. After this, it was found that the existence of infinitely many Woodin cardinals was equiconsistent to $\text{AD}$. Woodin-ness is a significant weakening of superstrongness.
Most results in this article can be found in [1] unless indicated otherwise.
Definitions
There are, like most critical point variations on measurable cardinals, multiple equivalent definitions of superstrongness. In particular, there is an elementary embedding definition and an extender definition.
Elementary Embedding Definition
A cardinal $\kappa$ is $n$-superstrong (or $n$-fold superstrong when referring to the $n$-fold variants) iff it is the critical point of some elementary embedding $j:V\rightarrow M$ such that $M$ is a transitive class and $V_{j^n(\kappa)}\subset M$ (in this case, $j^{n+1}(\kappa):=j(j^n(\kappa))$ and $j^0(\kappa):=\kappa$).
A cardinal is superstrong iff it is $1$-superstrong.
The definition quite clearly shows that $\kappa$ is $j^n(\kappa)$-strong. However, the least superstrong cardinal is never strong.
Extender Definition
A cardinal $\kappa$ is $n$-superstrong (or $n$-fold superstrong) iff there is a $(\kappa,\beta)$-extender $\mathcal{E}$ for a $\beta>\kappa$ with $V_{j^n_{\mathcal{E}}(\kappa)}\subseteq Ult_{\mathcal{E}}(V)$ (where $j_{\mathcal{E}}$ is the canonical ultrapower embedding from $V$ into $Ult_{\mathcal{E}}(V)$).
A cardinal is superstrong iff it is $1$-superstrong.
Relation to other large cardinal notions
measurable = $0$-superstrong = almost $0$-huge = super almost $0$-huge = $0$-huge = super $0$-huge; $n$-superstrong; $n$-fold supercompact; $(n+1)$-fold strong, $n$-fold extendible; $(n+1)$-fold Woodin, $n$-fold Vopěnka; $(n+1)$-fold Shelah; almost $n$-huge; super almost $n$-huge; $n$-huge; super $n$-huge; $(n+1)$-superstrong
Let $M$ be a transitive class such that there exists an elementary embedding $j:V\to M$ with $V_{j(\kappa)}\subseteq M$, and let $\kappa$ be its superstrong critical point. While $j(\kappa)$ need not be an inaccessible cardinal in $V$, it is always worldly and the rank model $V_{j(\kappa)}$ satisfies $\text{ZFC+}$"$\kappa$ is strong" (although $\kappa$ may not be strong in $V$).
Superstrong cardinals have strong upward reflection properties, in particular there are many measurable cardinals above a superstrong cardinal. Every $n$-huge cardinal is $n$-superstrong, and so $n$-huge cardinals also have strong reflection properties. Remark however that if $\kappa$ is strong or supercompact, then it is consistent that there are no inaccessible cardinals larger than $\kappa$: this is because if $\lambda>\kappa$ is inaccessible, then $V_\lambda$ satisfies $\kappa$'s strongness/supercompactness. Thus it is clear that supercompact cardinals need not be superstrong, even though they have higher consistency strength. In fact, because of the downward reflection properties of strong/supercompact cardinals, if there is a superstrong above a strong/supercompact $\kappa$, then there are $\kappa$-many superstrong cardinals below $\kappa$; same with hugeness instead of superstrongness. In particular, the least superstrong is strictly smaller than the least strong (and thus smaller than the least supercompact).
Every $1$-extendible cardinal is superstrong and has a normal measure containing all of the superstrongs less than said $1$-extendible. This means that the set of all superstrongs less than it is stationary. Similarly, every cardinal $\kappa$ which is $2^\kappa$-supercompact is larger than the least superstrong cardinal and has a normal measure containing all of the superstrongs less than it.
Every superstrong cardinal is Woodin and has a normal measure containing all of the Woodin cardinals less than it. Thus the set of all Woodin cardinals below it is stationary, and so is the set of all measurables smaller than it. Superstrongness is consistency-wise stronger than Hyper-Woodinness.
Letting $\kappa$ be superstrong, $\kappa$ can be forced to $\aleph_2$ with an $\omega$-distributive, $\kappa$-c.c. notion of forcing, and in this forcing extension there is a normal $\omega_2$-saturated ideal on $\omega_1$. [3]
Superstrongness is not Laver indestructible. [4]
* Every $C^{(n)}$-superstrong cardinal belongs to $C^{(n)}$.
* Every superstrong cardinal is $C^{(1)}$-superstrong.
* For every $n ≥ 1$, if $κ$ is $C^{(n+1)}$-superstrong, then there is a $κ$-complete normal ultrafilter $U$ over $κ$ such that $\{α < κ : α$ is $C^{(n)}$-superstrong$\} ∈ U$. Hence, the first $C^{(n)}$-superstrong cardinal, if it exists, is not $C^{(n+1)}$-superstrong.
* If $κ$ is $2^κ$-supercompact and belongs to $C^{(n)}$, then there is a $κ$-complete normal ultrafilter $U$ over $κ$ such that the set of $C^{(n)}$-superstrong cardinals smaller than $κ$ belongs to $U$.
* If $κ$ is $κ+1$-$C^{(n)}$-extendible and belongs to $C^{(n)}$, then $κ$ is $C^{(n)}$-superstrong and there is a $κ$-complete normal ultrafilter $U$ over $κ$ such that the set of $C^{(n)}$-superstrong cardinals smaller than $κ$ belongs to $U$.
* Assuming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal – Kunen’s Theorem excludes other cases), it is equal to $\sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-superstrong (inter alia) in $V_δ$, for all $n$ and $m$.
References
1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009 (paperback reprint of the 2003 edition).
2. Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
3. Jech, Thomas J. Set Theory. Third millennium edition, revised and expanded, Springer-Verlag, Berlin, 2003.
4. Bagaria, Joan; Hamkins, Joel David; Tsaprounis, Konstantinos; Usuba, Toshimichi. Superstrong and other large cardinals are never Laver indestructible. Archive for Mathematical Logic 55(1-2):19-35, 2013.
5. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3-4):213-240, 2012.
|